re: Axiowave Queues in the Core

Increasing link utilisation is more important for access than it is for the core. As I understand it, the only real per-hop forwarding issue for core networks is consistency, i.e. using boxes from different vendors introduces per-hop delay variations and interoperability issues based on the QoS mechanisms supported, e.g. 4 queues vs. 8 queues, strict priority vs. weighted fair queuing. Using a single-vendor solution obviously avoids these issues.
Even if the Axiowave solution is capable of offering improved per-hop performance to increase link utilisation, this is useless if a provider's core network is running below 50% capacity for the majority of the time anyway (as most are).
The ability to guarantee bandwidth for premium services under failure is what matters in the core, and that requires very careful network modelling/capacity planning along with traffic engineering and strict admission control - not improved per-hop queuing/forwarding.
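To give a feel for the kind of capacity planning I mean, here is a minimal sketch that checks whether premium traffic still fits on every link under any single-link failure. The topology, capacities and demands are made up purely for illustration:

    # Hypothetical sketch: verify premium bandwidth still fits on every
    # link when any single link fails and its demands shift to backup
    # paths. Link names, capacities and demands are illustrative only.

    link_capacity_mbps = {"A-B": 10000, "B-C": 10000, "A-C": 10000}

    # each premium demand: (rate in Mbit/s, primary path, backup path)
    premium_demands = [
        (2000, ["A-B", "B-C"], ["A-C"]),
        (3000, ["A-C"],        ["A-B", "B-C"]),
    ]

    def load_under_failure(failed_link):
        """Per-link premium load when failed_link is down (None = no failure)."""
        load = {link: 0 for link in link_capacity_mbps}
        for mbps, primary, backup in premium_demands:
            path = backup if failed_link in primary else primary
            for link in path:
                if link != failed_link:
                    load[link] += mbps
        return load

    for failed in [None] + list(link_capacity_mbps):
        for link, mbps in load_under_failure(failed).items():
            if mbps > link_capacity_mbps[link]:
                print(f"failure of {failed}: premium traffic overloads {link}")

If any line prints, the network cannot honour its premium guarantees under that failure, no matter how clever the per-hop queuing is.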
Axiowave will need a lot more than this to survive in the core router market...
re: Axiowave Queues in the Core

Given that core boxes these days support OC-3 and OC-12 interfaces, guaranteeing latency bounds on those interfaces when traffic is exiting the core is pretty important, according to your claim. The core is not just OC-48 and OC-192 interfaces; the core also connects to the aggregation and edge platforms through low-speed interfaces, and maintaining latency and jitter bounds on those interfaces is pretty important.
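A quick back-of-the-envelope calculation shows why the low-speed exit interfaces dominate the jitter budget (nominal SONET line rates, 1500-byte packet; the figures are serialization delay only, before any queuing):

    # Serialization delay of a 1500-byte packet at nominal SONET rates.
    line_rate_bps = {
        "OC-3":   155.52e6,
        "OC-12":  622.08e6,
        "OC-48":  2488.32e6,
        "OC-192": 9953.28e6,
    }

    packet_bits = 1500 * 8
    for name, rate in line_rate_bps.items():
        print(f"{name:7s}: {packet_bits / rate * 1e6:6.2f} us per 1500-byte packet")

That works out to roughly 77 us per packet on OC-3 versus about 1.2 us on OC-192, so a handful of packets queued ahead of yours on a low-speed exit port costs more jitter than an entire OC-192 trunk hop.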
If you need "to guarantee bandwidth, latency and jitter for premium services" you need platforms in the end-to-end path to honor those guarantees. Careful TE and admission control alone will not address these problems. They are necessary but not sufficient.
I don't know much about preferred stock and all that finance jazz so I cannot give an intelligent answer to your first two points.
Regarding your third point: dark fiber is cheap; usable bandwidth is NOT cheap. Please remember that you need to put in expensive OC-48 and OC-192 router ports to light up dark fiber and make it usable. I am not even talking about optronics costs like transponders, etc.
I don't know about "proprietary" hardware since I am not an axiowave employee but your assertion that bandwidth is cheap is completely false and reveals your complete ignorance about the carrier business.
Many carriers are indeed providing MPLS Layer 2 tunnels across their own networks already. Level3 has been doing this for years, encapsulating GigE and ATM traffic over their M160 MPLS backbone, so this is not something new. The problem is that the M160 reorders packets and Level3 is unable to support any latency and jitter bounds. From what I can read of the Aleron press release, they have taken it one step further and are able to guarantee latency and jitter bounds, which could be pretty interesting for VoIP and video services.
Priority Queueing alone does not solve the problem. If you have only two traffic classes, say, high and low, and if you can guarantee that the total "high" traffic from all entry ports will never exceed the line speed of the exit port, then maybe, if the switch is non-blocking and does not suffer from contention issues, latency bounds are possible. While this sounds theoretically possible, I have not yet seen implementations from Cisco and Juniper that can support this. Cisco has had "LLQ" for the longest time and the 12000 does have "VOQ" to avoid head-of-line blocking, but try congesting an exit port with background traffic: the Cisco 12000 will not be able to guarantee any kind of latency bound. What's more, the line card starts dropping "high" packets too. The same is true at least for the T-series platforms that I have tested in my lab.
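The admission condition buried in that paragraph is simple arithmetic; here is what it looks like as a sanity check (port names and rates are invented):

    # Strict priority can only bound latency for the "high" class if the
    # aggregate high-class rate from all entry ports stays below the exit
    # line rate. Ports and rates below are illustrative only.
    exit_line_rate_mbps = 2488   # e.g. an OC-48 exit port

    high_class_mbps_per_entry_port = {
        "entry-1": 600,
        "entry-2": 900,
        "entry-3": 700,
    }

    aggregate_high = sum(high_class_mbps_per_entry_port.values())
    if aggregate_high <= exit_line_rate_mbps:
        print("a latency bound for the high class is at least arithmetically possible")
    else:
        print("the high class alone oversubscribes the exit port; no bound is possible")

Whether a given line card actually delivers the bound while the low class is hammering the same exit port is exactly what the lab tests above call into question.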
The main problem with PQ is that if you have multiple classes of service, say, voip, business-class VPNs and BE, then PQ will not be able to guarantee that the business-class VPNs will be serviced, and there is no way to assign bandwidth guarantees to business-class VPNs and BE. PQ is just a small subset of the total tool set you need.
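For what it's worth, the kind of scheduler that does cover this case combines strict priority for voip with weighted shares for the remaining classes. A rough sketch (class names, weights and packet sizes are all illustrative, not anyone's shipping implementation):

    # Strict priority for voip, deficit round-robin shares for
    # business-class VPN vs. best-effort (roughly 3:1 by bytes).
    from collections import deque

    queues = {"voip": deque(), "business": deque(), "be": deque()}
    quantum_bytes = {"business": 3000, "be": 1000}
    deficit = {"business": 0, "be": 0}

    def drain():
        """Return the transmission order for everything queued."""
        order = []
        while any(queues.values()):
            while queues["voip"]:                    # strict priority
                order.append(("voip", queues["voip"].popleft()))
            for klass in ("business", "be"):         # weighted shares
                if not queues[klass]:
                    deficit[klass] = 0
                    continue
                deficit[klass] += quantum_bytes[klass]
                while queues[klass] and deficit[klass] >= queues[klass][0]:
                    deficit[klass] -= queues[klass][0]
                    order.append((klass, queues[klass].popleft()))
        return order

    for klass, size in [("be", 1500), ("business", 1500), ("voip", 200),
                        ("business", 1500), ("be", 1500), ("be", 1500)]:
        queues[klass].append(size)
    print(drain())

The point is not this particular algorithm; it is that business-class VPNs and BE each get a configurable bandwidth guarantee, which pure PQ cannot express.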
The practical solution is NOT to overprovision the network. Tell me, which industry makes a profit by over-provisioning its assets? I think this is the fundamental issue that carriers need to understand. The hotel industry does not say, "the practical solution is to build lots and lots of expensive rooms and fill them to 30%". The airline industry does not say, "the practical solution is to build huge jumbo jets, configure them with lots of premium business-class seats and fill 20% of the seats". Only in this industry can carriers get away with such a mindset. And that's the reason why Cisco and Juniper have huge market caps while all the IP carriers are either bankrupt, resorting to fraudulent accounting shenanigans, or just losing money quarter over quarter.
There are two issues here. One is the low IP utilization and the other is premium pricing for real-time and business applications. By over-provisioning the network, carriers are degrading the value of their premium offerings. For example, an RBOC colleague of mine told me that they have two DSL offerings with different performance guarantees, but everyone is choosing the lower-priced offering. The reason, they figured out, is that the network is so over-provisioned that customers figure they will get the same QoS treatment irrespective of which service they buy from the RBOC, because there is plentiful bandwidth and no contention. You have to create a situation where one service is NOTICEABLY different from another, lower-priced service. Otherwise, what you get is the "tragedy of the commons".
re: Axiowave Queues in the Core

carrierguy makes two points: 1) overprovisioning is expensive, and other industries (airlines, hotels) do not overprovision; 2) overprovisioning makes non-premium services so good that customers will not buy premium services. Therefore overprovisioning is bad.
I disagree with both points.
Overprovisioning airplanes and hotels is bad, because it is more expensive than the alternative, which is careful and correct scheduling. This is not true in today's internet core, so the analogy is invalid. Starting with the fiber glut of the late 1990's, provisioning and terminating bandwidth has been cheaper than the scheduling needed to properly provide "quality" in the core. Please note that the last mile is different.
The second argument is that carriers should choose a more expensive way to provide "premium" traffic because the more expensive approach makes the best-effort traffic worse while maintaining the quality for the premium traffic. This only works for monopolies. It fails in the core because there is competition. If a carrier tries to maintain this artificial distinction, a competitor will simply build a cheaper network using overprovisioning.
Basically, the concept of a premium service exists in the minds of the marketing people. The customers want fast, high-quality, cheap service.
Consider the following: if your goal is to provide a high-quality premium service that really is better than the best-effort service, the cheapest way to do this is to overprovision so all the packets get great service, and then artificially pessimize the best-effort service by occasionally dropping or delaying packets. The drawback with this approach is that it makes it blatantly obvious that the carrier is abusing its monopoly power, while the alternative of choosing an inferior technical approach makes it less obvious.
I've been in the industry for 35 years or so. The relative cost of bandwidth versus scheduling intelligence swings back and forth over time, and differs with speed. Right now, better scheduling wins where the cost per bit is artificially high, e.g., in the last mile.
Please note: I'm assuming "perfect local scheduling" in each core router. To the extent that a router cannot perform perfect local scheduling, the router is broken. With perfect local scheduling, packets are transmitted on the egress line in FIFO-within-priority order, and packets are not dropped unless the presented rate exceeds the line rate. The problem with SLAs is that they depend on non-local, connection-based information, and this is too expensive to add to today's internet.
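As a toy model of what "FIFO-within-priority order" means for a batch of queued packets (ignoring the time dimension entirely; sequence numbers and classes are invented):

    # Perfect local scheduling, batch version: departures are a stable
    # sort of arrivals by priority class (0 = highest), so packets
    # within a class keep their arrival (FIFO) order.
    arrivals = [
        {"seq": 1, "prio": 2}, {"seq": 2, "prio": 0}, {"seq": 3, "prio": 1},
        {"seq": 4, "prio": 0}, {"seq": 5, "prio": 2},
    ]
    departures = sorted(arrivals, key=lambda p: p["prio"])
    print([p["seq"] for p in departures])   # [2, 4, 3, 1, 5]

A router that cannot reproduce that ordering, or that drops packets while the presented rate is below line rate, is broken by this definition, regardless of what the SLA machinery upstream is doing.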
re: Axiowave Queues in the Core

In the "real" worlds of fishing and hunting, one of the best ways to catch anything big is to use the right bait. In fishing it's called "chumming." (Another technique involves tossing dynamite into the waters, but that probably is not relevant to the Axiowave et al. thread ;-)
So mightn't Mukesh just be acting as a very shrewd fisherman by chumming the waters with some of his spare cash, to better draw in VC fish? VCs are notorious for not wanting to be first into the waters anyway.
Mukesh is also very shrewd -- putting his own money into the company should tend to reassure otherwise startup-wary potential "big fish" customers.
It sure beats management draining a company's cash (via egregious bonuses, if not selling their stock back to the company), as I've seen/heard happen at other dying startups.
Knowing a lot about datagram reassembly: 2 POINTS!
Conjuring the Borg: 4 POINTS!
Using the word "paradigm": Priceless.