
Did WorldCom Puff Up the Internet Too?

News Analysis
Light Reading
7/18/2002

The plot thickens. A number of experts are now charging that WorldCom Inc. (Nasdaq: WCOME) didn't just puff up its earnings with questionable accounting -- the company may have helped drive the Internet bubble itself with misleading traffic growth figures.

The new charges are important because the company's technical leaders, including John Sidgmore, the new CEO, were considered experts on the Internet and were frequently consulted on general Internet growth issues during the late 1990s. WorldCom's UUNET division runs what is considered one of the largest, if not the largest, Internet backbones in the world. Sources charge that Sidgmore and his technical team were responsible for inflating the bandwidth growth numbers that supplied much of the rationale for the Internet buildout.

Sources include several academic experts, as well as one former WorldCom employee who worked with both Sidgmore and former WorldCom Chief Scientist Michael O’Dell. The ex-WorldCom employee, speaking to Light Reading on condition that he not be named, insisted that WorldCom executives, including Sidgmore, intentionally boosted Internet traffic growth numbers to make the industry look more lucrative than it really was.

"If you do the math, all the growth they were claiming was physically impossible,” he says, referring to Sidgmore’s claims starting in 1998 that internet traffic was doubling every 3.5 months, or growing at a rate of 1,000 percent a year. “It’s been bullshit from day one... It was all about manipulating the stock market. In reality, what was growing was connectivity,” he says.

WorldCom's Internet statistics were often quoted throughout the industry, and Sidgmore has cited 1,000% annual growth in several public forums and reports. The controversy is important because Sidgmore has distanced himself from WorldCom's ex-CEO Bernie Ebbers and is casting himself as the leader who will clean WorldCom up.

Here's how it worked, according to the former WorldCom employee: WorldCom would hook up new customers with connections capable of handling, say, up to 1.5 Mbit/s of data, knowing that for most of the time the lines would only carry a fraction of this amount. WorldCom would then use the 1.5 Mbit/s figures, not the actual traffic figures, when citing Internet traffic growth statistics.
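
A toy calculation makes the gap plain. The utilization figure below is illustrative only, not WorldCom data; access lines typically idle far below capacity:

```python
# Toy model: quoting the sum of provisioned T1 capacity vs. the
# traffic actually carried when lines idle at ~5% average utilization.
T1_MBPS = 1.5

def growth_figures(num_lines, avg_utilization=0.05):
    provisioned = num_lines * T1_MBPS        # the number that gets quoted
    carried = provisioned * avg_utilization  # the number that actually flows
    return provisioned, carried

for lines in (1_000, 10_000, 100_000):
    quoted, real = growth_figures(lines)
    print(f"{lines:7,} lines: {quoted:11,.0f} Mbit/s provisioned, "
          f"{real:9,.0f} Mbit/s carried")
```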

"There was massive connectivity growth, but UUNET’s business wasn’t growing as much, "says the former employee.

Several studies of Internet traffic growth, including a 1998 study by Kerry Coffman and Andrew Odlyzko of AT&T Research Labs, show that Internet traffic did grow at 1,000 percent a year for a short period, in 1995 and 1996. But after that, the studies show, Internet traffic growth settled into a fairly consistent doubling every year -- a far cry from Sidgmore's now-famous 1,000% growth rate. A 100 percent annual growth rate may pale next to the huge numbers WorldCom was touting, but, Coffman insists, it is still quite impressive.

"Doubling every year was still massive," Says Coffman. "People didn’t understand what growing at 10 times a year meant,” he says, talking of all the companies and investors who believed that such enormous growth could continue. “Nobody did sanity checks.”

Odlyzko questions the idea of the Internet's runaway growth as depicted by WorldCom.

"The myth of Internet traffic doubling every 100 days seemed to be based on (i) the fact that such growth rates really did hold during the two-year period 1995-1996, and (ii) WorldCom making misleading claims in subsequent years,” Odlyzko, now with the Digital Technology Center at the University of Minnesota, writes in an e-mail to Light Reading.

Other industry observers, however, say that they’re not so sure that the huge numbers were pure manipulation. “There were some fairly aggressive estimates of growth rates,” says Rick Wilder, principal scientist at Masergy Communications Inc. “They may have been a bit exaggerated… [but] I think those numbers were factual for UUNET for a couple of years.”

Some experts say that in general, during the bubble years, measurements of Internet growth were subject to widespread abuse. Often, small snapshots of Internet growth were cited, without regard to long-term trends.

"Basically folks were using growth numbers that may have been true for a specific piece of the Internet, e.g., certain links on the NSFNET backbone, for a short period of time, and likely using them to their advantage when it would increase stock price projections (or egos),” Kimberly Claffy of the Cooperative Association for Internet Data Analysis, CAIDA, writes in an e-mail.

Of course Sidgmore and the rest of the WorldCom crew weren’t the only ones pushing the big numbers. “It is hard to hang it all on John Sidgmore,” says one analyst, asking to remain unnamed. “It was pretty widespread. It was definitely questionable, but I feel bad for Sidgmore to get stuck with all the blame. We all did it.”

The former employee, however, says that while much of the industry is guilty of wanting the numbers to be higher than they were, WorldCom was a leader in touting its Internet figures. He says this led to the company building out infrastructure it didn’t need to uphold the impression that traffic was growing as fast as it claimed it was. “They had decided to build x amount of ports each month,” he says, “whether there were customers for them or not.”

Was the Internet growth hype premeditated? It's hard to tell. It may be that WorldCom executives genuinely believed traffic was growing as fast as they claimed.

One thing is for sure: the controversy points to a central problem in the industry -- it's tough to get good Internet traffic statistics. Government-sponsored traffic numbers haven’t been released since the NSFNet was decommissioned and Internet backbone services transitioned to the commercial sector in 1995, according to Claffy. Most carriers don’t even systematically measure their traffic, she says, and those that do use differing and often dubious methodologies, and are careful to keep the results close to their chests.
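
The mechanics of measurement are not the hard part; the standard approach samples an interface's byte counters and converts the deltas into utilization, roughly what SNMP polling of interface counters does. A minimal sketch (a generic illustration, not any carrier's methodology):

```python
# Convert two samples of an interface byte counter into link
# utilization -- the basic move behind most traffic measurement.
def utilization(octets_start, octets_end, interval_s, link_mbit_s):
    bits = (octets_end - octets_start) * 8
    rate_mbit_s = bits / interval_s / 1e6
    return rate_mbit_s / link_mbit_s

# Example: a 1.5 Mbit/s T1 that moved 1,125,000 bytes in 5 minutes
# is running at about 2 percent utilization.
print(f"{utilization(0, 1_125_000, 300, 1.5):.1%}")   # -> 2.0%
```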

Neither Sidgmore nor O’Dell returned numerous calls and requests for comment.

Light Reading is planning a project that would collect actual traffic statistics from hundreds of Internet access lines to enterprise users -- giving everybody a much better reading on what's actually happening to business traffic volumes on the Internet. Corporations will be given a free trial of a monitoring service in exchange for participating in the scheme (see Track Your Traffic).

— Eugénie Larson, Reporter, Light Reading
http://www.lightreading.com

52442, User Rank: Light Beer
12/4/2012 | 10:02:09 PM
re: Did WorldCom Puff Up the Internet Too?
Please contact me directly at 212-714-7906.
Steve Young, CNN, New York. Thanks.
erbiumfiber, User Rank: Light Beer
12/4/2012 | 10:02:09 PM
re: Did WorldCom Puff Up the Internet Too?
Roy posted an e-mail address way back in post 90...
Roy_Bynum, User Rank: Light Beer
12/4/2012 | 10:02:35 PM
re: Did WorldCom Puff Up the Internet Too?
beowulf888: "Access traffic vs. core traffic will have an 800-to-1 ratio only if you consider the core network as being a single pipe. In reality, LOB is correct when he states that, since the core is a complex mesh, the traffic will drop off proportional to the logarithm of network size (as measured in hops). Unicast traffic has a path between the source and destination that traverses only a subset of the total possible network hops in the core."
________________________________________________

I believe that we continue to be talking “apples and oranges”. I will discuss a network architecture model to demonstrate what I am talking about. I also want to show how easy it is to “manipulate” the perception of what “growth” is when we have been talking about it relative to the Internet. For those that don’t want to wade through the network architecture model, skip down to point (B). (I also recognized as I was going through this that I was calculating the physical plant bandwidth growth rather than the ratio of the growth, but still using the growth ratio labels.)

Perhaps, you may also be talking about the actual effectively delivered data packets that will traverse the multiple architectures of the Internet, while I am talking about the physical plant that has to be in place and provisioned in the different levels of architectures in order to provide for the effective delivery of those data packets. The traffic growth might be at a one for one ratio. The physical plant would not.

What I have not really talked about and may not be understood is that the efficiency of the “facilities” and packet transmission bandwidth “usage” improves greatly as the data traffic moves toward the core of the network. It is this improved efficiency that is reflected by the “gain” factors at the different points in the network. I have tried to use a simplified model to express that, along with the architectural provisioning models that are used to size the provisioned facilities that carry Internet data transmission services.

For those that have worked on very large campus LANs, this will be familiar territory. (I use a very large campus LAN because it is a closed system and can be used to demonstrate the model without a lot of the esoteric “traffic models” that inexperienced people tend to use.) The campus is made up of multiple multi-story buildings. The physical link between the desktop systems today is often Category 5 twisted pair cable. This cable will support Ethernet at full duplex 100Mb speeds. The applications that are used on the desktop systems seldom saturate that link for more than a few ms at a time, which is seen as a percentage of utilization per second. The peak of that utilization is often reported at less than 50%, while the majority of the time the reported utilization is close to 1%.

The links from the desktop systems are aggregated on a floor aggregation switch, which may support more than one floor of a building. I am going to refer to these switches as the first level of aggregation in this model. The links from the desktops to the first level switches I will refer to as the first level links. Newer facilities will have non-blocking GbE switches with optical GbE between the first level aggregation switches and a building aggregation switch, or router. I will refer to the building aggregation switch/router as the second level of aggregation in this model. I will refer to the links between the first level aggregation switches and the second level aggregation switch/routers as the second level links. The different building aggregation switch/routers are linked by optical GbE in a semi-mesh architecture to provide redundancy between the buildings and any server farms that may be in the different buildings. I will refer to the links between the buildings as third level links.

Let’s get back to the architecture of the physical facilities. For simplicity, we will say that there are 6 buildings with 12 floors each, with 100 desktop systems on each floor. To simplify the discussion, any server farms in any of the buildings will be a terminal network “spur” off of the second level building aggregation facilities of each building. Each first level aggregation switch supports 2 floors. Each first level switch has only a single GbE link into the second level aggregation. The second level aggregation switch/routers have GbE links to three other second level aggregation switch/routers in different buildings.

The relationships of bandwidth at the different levels of links are defined by the granularity characteristics of the physical link technology at each level. The implementations at the second and third level links are based not only on the granularity of the links, but on the “bandwidth gain” that is provided at each aggregation level, and the architecture of the link deployments.

The total number of desktop links in each building is 1200, at an aggregate bandwidth of 120,000Mbps per building. The total amount of aggregate first level link bandwidth for the campus is 720,000Mbps. The total number of links between the first level aggregation switches in each building and the second level aggregation switch/routers is 6, at an aggregate bandwidth of 6,000Mbps per building. The total aggregate second level link bandwidth for the campus is 36,000Mbps. The total number of third level links between the buildings is 9, for an aggregate third level link bandwidth of 9,000Mbps.

To relate to my Internet model, the first level links are equivalent to “A”, the second level links are equivalent to “B”, and the third level links are equivalent to “C”. In the very large campus LAN model, the ratio of A to B links is 20:1. The ratio between B and C links is 4:1. In this model, the total gain of the entire network is only 80:1. This is a very high quality network.
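
(The figures above can be reproduced in a few lines; a quick check of the arithmetic in the model:)

```python
# Campus model: 6 buildings x 12 floors x 100 desktops at 100 Mbps;
# one GbE uplink per two-floor switch; 9 GbE links in the
# inter-building semi-mesh.
buildings, floors, desktops = 6, 12, 100
level1 = buildings * floors * desktops * 100   # 720,000 Mbps of desktop links
level2 = buildings * (floors // 2) * 1000      #  36,000 Mbps of switch uplinks
level3 = 9 * 1000                              #   9,000 Mbps between buildings
print(level1 // level2, level2 // level3, level1 // level3)  # 20 4 80
```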

The actual data packet traffic that goes between the buildings, when measured at the different levels of the model, may remain at close to a one for one ratio because it is limited by the aggregate of the third level links in the model. The aggregate bandwidth of the physical implementations at each level of the model is, however, very different. The third level traffic model does not include any traffic that may never leave the second level of aggregation. This is the fallacy of looking at traffic models based on the traffic at the third level of the model and not recognizing the limitations created by the characteristics of the physical plant deployment and provisioning at the other levels of the model.

(B)
If I wanted to “grow” the network, the lowest granularity that the first level links can support is 100Mbps. Adding additional desktops to each floor may not, in reality, affect the inter-building traffic as seen at the third level links. The ratio of the different levels might, however, remain the same. However, only when the growth at one level of the network is compared to itself does the ratio of growth of the physical plant remain the same for each of the levels, i.e., the third level can be at a growth ratio of 2:1, the growth of the second level can also be at 2:1, and the growth of the first level can also be at 2:1. More often they are not.

Given the above model, if I were to increase the physical plant bandwidth for the third level links of the network, I would have to add 40Mbps of aggregate bandwidth at the first level of links for each single Mbps of bandwidth that I want to see in the third level of links. The growth of the first level, when compared to the third level, would still be a 40:1 ratio. For each GbE link that I added to the third level links, I would be seeing a growth of 40Gbps, or 400 additional desktops per single building in the campus.

If I wanted to “manipulate” the perception of the growth of the network, it would be easy for me to use the desktop connectivity count growth numbers instead of the bandwidth growth of the network. I could also compound the growth at the different levels by saying that each of the three had a growth of 2X, making it a total of 6X. This would be “fictional” of course, but if you are a senior manager or stock analyst and don’t know how networks are actually built, you would not realize the fallacy.

If the architecture of the third level links were different, as a star into a single campus aggregation switch/router, there would be only a single third level link out of each building. This would make the ratio between the aggregation of the first level links and the third level links 720:1. This architecture is not only more expensive to implement, it provides a lower quality of service between the buildings in this model. Politicians tend to like this architecture because it “centralizes” the data network through the single campus aggregation switch/router. You might be surprised how many city wide metro enterprise networks are implemented this way. There are even some Internet facilities that are implemented this way. You can also see how easy it was for some people to manipulate the growth numbers for the Internet.

In the real world, for each Mbps of aggregate traffic growth that is seen on the backbone of the Internet, with a 400:1 ratio, the access would have to see an aggregate growth of 400Mbps. If I were to grow the Internet by 2Mbps, then I would have to see 800Mbps at the access. (This is where I was making a mistake in my earlier posts. Sorry about that everyone.)

Roy Bynum
beowulf888, User Rank: Light Beer
12/4/2012 | 10:02:46 PM
re: Did WorldCom Puff Up the Internet Too?
Well I couldn't resist responding to flanker's troll...

flanker wrote:
>This is why toll quality VoIP over IP networks
>is an oxymoron: you can't manage the packets over
>the backbone.

You can run VoIP on IP networks with toll quality voice (well, MOS of 3.7 and greater) quite well, thank you very much. But you either have to make sure (A) that you give your voice traffic higher precedence bits than data, or (B) you build a parallel VoIP network. In both cases you have to keep bandwidth issues in mind when you design your VoIP network.
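
(A minimal sketch of option (A) -- marking voice packets so routers can queue them ahead of data. The address and port are illustrative; real deployments typically mark RTP traffic with DSCP EF:)

```python
import socket

# Mark a UDP socket's packets with DSCP EF (46), the usual class for
# voice, by setting the IP TOS byte. Address and port are examples.
EF_TOS = 46 << 2                       # DSCP sits in the top 6 bits
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.sendto(b"rtp payload", ("192.0.2.10", 16384))
```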

At Cisco, we had 30,000 employees world-wide connected by VoIP -- on the same backbone that our data was traveling over. Of course, the initial deployment was not glitch-free, but, once the bugs were smoothed out, Cisco had carrier-class voice that cost substantially less than its older circuit-switched infrastructure.

Also, China Unicom's pre-paid calling service network is all VoIP. It services over 300 cities in China, and we had better MOS scores than the China Telecom circuit-switched network (of course, that's not saying much in China ;-). In this case, the VoIP traffic was segregated on its own IP network (so the only data traffic was from snmp, ntp, and routing updates). Guess what? We not only got better MOS scores than the incumbent carrier's circuit-switched infrastructure, but it cost 70 percent less to implement than the old-style big-iron circuit-switched network.

cheers,
--Wulf

beowulf888, User Rank: Light Beer
12/4/2012 | 10:02:46 PM
re: Did WorldCom Puff Up the Internet Too?
Roy, although I like your math, I think you may be missing an important issue that LOB raised (or perhaps you're just simplifying your arguments for the non-math masses).

Access traffic vs. core traffic will have an 800-to-1 ratio only if you consider the core network as being a single pipe. In reality, LOB is correct when he states that, since the core is a complex mesh, the traffic will drop off proportional to the logarithm of network size (as measured in hops). Unicast traffic has a path between the source and destination that traverses only a subset of the total possible network hops in the core.

So in reality the 800-to-1 ratio for dial-up to core traffic is probably an order of magnitude too small if you were to use dial-up traffic as an indicator of how to scale your core's bandwidth.

best regards,
--Beo

lob wrote:
>Of course, the larger the network is, the
>more circuits a packet has to travel from
>source to destination. This is number of
>hops, or "diameter of the network". It is
>roughly proportional to logarithm of network
>size, and so introduces rather insignificant
>correction to end-to-end delivery cost per
>bit.
PhotonGolf, User Rank: Light Beer
12/4/2012 | 10:02:53 PM
re: Did WorldCom Puff Up the Internet Too?

Well, a very interesting thread. Took me a long time to digest!

Since most of us are not in the business of selling fiber (I hope!), I suspect most of us will still see business recovering as the core traffic continues to increase ... routers, transport, electronics, optics, software, services, etc...

So .. given all of this data, when will the long haul start to recover? (relates to the current utilization of existing systems)

Forget 40 .. care to guess how many 10G linecards (1 per lambda) will be consumed in 03, 04, and 05?

What will be the price points? ($20k today, falling).

At what point will it be cheaper for carriers to buy a new transport system (like Ceyba) instead of more linecards for Nortel? (Capex, Opex)

I suspect this will be a better forecast than you could ever get from RHK!

- P
Roy_Bynum, User Rank: Light Beer
12/4/2012 | 10:02:55 PM
re: Did WorldCom Puff Up the Internet Too?
lob: "We have network A, with backbone, access and everything - all properly scoped and loaded to capacity. Now, some day we got exactly 2 times more identical customers. We build _identical_ network B - backbone, access and everything. It is also loaded to capacity (because it is identical to network A, and customers are the same). Plus, we need to interconnect those networks."
_________________________________________________

lob,

I think I begin to see where there is a misunderstanding. We are describing two totally different network architectures.

The “A” in my examples are the access links only, not a network in and of themselves. The dial up access links are independent facilities until the output of the aggregation router at the ISP. Until then, they can be compared to individual Ethernet 100BaseT links between any two ports on any two different systems. It is not like a cable system or a 10Base5 LAN where the first level of aggregation, along with the first level of collisions/blockages, occurs within the “ether” of the coax facility.

The systems on the different links can not communicate with each other except after the aggregation and routing functions that are in place after the traffic is aggregated over a common “plane” in the ISP aggregation router. It is at that point that the traffic that originates on the links, “A”, becomes part of the 100:1 traffic that is pushed up to the NSP peering point to become part of the 4:1 traffic in the backbone “C”.

In my previous examples, the links “A” do not become part of a network until after they are part of “B”, not until the buffered output to the metro link to the Internet backbone. At that point, with the peering facilities between the ISP and the NSP, each access link is part of the Internet, not a separate “network”.

For the purposes of this discussion it does no good to evaluate each network as a separate entity because it is the backbone bandwidth of the Internet that we are concerned with, not the additional aggregate bandwidth within each autonomous system, some of which may never get to the backbone. If there is additional traffic within a metro ISP that never gets to the Internet backbone, then the overall aggregate traffic for the metro networks is even higher than the “B” that I was using, which in turn increases the “A” as well.

lob, User Rank: Light Beer
12/4/2012 | 10:02:56 PM
re: Did WorldCom Puff Up the Internet Too?
Before I give up on math education, I'll make one more try.

We have network A, with backbone, access and everything - all properly scoped and loaded to capacity. Now, some day we got exactly 2 times more identical customers. We build _identical_ network B - backbone, access and everything. It is also loaded to capacity (because it is identical to network A, and customers are the same). Plus, we need to interconnect those networks.

Now we have 2x customers, and 2x backbone traffic. Cannot put more customers on A's backbone - it is already loaded. Gluing A and B together doesn't reduce traffic - the traffic patterns in both networks are the same.

Growth of 2x in customers (with each customer offering the same load as older customers) yields 2x growth in backbone traffic, and that's it.

I can't explain it any simpler, sorry.

Your mistake is that you calculate absolute growth (which is in our example 400bps for 1bps in backbone) and then for some reason confuse it with relative growth (which is 2x in both backbone and access - because 800/400 = 2 - definitely _not_ 800x!).
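
A few lines make lob's point concrete (using the thread's A = 100*B, B = 4*C ratios; units are illustrative):

```python
# Absolute vs. relative growth at each level of the hierarchy.
A0 = 800.0          # access aggregate
B0 = A0 / 100       # metro aggregate
C0 = B0 / 4         # backbone aggregate
A1, B1, C1 = 2 * A0, 2 * B0, 2 * C0   # double the identical customers
print(A1 - A0, C1 - C0)   # absolute: 800.0 vs 2.0 -- the 400:1 ratio
print(A1 / A0, C1 / C0)   # relative: 2.0 vs 2.0  -- same growth rate
```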
Roy_Bynum, User Rank: Light Beer
12/4/2012 | 10:02:57 PM
re: Did WorldCom Puff Up the Internet Too?
lob: "The significance of that is that pure self-similar traffic has zero multiplexing gain, i.e. stochastical multiplexing does not reduce burstiness of traffic. Aggregation of several Poisson streams yields smoother traffic flow, thus
reducing the need for extra capacity needed to accommodate traffic bursts (this is known as multiplexing gain)."
________________________________________________

This is nice.

It is too bad that it does not work at "busy hour" within any one demographic facility environment. What you really wind up with, regardless of the "gain" you have, is an effective aggregate bandwidth that can never exceed the lowest common facility bandwidth.

Empirically, from watching a whole lot of data analyzers over many years: with a high number of statistically simultaneous attempts to use the same bandwidth within a specific “window”, there are large numbers of “collisions” or queue access “blockages”, which in turn cause losses of original data packets, which in turn cause the data packets to get retransmitted after a period of “random delay”, which causes more “collisions” and “blockages” that tend to “spike” and “quiet” until the aggregate of data packets are all correctly transported, or the applications time out. The access links tend to reflect that “spike” and “quiet” effect. This is not due to just the delay between the users hitting the “send” function in an application.

It is the effect of “blockages” with the randomness of retransmission that creates the Poisson streams that you are referring to. The effect of smoother traffic flow is a result of reacting to failures to transport in real time, combined with extensive buffering prior to a transmit queue at the egress of an aggregation router. This works very well for low performance applications and users that have a low expectation of delay performance, like dialup users on the Internet. It is the reason that there is a committed information rate that is directly reflected in backbone bandwidth for services that might have slightly higher expectations. It is the reason that there is exclusive use of the bandwidth facility for real time applications and performance expectations.

Voice streams are real time and can not be statistically buffered and delayed relative to the load on a voice network. This is the reason that voice services “block” calls during busy hour overloads.

This is a distinction between voice networks, which are sized directly for “busy hour” loads with exclusive use of the IMT facilities, and data networks, which are more often sized relative to the port density of the vendors’ routers, or the cost for transmission services that a budget will allow.
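
For reference, the multiplexing gain lob cites is easy to see in a toy simulation of independent on/off sources -- the very smoothing assumption that, per lob, self-similar traffic violates:

```python
import random

random.seed(1)

# Peak-to-mean ratio of an aggregate of independent on/off sources:
# each source bursts in a time slot with probability 0.05. More
# sources -> smoother aggregate -> less headroom needed per source.
def peak_to_mean(sources, slots=2000):
    agg = [sum(random.random() < 0.05 for _ in range(sources))
           for _ in range(slots)]
    return max(agg) / (sum(agg) / slots)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} sources: peak/mean ~ {peak_to_mean(n):.1f}")
```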
Roy_Bynum, User Rank: Light Beer
12/4/2012 | 10:02:57 PM
re: Did WorldCom Puff Up the Internet Too?
lob: "Ok. Before: A0 = 100*B0, B0 = 4*C0,
After: A1 = 100*B1, B1 = 4*C1, C1 = 2*C0

C0 = A0 / 100 / 4
A1=100*4*(2*C0) = 100*4*2*(A0/100/4) = 2*A0.

I.e. growth 2x in backbone matches growth 2x in
access. This is not even university math, this
is middle school."
_________________________________________________

lob,

I begin to realize why I could not understand the accounting that was being presented by the WorldCom accountants at the last all hands meeting. It should not take a complex formula to figure out what a given growth in the backbone would require for growth in the access. I will do it “long hand” with “long proof”, as we learned in 8th grade algebra.

To find out what a 2X growth in the backbone would require for growth in the access:
C represents the given growth of the aggregate bandwidth in the backbone
B represents the aggregate bandwidth in the metro relative to the aggregate bandwidth in the backbone
A represents the aggregate bandwidth in the access relative to the aggregate bandwidth in the metro
Given: C = 2
B = 4*C
A = 100*B
Find A
Replace “B” with “4*C”: A = 100*(4*C)
Replace “C” with “2”: A = 100*(4*2)
First calculation: A = 100*(8)
Final calculation: A = 800

Results: For growth 2X in the backbone, the access will have 800X growth.