<<   <   Page 9 / 12   >   >>
stephenpcooke 12/5/2012 | 1:49:42 AM
re: Siemens Sees Ethernet Everywhere Are you sure that you are not comparing a 2 Fiber UPSR versus a 4 Fiber BLSR??

Nope. The case described is bidirectional traffic, both eastbound and westbound. This is a commonly known limitation of UPSR.
pig3head 12/5/2012 | 1:49:41 AM
re: Siemens Sees Ethernet Everywhere As explained to me by a close friend who is a capacity planner for a Tier One network, the average utilization in the core is 8% to 12%, and never will get much higher.
It would be clearer to say that the CURRENT average utilization of the core is 8% to 12%; the maximum usage is not 8% to 12%.

The reason for the low utilization has to do with topology of the network and 100% network uptime. For instance, an efficient network design can have a topology where the failure of one trunk in the core results in the convergence of multiple trunks on an alternate route.
I think this refers to SDH/SONET ring protection, which keeps an otherwise unused link in reserve to back up the active link.

For statistically multiplexed IP networks, TE, MPLS FRR, MPLS FT, and many L3 protocols can achieve 100% network uptime without leaving a link unused.
pig3head 12/5/2012 | 1:49:40 AM
re: Siemens Sees Ethernet Everywhere To a first approximation, QoS is affected by available absolute excess bandwidth, not available excess percentage bandwidth. An OC-192 that peaks at 95% has 500 Mbps of excess bandwidth. A T1 that peaks at 50% has 768 Kbps of excess bandwidth. The OC-192 introduces less jitter than the T1. This is why QoS matters in the last mile but not in the core.
Completely right.

Though the whole network should be QoS-enabled to achieve end-to-end QoS, the MAN and access links are where QoS really matters.

The core always has enough switching capacity; it is the access (MAN) links that are short on bandwidth.
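The "absolute vs. percentage excess bandwidth" point above is easy to check with a short sketch. The link rates below are the standard OC-192 line rate and T1 payload rate; the 95% and 50% peaks are the figures assumed in the post.

```python
def excess_bandwidth_bps(link_bps, peak_utilization):
    """Absolute headroom left on a link at its busiest moment."""
    return link_bps * (1.0 - peak_utilization)

OC192_BPS = 9953.28e6  # SONET OC-192 line rate
T1_BPS = 1.536e6       # T1 payload rate (24 x 64 kbps DS0s)

# An OC-192 peaking at 95% still has ~500 Mbps of headroom ...
print(excess_bandwidth_bps(OC192_BPS, 0.95) / 1e6)  # ~497.7 Mbps
# ... while a T1 peaking at only 50% has just 768 kbps.
print(excess_bandwidth_bps(T1_BPS, 0.50) / 1e3)     # 768.0 kbps
```

The half-empty T1 has roughly 650 times less absolute headroom than the nearly full OC-192, which is why the core rarely needs QoS machinery.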
pig3head 12/5/2012 | 1:49:39 AM
re: Siemens Sees Ethernet Everywhere Hi,
I had both cable modem service and DSL lines. I work from home and I have VoIP. Cable modem is no match for a DSL line in reliability and consistency of performance. This shows up especially when you use a VoIP service like Packet8.

My friend did online gaming via IDSL (ISDN DSL) at 128 Kbps. When he switched to cable modem, he could no longer play consistently, even though the bandwidth was supposed to be higher. Now he is switching back to DSL.


I think this is a special case.

If the bandwidth is sufficient, a cable modem can certainly run faster than ISDN.

Your case is probably due to poor uplink bandwidth: several end users share the same uplink, which runs at 20 Mbps in the CMTS.
pig3head 12/5/2012 | 1:49:39 AM
re: Siemens Sees Ethernet Everywhere IMHO the only way that IP traffic can get QoS is when the applications mark IP traffic with certain priorities (obviously via the TOS or Precedence bits). If IP packets are marked this way, IP routers can do the prioritization themselves. Nothing ATM can add.

For the case of
"PC (Eth NIC) <--> ADSL modem <-ATM cells-> DSLAM",

ATM only has QoS capability for the "link" that connects the modem to the DSLAM (or perhaps to the BRAS). ATM cannot identify any IP-based application, so it can do nothing more for the application.
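To illustrate the application-side marking described above: an application can set the TOS byte on its own socket, and routers along the path can then prioritize on it. A minimal sketch using the standard socket API; the EF code point is just an example value, and the option behaves this way on Linux.

```python
import socket

# DSCP "Expedited Forwarding" (46), shifted into the upper six
# bits of the TOS byte, is the conventional marking for VoIP.
EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF marking;
# any router doing TOS/DSCP-based queuing can prioritize it.
```

This is exactly the kind of per-application signal that an ATM VC between modem and DSLAM cannot see.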
OSXman 12/5/2012 | 1:49:37 AM
re: Siemens Sees Ethernet Everywhere Does anyone out there have a view on the potential market size for an EoS access device like the soon-to-be-announced Cisco ONS15310 box?
aswath 12/5/2012 | 1:49:37 AM
re: Siemens Sees Ethernet Everywhere Msg. #63
... ATM had two problems, one being its cost and the other being that it was never really designed to scale down to the DS-0 or even the DS-1 level and therefore was really only appropriate for the core and maybe the feeder segments.

I'd be interested in some input on that.

I would add that no effective LAN technology was developed (DQDB was a faint attempt), while Ethernet progressed with a star topology and wider bandwidth capacity.
Flower 12/5/2012 | 1:49:37 AM
re: Siemens Sees Ethernet Everywhere Thanks for your response Dreamer. But I still don't understand how ATM can give my IP traffic QoS.

1) Everyone is on a separate VCC/connection.

Mmmm, that's not going to give QoS. First of all, if all my traffic goes through one VC, then how can I get QoS for my VoIP when I am downloading a movie at the same time? Or when my kid plays a game? What your solution gives is fairness between subscribers. Not between applications, and not even between users (if multiple users are behind a NAT box). And fairness is not QoS. You need to start thinking end-to-end, where the endpoints are applications, not the entities where you send your phone bill.

But then you have to use an add-on box to provide traffic shaping/policing and so on.

So? We already have many of these boxes; they are called routers. (I'm not discussing whether this feature is implemented, or implemented in a usable way. I'm just stating that a router can do packet queuing/forwarding in the same way an ATM switch can.)

Now my second question is: from where to where does this per-subscriber VC run? From my ADSL modem at home to ... ? If I want to call person A on ISP network B, and my kid plays a game on server C on ISP network D, where does the VC end? If there is only one VC, it must end fairly close to the edge of my ISP's network. Where is the QoS inside my ISP's network? Where is the QoS at the peering points? Where is the QoS in the destination networks? I agree that the bigger problem is making reservations for my traffic, policing it, and billing. But even if those problems were solved, ATM is not going to give me end-to-end QoS. Only on a small part of the track, the access network. That might be a little helpful, but it isn't good enough for me. E.g. I have noticed that the biggest bottlenecks are usually peering points.

ATM's chopping of packets into cells allows better control of jitter.

Agreed. But as has been pointed out before, this is only helpful on slower links. Suppose I have a 1 Mbps DSL link: a 1500-byte packet can add at most 12 extra milliseconds of delay. As the access link is probably the slowest part of the end-to-end track, that isn't much. 4 Mbps reduces the added delay to at most 3 ms. We only need a little more speed in the last mile, and the added complexity of cellifying isn't needed anymore.
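The 12 ms and 3 ms figures follow directly from serialization time, since the worst case is one full-size packet ahead of yours on the link. A quick sketch:

```python
def max_added_delay_ms(packet_bytes, link_bps):
    """Worst-case delay one packet ahead of you adds: its serialization time."""
    return packet_bytes * 8 * 1000 / link_bps

# A 1500-byte packet on a 1 Mbps DSL link ...
print(max_added_delay_ms(1500, 1e6))  # 12.0 ms
# ... and on a 4 Mbps link.
print(max_added_delay_ms(1500, 4e6))  # 3.0 ms
```

At 10 Gbps the same packet serializes in 1.2 microseconds, which is why cell interleaving buys nothing in the core.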

BTW, I agree that DSL is much nicer for fairness than cable. But I have always thought that was because with DSL, I have my own copper wire to the DSLAM, and behind the DSLAM there is enough bandwidth. That's already a huge advantage for DSL over a shared medium like cable.

BTW, I'm no expert in layer2 technologies, but I recall having seen documents for new cable technologies, where the lowest layer of encapsulation is also ATM. When that happens, isn't cable going to get the same fairness as DSL has today ?

ATM switches, coming from the WAN legacy, are designed with huge buffers that allow them to handle network congestion without dropping IP packets -> less retransmission -> better throughput.

I know. Routers have the same large amounts of buffer space. I know this is the right thing to do. However, personally I think large buffers are good for throughput but bad for RTT. VoIP and gaming will be bothered more by constantly large RTTs than by an occasional packet drop. There is definitely a need for fair queuing in packet forwarding devices, whether they are switches or routers.

6) Most connections between ATM switches are done through SPVCs. ATM can reroute the connection while respecting the QoS guarantee, before the IP routing layer even knows about it. Fewer link-down events -> less retransmission -> better performance for the user.

Again, you are doing fairness between subscribers, not QoS between applications. If there were a good way to use the TOS/precedence bits in IP, then when an IP network reroutes, the QoS would be applied on the new route just as well.

About link down: IP IGPs can reroute within a network within a second. So there is no reason why link failures will cause more downtime in an IP network than in an ATM network.

Do they prefer cable modem or DSL line if they have a choice??

I prefer DSL. But not because of ATM. I prefer DSL because 1) it guarantees my own piece of copper to the DSLAM, and 2) DSL networks are built by telcos. And historically telcos have enough wire in the ground to build large backbones. Cable companies have to either rent long-distance bandwidth or build their own long-distance network. That's a severe handicap to start with. Therefore it seems ISPs that are owned by the old telcos will have faster backbones than other ISPs (like cable companies). At least that is my experience.

"The truth is out there"

Yep, dream on ..... :p
sgan201 12/5/2012 | 1:49:36 AM
re: Siemens Sees Ethernet Everywhere Hi,

1) Have you checked how big the buffer is in a router versus an ATM switch? Until you do, please do not say routers have the same amount of buffering as an ATM switch.

2) All you need is fairness. You can do prioritization via your cheap DSL router -> check this out www.sveasoft.com

3) You vote with your feet to DSL too..

4) An IGP can re-route within one second? Is this before or after it detects a link failure? How long does it take an IGP to detect a link failure?

Flower 12/5/2012 | 1:49:33 AM
re: Siemens Sees Ethernet Everywhere 1) Have you checked how big the buffer is in a router versus an ATM switch? Until you do, please do not say routers have the same amount of buffering as an ATM switch.

Indeed, I don't know how much buffer space an average ATM switch has nowadays. I do know that you only need about 200 ms worth of buffering for good throughput for file transfers. Anything over 200 ms is overkill. E.g. for a 10 Gbps linecard you need about 256 MB of buffering on the linecard. A box with 480 Gbps (non-Cisco math) needs 12 gigabytes worth of buffering. I know there are routers that have this amount of buffering.
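The sizing rule in this post is just a bandwidth-delay product: buffer one round-trip worth of traffic at line rate. A sketch, assuming the 200 ms rule of thumb quoted above:

```python
def buffer_bytes(link_bps, delay_s=0.2):
    """Buffer needed to hold `delay_s` worth of traffic at line rate."""
    return link_bps * delay_s / 8

print(buffer_bytes(10e9) / 1e6)   # 250.0 MB for a 10 Gbps linecard (the post rounds to 256 MB)
print(buffer_bytes(480e9) / 1e9)  # 12.0 GB for a 480 Gbps box
```

The exponential growth of this figure with line rate is why "huge buffers" stopped being a differentiator for ATM switches: any full-rate router linecard ends up needing them too.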

2) All you need is fairness. You can do prioritization via your cheap DSL router -> check this out www.sveasoft.com

Anything I can configure on my home router will only influence fairness for upstream packets. For downstream, the DSLAM must make the queuing/forwarding decision, and we have no control over that today.

3) You vote with your feet to DSL too..

Not fully true. I vote with my feet for the guy with the fastest backbone, and the guy who owns copper wires to each house. Something we (the public) gave that guy when our national telco got privatised many years ago.

If I could get Ethernet encapsulation over the copper wire to my house, I would prefer that. The ATM/AAL5 overhead is strongly felt when sending small packets over a low-speed upstream. (This is the problem for gaming: upstream packets get 150% overhead on top of the raw gaming data. 64 bytes of gaming data turn into 3 cells: 64 -> 159.)
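The 64 -> 159 figure can be reproduced from the ATM framing constants. A sketch assuming a plain IPv4/UDP payload with null AAL5 encapsulation; real ADSL stacks often add a few more bytes of LLC/SNAP or PPPoA headers, which only makes the overhead worse.

```python
import math

CELL_PAYLOAD = 48    # usable bytes per ATM cell
CELL_SIZE = 53       # bytes on the wire per cell (5-byte header)
AAL5_TRAILER = 8     # AAL5 trailer appended to every PDU
IP_UDP_HEADERS = 28  # 20-byte IPv4 header + 8-byte UDP header

def wire_bytes(app_bytes):
    """Bytes sent on the wire for one small application datagram over AAL5."""
    pdu = app_bytes + IP_UDP_HEADERS + AAL5_TRAILER
    cells = math.ceil(pdu / CELL_PAYLOAD)  # PDU is padded up to whole cells
    return cells * CELL_SIZE

print(wire_bytes(64))  # 159: 64 bytes of game data become 3 cells
```

(159 - 64) / 64 is roughly 148%, the "150% overhead" quoted in the post; on a 128 kbps upstream that padding is anything but free.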

4) An IGP can re-route within one second? Is this before or after it detects a link failure? How long does it take an IGP to detect a link failure?

After link failure. A link failure can be detected both by layer 1 and by the IGP. Of course layer 1 is usually faster. This doesn't have much to do with ATM vs. IP link failure detection; I guess ATM depends on layer-1 detection just as well.