
Force 10 Aims for the Data Center

Force10 Networks Inc. today unveiled the S-50, its first fixed-configuration data center switch, as it attempts to up the ante in its battle with Hewlett-Packard Co. (NYSE: HPQ).

Until now, Force10’s main product offering was the E-Series of chassis-based switches. The S-50 represents a move into a new part of the market: the server edge of the data center. The 1-rack-unit box comes with 48 ports of Gigabit Ethernet (also capable of 10- and 100-Mbit/s operation) and two 10-Gbit/s Ethernet ports.

The Milpitas, Calif., company has already signed up around half a dozen beta customers for the S-50, says Andrew Feldman, vice president of marketing.

This is all about cost. The high price per port of traditional chassis-based switches means users typically connect them to large server clusters. Force10, then, is going after those users that can’t afford to hitch the likes of Web servers and software infrastructure servers to a large chassis-based box. Users can connect the servers to the S-50 and use its 10-Gbit/s uplinks to feed E-Series boxes.

It's a shrewd move, says Steven Schuchart, senior analyst for enterprise infrastructure at Current Analysis. “The S-50 is a response to needs from customers that like their Force10 gear and are saying that they want an aggregation switch," he says. “For Force10, it’s a good evolutionary step. Their smallest switch to date is the E-300, which is too big for a lot of applications."

Indeed, the S-50 could be good news for companies wanting 10-Gbit/s links at less than E-series prices. The 10-Gbit/s Ethernet list price per port on the E-Series is around $12,000, compared to $3,250 on the S-50.

But the S-50 will have to contend with HP's ProCurve 3400cl, which was launched last year (see HP Launches Gigabit Switch Series). Like the S-50, it is a 1-rack-unit box that supports up to 48 10/100/1000 ports and two 10-Gbit/s Ethernet ports.

Naturally, Force10 says it has a performance edge over its rival. Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second; the equivalent figure for HP's 3400cl is 136 gigabits per second.

However, Darla Sommerville, HP's vice president and general manager for the Americas, points out that the 3400cl is available in both 24-port and 48-port configurations. This, she says, offers users a greater degree of flexibility. "If you are putting out Gigabit to the desktop you may not want to pay for additional ports," she adds.

Force10's core technology has already led to some big wins in areas including high-performance computing, where HP is a major player (see Argonne Picks Force10 and Force10 Scores Record Q1 Sales).

HP’s switch technology is seen as a key weapon in its assault on the networking market. The company is using the switches to entice users over to its Adaptive Enterprise strategy (see HP Tightens Up ProCurve Story and HP ProCurve Expands Security).

But as HP adds to its arsenal, so Force10 needs to expand its footprint, according to Schuchart. “Force10 needs to decide on the next market to tackle. If I was to speculate, I would say that they would probably go after the enterprise next," he says. "But if they do that, then they have to broaden the platform.”

This is a distinct possibility. Feldman told NDCF that Force10 is likely to expand its S-Series family at the high end.

— James Rogers, Site Editor, Next-Gen Data Center Forum

chook0 12/5/2012 | 3:20:21 AM
re: Force 10 Aims for the Data Center I forgot one other reason why you might want a faster fabric than the total port throughput: You don't want the fabric to be the bottleneck for reasons of QOS.

Take as an example a switch where you have 8 levels of priority on the output queues, but only 2 priority levels in the fabric. (Pretty common. Nortel's PP8600, at least in its previous incarnation, was like this.)

Further assume that the input pattern is such that one of the output ports is heavily congested but the input ports are all uncongested (e.g. two input ports trying to send traffic to a single output port; not hard to do in an aggregation scenario).

If the fabric is sped up (and presumably the ports on the fabric) then that traffic load is all transferred to the output queues and dropping is performed under an 8-queue regime.

If the fabric is not sped up, then either you have the fabric dropping packets under a 2-priority regime or (more likely) you get backpressure into the input queues which will affect traffic to uncongested ports as well (HOL blocking).
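(To make that concrete, here is a toy sketch in Python: two ingress ports each offer 1 Gbit/s, all of it destined for a single 1 Gbit/s egress port, spread evenly across the priority classes. The strict-priority drop rule and the even spread are my assumptions for the sketch, not anything from a datasheet.)

def drops_by_class(offered, capacity, num_classes):
    # Strict priority: serve the highest classes first, drop whatever is left.
    per_class = offered / num_classes
    drops = {}
    remaining = capacity
    for c in range(num_classes - 1, -1, -1):  # highest class first
        served = min(per_class, remaining)
        drops[c] = per_class - served
        remaining -= served
    return drops

offered = 2.0      # Gbit/s arriving for the congested egress port
egress_rate = 1.0  # Gbit/s the port can actually transmit

# Fabric sped up: all the traffic reaches the egress queues, so the drop
# decision is made across all 8 classes (only the bottom 4 lose anything).
print("8-class egress drops:", drops_by_class(offered, egress_rate, 8))

# Fabric not sped up: the fabric itself has to discard, but it only knows
# 2 coarse priority levels, so the decision is much blunter.
print("2-class fabric drops:", drops_by_class(offered, egress_rate, 2))

And that still ignores the backpressure/HOL side of it, which is the other half of the argument above.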

Scrub the previous comment about wanting the vendors to tell us about their fabric speed. I want to know the architecture as well. It matters. I'll forgive them for telling me where they have over-engineered as long as they tell me where they have under-engineered as well.

--Chook


chook0 12/5/2012 | 3:20:22 AM
re: Force 10 Aims for the Data Center Lightheaded,

I think we definitely agree on 2 things.

1. The most important stat is total non-blocking performance. This can be less than the sum of the throughputs of the ports, but it obviously can't be more.

2. Quoting any number larger than the sum of the port speeds, and implying that it gives more switch throughput than the ports can deliver, is reprehensible. That's what's happening if a vendor tries to tell you a switch with 100G of non-blocking throughput is somehow better because it has 800G of switching capacity instead of 200G.

But I have been in the biz of buying switches for a long time as well, and I am not of the opinion that fabric capacity and architecture are meaningless numbers. There are cases where you definitely want to know that as well. There are lots of cases where the fabric capacity has limited the throughput of the switch.

examples:

1. Switching capacity is enough for the service ports but not enough for a whole stack to get non-blocking throughput because the stack links are a bottleneck. All the Cisco "stackables" used to suffer from this. Dunno if they still do or not.

2. The fabric does not have the capacity to give non-blocking throughput for all the service ports. This has not been a problem recently on aggregation switches because of the rise of cheap and high-performance commercial silicon, but go back 4 years, and the fabric capacity was not sufficient to have all ports going at the same time on almost all aggregation switches. The fabric capacity was a vital thing to know. In some applications having a lower fabric capacity was not an issue because traffic was coming in the front and out the uplink port so the uplink was the bottleneck. But you had to know what you were dealing with. (Nortel's stackables were a good example of this. Fine for the closet, but don't try to use them as a data centre core.)

And a switch with 48 ports of FE and 2 ports of GE that had a 6.8G fabric was not non-blocking under all traffic conditions. You need about 14G for that. (unless it was quoted as 6.8G of FULL DUPLEX capacity in which case it probably was.)

As a buyer, I consider the fabric capacity an important number. Maybe I wouldn't give extra credit to someone who had 3 times as much as was required, but I'd definitely want to know it, and I want the vendor to quote the number. Let me decide if they are playing marketing games or not. Stop assuming all the customers out there are dumb and should have this spec hidden from them.

3. Sometimes the switch architecture means you need more fabric capacity than would be suggested by the port throughput totals. Crossbar is a good example of this. You need a 2x speedup, at least on the output ports, to get close to non-blocking. Remember the Stratacom/Cisco ATM switches? Aggregate port capacity was 9.6G on the biggest one, but they had 19.2G of switch capacity. They actually needed that speedup to be non-blocking. It wasn't a meaningless statistic.

A good example of a blocking crossbar on the market today is the OSR7600.
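(For what it's worth, the arithmetic behind those two examples, worked through in Python. The figures are the ones quoted in the post, not taken from any datasheet.)

fe_ports, ge_ports = 48, 2
aggregate = fe_ports * 0.1 + ge_ports * 1.0   # 6.8 Gbit/s in one direction
full_duplex = 2 * aggregate                   # ~13.6 Gbit/s counting both directions
print(f"48xFE + 2xGE: {aggregate:.1f}G one way, ~{full_duplex:.1f}G full duplex")
# So a fabric quoted as "6.8G" half-duplex cannot be non-blocking; you need
# roughly 14G, as the post says.

# The Stratacom/Cisco ATM example: crossbar speedup relative to the port total.
port_total, fabric_capacity = 9.6, 19.2
print(f"crossbar speedup: {fabric_capacity / port_total:.1f}x")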

Have a nice day,

--Chook

----------------------
WHAAAAAAAT??? Once again, we do understand and we are all telling you that ACTUAL SWITCH THROUGHPUT (Goodput) is ALL THAT MATTERS! Fabric throughput does not matter... I don't care if they use crossbar, output buffered, output buggered, shared memory, don't-share-well-with-others memory, "lasers" (cue Dr. Evil), smoke and mirrors or magic to make the packets switch and route.

I think that we are arguing the same thing. I am essentially saying that in the context of this announcement (marketing and sales), fabric capacity has no meaning. If you are engineering and designing then it does (but that is out of the context of this announcement; this is not an IEEE journal). I would argue that the fabric capacity is not even the most important design issue of the fabric, with price, features (QoS, HA) and reliability being very important.
ceoati 12/5/2012 | 3:20:45 AM
re: Force 10 Aims for the Data Center Force 10 is the same company whose brain-dead marketing VP said that they weren't laying off people, since they offered to let them keep the same jobs... in India.
wwatts 12/5/2012 | 3:20:46 AM
re: Force 10 Aims for the Data Center Mr Z.
-------------------
Firstly, I don't know all that much about switching architectures, so I'm looking to learn.

Aren't most "line rate" calculations done based on an assumption of a 1 to 1 unicast traffic between pairs of ports ? If a percentage of the traffic was multicast, IOW, 20% multicast (to all ports for simplicity) and 80% unicast, wouldn't the backplane capacity have to be greater than just enought to support "line rate" unicast to between pairs of all ports concurrently ? I realise that there would have to be buffering on output ports to hopefully minimise packet drops due to output bandwidth not being big enough; I'm curious if the buffer capacity (converted to bits per second) would be included in these backplane capacity figures.
----------------------

It depends on the architecture and how multicast is handled. If you have 20% multicast traffic and 50%+ load on ingress, you are going to have congestion in your switch, regardless of architecture. Say you have 8 ports @ 1Gbps, all with 50% load, with 20% of that load MC (or 10% of capacity): that means that at egress, 8 (ports) x 10% (ingress load of MC), or 80% of output capacity, is devoted to MC. In addition you have the 40% load of unicast traffic at ingress, which, assuming even distribution to output ports, gives you 120% of load at egress. You are going to have to drop something. What switching speedup does for you (in an architecture where multicast is handled by ingress retransmission, or if you have a common memory architecture) is push the discard decision to the egress queues (where you can generally make a better discard decision). Without switching speedup the congestion will be pushed to the ingress queues, causing traffic bound for uncongested output ports to be blocked by traffic bound for congested output ports. Usually this is the reason for internal switching speeds being greater for multicast, as multicast will cause congestion at outputs based purely on output load, regardless of switching architecture.
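(The same arithmetic in a few lines of Python, using exactly the loads quoted above; the even spread of unicast across output ports is the stated assumption.)

ports = 8
ingress_load = 0.5        # each port runs at 50% of line rate on ingress
mc_fraction = 0.2         # 20% of that load is multicast, flooded to all ports

mc_per_port = ingress_load * mc_fraction        # 0.10 of line rate
uc_per_port = ingress_load * (1 - mc_fraction)  # 0.40 of line rate

# Every egress port carries all 8 ports' multicast plus its share of unicast.
egress_load = ports * mc_per_port + uc_per_port
print(f"offered egress load = {egress_load:.0%} of line rate")   # 120%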

Mr. Z
----------------
In other words, is the backplane capacity a pure bit-per-second rate between the ports, or is it a combination of the pure bit-per-second rate plus the buffering capacity, also measured in bits per second? Are they measuring how many bits per second they can shove into it before they start dropping packets at all input ports, rather than "true throughput"?

I realise these questions are probably very much answered with "it depends"; I suppose I'm curious to find out if there is a common convention.
------------------

Backplane capacity is generally pure raw bit rate between all the ports. Buffering is not counted as capacity. There is more than one reason for backplane speedup. Segmentation of packets applies to most switch architectures. If you are breaking packets into 64-byte cells to push them through your switch fabric (most architectures do this, as it is very hard to arbitrate switching on a per-port basis when packets end at arbitrary times), what happens when you have 65-byte packets? You need a backplane/switch capacity speedup to handle this and switch 65-byte packets at line rate. Arbitration is not perfect, so you need backplane/switch speedup to handle that, etc.
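(A tiny Python illustration of why 65-byte packets are the classic worst case for a 64-byte-cell fabric. The cell size is just the one used in the example above; real fabrics also add per-cell headers, which makes the required speedup slightly bigger still.)

import math

cell = 64  # bytes per fabric cell
for pkt in (64, 65, 128, 1500):
    cells = math.ceil(pkt / cell)
    on_fabric = cells * cell
    print(f"{pkt:>5}-byte packet -> {cells} cells, {on_fabric} bytes on the fabric, "
          f"speedup needed ~{on_fabric / pkt:.2f}x")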

Unfortunately there is no common convention, and determining a switch or switch fabric's true capacity requires reading the fine print. Double counting (counting ingress and egress traffic) is common but not universal. Some will use backplane capacity, even though it really isn't relevant to the end user.

Mr. Z
----------------------

Some other guy:
-------------------------
Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.
--------------------------


Well, I'd think the room for expansion would be there for future products, not for the customers of this product. Obviously designing in extra capacity is wise if the product line is going to grow.

That being said, it is a marketing trick to announce it as a product feature. It isn't a new trick though: how many people have driven their car to the maximum speed shown on the speedo, or even know if their car is capable of it? I'd think a lot of people who buy performance sports cars compare the maximum speeds shown on the speedos, even if they'll never take them to those speeds.
-----------------

You are correct; some other guy is way off base. Many 1U fixed-configuration switches have stacking ports which allow expansion. For a 24-port GbE switch, you might see 1 or 2 10-12Gbps stacking ports that allow you to connect multiple switches. If the switches in question support stacking, they would need more switch capacity to support it (real line-rate switch capacity, not internal switch speedup capacity).


light-headed 12/5/2012 | 3:20:47 AM
re: Force 10 Aims for the Data Center Lightheaded,

OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput. That would be your fault for not understanding the difference between fabric throughput and switch throughput.
---------------------------------------------------

WHAAAAAAAT??? Once again, we do understand and we are all telling you that ACTUAL SWITCH THROUGHPUT (Goodput) is ALL THAT MATTERS! Fabric throughput does not matter... I don't care if they use crossbar, output buffered, output buggered, shared memory, don't-share-well-with-others memory, "lasers" (cue Dr. Evil), smoke and mirrors or magic to make the packets switch and route.

I think that we are arguing the same thing. I am essentially saying that in the context of this announcement (marketing and sales), fabric capacity has no meaning. If you are engineering and designing then it does (but that is out of the context of this announcement; this is not an IEEE journal). I would argue that the fabric capacity is not even the most important design issue of the fabric, with price, features (QoS, HA) and reliability being very important.
light-headed 12/5/2012 | 3:20:48 AM
re: Force 10 Aims for the Data Center Chook,

I understand it. I have been building switches and routers for the last 6 years and working with them for the last 12 years. Switches and routers I have worked on have sold over $2 billion USD in revenues worldwide. I just do not condone the practice of advertising capacity that does not matter. NO one SHOULD care what the fabric capacity is... EVERYONE should care what the non-blocking performance is.

Don't try and muddy the waters with markitecture. If I can build a switch that has 100 non-blocking, wire-rate gig ports using a 100 gig fabric, then good for me. If someone else has to use 800 gigs of fabric to build the same switch, then good for them. In the end no one cares about that. All we care about is price, forwarding performance and features. Having more unusable capacity is NOT a feature, and it is not relevant unless that excess capacity can somehow be turned into extra usable ports or is needed for some HA scheme (such as redundant switching elements).
Phiber_Phreak 12/5/2012 | 3:20:48 AM
re: Force 10 Aims for the Data Center "OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput."

That is correct.

"That would be your fault for not understanding the difference between fabric throughput and switch throughput."

That is NOT correct.

It is not relevant if user understands switch internals. User only gets the capacity available from ports, not fabric.

User buys switch, not fabric.

When vendor claims that switch has 2X or 3X capacity of what ports deliver, it is nonsense. Vendors know it is nonsense. LR should call them on it.
chook0 12/5/2012 | 3:20:53 AM
re: Force 10 Aims for the Data Center Lightheaded,

OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput. That would be your fault for not understanding the difference between fabric throughput and switch throughput.

--Chook
James Rogers 12/5/2012 | 3:21:02 AM
re: Force 10 Aims for the Data Center I'm the author of the article, you can give me a call on the number below if you want to discuss it.

212 925 0020 x104

James Rogers
Phiber_Phreak 12/5/2012 | 3:21:05 AM
re: Force 10 Aims for the Data Center "Who cares if you have a Fabric with 100x, 10x, 2x or 1x the bandwidth required. "

Yes, exactly. Customer does not care if they have 1 billion petabits of capacity on the fabric. Customer only gets to use the bandwidth available from the ports.

That number is less than 68 gigabits per second -- not any of the nonsense numbers quoted in the LR article.

Also it is the same 67.8 gigabits regardless of unicast or multicast. Port speed is fixed. Switch configuration is fixed. So any number greater than 67.8 gigabits is nonsense; customer cannot get that.
light-headed 12/5/2012 | 3:21:06 AM
re: Force 10 Aims for the Data Center Chook,

Who cares if you have a fabric with 100x, 10x, 2x or 1x the bandwidth required. It is an architectural requirement to transmit the usable BW (i.e. the data I can actually send end to end). As you should well know, everyone agrees you cannot count "architectural or theoretical" BW, only usable port-to-port unicast BW! This game is only played in poor marketing to confuse customers or look more impressive than you really are, just like the double counting, west-coast math, also known as cisco-math.
chook0 12/5/2012 | 3:21:07 AM
re: Force 10 Aims for the Data Center Actually, depending on the architecture, the claims are not necessarily bullshit.

1. For 1Gbps of full-duplex switch throughput, you actually need 2Gbps of switching capacity. This is because a switch fabric is typically a half-duplex device. Count 'em: you have 2x input ports to the fabric, each carrying 1Gbps. That's 2 Gig of switching to carry it.

2. Depending on fabric architecture, you may need even more switching capacity than that to avoid blocking. For example, many crossbar fabrics have a 2x speedup to avoid HOL blocking. If you don't have any speedup, at least on the output ports, then a crossbar fabric will achieve only about 67% of line-rate throughput under uniform random load.
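(If you want to see where a figure in that ballpark comes from, here is a toy Monte Carlo model in Python of a saturated input-queued crossbar with FIFO head-of-line queues and no speedup. The port count, slot count and uniform-random destinations are my assumptions; the exact number you get depends on them.)

import random

def hol_throughput(n_ports=16, slots=20000, seed=1):
    # Under saturation every input always has a head-of-line (HOL) packet
    # waiting, with a uniformly random destination port.
    random.seed(seed)
    hol = [random.randrange(n_ports) for _ in range(n_ports)]
    delivered = 0
    for _ in range(slots):
        # Each output grants exactly one of the inputs currently asking for it;
        # the losers stay blocked behind their HOL packet.
        requests = {}
        for i, dest in enumerate(hol):
            requests.setdefault(dest, []).append(i)
        for dest, inputs in requests.items():
            winner = random.choice(inputs)
            delivered += 1
            hol[winner] = random.randrange(n_ports)  # next packet in that queue
    return delivered / (slots * n_ports)

print(f"saturated throughput ~ {hol_throughput():.0%} of line rate")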

Now I doubt very much that this switch uses a crossbar fabric. Probably shared memory. So anything in excess of 2x (sum of line rates) is probably superfluous. No doubt the switch fabric can handle it, but there are ports not connected (for example stacking ports).

--Chook

-----------------------------
"The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now."

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.

This is marketing nonsense. A user cannot get at capacity today or ever. In fact the capacity probably does not exist in the first place.

Next time please use calculator before repeating nonsense vendor claims.
phar-sighted 12/5/2012 | 3:21:08 AM
re: Force 10 Aims for the Data Center Looking at the S50 datasheet on their site, they have 48 GE ports + 2 10GE ports and up to 2x10GE stacking ports, i.e. 48 + 20 + 20 = 88 Gbps, which doubled gives 176 Gbps of total switching capacity. Not sure how they get to 192 Gbps even with double counting. As previously noted, this is a fixed-config box, so there are no upgrades possible. Maybe someone from LR can find out how the math works.
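(That back-of-the-envelope math in Python, using the port counts quoted above; the 192 Gbit/s figure is the vendor's claim from the article.)

ge, ten_ge, stacking_10ge = 48, 2, 2
one_direction = ge * 1 + ten_ge * 10 + stacking_10ge * 10   # 88 Gbit/s
double_counted = 2 * one_direction                          # 176 Gbit/s
print(one_direction, double_counted, "vs. the claimed 192")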

One thing I saw on the Force10 site puzzles me, and maybe someone can throw some light on it. They mention that you can build line-rate clusters from 48 to 384 ports using the S50s. How does one do that?

= PS =
mr zippy 12/5/2012 | 3:21:10 AM
re: Force 10 Aims for the Data Center Firstly, I don't know all that much about switching architectures, so I'm looking to learn.

Aren't most "line rate" calculations done based on an assumption of a 1 to 1 unicast traffic between pairs of ports ? If a percentage of the traffic was multicast, IOW, 20% multicast (to all ports for simplicity) and 80% unicast, wouldn't the backplane capacity have to be greater than just enought to support "line rate" unicast to between pairs of all ports concurrently ? I realise that there would have to be buffering on output ports to hopefully minimise packet drops due to output bandwidth not being big enough; I'm curious if the buffer capacity (converted to bits per second) would be included in these backplane capacity figures.

In other words, is the backplane capacity a pure bit-per-second rate between the ports, or is it a combination of the pure bit-per-second rate plus the buffering capacity, also measured in bits per second? Are they measuring how many bits per second they can shove into it before they start dropping packets at all input ports, rather than "true throughput"?

I realise these questions are probably very much answered with "it depends"; I suppose I'm curious to find out if there is a common convention.

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.


Well, I'd think the room for expansion would be there for future products, not for the customers of this product. Obviously designing in extra capacity is wise if the product line is going to grow.

That being said, it is a marketing trick to announce it as a product feature. It isn't a new trick though: how many people have driven their car to the maximum speed shown on the speedo, or even know if their car is capable of it? I'd think a lot of people who buy performance sports cars compare the maximum speeds shown on the speedos, even if they'll never take them to those speeds.

light-headed 12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center PP is right. The only measurement that matters is how many ports can forward at wire speed, full mesh, per RFC 2544. Anything else is marketing bullshit and has no relevance to anything. 3Com used to list all of the traces on the backplane of the CoreBuilder and their theoretical capability, then double it. They claimed something like 960 gigs when they could only support 32 gig ports or something like that.

Craig and LR should know better...

Phiber_Phreak 12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center "The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now."

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.

This is marketing nonsense. A user cannot get at capacity today or ever. In fact the capacity probably does not exist in the first place.

Next time please use calculator before repeating nonsense vendor claims.
Pete Baldwin 12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center "Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second. Throughput on HP's 3400cl is 136 Gigabits per second."

In fact, neither number is possible on box with 48 x1G and 2x10G interfaces.


The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now.
Phiber_Phreak 12/5/2012 | 3:21:14 AM
re: Force 10 Aims for the Data Center "Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second. Throughput on HP's 3400cl is 136 Gigabits per second."

In fact, neither number is possible on box with 48 x1G and 2x10G interfaces.

With 9000-byte jumbo frames, theoretical maximum is 67.8 gigabits per second. Numbers claimed here are nonsense.
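(A rough check of that 67.8 figure in Python. The preamble, SFD and inter-frame-gap overheads are standard Ethernet framing assumptions on my part, not something spelled out in the post.)

port_rate = 48 * 1 + 2 * 10             # 68 Gbit/s of raw line rate
frame = 9000                            # jumbo frame size, bytes
overhead = 7 + 1 + 12                   # preamble + SFD + inter-frame gap, bytes
print(f"~{port_rate * frame / (frame + overhead):.1f} Gbit/s usable")   # ~67.8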

Cisco double counting was bad enough -- is LR now believing in triple counting?

Does anybody actually check these claims?

PP
truthteller99 12/5/2012 | 3:21:16 AM
re: Force 10 Aims for the Data Center F10 does not use ODMs. All their boxes and ASICs (for that matter) were designed in house.


Not on this box. This is ODM.
chipsales 12/5/2012 | 3:21:17 AM
re: Force 10 Aims for the Data Center F10 does not use ODMs. All their boxes and ASICs (for that matter) were designed in house.

Chipsales.
icenine 12/5/2012 | 3:21:27 AM
re: Force 10 Aims for the Data Center Who is the ODM which popped this switch out? I don't think it was designed in Milpitas...