
Force 10 Aims for the Data Center

Force10 Networks Inc. today unveiled the S-50, its first fixed-configuration data center switch, as it attempts to up the ante in its battle with Hewlett-Packard Co. (NYSE: HPQ).

Until now, Force10’s main product offering was the E-Series of chassis-based switches. The S-50 represents a move into a new part of the market: the server edge of the data center. The 1-rack-unit box comes with 48 ports of Gigabit Ethernet (also capable of 10- and 100-Mbit/s operation) and two 10-Gbit/s Ethernet ports.

The Milpitas, Calif., company already has around half a dozen beta customers for the S-50, says Andrew Feldman, vice president of marketing.

This is all about cost. The high price per port of traditional chassis-based switches means users typically connect them to large server clusters. Force10, then, is going after those users that can’t afford to hitch the likes of Web servers and software infrastructure servers to a large chassis-based box. Users can connect the servers to the S-50 and use its 10-Gbit/s uplinks to feed E-Series boxes.

It's a shrewd move, says Steven Schuchart, senior analyst for enterprise infrastructure at Current Analysis. “The S-50 is a response to needs from customers that like their Force10 gear and are saying that they want an aggregation switch," he says. “For Force10, it’s a good evolutionary step. Their smallest switch to date is the E-300, which is too big for a lot of applications."

Indeed, the S-50 could be good news for companies wanting 10-Gbit/s links at less than E-Series prices. The 10-Gbit/s Ethernet list price per port on the E-Series is around $12,000, compared to $3,250 on the S-50.

But the S-50 will have to contend with HP’s ProCurve 3400cl, which was launched last year (see HP Launches Gigabit Switch Series). Like the S-50, the ProCurve 3400cl is a 1-rack-unit box that supports up to 48 10/100/1000 ports and two 10-Gbit/s Ethernet ports.

Naturally, Force10 says it has a performance edge over its rival. Feldman told NDCF that the S-50 offers a switching capacity of 192 Gbit/s; HP quotes throughput on the 3400cl at 136 Gbit/s.
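For context, here is a back-of-the-envelope check, in Python, of what the S-50’s front panel can actually carry; it is a rough sketch, and the full-duplex counting convention is inferred rather than confirmed by either vendor.

GBE_PORTS = 48        # 10/100/1000 ports, counted at 1 Gbit/s each
TEN_GBE_PORTS = 2     # 10-Gbit/s Ethernet uplinks

one_way = GBE_PORTS * 1 + TEN_GBE_PORTS * 10   # 68 Gbit/s of front-panel capacity
full_duplex = 2 * one_way                      # 136 Gbit/s if ingress and egress are both counted

print(f"Aggregate port capacity (one way): {one_way} Gbit/s")
print(f"Counted full duplex: {full_duplex} Gbit/s")
# HP's 136 Gbit/s figure matches the double-counted port total exactly;
# Force10's 192 Gbit/s exceeds it, i.e. it includes fabric headroom beyond
# what the ports themselves can deliver.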

However, Darla Sommerville, HP's vice president and general manager for the Americas, highlighted the fact that the 3400cl is available in both 24-port and 48-port configurations. This, she says, offers users a great degree of flexibility. "If you are putting out Gigabit to the desktop you may not want to pay for additional ports," she adds.

Force10’s core technology has already led to some big wins in areas including high-performance computing, where HP is a major player (see Argonne Picks Force10 and Force10 Scores Record Q1 Sales).

HP’s switch technology is seen as a key weapon in its assault on the networking market. The company is using the switches to entice users over to its Adaptive Enterprise strategy (see HP Tightens Up ProCurve Story and HP ProCurve Expands Security).

But as HP adds to its arsenal, so Force10 needs to expand its footprint, according to Schuchart. “Force10 needs to decide on the next market to tackle. If I was to speculate, I would say that they would probably go after the enterprise next," he says. "But if they do that, then they have to broaden the platform.”

This is a distinct possibility. Feldman told NDCF that Force10 is likely to expand its S-Series family at the high end.

— James Rogers, Site Editor, Next-Gen Data Center Forum

light-headed 12/5/2012 | 3:21:06 AM
re: Force 10 Aims for the Data Center Chook,

Who cares if you have a fabric with 100x, 10x, 2x or 1x the bandwidth required. The architectural requirement is to transmit the usable BW (i.e. the data I can actually send end to end). As you should well know, everyone agrees you cannot count "architectural or theoretical" BW, only usable port-to-port unicast BW! This game is only played in poor marketing to confuse customers or look more impressive than you really are, just like the double counting, west-coast math, also known as cisco-math.
Phiber_Phreak 12/5/2012 | 3:21:05 AM
re: Force 10 Aims for the Data Center "Who cares if you have a Fabric with 100x, 10x, 2x or 1x the bandwidth required. "

Yes, exactly. Customer does not care if they have 1 billion petabits of capacity on the fabric. Customer only gets to use the bandwidth available from the ports.

That number is less than 68 gigabits per second -- not any of the nonsense numbers quoted in the LR article.

Also it is the same 67.8 gigabits regardless of unicast or multicast. Port speed is fixed. Switch configuration is fixed. So any number greater than 67.8 gigabits is nonsense; customer cannot get that.
James Rogers 12/5/2012 | 3:21:02 AM
re: Force 10 Aims for the Data Center I'm the author of the article; you can give me a call on the number below if you want to discuss it.

212 925 0020 x104

James Rogers
chook0 12/5/2012 | 3:20:53 AM
re: Force 10 Aims for the Data Center Lightheaded,

OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput. That would be your fault for not understanding the difference between fabric throughput and switch throughput.

--Chook
light-headed 12/5/2012 | 3:20:48 AM
re: Force 10 Aims for the Data Center Chook,

I understand it. I have been building switches and routers for the last 6 years and working with them for the last 12 years. Switches and Routers I have worked on have sold over $2 Billion USD in revenues worldwide. I just do not condone the practice of advertising capacity that does not matter. NO one SHOULD care what the fabric capacity is... EVERYONE should care what the non-blocking performance is.

Don't try and muddy the waters with markitecture. If I can build a switch that has 100 non-blocking, wire-rate gig ports using a 100 gig fabric then good for me. If someone else has to use 800 gigs of fabric to build the same switch then good for them. In the end no one cares about that. All we care about is price, forwarding performance and features. Having more unusable capacity is NOT a feature and it is not relevant unless that excess capacity can somehow be turned into extra usable ports or is needed for some HA scheme (such as redundant Switching Elements).
Phiber_Phreak 12/5/2012 | 3:20:48 AM
re: Force 10 Aims for the Data Center "OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput."

That is correct.

"That would be your fault for not understanding the difference between fabric throughput and switch throughput."

That is NOT correct.

It is not relevant if user understands switch internals. User only gets the capacity available from ports, not fabric.

User buys switch, not fabric.

When vendor claims that switch has 2X or 3X capacity of what ports deliver, it is nonsense. Vendors know it is nonsense. LR should call them on it.
light-headed 12/5/2012 | 3:20:47 AM
re: Force 10 Aims for the Data Center Lightheaded,

OK. You go ahead and buy a switch with 48 Gig Ports and a 48gig fabric.

But don't go blaming anyone but yourself when you find the switch as a unit only gets 24G of throughput. That would be your fault for not understanding the difference between fabric throughput and switch throughput.
---------------------------------------------------

WHAAAAAAAT??? Once again, we do understand and we are all telling you that ACTUAL SWITCH THROUGHPUT (Goodput) is ALL THAT MATTERS! Fabric throughput does not matter... i don't care if they use crossbar, output buffered, output buggered, shared memory, don't share well with others memory, "lasers" (cue Dr. Evil), smoke and mirrors or magic to make the packets switch and route.

I think that we are arguing the same thing. I am essentially saying that in the context of this announcement (marketing and sales), fabric capacity has no meaning. If you are engineering and designing then it does (but that is out of context of this announcement - this is not IEEE journal). I would argue that the fabric capacity is not even the most important design issue of the fabric with price, features (QoS, HA) and reliability being very important.
wwatts 12/5/2012 | 3:20:46 AM
re: Force 10 Aims for the Data Center Mr Z.
-------------------
Firstly, I don't know all that much about switching architectures, so I'm looking to learn.

Aren't most "line rate" calculations done based on an assumption of 1-to-1 unicast traffic between pairs of ports? If a percentage of the traffic was multicast, IOW, 20% multicast (to all ports for simplicity) and 80% unicast, wouldn't the backplane capacity have to be greater than just enough to support "line rate" unicast between pairs of all ports concurrently? I realise that there would have to be buffering on output ports to hopefully minimise packet drops due to output bandwidth not being big enough; I'm curious if the buffer capacity (converted to bits per second) would be included in these backplane capacity figures.
----------------------

It depends on the architecture and how multicast is handled. If you have 20% multicast traffic and 50%+ load on ingress you are going to have congestion in your switch, regardless of architecture. Say you have 8 ports @ 1Gbps, all with 50% load, with 20% of the load MC (or 10% of capacity); that means that at egress, 8 (ports) x 10% (ingress load of MC), or 80% of output capacity, is devoted to MC. In addition you have the 40% load of unicast traffic at ingress, which, assuming even distribution to output ports, gives you 120% of load at egress. You are going to have to drop something. What switching speed-up does for you (in an architecture where multicast is handled by ingress retransmission, if you have a common memory architecture) is push the discard decision to the egress queues (where you can generally make a better discard decision). Without switching speed-up the congestion will be pushed to the ingress queues, causing traffic bound for uncongested output ports to be blocked by traffic bound for congested output ports. Usually this is the reason for internal switching speeds being greater for multicast, as multicast will cause congestion at outputs based purely on output load regardless of switching architecture.
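Spelling that example out as a quick sketch (the even spread of unicast across output ports is the example's own assumption):

ports = 8
port_speed = 1.0          # Gbit/s per port
ingress_load = 0.5        # each port offers 50% of line rate
mc_fraction = 0.2         # 20% of the offered load is multicast

mc_per_ingress = ingress_load * mc_fraction        # 0.1 Gbit/s of multicast per ingress port
uc_per_ingress = ingress_load * (1 - mc_fraction)  # 0.4 Gbit/s of unicast per ingress port

# Multicast is replicated to every output, so each egress port carries the
# multicast from all ingress ports; unicast is assumed evenly spread.
egress_mc = ports * mc_per_ingress      # 0.8 Gbit/s -> 80% of the output port
egress_uc = uc_per_ingress              # 0.4 Gbit/s -> 40% of the output port
egress_total = egress_mc + egress_uc    # 1.2 Gbit/s -> 120%: something gets dropped

print(f"Load per egress port: {egress_total:.1f} Gbit/s "
      f"({egress_total / port_speed:.0%} of line rate)")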

Mr. Z
----------------
In other words, is the backplane capacity a pure bit-per-second rate between the ports, or is it a combination of pure bit-per-second rate plus the buffering capacity, also measured in bits per second? Are they measuring how many bits per second they can shove into it before they start dropping packets at all input ports, rather than "true throughput"?

I realise these questions are probably very much answered with "it depends"; I suppose I'm curious to find out if there is a common convention.
------------------

Backplane capacity is generally pure raw bit rate between all the ports. Buffering is not counted as capacity. There is more than one reason for backplane speed-up. Segmentation of packets applies to most switch architectures. If you are breaking packets into 64-byte chunks to push them through your switch fabric (most architectures do this, as it is very hard to arbitrate switching on a per-port basis at arbitrary packet boundaries), what happens when you have 65-byte packets? You need a backplane/switch capacity speed-up to handle this and switch 65-byte packets at line rate. Arbitration is not perfect so you need backplane/switch speed-up to handle that, etc.
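As a rough illustration of the segmentation point (a sketch assuming 64-byte cells and ignoring any per-cell header overhead):

import math

CELL = 64  # bytes carried per fabric cell (the example's assumption)

def fabric_bytes(packet_bytes: int) -> int:
    """Bytes moved across the fabric once the packet is cut into fixed cells."""
    return math.ceil(packet_bytes / CELL) * CELL

for pkt in (64, 65, 128, 129):
    needed = fabric_bytes(pkt) / pkt   # speed-up needed to keep this size at line rate
    print(f"{pkt:4d}-byte packet -> {fabric_bytes(pkt)} fabric bytes, "
          f"speed-up needed: {needed:.2f}x")
# A 65-byte packet occupies two cells, so the fabric must run nearly 2x
# faster than the ports to keep 65-byte packets at line rate.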

Unfortunately there is no common convention, and determining a switch or switch fabric's true capacity requires reading the fine print. Double counting (counting ingress and egress traffic) is common but not universal. Some will use backplane capacity, even though it really isn't relevant to the end user.

Mr. Z
----------------------

Some other guy:
-------------------------
Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.
--------------------------


Well, I'd think the room for expansion would be there for future products, not for the customers of this product. Obviously designing in extra capacity is wise if the product line is going to grow.

That being said, it is a marketing trick to announce it as a product feature. It isn't a new trick though - how many people have driven their car to the maximum speed shown on the speedo, or even know if their car is capable of it? I'd think a lot of people who buy performance sports cars compare the maximum speeds shown on the speedos, even if they'll never take them to those speeds.
-----------------

You are correct; some other guy is way off base. Many 1U fixed-configuration switches have stacking ports which allow expansion. For a 24-port GbE switch, you might see 1 or 2 10-12Gbps stacking ports that allow you to connect multiple switches. If the switches in question support stacking they would need more switch capacity to support it (real line-rate switch capacity, not internal switch speed-up capacity).
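A rough sketch of the capacity arithmetic for that stacking case (using the 24-port, two-stack-port example above; 12 Gbit/s per stack port is the top of the range mentioned):

gbe_ports = 24            # front-panel GbE ports
stack_ports = 2           # stacking ports
stack_speed = 12          # Gbit/s each

service_capacity = gbe_ports * 1                # 24 Gbit/s of front-panel traffic
stack_capacity = stack_ports * stack_speed      # 24 Gbit/s toward the rest of the stack

print(f"Usable switching capacity needed: {service_capacity + stack_capacity} Gbit/s")
# Size the fabric only for the 24 front-panel ports and the stack links
# become the bottleneck for traffic crossing between units.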


ceoati 12/5/2012 | 3:20:45 AM
re: Force 10 Aims for the Data Center Force 10 is the same company whose brain-dead marketing VP said that they weren't laying off people since they offered to let them keep the same jobs... in India.
chook0 12/5/2012 | 3:20:22 AM
re: Force 10 Aims for the Data Center Lightheaded,

I think we definitely agree on 2 things.

1. The most important stat is total non-blocking performance. This can be less than the sum of the throughputs of the ports, but it obviously can't be more.

2. Quoting any number larger than the sum of the port speeds, and implying that it gives more switch throughput than the sum of the ports, is reprehensible, as is a vendor trying to tell you that a switch with 100G of non-blocking throughput is somehow better because it has 800G of switching capacity instead of 200.

But I have been in the biz of buying switches for a long time as well, and I am not of the opinion that fabric capacity and architecture are meaningless numbers. There are cases where you definitely want to know that as well. There are lots of cases where the fabric capacity has limited the throughput of the switch.

examples:

1. Switching capacity is enough for the service ports but not enough for a whole stack to get non-blocking throughput because the stack links are a bottleneck. All the Cisco "stackables" used to suffer from this. Dunno if they still do or not.

2. The fabric does not have the capacity to give non-blocking throughput for all the service ports. This has not been a problem recently on aggregation switches because of the rise of cheap and high-performance commercial silicon, but go back 4 years, and the fabric capacity was not sufficient to have all ports going at the same time on almost all aggregation switches. The fabric capacity was a vital thing to know. In some applications having a lower fabric capacity was not an issue because traffic was coming in the front and out the uplink port so the uplink was the bottleneck. But you had to know what you were dealing with. (Nortel's stackables were a good example of this. Fine for the closet, but don't try to use them as a data centre core.)

And a switch with 48 ports of FE and 2 ports of GE that had a 6.8G fabric was not non-blocking under all traffic conditions. You need about 14G for that. (unless it was quoted as 6.8G of FULL DUPLEX capacity in which case it probably was.)
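Checking that arithmetic (a quick sketch):

fe_ports, ge_ports = 48, 2
one_way = fe_ports * 0.1 + ge_ports * 1.0   # 6.8 Gbit/s, counting each port once
full_duplex = 2 * one_way                    # ~13.6 Gbit/s to be non-blocking in both directions
print(f"One-way total: {one_way:.1f}G; non-blocking (full-duplex) need: {full_duplex:.1f}G")
# So a "6.8G fabric" is only non-blocking if that figure was already quoted
# as full-duplex capacity -- exactly the caveat above.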

As a buyer, the fabric capacity is an important number. Maybe I wouldn't give extra credit for someone who had 3 times as much as was required, but I'd definitely want to know it and I want the vendor to quote the number. Let me decide if they are playing marketing games or not. Stop assuming all the customers out there are dumb and should have this spec hidden from them.

3. Sometimes the switch architecture means you need more fabric capacity than would be suggested by the port throughput totals. Crossbar is a good example of this. You need a 2x speedup at least on the output ports to get close to non-blocking. Remember the Stratacom/Cisco ATM switches? Aggregate port capacity was 9.6G on the biggest one but they had a 19.2G switch capacity. They actually needed that speedup to be non-blocking. It wasn't a meaningless statistic.

Good example of a blocking crossbar on the market today is the OSR7600.

Have a nice day,

--Chook
