
Force 10 Aims for the Data Center

Light Reading
News Analysis
3/28/2005

Force10 Networks Inc. today unveiled the S-50, its first fixed-configuration data center switch, as it attempts to up the ante in its battle with Hewlett-Packard Co. (NYSE: HPQ).

Until now, Force10’s main product offering was the E-Series of chassis-based switches. The S-50 represents a move into a new part of the market: the server edge of the data center. The 1-rack-unit box comes with 48 ports of Gigabit Ethernet (also capable of 10- and 100-Mbit/s operation) and two 10-Gbit/s Ethernet ports.

The Milpitas, Calif., company already has around half a dozen beta customers for the S-50, says Andrew Feldman, vice president of marketing.

This is all about cost. The high price per port of traditional chassis-based switches means users typically connect them to large server clusters. Force10, then, is going after those users that can’t afford to hitch the likes of Web servers and software infrastructure servers to a large chassis-based box. Users can connect the servers to the S-50 and use its 10-Gbit/s uplinks to feed E-Series boxes.

It’s a shrewd move, says Steven Schuchart, senior analyst for enterprise infrastructure at Current Analysis. “The S-50 is a response to needs from customers that like their Force10 gear and are saying that they want an aggregation switch,” he says. “For Force10, it’s a good evolutionary step. Their smallest switch to date is the E-300, which is too big for a lot of applications.”

Indeed, the S-50 could be good news for companies wanting 10-Gbit/s links at less than E-Series prices. The 10-Gbit/s Ethernet list price per port on the E-Series is around $12,000, compared to $3,250 on the S-50.

But the S-50 will have to contend with HP’s ProCurve 3400cl, which was launched last year (see HP Launches Gigabit Switch Series). Like the S-50, the ProCurve 3400cl is a 1-rack-unit box that supports up to 48 10/100/1000 ports and two 10-Gbit/s Ethernet ports.

Naturally, Force10 says it has a performance edge over its rival. Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second. Throughput on HP's 3400cl is 136 gigabits per second.

However, Darla Sommerville, HP's vice president and general manager for the Americas, highlighted the fact that the 3400cl is available in both 24-port and 48-port configurations. This, she says, offers users a greater degree of flexibility. “If you are putting out Gigabit to the desktop you may not want to pay for additional ports,” she adds.

Force10’s core technology has already led to some big wins in areas including high-performance computing, where HP is a major player (see Argonne Picks Force10 and Force10 Scores Record Q1 Sales).

HP’s switch technology is seen as a key weapon in its assault on the networking market. The company is using the switches to entice users over to its Adaptive Enterprise strategy (see HP Tightens Up ProCurve Story and HP ProCurve Expands Security).

But as HP adds to its arsenal, so Force10 needs to expand its footprint, according to Schuchart. “Force10 needs to decide on the next market to tackle. If I was to speculate, I would say that they would probably go after the enterprise next," he says. "But if they do that, then they have to broaden the platform.”

This is a distinct possibility. Feldman told NDCF that Force10 is likely to expand its S-Series family at the high end.

— James Rogers, Site Editor, Next-Gen Data Center Forum

icenine
12/5/2012 | 3:21:27 AM
re: Force 10 Aims for the Data Center
Who is the ODM which popped this switch out? I don't think it was designed in Milpitas...
chipsales
12/5/2012 | 3:21:17 AM
re: Force 10 Aims for the Data Center
F10 does not use ODMs. All their boxes and ASICs (for that matter) were designed in house.

Chipsales.
truthteller99
12/5/2012 | 3:21:16 AM
re: Force 10 Aims for the Data Center
F10 does not use ODMs. All their boxes and ASICs (for that matter) were designed in house.


Not on this box. This is ODM.
Phiber_Phreak
12/5/2012 | 3:21:14 AM
re: Force 10 Aims for the Data Center
"Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second. Throughput on HP's 3400cl is 136 Gigabits per second."

In fact, neither number is possible on a box with 48 x 1G and 2 x 10G interfaces.

With 9000-byte jumbo frames, the theoretical maximum is 67.8 gigabits per second. The numbers claimed here are nonsense.

Cisco double counting was bad enough -- is LR now believing in triple counting?

Does anybody actually check these claims?

PP
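
For what it's worth, the 67.8 Gbit/s figure above is consistent with the aggregate line rate of the S-50's front-panel ports minus per-frame overhead. A minimal sketch of that arithmetic (the 20 bytes of preamble plus inter-frame gap per frame is my assumption about how the figure was derived, not something stated in the thread):

```python
# Back-of-the-envelope check of the 67.8 Gbit/s figure quoted above.
# Assumption (not from the thread): the per-frame overhead counted here is
# the 8-byte Ethernet preamble plus the 12-byte minimum inter-frame gap.

GIGABIT = 1e9  # bits per second

# Aggregate line rate of the S-50's front-panel ports: 48 x 1G + 2 x 10G.
line_rate_bps = 48 * 1 * GIGABIT + 2 * 10 * GIGABIT  # 68 Gbit/s

frame_bytes = 9000       # jumbo frame size used in the comment
overhead_bytes = 8 + 12  # preamble + inter-frame gap

# Fraction of each port's wire time that actually carries frame data.
efficiency = frame_bytes / (frame_bytes + overhead_bytes)

print(f"Theoretical maximum: {line_rate_bps * efficiency / GIGABIT:.1f} Gbit/s")
# -> Theoretical maximum: 67.8 Gbit/s
```
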
light-headed
12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center
PP is right. The only measurement that matters is how many ports at wire-speed, full-mesh forwarding performance per RFC 2544. Anything else is marketing bullshit and has no relevance to anything. 3Com used to list all of the traces on the backplane of the CoreBuilder and their theoretical capability, then double it. They claimed something like 960 gigs when they could only support 32 gig ports or something like that.

Craig and LR should know better...

Phiber_Phreak
12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center
"The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now."

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.

This is marketing nonsense. A user cannot get at capacity today or ever. In fact the capacity probably does not exist in the first place.

Next time please use a calculator before repeating nonsense vendor claims.
Pete Baldwin
12/5/2012 | 3:21:13 AM
re: Force 10 Aims for the Data Center
"Feldman told NDCF that the S-50 offers a switching capacity of 192 gigabits per second. Throughput on HP's 3400cl is 136 Gigabits per second."

In fact, neither number is possible on box with 48 x1G and 2x10G interfaces.


The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now.
mr zippy
12/5/2012 | 3:21:10 AM
re: Force 10 Aims for the Data Center
Firstly, I don't know all that much about switching architectures, so I'm looking to learn.

Aren't most "line rate" calculations done based on an assumption of 1-to-1 unicast traffic between pairs of ports? If a percentage of the traffic was multicast, IOW 20% multicast (to all ports for simplicity) and 80% unicast, wouldn't the backplane capacity have to be greater than just enough to support "line rate" unicast between pairs of all ports concurrently? I realise that there would have to be buffering on output ports to hopefully minimise packet drops due to output bandwidth not being big enough; I'm curious if the buffer capacity (converted to bits per second) would be included in these backplane capacity figures.

In other words, is the backplane capacity a pure bit-per-second rate between the ports, or is it a combination of the pure bit-per-second rate plus the buffering capacity, also measured in bits per second? Are they measuring how many bits per second they can shove into it before they start dropping packets at all input ports, rather than "true throughput"?

I realise these questions are probably very much answered with "it depends"; I suppose I'm curious to find out if there is a common convention.

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.


Well, I'd think the room for expansion would be there for future products, not for the customers of this product. Obviously designing in extra capacity is wise if the product line is going to grow.

That being said, it is a marketing trick to announce it as a product feature. It isn't a new trick though - how many people have driven their car to the maximum speed shown on the speedo, or even know if their car is capable of it? I'd think a lot of people who buy performance sports cars compare the maximum speeds shown on the speedos, even if they'll never take them to those speeds.
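
To put rough numbers on the multicast question raised above, here is an illustrative sketch (my own simplification, not a statement of how any vendor measures fabric capacity) of how aggregate egress demand grows once a slice of the offered traffic is replicated to every other port:

```python
# Illustrative only: aggregate egress demand when part of the offered load is
# multicast replicated to every other port. The 80/20 split mirrors the example
# in the comment above; this assumes the worst case, where every copy crosses
# the fabric rather than being replicated at the egress stage.

def egress_demand_gbps(ports: int, rate_gbps: float, multicast_fraction: float) -> float:
    ingress = ports * rate_gbps                             # total offered load
    unicast = ingress * (1 - multicast_fraction)            # one copy per packet
    multicast = ingress * multicast_fraction * (ports - 1)  # a copy to every other port
    return unicast + multicast

# 48 x 1 Gbit/s ports (ignoring the two 10G uplinks for simplicity):
print(egress_demand_gbps(48, 1.0, 0.0))   # 48.0  Gbit/s, pure unicast
print(egress_demand_gbps(48, 1.0, 0.2))   # 489.6 Gbit/s, 20% multicast to all ports
```

Whether a real backplane actually has to carry all of those copies, or whether replication happens closer to the output ports, is exactly the architecture-dependent part of the question.
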

phar-sighted
12/5/2012 | 3:21:08 AM
re: Force 10 Aims for the Data Center
Looking at the S50 datasheet on their site, they have 48 GE ports + 2 10GE ports and up to 2x10GE stacking ports, i.e. 48+20+20 = 88 Gbps * 2 = 176 Gbps of total switching capacity. Not sure how they get to 192 Gbps even with double counting. As previously noted, this is a fixed config box, so there are no upgrades possible. Maybe someone from LR can find out how the math works.

One thing I saw on the Force10 site that puzzles me, and maybe someone can shed some light on this: they mention that you can build line-rate clusters from 48 to 384 ports using the S50s. How does one do that?

= PS =
chook0
12/5/2012 | 3:21:07 AM
re: Force 10 Aims for the Data Center
Actually, depending on the architecture, the claims are not necessarily bullshit.

1. For 1 Gbps of full-duplex switch throughput, you actually need 2 Gbps of switching capacity. This is because a switch fabric is typically a half-duplex device. Count 'em: you have two inputs to the fabric, each carrying 1 Gbps. That's 2 Gig of switching to carry it.

2. Depending on fabric architecture, you may need even more switching capacity than that to avoid blocking. For example, many crossbar fabrics have a 2x speedup to avoid HOL blocking. If you don't have any speedup, at least on the output ports, then a crossbar fabric will achieve about 67% of line-rate throughput under uniform random load.

Now I doubt very much that this switch uses a crossbar fabric. Probably shared memory. So anything in excess of 2x (sum of line rates) is probably superfluous. No doubt the switch fabric can handle it, but there are ports not connected (for example stacking ports).

--Chook

-----------------------------
"The numbers refer to switching capacity -- the capacity of the switch fabric. That's often greater than the maximum capacity of all combined interfaces (or so you hope.)

It shows the boxes' architectures have room to grow - but you're right; you can't put all that capacity to use right now."

Not right now and not EVER.

These are 1U fixed configuration switches. There is no room for expansion.

This is marketing nonsense. A user cannot get at capacity today or ever. In fact the capacity probably does not exist in the first place.

Next time please use calculator before repeating nonsense vendor claims.
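
Putting phar-sighted's port arithmetic and Chook's double-counting convention side by side gives a quick sanity check on the headline number. This is a sketch of the conventions described in the thread, not the vendor's own accounting:

```python
# Capacity arithmetic as discussed in the thread. Port counts are taken from
# phar-sighted's reading of the S50 datasheet: 48 x 1G front-panel ports,
# 2 x 10G Ethernet ports, and up to 2 x 10G stacking ports.

line_rate_gbps = 48 * 1 + 2 * 10 + 2 * 10    # 88 Gbit/s, one direction, all ports

# Convention Chook describes: a full-duplex port needs twice its line rate
# through a half-duplex fabric, so vendors quote 2 x the sum of line rates.
double_counted_gbps = 2 * line_rate_gbps     # 176 Gbit/s

claimed_gbps = 192                           # Force10's quoted switching capacity

print(f"Sum of line rates:          {line_rate_gbps} Gbit/s")
print(f"Double-counted capacity:    {double_counted_gbps} Gbit/s")
print(f"Claimed switching capacity: {claimed_gbps} Gbit/s "
      f"({claimed_gbps - double_counted_gbps} Gbit/s unaccounted for)")
```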