Comms chips

Cavium Takes Its Thunder to HP

Cavium Inc. (Nasdaq: CAVM), which recently announced a project to build processors customized for the cloud, has joined an HP Inc. (NYSE: HPQ) initiative to develop next-generation blade servers.

Cavium is hardly alone. Applied Micro Circuits Corp. (Nasdaq: AMCC) likewise joined the HP Moonshot project on Tuesday, as announced on HP's blog. And the program's membership already includes chipmakers Advanced Micro Devices Inc. (NYSE: AMD), Calxeda and Intel Corp. (Nasdaq: INTC), as well as processor-core developer ARM Ltd.

Moonshot aims to develop servers based on low-power processors -- the kinds used in cellphones. Technically, what AppliedMicro, Cavium and others have joined is the Pathfinder Program, the name for the partner club HP is forming around Moonshot.

Why this matters
What's interesting is that just last week, Cavium launched Project Thunder, which pledges to develop multicore processors customized for the cloud. Exactly what there is to customize, Cavium isn't specifying yet.

Moonshot, meanwhile, aims to save power by making processors share elements such as RAID controllers and network interfaces. Those elements go into the system, while the blade (HP calls it a "cartridge" in this context) houses the low-power processor.

There is nothing in HP's blog that says Moonshot and Thunder are connected. Still, if Cavium is truly developing a new type of chip for the cloud, then HP's new server architecture seems like a good place to showcase it.

— Craig Matsumoto, Managing Editor, Light Reading

Pete Baldwin 12/5/2012 | 5:24:31 PM
re: Cavium Takes Its Thunder to HP

I'm asking a lot of the same questions. That's one reason I'm interested in keeping an eye on all this.


Applications certainly behave differently in the cloud -- they can be portable, and/or they can share resources or locations. Should a processor be specialized to facilitate any of that?  I don't know yet.

joferrei 12/5/2012 | 5:24:31 PM
re: Cavium Takes Its Thunder to HP

Sure, public cloud computing has different requirements and objectives than traditional server computing with per-customer dedicated hardware, and public cloud computing will likely become the dominant form of server computing during this decade.


But how much different are the demands for processing hardware in the cloud?


And what other ongoing macro trends are there in computing, e.g. parallel programming?


Software development productivity is often key to success in selling computing infrastructure technologies or services. How are these demands for productivity being addressed if/when the processor architectures change?


Is the right business model thus PaaS with an integrated development environment, rather than selling processor chips or servers?


 

paolo.franzoi 12/5/2012 | 5:24:29 PM
re: Cavium Takes Its Thunder to HP

 


Craig,


 


I strongly disagree with your assertions about applications behaving differently in the cloud.  


Of course we may mean different things by stuff like portable and sharing.  If you mean they can be instanced in different locations and have the ability to share compute resources...again that is not directly a cloud thing.


So, we have cloud processors... they are x86 architecture running on many kinds of hardware. Many of the applications are vastly abstracted from that and won't care a whit about the processor they run on (see Java or Ruby or Python). The real questions you should be asking are: "Will Amazon switch AWS over to Cavium-based servers?" and "If it did, why would it do so?"
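The abstraction point can be shown in a few lines: for interpreted or JIT-compiled code, it is the language runtime that gets ported to a new processor, not the application. A minimal sketch (the function and data are illustrative, not from the thread):

```python
import platform

def average(values):
    # Plain application logic: nothing here depends on the host ISA.
    return sum(values) / len(values)

# The only way this program can even notice the processor is to ask for it.
print(platform.machine())     # e.g. 'x86_64' on Intel/AMD, 'aarch64' on ARM
print(average([1, 2, 3, 4]))  # same result on either architecture
```

The same script runs unmodified on an x86 blade or an ARM cartridge, which is why the switching question is about economics rather than software compatibility.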


seven


PS - Just to be clear with a very simple example of the portable and sharing thing.  Think of a huge corporation and its Exchange Mail Infrastructure.  There is all kinds of resource sharing and portability.  And it has nothing to do with being a cloud thing.

joferrei 12/5/2012 | 5:24:27 PM
re: Cavium Takes Its Thunder to HP

Craig,

I think one should rethink the framing of the question of whether processors should be "specialized" for the cloud. The point is that public cloud computing will be the new norm for server hardware, not a special case.

We will see a wholesale shift from customer-dedicated hardware and sequential processing on unicore processors to multi-tenant shared manycore processors and parallel processing both between and within the client programs -- with both forms of parallelism supported at the same time and, if we are to pursue the elasticity benefits of the cloud, in a dynamic manner.
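As an illustrative sketch of that two-level parallelism (tenant names and workloads are hypothetical): independent tenant jobs are scheduled concurrently onto one shared pool of cores, while each job could additionally fan out over its own data.

```python
from concurrent.futures import ThreadPoolExecutor

def tenant_job(values):
    # A real job would fan out internally as well; summing stands in
    # for the per-tenant work.
    return sum(values)

# Hypothetical tenant workloads sharing one manycore machine.
jobs = {"tenant_a": [1, 2, 3, 4], "tenant_b": [10, 20]}

with ThreadPoolExecutor(max_workers=4) as pool:
    # Inter-job parallelism: independent tenants run on shared cores.
    futures = {name: pool.submit(tenant_job, vals) for name, vals in jobs.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)  # {'tenant_a': 10, 'tenant_b': 30}
```

The "dynamic manner" in the argument corresponds to resizing that worker pool per tenant at runtime, which is exactly what today's schedulers do in software and what the comment argues hardware should support directly.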

These macro trends portend fundamental changes in what is demanded from the processing hardware.

An essential question thus is: Should we try to retrofit (i.e., abstract or virtualize) the legacy processor architectures at the software layers to appear to be what they fundamentally are not, or should the hardware for the age of parallel cloud computing, together with the cloud application development and deployment platforms, be comprehensively re-architected to match the prevailing use-case demands?

Clearly, the latter alternative has a key advantage: it can fight the ever-increasing complexity and overhead of the middleware layers for abstraction, virtualization, resource management, etc. The question then becomes whether cloud applications (e.g., SaaS vendors) can tolerate the inefficiencies of those middleware layers on top of (current-generation) hardware designed for customer-dedicated, sequential processing.

My guess is that the need to manage electric power consumption and other facility and management costs in particular will strengthen the case for designing new server hardware and processor architectures for the demands of parallel cloud computing, cutting through the inefficiency-hiding layers of abstraction and virtualization middleware. However, these new cloud processing capabilities should be provided to the (SaaS) customers via an integrated development and runtime hosting platform, rather than as separate hardware and software components.



paolo.franzoi 12/5/2012 | 5:24:24 PM
re: Cavium Takes Its Thunder to HP

 


Northern,


Just FYI, multi-tenant SaaS (which I have been running for a couple of years) does not require virtualization, nor does it require middleware. Those are conveniences and scaling aids, but not required for implementation. There are some applications which might require a separate VM per customer instance, but I can tell you we run over 2,000 separate customers on one set of clustered hardware (we have other customers in other clusters) without a single hypervisor. In fact, the only place we use a hypervisor is in our SQA tests.
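A minimal sketch of that application-level multi-tenancy (class and field names hypothetical): isolation comes from keying shared storage by tenant inside one process, not from giving each customer its own VM.

```python
class MailStore:
    """Shared store serving many tenants from one process -- no hypervisor."""

    def __init__(self):
        self._messages = {}  # tenant_id -> that tenant's messages

    def deliver(self, tenant_id, message):
        self._messages.setdefault(tenant_id, []).append(message)

    def inbox(self, tenant_id):
        # Every query is scoped by tenant, so customers never see each other.
        return list(self._messages.get(tenant_id, []))

store = MailStore()
store.deliver("acme", "hello")
store.deliver("globex", "hi")
print(store.inbox("acme"))  # ['acme' sees only its own mail]
```

The same scoping discipline applied at the database layer (every query filtered by tenant) is what lets thousands of customers share one cluster without per-customer hardware or VMs.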


The service I operate is a 24/7 mail service and the things I want virtualization for are:


- Beta Tests


- Turn up of extra capacity under stress loads


The reason is that "cloud capacity" at any of the vendors I have put through a modeling exercise is about 3x the cost of my servers in data centers. And I use expensive servers and expensive data centers. If I ran a website with significant ebbs and flows of customer volume, then I might have a different need. And we have been buying multicore processors in the standard x86 architecture for a LONG time.


I would not consider a change to what I use unless there were three things I could satisfy and then two things I could get:


Satisfy:  Quality, Service, Lights Out Management


Get: Lower cost, Multi-vendor/Multi-scale.


Power is just one part of cost. What I need is many choices and ways of building different-sized boxes -- racks of 1U servers do not help me very much. They don't hurt, but I want options, and until multiple vendors offer everything from 1U appliances up to large blade server configurations, I will just stick with x86.


The software for this mail service is written primarily in Java, with a whole bunch of OS-level services written in Perl. It runs on a custom Linux distribution that is being moved to an LFS distribution for control purposes. We use Postgres as our primary database. We run this software on Dell PowerEdge servers (mostly R710s).


seven


 
