It's All About Power

If we’re going to build cheap wireless thin clients, we need to harden the other side of the link. This is a bigger job than most people think, but it’s getting easier.
I arrived at CeBIT in Hannover, Germany, a couple of days ago. I’m part of the American press delegation this year, and I’m really happy to finally see this show. CeBIT is the largest IT event in the world – more than 6,200 exhibitors and about 2.8 million square feet of occupied exhibit space. It could easily take two months to see everything, but the show runs only for a week and a half, and I’m only here for two days. Still, I’m hoping to see a lot of new products and technologies that might never make it to the US. By the way, of those more than 6,000 exhibitors, about 25% are from Asia, but only 200 are from the United States!
Needless to say, when you get a bunch of writers, editors, journalists, and analysts together, opinions fly, and everyone lets you know exactly what they think – good or bad. My big theme remains wireless thin clients, and I think the majority are buying my opinion on this. But a couple did point out a big issue with the vision of moving the data and processing back to the other side of the link.
Specifically, the server farms and data centers need to be hardened, redundant, and otherwise way more reliable than your average PC. This is not easy to do – it takes a lot of planning and money. But it is at least getting easier.
On Monday, American Power Conversion Corp. (APC) (Nasdaq: APCC) offered to show me a data center it had a hand in planning at the Delft University of Technology, about an hour from Amsterdam, where I landed (direct flights to Hannover are tough with an event of this size). The power and cooling they designed are really impressive, and far different from most installations. The Delft room uses APC equipment to contain all of the heat in a relatively small space and then carefully move lots and lots of air. Less space and energy are required, and reliability is improved. The folks at Delft mirror two centers across the campus for redundancy, although, given that most of the Netherlands is below sea level and Katrina-related issues remain fresh in my mind, I would suggest mirroring to, say, Alaska. Another journalist in our party suggested that such a location might also cut down on security and cooling bills. Maybe…
The point is that the core technologies required to implement broad-scale thin-client computing – servers, storage, cooling, power, and reliability – all exist today. My next stop here in Hannover is to check on the client end of the equation, especially that other end of the power problem – batteries and power conservation and management.
— Craig Mathias is Principal Analyst at the Farpoint Group, an advisory firm specializing in wireless communications and mobile computing. Special to Unstrung