New cloud-based storage lets AT&T's product and service developers turn up on-demand storage that meets their exact needs and saves money.
July 14, 2015
AT&T has developed its own approach to software-defined storage and is letting its service developers and project managers easily define and quickly turn up the cloud-based storage they need through an automated user interface. Combined with SDN and computing, the disaggregation of storage is one more step in the process of breaking down vertical product silos, as AT&T's top technologist, John Donovan, has repeatedly vowed to do. (See Donovan Touts AT&T's Software Push.)
The new software-defined storage capabilities aren't yet being primed for external products, but aspects are being shared with the open source community through contributions to OpenStack, says Chris Rice, AVP of AT&T Labs, who described the new capabilities in a blog post last week.
Demand for storage is exploding with the generation of new content and the need to store data for analytics and other purposes, the blog notes.
Cloud-based storage isn't new but Rice believes what AT&T Inc. (NYSE: T) has developed is a unique package of capabilities that builds in part on lessons learned in digital communications to offer a cost-effective, automated way of matching storage capabilities to specific requirements with high reliability. Part of what is unique is a user interface that allows less technical folks -- not IT staff or developers -- to define what they need in storage, based on a number of characteristics, and then spin up cloud-based storage that meets those requirements.
The "real science" in the system, Rice says, is its software-defined storage planner, which takes the user requirements input through the UI and runs them through an analytic function AT&T built to determine which solutions best satisfy those requirements and, from those, which one is optimal at what cost.
"From this SDS planner, you get a range of different options and what is optimal in terms of cost that satisfies your need," Rice says. "So, in a span of about five to ten minutes, that solution is not only spun up and created in the cloud, but also goes through a test verification cycle. That includes a test script and automated verification that the cloud-based storage meets the user-specified requirements."
Those requirements include such things as operations per second, throughput, bandwidth, reliability and more.
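The selection step Rice describes can be sketched in a few lines of Python. This is a hypothetical toy, not AT&T's planner: the candidate solutions, their attribute names and the cost figures are all invented for illustration. It filters candidates that meet every user-specified minimum, then picks the cheapest survivor.

```python
# Toy sketch of an SDS-planner-style selection step (hypothetical; not AT&T's code).
# Each candidate storage solution is scored against user requirements, and the
# cheapest one that satisfies every requirement is chosen.

# Candidate solutions with illustrative capabilities and monthly costs.
CANDIDATES = [
    {"name": "hdd-replicated", "iops": 500,   "throughput_mbps": 200,  "reliability": 0.999,   "cost": 100},
    {"name": "ssd-erasure",    "iops": 20000, "throughput_mbps": 1000, "reliability": 0.99999, "cost": 400},
    {"name": "ssd-replicated", "iops": 20000, "throughput_mbps": 1000, "reliability": 0.99999, "cost": 600},
]

def plan(requirements):
    """Return every candidate meeting the requirements, plus the cheapest one."""
    feasible = [
        c for c in CANDIDATES
        if all(c[key] >= minimum for key, minimum in requirements.items())
    ]
    optimal = min(feasible, key=lambda c: c["cost"]) if feasible else None
    return feasible, optimal

feasible, optimal = plan({"iops": 10000, "reliability": 0.9999})
print([c["name"] for c in feasible])  # both SSD options satisfy the requirements
print(optimal["name"])                # "ssd-erasure" is the cheapest of those
```

A real planner would weigh many more dimensions, but the shape -- feasibility check, then cost minimization over the feasible set -- matches the "range of different options and what is optimal in terms of cost" that Rice describes.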
But AT&T didn't stop there. "We took that output and said, from that one solution that is optimal, let's go off and create all the templates that are required in OpenStack to automatically instantiate that particular storage solution," Rice says.
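To make the template step concrete, here is a hedged sketch of generating an OpenStack Heat-style template for the chosen solution. The `OS::Cinder::Volume` resource type is a real Heat resource, but the mapping from planner output to template properties shown here is purely illustrative, not AT&T's actual tooling.

```python
# Illustrative sketch: emitting an OpenStack Heat-style template that would
# instantiate one block-storage volume for the planner's chosen solution.
# OS::Cinder::Volume is a real Heat resource type; the rest is invented.
import json

def make_heat_template(size_gb, volume_type):
    """Build a minimal Heat template describing a single Cinder volume."""
    return {
        "heat_template_version": "2014-10-16",
        "description": "Auto-generated storage template (illustrative)",
        "resources": {
            "user_volume": {
                "type": "OS::Cinder::Volume",
                "properties": {
                    "size": size_gb,             # volume size in GB
                    "volume_type": volume_type,  # backend matching the optimal solution
                },
            }
        },
    }

template = make_heat_template(500, "ssd-erasure")
print(json.dumps(template, indent=2))
```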
Coding for sharing
The "cherry on top" of the software-defined storage, says Rice, is how AT&T is delivering reliability. That is also what the company is sharing back into open source, as part of OpenStack. The company built on and expanded "erasure coding," which was actually developed for digital communications early on, and can now be used to dial up or dial down reliability, as needed, meeting the levels offered by Hadoop and its storage system, while consuming less storage capacity.
As Rice explains it, the Hadoop approach uses triple redundancy, replicating a given amount of data three times; erasure coding consumes half as much storage space. The method was originally developed for digital communications, when the industry was looking for a way to recover bits dropped due to line impairments or noise.
Instead of simply repeating the information multiple times, which used up a lot of the digital channel in unnecessary replications, erasure coding uses more intelligence. The code is applied and protects a certain number of bits and if some of those get lost due to noise, they can be recovered. "So digital communications became reliable and more efficient," Rice says.
AT&T applied erasure coding to software-defined storage so that it consumes only one-and-a-half times the storage capacity, versus the 3x required by Hadoop, making AT&T's approach more efficient. The company also created the ability to "dial up or dial down" the amount of redundancy as needed for a given application, Rice says.
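The overhead arithmetic can be illustrated with the simplest possible erasure code: a single XOR parity chunk over two data chunks. This is a toy, far simpler than the codes used in production storage systems, but it shows exactly the 1.5x-versus-3x trade-off: three stored chunks hold two chunks of data, and any one lost chunk can be rebuilt from the other two.

```python
# Toy illustration of the storage-overhead difference (not AT&T's actual scheme).
# One XOR parity over two data chunks stores 1.5x the original data and can
# rebuild any single lost chunk; triple replication stores 3x for the same goal.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = b"hello, storage!!"          # 16 bytes of original data
half = len(data) // 2
chunk1, chunk2 = data[:half], data[half:]
parity = xor_bytes(chunk1, chunk2)  # third chunk: XOR of the two data chunks

stored = len(chunk1) + len(chunk2) + len(parity)
print(stored / len(data))           # 1.5 -> erasure-coded overhead
print(3 * len(data) / len(data))    # 3.0 -> triple-replication overhead

# Suppose chunk2 is lost; rebuild it from chunk1 and the parity.
recovered = xor_bytes(chunk1, parity)
print(data == chunk1 + recovered)   # True: full data restored
```

Real deployments use more sophisticated codes over more chunks, which is what lets reliability be dialed up or down, but the efficiency argument is the same: protection comes from computed parity rather than from full copies.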
Taken as a whole system, the AT&T software-defined storage offers more than is available today from point solutions that deliver cloud-based storage, and offers cost savings as well, he adds.
Cost was important because, while storage is getting cheaper, demand is rising so fast that it outstrips the falling costs. So while adding the flexibility of cloud-based storage was important -- eliminating the need to constantly buy storage in large 250- to 500-terabyte chunks -- savings were also a driver.
AT&T may at some point offer its cloud-based storage as an external product but that isn't currently on the roadmap, Rice says.
— Carol Wilson, Editor-at-Large, Light Reading