Many companies find that, comparing their rate of data growth against their global IT budget, storage alone will consume that budget within the next five years, even as the cost per terabyte of storage continues to fall.
Is the pressure on to find new and more efficient ways to store and access data?
It would appear so. Cloud computing is reshaping how enterprises manage IT and enterprise applications, just as the virtualization of network functions to run as software on virtual machines or containers is driving service providers toward cloud-native networks.
With dedicated storage hardware and solutions proving costly and rigid, it makes sense to apply the scale-out, on-demand cloud model to the storage layer.
This, combined with the pressure to become more data-driven through increased use of big data analytics for rapid insights, means storage has to be more flexible and holistic, with reliable, fast access to data from different sources. Storage needs to support data on demand yet remain simple to manage and access.
A wide range of dynamic storage technologies is emerging, from flash storage to hyper-converged infrastructure, where storage is integrated with cloud computing and networking. Storage-as-a-service options claim to offer greater flexibility and performance scalability for different types of workloads, as well as a way to pay only for the storage actually used.
Storage-as-a-service allows federated storage with resource pooling and precise service definition, i.e. the ability to manage SAN, NAS, object and other storage equipment from a single pane of glass, whether on premises or in the cloud. This creates a virtual storage pool spanning both enterprise and cloud-native workloads, improving utilization and preventing data from being stuck in silos.
Resource pooling allows customers to combine diverse storage devices into logical, unified resource pools, with unified management and scheduling to suit service needs: application-aligned storage matches the most appropriate resource to the customer's in-the-moment need.
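As a minimal sketch of this idea, the snippet below pools heterogeneous devices behind one allocation interface; the class and field names are hypothetical illustrations, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class StorageDevice:
    # Hypothetical device record; names are illustrative only.
    name: str
    kind: str          # e.g. "SAN", "NAS", "object"
    capacity_gb: int
    max_iops: int
    allocated_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.allocated_gb

class ResourcePool:
    """A unified logical pool over heterogeneous storage devices."""
    def __init__(self, devices):
        self.devices = list(devices)

    def allocate(self, size_gb: int, min_iops: int = 0) -> str:
        # Match the request to the first device that meets the
        # in-the-moment capacity and performance need.
        for dev in self.devices:
            if dev.free_gb >= size_gb and dev.max_iops >= min_iops:
                dev.allocated_gb += size_gb
                return dev.name
        raise RuntimeError("no device in the pool meets the request")

pool = ResourcePool([
    StorageDevice("nas-1", "NAS", capacity_gb=1000, max_iops=5_000),
    StorageDevice("san-1", "SAN", capacity_gb=500, max_iops=50_000),
])
print(pool.allocate(size_gb=200, min_iops=20_000))  # matched to "san-1"
```

The caller asks for capacity and IOPS, never for a specific SAN or NAS box; that separation is what makes the pool "application-aligned" rather than device-aligned.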
The new model also delivers flexibility in defining the precise level of performance, latency, bandwidth, capacity, data security and other attributes, including:
SLA-based templates: Storage capabilities can be abstracted into SLA templates for ease of ordering. The templates allow enterprises to specify everything from input/output operations per second (IOPS) to latency and bandwidth without having to pay any attention to differences in the storage at the underlying layer.
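Such a template can be as simple as a named bundle of attributes. The sketch below assumes hypothetical tier names and figures; real products define their own fields and values.

```python
# Hypothetical SLA templates abstracting storage capabilities for ordering;
# tier names and numbers are illustrative, not from any specific product.
SLA_TEMPLATES = {
    "gold":   {"iops": 50_000, "latency_ms": 1,  "bandwidth_mbps": 1_000, "replicas": 3},
    "silver": {"iops": 10_000, "latency_ms": 5,  "bandwidth_mbps": 500,   "replicas": 2},
    "bronze": {"iops": 1_000,  "latency_ms": 20, "bandwidth_mbps": 100,   "replicas": 1},
}

def order_storage(size_gb: int, sla: str = "silver") -> dict:
    """Order capacity against an SLA tier without naming any backend device."""
    return dict(SLA_TEMPLATES[sla], size_gb=size_gb)

print(order_storage(500, sla="gold"))
```

The enterprise orders "500 GB of gold"; deciding which underlying SAN, NAS or object store actually fulfils that order is left entirely to the service layer.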
Adaptive data management: Storage capabilities are divided into multiple tiers, each with different specifications, e.g. IOPS, latency, bandwidth and data protection. The resources assigned to a specific service can also be changed over time to adapt to evolving business requirements. The solution supports cross-tier migration, expansion and reclamation of storage resources at the data plane. These abilities allow the solution to adapt to changing service requirements with precise resource scheduling, a key requirement for operators with dynamic NFV-based services and distributed workloads.
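The cross-tier migration logic can be sketched as follows; the tier catalogue, class and function names are hypothetical, and a real system would perform the data-plane move that the comment only gestures at.

```python
# Hypothetical tier catalogue; figures are illustrative only.
TIERS = {"performance": {"iops": 50_000}, "capacity": {"iops": 2_000}}

class Volume:
    def __init__(self, name: str, tier: str):
        self.name, self.tier = name, tier

def migrate(volume: Volume, target_tier: str, observed_iops: int) -> str:
    """Move a volume across tiers only when its workload outgrows its tier."""
    if observed_iops > TIERS[volume.tier]["iops"] and target_tier in TIERS:
        # In a real system, the data-plane migration happens here.
        volume.tier = target_tier
    return volume.tier

vol = Volume("analytics-01", "capacity")
print(migrate(vol, "performance", observed_iops=8_000))  # exceeds the capacity tier, so "performance"
```

The point is that the trigger is the observed workload, not an operator ticket: resources follow the service requirement rather than the other way around.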
Open architecture: Enterprises are deploying hybrid cloud and IT strategies spanning public and private clouds, storage and applications. Yet infrastructure can no longer operate in silos; storage solutions have to work well with other solutions rather than lock enterprises into a single proprietary stack. Storage-as-a-service solutions should support cloud infrastructure platforms such as OpenStack, as well as existing cloud storage, big data applications and other types of cloud-based platforms.
As operators push toward NFV, it is equally important to move toward unified, data-on-demand storage that can carry enterprise applications, cloud-native workloads and big data analytics into the cloud era.
This blog is sponsored by Huawei. Sandra O’Boyle, Senior Analyst, Heavy Reading