Data is the most valuable business asset, and how you leverage it defines business success.

September 23, 2021

A new king is crowned in the cloud

In 1996, Bill Gates coined the memorable phrase "Content is King." This credo has stood the test of time, but a succession is underway. The new king is data. Data is the most valuable business asset, and how you leverage it defines business success.

This presents a challenge when managing your data assets in a cloud-native environment. The traditional database strategy is unfit for cloud-native architecture. Essentially, no "one" database solution addresses how to manage and effectively persist data in the cloud. Instead, you must build a new strategy for how you manage different data types to meet operational SLAs.

What do you need for your "succession plan"? We will look at three items to inform your strategy: baseline database requirements, existing database technology, and database management options available for a cloud-native environment.

A crown to fit the king – What are the requirements?

When you scan today's data management market, you realize you have to reset your thinking. Instead of looking for "one" database to address your requirements, focus on how you manage your data in a cloud-native architecture. Anchor your initial workflow on mapping and persisting the myriad data types in the cloud in a way that aligns with your specific business requirements.

What is required for a good fit? The following are fundamentals for fully leveraging a cloud-native model:

✔ Portability - Ability to deploy in your chosen infrastructure (private, public, hybrid) with flexibility to migrate with minimal business impact.

✔ Cloud-native fundamentals of ephemerality - Architecting around ephemerality to achieve significant gains for what matters most - serviceability.

✔ Scalability and high performance - 5G demands ultra-low latency. You must comply with telco-grade SLA requirements for applications and databases.

✔ TCO - Optimize required data management and associated TCO with new use cases and growing traffic.

"The One" - What does the database market offer?

Let's examine database technology in today's market. Typically, databases are optimized to handle specific classes of data and related management scenarios. When it comes to managing all BSS domain database performance scenarios, however, no traditional database fully meets cloud-native business requirements. For example:

  • Traditional SQL - ACID compliance, scales vertically, end-to-end resiliency, heavy dependency on hardware resources

  • Distributed SQL - ACID compliance with latency impacts, scaling may impact latency, synchronization and data durability need monitoring

  • NoSQL - horizontally scalable, generally limited in full ACID compliance, optimized for a specific data access pattern or structure

  • Managed - different products for different use cases, portability only partially given and tied to dedicated infrastructures, limited tuning options, performance and latency needs can increase TCO, management not in your control

While there are many database options, the cautionary tale is that no "one" is a true fit for data management needs in a cloud-native architecture. Thus, the king needs access to many crowns for different occasions.

New approach required: King of many crowns

Implementation of core products requires that you re-architect BSS solutions and adopt a cloud-native data persistence (CDP) approach. This approach analyzes data technologies and performance requirements against cloud-native databases and managed products serving cloud infrastructures. The CDP approach assesses performance metrics, use cases, data types, and operational scenarios (e.g., charging, billing, analytics, customer management, self-management), providing an optimal baseline. It focuses on achieving cloud-native data management success rather than on the underlying technology. Four performance principles anchor this approach.

#1: Portability and high performance

  • Support various deployment options with the same software version - "Implement once, deploy often." Data technologies need the flexibility to use multiple infrastructures, from private or public cloud to hosted or managed infrastructure, and the ability to react to local conditions such as datacenter setup, intersite latency, and regional distribution, especially when architecting an active/active solution.

  • Autotuning - Technologies, different infrastructures, and new hardware all provide tuning options. Have the data layer detect and autotune to key variables, including deployment resources, resiliency settings, and application profile, enabling you to maintain a "zero-touch" principle.
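The autotuning idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's algorithm: the function, its formulas, and its thresholds are all assumptions chosen to show how a data layer could derive a tuning profile from the resources it was scheduled with rather than from hand-maintained configuration.

```python
# Illustrative sketch: map the deployment resources a container was
# scheduled with to a baseline database profile ("zero-touch" tuning).
# Names, multipliers, and thresholds are assumptions for illustration.

def autotune(cpu_cores: float, memory_mb: int, replicas: int) -> dict:
    """Derive a tuning profile from deployment resources."""
    return {
        # scale worker connections with cores, with a small floor
        "max_connections": max(16, int(cpu_cores * 25)),
        # reserve roughly a quarter of memory for the shared cache
        "cache_mb": memory_mb // 4,
        # prefer quorum writes once three replicas are available
        "write_consistency": "quorum" if replicas >= 3 else "one",
    }

profile = autotune(cpu_cores=4, memory_mb=8192, replicas=3)
print(profile)
```

The same binary then produces sensible settings whether it lands on a small private-cloud node or a large public-cloud instance, which is what makes "implement once, deploy often" workable.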

#2: Cloud resource availability and reliability

  • Unexpected resource removal - Database technology must operate in a container orchestration environment and acknowledge that a resource can disappear unexpectedly, meaning designs must build in resiliency from the resource level up.

  • Verify algorithms for non-functional behaviors - The resiliency algorithms for node recovery, scaling in/out, and the upgrade process are often the same. Ensure zero business impact during maintenance activities and failover scenarios. A true cloud database will continue with no serviceability interruptions.
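One common way the "no serviceability interruptions" goal shows up in practice is client-side failover: when a replica is draining for an upgrade or has failed, the driver silently retries against the next one. The sketch below is an assumed generic pattern, not a specific driver's API; the function and replica names are hypothetical.

```python
# Sketch of client-side failover (assumed generic pattern, not a real
# driver API): retry a query on the next replica so a node failure or a
# rolling upgrade stays invisible to the caller.

def query_with_failover(execute, replicas, attempts_per_replica=2):
    """Run `execute(replica)` against replicas in order until one succeeds."""
    last_err = None
    for replica in replicas:
        for _ in range(attempts_per_replica):
            try:
                return execute(replica)
            except ConnectionError as err:
                last_err = err  # node may be mid-upgrade; try again / next
    raise last_err

# Toy executor: the first replica is "down", the second answers.
def execute(replica):
    if replica == "db-a":
        raise ConnectionError("db-a draining for upgrade")
    return f"ok from {replica}"

print(query_with_failover(execute, ["db-a", "db-b"]))  # ok from db-b
```

When the same retry path handles recovery, scaling, and upgrades, verifying it once covers all three non-functional behaviors.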

#3: Elasticity and latency SLA

  • Distributed setup elasticity - High-availability systems need a clear architecture for scalability, resilience, and data durability. To ensure you meet SLA:

    - Choose your "sync" model carefully to achieve both performance and resiliency.

    - Change the database access pattern to tune queries for low latency.

    - Remove data unneeded for your core, decompose and orchestrate it via a message bus, and move it to near-real-time, cloud-native app processes.

    - Ensure the resiliency design matches the probability of failure to the utilization of database resources.
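The "sync model" trade-off in the list above is often reasoned about with the classic quorum rule for replicated stores: reads and writes are guaranteed to overlap when R + W > N. This is a standard textbook rule, shown here as a minimal check; the parameter choices are illustrative.

```python
# The classic quorum overlap rule for a store with N replicas, write
# quorum W, and read quorum R: any read sees the latest write iff the
# two quorums must intersect, i.e. R + W > N.

def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    """True if every read quorum intersects every write quorum."""
    return r + w > n

# N=3 with quorum reads and writes: consistent, tolerates one slow node
print(is_strongly_consistent(3, 2, 2))  # True
# N=3 with W=R=1 for lowest latency: a read can miss the latest write
print(is_strongly_consistent(3, 1, 1))  # False
```

Lower W buys write latency at the cost of durability and read consistency, which is exactly why the sync model must be chosen against your SLA rather than defaulted.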

  • Latency when scaling - Distributed data layers have partitioning mechanisms that split data across resources using different available algorithms. Verify which fits you best and how to tune scaling to avoid latency impact.
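Partitioning algorithms differ precisely in how much data they move when you scale, which is where the latency impact comes from. The sketch below is a minimal consistent-hashing ring (an assumption for illustration, with a few virtual tokens per node and none of a real database's rebalancing machinery) showing that adding a node remaps only a fraction of keys instead of reshuffling everything.

```python
import hashlib
from bisect import bisect

# Minimal consistent-hashing sketch (illustrative, not production code):
# each node owns several virtual tokens on a ring; a key belongs to the
# first token clockwise from its hash. Adding a node then remaps only
# the arcs the new node takes over, not the whole keyspace.

VNODES = 50  # virtual tokens per node, smooths the distribution

def _h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(nodes):
    return sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(VNODES))

def owner(key, ring):
    tokens = [t for t, _ in ring]
    i = bisect(tokens, _h(key)) % len(ring)  # wrap past the last token
    return ring[i][1]

keys = [f"subscriber-{i}" for i in range(1000)]
ring3 = build_ring(["n1", "n2", "n3"])
ring4 = build_ring(["n1", "n2", "n3", "n4"])
moved = sum(1 for k in keys if owner(k, ring3) != owner(k, ring4))
print(f"{moved} of {len(keys)} keys moved")  # roughly a quarter, not all
```

A naive `hash(key) % node_count` scheme would remap most keys on the same scale-out event, so every rebalance becomes a latency event; verifying which scheme your data layer uses is the point of this principle.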

#4: Balance cost for best TCO

  • Decide the impact of lost resources - Distribute resources to ensure the best resiliency. Define the maximum tolerable impact (e.g., lose 30% of resources or a full zone while continuing to handle 100% of traffic). Define the RTO/RPO corresponding to resource restoration and align it to the capabilities the infrastructure provides.

  • Managed resources - You may need more resources than planned to achieve your SLA. Storage is a common example: when IOPS is the limiting factor, scaling may only be possible with dedicated, higher-tier storage, which drives up cost.
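The "define maximum impact" principle can be made concrete with a small sizing calculation. The numbers below (peak traffic, per-node throughput, survivable loss) are illustrative assumptions, not a recommendation; the point is that the resiliency target, not raw peak capacity, determines the node count and therefore the TCO.

```python
import math

# Worked sizing example: find the smallest cluster that still carries
# full peak traffic after losing a chosen fraction of its nodes
# (e.g. a 30% loss from a zone failure). All figures are illustrative.

def nodes_needed(peak_tps: float, tps_per_node: float,
                 survivable_loss: float) -> int:
    """Smallest node count serving peak_tps after losing
    `survivable_loss` (a fraction, e.g. 0.3) of the nodes."""
    n = math.ceil(peak_tps / tps_per_node)  # capacity with no failures
    # grow until the surviving nodes alone cover peak traffic
    while math.floor(n * (1 - survivable_loss)) * tps_per_node < peak_tps:
        n += 1
    return n

# 100k TPS peak, 10k TPS per node, must survive a 30% loss
print(nodes_needed(100_000, 10_000, 0.30))  # 15
```

Ten nodes cover the peak, but fifteen are needed to absorb a 30% loss without degradation; that 50% overhead is the cost side of the resiliency decision, and the RTO/RPO you define bounds how long you run in the degraded state.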

Data is king

Tim O'Reilly, founder of O'Reilly Media, sums up the new world well: "Who has the data has the power." With data your most valuable asset, proper management is vital to accelerate your business velocity. The first stage of your journey must include architecting your data management based on principles that underpin a cloud-native solution.

Today, no single database provides a robust enough solution for the full range of cloud-native data management scenarios. Thus, a CDP approach provides a strategy for managing telecom's varying data types across operational scenarios, ensuring your most valued asset is leveraged for your business success.

— John Giere, President and CEO of Optiva, Inc. ([email protected])
