Kubernetes 1.3 Steps Up for Hybrid Clouds
The new version of the open source container orchestration software adds support for deploying services across multiple cloud platforms, along with improved scaling and automation.
July 6, 2016

The Kubernetes community on Wednesday introduced Version 1.3 of its container orchestration software, with support for deploying services across multiple cloud platforms, including hybrid clouds.
Kubernetes 1.3 improves scaling and automation, giving cloud operators the ability to scale services up and down automatically in response to application demand, while doubling the maximum number of nodes per cluster, to 2,000, says Google (Nasdaq: GOOG) Product Manager Aparna Sinha in a post on the Kubernetes blog. "Customers no longer need to think about cluster size, and can allow the underlying cluster to respond to demand," Sinha says.
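The automatic scaling Sinha describes is exposed through the Horizontal Pod Autoscaler, which adjusts the number of running pod replicas based on observed load. As a minimal sketch, an operator might enable it from the command line like this (the deployment name `my-app` and the thresholds are hypothetical examples, not from the source):

```shell
# Scale a deployment between 2 and 10 replicas,
# targeting 80% average CPU utilization across pods.
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

# Inspect the autoscaler's current state and targets.
kubectl get hpa
```

Under this model the operator declares the bounds and target, and the cluster responds to demand on its own, which is the "no longer need to think about cluster size" behavior Sinha describes.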
Cloud operators can federate services across multiple local and remote clusters, for higher availability, greater geographic distribution and hybrid and multi-cloud scenarios. To that end, Kubernetes 1.3 supports cross-cluster service discovery.
Kubernetes 1.3 introduces Minikube, to allow developers to run Kubernetes clusters on their laptop that are API compatible with full Kubernetes clusters. Using Minikube, developers can test software locally and push the results live to full Kubernetes clusters.
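Because Minikube exposes the same API as a full cluster, the local-to-production workflow amounts to pointing `kubectl` at one context or the other. A rough sketch of that loop (image name and deployment name are illustrative assumptions):

```shell
# Start a single-node Kubernetes cluster locally.
minikube start

# Deploy and test against the local cluster.
kubectl run my-app --image=myregistry/my-app:v1
kubectl get pods

# When it works locally, switch contexts and deploy the
# same manifests to a full production cluster.
kubectl config use-context production
kubectl run my-app --image=myregistry/my-app:v1
```

The key point is that no manifests or commands change between the laptop and the production cluster; only the kubectl context does.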
The new version supports stateful applications such as databases, and emerging standards such as the Container Network Interface (CNI), the still-under-development Open Container Initiative (OCI) specification and the rkt container runtime from CoreOS.
And the software has an updated dashboard user interface, as an alternative to the command line interface.
Google plans to roll out support for Kubernetes 1.3 on its Google Container Engine -- its Kubernetes-as-a-service offering -- over the next week, says Google Senior Product Manager David Aronchick in a post on the Google Cloud Platform blog. The new version will allow users to double the number of nodes in a cluster, to 2,000, and run services across different availability zones.
Containers are hot technology as lightweight alternatives to virtual machines for running applications and services in the cloud. Unlike VMs, containers don't have the overhead of full operating system images. Cloud operators deploy containers in vast numbers and with rapid churn, which leads to management and orchestration challenges that vendors and open source communities are stepping up to meet.
As an alternative to Kubernetes, Red Hat Inc. (NYSE: RHT) supports OpenShift container management, upgraded last week to support a variety of scenarios, from a free version for developers to product versions for private and public clouds. (See Red Hat Adds Variety to Container Management and Red Hat Builds Out Enterprise Cloud Application Stack.)
Like Google Container Engine, Red Hat offers container management as a service; Red Hat's version is OpenShift Online.
And Docker Inc. last month introduced a new version of its container management software with built-in orchestration as well as support for Amazon Web Services Inc. and Microsoft Corp. (Nasdaq: MSFT) Azure. (See Docker Targets Google Kubernetes.)
Docker says its software "democratizes" orchestration: where Google delivers hyperscale performance and scalability but requires a specialized team to deploy and support it, Docker is designed to be accessible to a typical IT organization.
— Mitch Wagner, Editor, Light Reading Enterprise Cloud