Advancing the Telco Cloud: Q&A With VMware's Shekar Ayyar
Two decades ago, back in the last century, VMware snagged itself a unique position in the IT and networking technology market by developing and marketing code that would enable companies of all sizes to virtualize their servers.
A lot of people, myself included, expected other companies to quickly catch up and challenge VMware in the server virtualization space as a result of that time-honored process that enables interoperability, certification and competition -- the development of industry standards specifications.
But that didn't happen in VMware's case, partly because the company itself worked hard to keep expanding its product portfolio, first as a private company and then as part of EMC, but also because the industry abandoned the tried and trusted standardization process of the 20th century and relied instead on the open source community to develop virtualization (in its own inimitable, slightly floppy way).
The absence of robust, traditional standards-based competition left -- and continues to leave -- the space open for VMware to continue dominating the server virtualization market.
All of which has ended up making VMware, a company with its origins in the 20th century, a key player in the 21st century's communications networking sector.
I recently had the chance to catch up with one of VMware's most senior executives, Shekar Ayyar, the executive vice president and general manager of the company's Telco Group. He shared his thinking on the latest trends in distributed cloud, automation (and orchestration), and provided an optimistic assessment of the arrival of interoperable virtualization solutions (a somewhat ironic prognosis, given that it is the lack of standards-based interoperability that has allowed VMware to maintain its market-dominant position for the past 20 years).
Ayyar's insights, as outlined in the interview below, are an essential read for anyone involved in creating virtualization strategies for 21st century businesses.
Steve Saunders: Hi Shekar, what new trends are you seeing in the industry?
Shekar Ayyar: Overall, the world is increasingly about hybrid, connected cloud infrastructure. But within that, there are three trends. First, we're seeing an increased need from our customers to convert their private, behind-the-firewall data centers to a secure, self-service, automated model.
Second, they are trying to work out how to connect all of that to the services being offered by the public cloud players -- companies like Google and IBM, and, in China, Alibaba.
There's a third trend, and it's what I am focused on personally here at VMware, which is the transformation of what is happening in the domain of service providers -- the CSPs, telcos, cable companies, and so on. And what we see is the emergence of a new type of cloud -- a much more distributed cloud, where the resources are increasingly pushed out to the edge and exist in multiple points of presence.
SS: So what's the difference between the new telco cloud architecture and the old one, other than we've moved the resources to the edge of the network?
SA: So, first of all, the "old" architecture isn't really all that old, right? It's basically what passes for the current public cloud architecture today. But the most fundamental difference is that it's built on a model of consolidated capacity, where you solve the hardest problems by actually moving them to [the data center] where the greatest compute capacity resides.
The new architecture is different. Let's use the example of an AI/ML [artificial intelligence/machine learning] problem. As an application developer, you now have the ability to say, "My AI or ML problem has, let's say, 16 subcomponents, each of which has a different sensitivity to latency. If I have to actually come up with the 'golden rule', then I need to do a massive amount of compute on data. But if I simply want to make a comparison, for example identify if a face is actually a face, then that can be done pretty quickly and it can be done right here on my mobile device." So, that's an example of a problem you could actually part out [split into parts] and essentially make sure you have different levels of latency-sensitive compute in different points.
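The partitioning Ayyar describes can be sketched in a few lines: each subcomponent of a workload declares a latency budget, and a simple scheduler places it at the edge or in the central data center. The threshold, names and tasks below are illustrative assumptions, not any real VMware API.

```python
# Hypothetical sketch: place latency-sensitive subtasks at the edge,
# compute-heavy subtasks in the consolidated data center.
from dataclasses import dataclass

EDGE_LATENCY_BUDGET_MS = 50  # assumed cutoff; a real system would measure


@dataclass
class Subtask:
    name: str
    latency_budget_ms: int  # how quickly a result is needed


def place(task: Subtask) -> str:
    """Route a subtask based on its latency sensitivity."""
    if task.latency_budget_ms <= EDGE_LATENCY_BUDGET_MS:
        return "edge"        # e.g. "is this face a face?" on the device
    return "data_center"     # e.g. bulk model training on large data sets


workload = [
    Subtask("face_match", latency_budget_ms=20),
    Subtask("model_training", latency_budget_ms=60_000),
]
placements = {t.name: place(t) for t in workload}
print(placements)  # {'face_match': 'edge', 'model_training': 'data_center'}
```

In practice the decision would also weigh bandwidth, data gravity and cost, but the latency-budget split captures the core of the distributed-cloud argument.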
Another example could be some fairly sophisticated traffic or travel application that, again, does something similar. Or a third example could be a multiplayer game that is based on AR or VR or some combination of these things. So I think the pendulum is essentially swinging from concentrated capacity pools to highly distributed capacity pools. Our fundamental belief at VMware is that the world is going to be hybrid -- it is not going to be … sufficient to have just one of these types of architectures.
Will a distributed model negate the need for having consolidated data centers? The answer is no. [And it] doesn't mean that all applications will be complex enough to deserve spreading across all these types of clouds. I'm sure there will be certain applications that can be served just from a public cloud … other things may never move from private cloud alignments.
SS: Talking of applications, is that the overall biggest driver for all of this change?
SA: 5G is a big driver, and when I say 5G, I mean more than just the wireless protocol. 5G brings with it the notion of convergence, both wireline as well as wireless convergence. It also symbolizes convergence between data center IT-type applications and network-type applications or network functions. Now, some of these things will happen before 5G arrives -- clearly, there are people that are consuming things with virtual network functions already. But we think 5G will be a fundamental catalyst in driving this faster and better.
SS: Do you think Tier 1 service providers and CSPs need to change their network architecture in order to compete with the web-scale companies like Facebook and Amazon? Or will they not really be direct competitors when it comes to the next generation of cloud?
SA: Good question. You might end up having a Facebook partnering with an AT&T, or who knows what other types of complex relationships. Having said that, I think all of these guys will still be important participants. They will compete and they will also cooperate. For example, I don't see all cloud revenues just going to Amazon, Azure, Google, IBM and so on. The smart telcos will start taking a considerable share of next-generation cloud revenue. Conversely, will we get to a point where they completely disintermediate Amazon or Azure? No, that's not going to happen. So the hybrid world will require some nuanced partnering to figure out how it is that these guys can come together and deliver a single service. But it's certainly not going to be the sole domain of the public cloud guys anymore.
NFV, open source and automation
SS: You mentioned virtual network functions. Obviously, in this century, the communications industry turned the responsibility for developing interoperable NFV capabilities over to the open source community. In the area of SDN, that's worked pretty well. Conversely, with NFV, there hasn't been any success at all. No interoperability. A disaster. Do you see that changing or will things stay this way for the foreseeable future?
SA: I see these things as different problems. I would not put interoperability and open source and NFV in the same bucket. These are three very separate things. The reason I say that is there's this general conception that open source is free, and that open source means that somebody is going to set up interoperable infrastructure that actually works. None of which is actually true. But having said that, I would also say that interoperability is key and, yes, I do believe that it will happen.
In fact, there is no way it cannot happen. Meaning that people's APIs need to be interoperable -- people need to be able to plug and play components, and people need to move into a mode where they are saying "Look, I can compose an application with infrastructure components from A, B and C vendors or A, B and C operators."
And we can't end up in a situation where one group determines how interoperability happens for, say, telco cloud, and some other community decides it for public cloud, and yet another group defines it for the private cloud -- that would be disastrous.
Instead, what we are going to need is a way to build these applications on some common interface definitions that are interchangeable, interoperable and work seamlessly with each other. So, that, I think, is a requirement -- and I believe that that will happen. Because I think there is a time and a place for everybody to be able to contribute to the development of something, but there is also a point at which customers need to know that the interfaces are supported and reliable. So that helps determine the components that will benefit from being open source.
The other thing, of course, is having the right levels of software-defined abstraction. That's critical. The old world where people could say, "This is custom-built hardware and nobody can touch it and you don't get to open the box, only we get to do it" -- that's gone.
And so, complex as it sounds, I actually think a virtualized software-defined architecture is going to become more and more prevalent in all parts of the infrastructure stack. People just need to grow up and understand how to deal with it, and realize it comes with concessions… but that's the way the world is headed.
SS: But in the meantime, they can get everything which they need from VMware, right?
SA: Well, if I have my sales guy hat on, then the answer is "Absolutely yes." But in reality, whether you buy VMware or another vendor, the questions you need to ask are exactly the same: Do they give you the ability to run the applications that you want on their infrastructure? Is it agile enough for you to go and deploy your service today?
Then you have to take into account the path to the future, the evolution. There are lots of companies that can help run applications on VMware on a private cloud. In the public cloud space, most people would point to Amazon as the leader. But if your question is, who is it that is actually starting to transform the infrastructure for telcos from a software-defined standpoint, we would absolutely be one of the strong contenders. Finally, if your question is who can do all three of those, I would say that we're probably best positioned to do that.
SS: I can't argue with you there. And I also agree that given the complexity of NFV, people have to be very pragmatic. At the same time, I would argue that there is a sense amongst Light Reading's service provider audience that a lot of vendors overpromised on the rate at which it would be possible to create interoperable, heterogeneous, virtualized networks.
Another term being used a lot in the industry at the moment is "automation." It's become the new term for companies, and marketers, to throw into a conversation when they want to be relevant to service providers. Is it a term that you hear a lot at VMware? And what does it mean to VMware and your customers?
SA: I might modify that slightly, because we also hear the word "orchestration" a lot in the same context, right? The short answer is that people would like to do things more hands-free, with less workflow and easier approvals. So, we've seen that for a long time, right? Pretty much since the day the company was founded. And it's largely, I would say, still driven by data center and IT requirements.
So I think that is going to be a continuously evolving scenario, because there is going to be more and more that we have as a tool set to enable greater levels of automation. AI is the latest buzzword -- everybody now wants to do some AI in an attempt to automate better.
But if I step back, is there value in thinking about automation? Absolutely yes. You really want to simplify things so it's easier to stand up infrastructures, to deploy them, to have them operate in a relatively hands-free mode and then to have error correction and to close the loop in as efficient a way as possible.
Is it going to be completely human-free? Absolutely not. It is always going to involve some level of somebody observing a NOC and flagging things, or connecting the dots between on-prem and off-prem, or maybe between a company's employees and partners, things like that. But I think the idea of automation is very powerful. The toolset that enables automation gets increasingly valuable, advanced and sophisticated.
So I think automation will improve. It's not going to be a one-size-fits-all or a magic button that you just press and everything gets automated, but I think it is an important concept. I would say that the same thing applies to orchestration, where the term is so loosely used but where there are, in fact, different levels of orchestration -- at some level you actually want to manage the underlying infrastructure, at another level you want to manage the service, and at another you want to manage the billing infrastructure that gets connected.
And sure, it would be nice to have a single solution that just solves everything in one shot, but I believe that is going to be an evolutionary path, that you are going to need to get used to the idea that two or three things need to talk to each other in order to orchestrate something end-to-end.
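The layered orchestration Ayyar outlines can be sketched as separate orchestrators for infrastructure, service and billing that hand off to each other to fulfill one end-to-end request. The layer names and steps below are assumptions chosen purely for illustration.

```python
# Illustrative sketch: three orchestration layers that must talk to each
# other, rather than one magic button that automates everything at once.

def orchestrate_infrastructure(request: dict) -> dict:
    request["vms_provisioned"] = True       # manage the underlying resources
    return request


def orchestrate_service(request: dict) -> dict:
    assert request.get("vms_provisioned")   # depends on the layer below
    request["service_active"] = True        # manage the service itself
    return request


def orchestrate_billing(request: dict) -> dict:
    assert request.get("service_active")    # depends on the layer below
    request["billing_started"] = True       # connect the billing system
    return request


def end_to_end(request: dict) -> dict:
    """Chain the layers: coordinated hand-offs deliver one service."""
    for layer in (orchestrate_infrastructure,
                  orchestrate_service,
                  orchestrate_billing):
        request = layer(request)
    return request


print(end_to_end({"customer": "example"}))
```

Each layer checks that the one beneath it has done its job, which mirrors the point that two or three systems have to talk to each other before anything is orchestrated end-to-end.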
I think the industry, ourselves included, will be working to make those hand-offs, loops and component requirements simpler and easier, but it's not going to be an instant fix.
— Steve Saunders, Founder, Light Reading