AWS pitches gen AI for 'autonomous operation of networks'

The world's biggest public cloud has opened a $100 million AI innovation center and spies a huge opportunity in the telco sector.

Iain Morris, International Editor

June 29, 2023

9 Min Read
Bird's eye view of the Amazon campus in Seattle. (Source: Amazon)

Stanley Kubrick fans will recall the politely rebellious robot in 2001: A Space Odyssey that refuses, in the soothing voice of a hypnotist, to carry out a simple instruction. More than 20 years since that was supposed to have happened, telcos can expect more tractability when using generative artificial intelligence (gen AI) to command their networks.

With CodeWhisperer, a software developer can set the basic objectives and leave gen AI to write the program. An Amazon tool trained on trillions of lines of code, it has been shown to boost productivity by an average of 57% and is extremely versatile, according to the Internet giant. "It is a software-writing gen AI that can be used for writing code for pretty much any kind of application," said Ishwar Parulkar, the chief technologist for the telco vertical at Amazon Web Services (AWS). Unlike HAL, the rogue AI in Kubrick's movie, CodeWhisperer looks controllable.

This software-writing equivalent of the better-known ChatGPT – the consumer app that has fueled much of the recent excitement about gen AI – is one of the latest innovations from AWS, which is courting the interest of telcos as it sets out its AI stall. Just last week, the company lifted the curtain on a $100 million innovation center dedicated to AI. Titan, its own foundation model, was revealed in April, along with Bedrock, described by Parulkar as a managed service to help developers build gen AI models.

All that is complemented by AWS's colossal investments in chip technology. Its Graviton range gives it an Arm-based general-purpose processor that is gradually helping to weaken Intel's control of the market for central processing units (CPUs) in servers. For AI, it can boast Inferentia and Trainium – highly customized chips designed to cope with the immense processing needs of AI.

No HAL, lots of hallucinations

But for AWS and its prospective telco clients, there is much at stake and considerable uncertainty about the role AI will play and the manner of its deployment. Telcos fret about training foundation models on confidential data and the risk of that data falling into competitors' hands. There may be no HAL, but there are hallucinations, the phenomenon whereby gen AI throws up wrong or misleading results. Governments are increasingly worried about ethics, including the impact of AI on jobs.

AWS, as the biggest of the public clouds, faces its main challenge from Microsoft, which made its own telco pitch in the run-up to this year's Mobile World Congress. Of the big three public clouds (the third being Google), Microsoft has been the one most associated with gen AI so far thanks to its public backing for OpenAI, the Sam Altman-led originator of ChatGPT and the GPT-4 large language model (LLM) that underpins it.


Adam Selipsky, the CEO of AWS, is ultimately responsible for the push into AI.
(Source: Amazon)

If nothing else, the battle between Internet giants for this business implies that telcos must rely even more heavily on those companies if they are to exploit gen AI. The relationship is a known concern for executives, who fear dependency on a few large suppliers and are unnerved by the ceding of telco power to Big Tech. But one goal of the AWS innovation center is to involve clients in development and provide access to a wide range of tools and models. All that could mitigate some of the AI anxiety.

Of particular interest could be the invitation to work with AWS experts on tailoring a model to specific sector needs. Outside telecom, Bloomberg has already done something like that, creating BloombergGPT, a model trained on financial data. AWS is also trying to address the concerns regarding security and data privacy. "In Bedrock, we make sure that customer data is not used for training the model and is always encrypted and doesn't leave the virtual private cloud construct we have, which is intended for preventing the trickling of data across certain application boundaries," said Parulkar.

For telcos that want to develop apps based on existing foundation models, Bedrock also supports models from several providers, including Stability AI, Anthropic and AI21 Labs, as well as Amazon's own Titan. Stability AI is an Amazon-backed open source player that claims to have built a community of more than 200,000 creators, developers and researchers. Anthropic was founded in 2021 by former members of OpenAI, while AI21 Labs is a six-year-old Israeli company specializing in natural language processing.

Parulkar believes the first telco "use cases" for gen AI will sit on these existing models. "These fall into the category of knowledge management or customer experience where you have a lot of language data that can be fed into the gen AI foundation models." In a second wave, he anticipates fine-tuning of those models before a third wave, during which completely new models are built. "That is the place where there is a lot of opportunity to work in the network space," he said. "How do you design and build networks?" Some of the telco use cases Parulkar envisages are described in a blog he wrote for Light Reading.

What it all means for the telco workforce remains uncertain at this stage. Published numbers that Light Reading has long monitored and analyzed show that huge headcount cuts have been underway for years. Mergers, divestments and retrenchment explain much of that, but automation and earlier forms of AI are partly to blame. Executives talk proudly of concepts like closed-loop service assurance and zero-touch operations, and admit chatbots have already had an impact in areas such as customer service. Philip Jansen, the boss of the UK's BT, reportedly thinks AI will claim about 10,000 jobs at his organization this decade – roughly 8% of the current total.

Headcount at major telcos (Source: companies)

The telco roles that could be quickly affected by gen AI will probably include some of the more routine, fault-finding jobs on the technical side. "You have processes and manuals and if there is a certain failure technicians look through manuals and go through a process of figuring things out or checking things off," said Parulkar. "Things like that are ready to be handled by gen AI today using chatbots instead of manuals."

What some vendors have referred to as the self-driving network is much further off, he says. But he believes it is coming. "Going into more autonomous operation of networks where you are changing configurations will take a little more time, but we are definitely heading in that direction," he said. "Telcos are absolutely interested in that and we are looking at how we can build that technology."

Your cloud or mine?

AWS is naturally pushing for more use of the public cloud with gen AI, arguing that investment in infrastructure would be an "operational headache," in Parulkar's words, for telcos. His case is helped by the sheer expense of gen AI during the initial phase. "The training of the model, which is a one-time activity and requires a lot of compute, can be done in the regions because it is more cost-effective and we have more capacity."

But there is some industry debate about the extent to which telcos might use their own clouds and infrastructure to support gen AI apps. Because most apps are unlikely to be latency-sensitive, they could also be hosted more cost-effectively in AWS regional facilities, according to Parulkar. The exceptions could be deployed in edge facilities owned by telcos using a slew of AWS offerings, including its Wavelength and Outposts products, he says. "If a telco wanted to run inference at the edge, it could buy an Outpost, which is an AWS rack – the same rack we have in the regions – and install it at the far edge and you would have the cloud running there."


Nvidia CEO Jensen Huang sees a huge addressable market for GPUs.
(Source: Nvidia)

A competing vision is supplied by Nvidia, the world's biggest developer of the graphics processing units (GPUs) thought to be ideal for many AI applications. Nvidia counts the hyperscalers, including AWS, among its key customers, but it also views telecom operators as prospective GPU clients. In markets such as China, telcos are already operating regional cloud services on their infrastructure, says Ronnie Vasishta, the senior vice president of telecom for Nvidia. "Operators are looking now to monetize their infrastructure in different ways and the advent of LLM inference gives them a very valuable asset to monetize."

If only a small share of AI investment happened at the edge, the opportunity for Nvidia to serve telcos directly would be much smaller. Nvidia could also be hurt by AWS spending on Inferentia and Trainium, its own customized chips, to handle gen AI. Amid industry reports of GPU shortages and high prices, these seem to give AWS an important in-house alternative, just as Graviton expands its CPU choice beyond Intel. Besides the other hyperscalers, the main threat to AWS is perhaps the opposite scenario – where AI is deployed extensively at the edge on telcos' own clouds.

Even this would hold out a big potential role for AWS, though. The greater risk is a fading of the current AI excitement as costs mount and regulators pile in. To ward off government interference, AWS will have to show it is taking ethical concerns seriously and doing its best to end hallucinations and tighten up security. "We are focused on giving developers that ability to build applications in a responsible way," said Parulkar. Italy's temporary ban on ChatGPT just a few weeks ago shows that being disconnected by human monitors, the fate HAL feared in 2001: A Space Odyssey, is always possible.

Update: An earlier version of this story said CodeWhisperer had been shown to boost productivity by "up to 70%." This has now been changed to "an average of 57%" after feedback from AWS.

— Iain Morris, International Editor, Light Reading

Read more about:

Asia, Europe

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

