September 25, 2023
"Copilot" is suddenly a popular word to humanize "generative" artificial intelligence, the incarnation that has sparked new fears about job losses and murderous robots. It was almost a standard response to questions about genAI's potential impact on the workforce at this year's Digital Transformation World (DTW) event in Copenhagen. Microsoft's code-writing genAI is even called GitHub Copilot – a truly bizarre moniker that to British ears sounds like an insult shouted in a plane.
The idea is partly to make genAI seem more of a smiling assistant than a sinister force, or at least more of a useful tool. It's not genAI that will replace you, say the genAI salespeople. It's the person who wields genAI when you don't or won't. But the copilot comments also reflect genAI's current limitations. Prone to making stuff up, genAI could never be trusted on full autopilot. And there are arguably even greater concerns among telcos eyeing the tech.
Avoiding it is now almost impossible, judging by this year's DTW. Frowned on by some, herd mentality is encouraged in the telecom sector, where clubs, standards and interoperability are all deemed necessary to avoid fragmentation and survive the Big Tech onslaught. A stampede in the direction of a new buzzword every few years is routine. Of course, genAI is no mere buzzword, said various executives in Copenhagen – an assertion that was made about every previous buzzword.
Speaking the right language
Perhaps the biggest concerns are about who builds, owns and supports the tech. And first off are the large language models (LLMs) themselves. The best known include the GPT (generative pre-trained transformer) range from OpenAI, the research lab that has reportedly received $10 billion in Microsoft funding. Alongside these are the wackily named Llama and Llama 2, open-source models released by Meta this year. Others include the more conventionally christened Claude, the product of a startup called Anthropic. None of these LLMs, though, was built with telecom operations in mind.
Untrained on much telecom data in their original guise, they are of limited usefulness when it comes to the specifics. The answer is adaptation or fine-tuning to make them more telco-relevant, which could be done either by a telco or a third party such as Amdocs, a big vendor of telco software that has already part-built an LLM branded amAIz, using Microsoft and OpenAI technology. "We are not the engine," said Gil Rosen, the chief marketing officer of Amdocs. "Basically, we are training the large language model on our telecom data set and creating our own version."
There are many sensible reasons why Amdocs rather than a telco should do this. "Honestly, it requires more than a single telecom data set," said Rosen. "And that is why it will only work in an alliance or coming from a company like us."
With their obvious focus on serving customers and maintaining networks, telcos remain relatively inexperienced and unskilled in software. Building and training LLMs would gobble telco resources and put a further squeeze on profit margins already under pressure. "It is a lot of money to build an LLM," said Danielle Royston, the CEO of Totogi, a small vendor in the market for business support systems.
But some telcos are dipping their toes into the water, with Telus one of the most outspokenly enthusiastic about in-house development. "This is a strategic imperative for us – to both understand generative AI and to have that muscle around it," said Hesham Fahmy, the Canadian operator's chief information officer.
Among other things, Telus has been experimenting with off-the-shelf platforms including Stability AI's Stable Diffusion and Google's PaLM, as well as Llama and OpenAI's models. Using an in-house team of data scientists, it is effectively playing these models off against one another to figure out which is most suitable where. "It is less about the foundation model and the size of the model," said Fahmy. "You can get better responses by taking a smaller model and tuning it with your own embeddings."
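Playing models off against one another in this way usually means running the same questions through each candidate and scoring the answers against a reference set. The sketch below illustrates the idea only; it is not Telus's actual tooling, and the questions, model names and answers are all invented. It uses simple token-overlap similarity from the Python standard library as a stand-in for a real evaluation metric.

```python
# Illustrative model "bake-off" harness. All data and model names are
# hypothetical; in practice the outputs would come from live model calls
# and the metric would be far more sophisticated.
from difflib import SequenceMatcher

# Reference answers for a small, invented telecom Q&A set.
references = {
    "What does BSS stand for?": "business support systems",
    "Which vendor sells most AI training chips?": "nvidia",
}

# Canned responses standing in for two candidate models.
model_outputs = {
    "model_a": {
        "What does BSS stand for?": "business support systems",
        "Which vendor sells most AI training chips?": "nvidia sells most ai gpus",
    },
    "model_b": {
        "What does BSS stand for?": "base station subsystem",
        "Which vendor sells most AI training chips?": "nvidia",
    },
}

def score(model_name: str) -> float:
    """Average string similarity between a model's answers and the references."""
    answers = model_outputs[model_name]
    ratios = [
        SequenceMatcher(None, answers[q].lower(), ref.lower()).ratio()
        for q, ref in references.items()
    ]
    return sum(ratios) / len(ratios)

# Rank the candidates, best first.
ranking = sorted(model_outputs, key=score, reverse=True)
print(ranking)
```

A fine-tuned smaller model can then be slotted into the same harness and compared on equal terms with the larger foundation models, which is essentially the exercise Fahmy describes.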
Taking charge of fine-tuning may hold several attractions for operators. For one thing, handing responsibility to a third party would naturally increase the risk of confidential company information being used to train a model eventually deployed by rivals. For another, a telco with its own resources would be less reliant on any vendor for critical software and the expertise around it.
"The lock-in risk they will be most concerned about is when they are considering using LLMs for more network-touching use cases," said Chris Silberberg, a research manager at IDC, in emailed comments. "I can't see there being more than a handful of LLMs attempted to be built to support these use cases, so the attendant lock-in risk is greater, but at this time I do not see this as a great concern for telcos experimenting in this space." The bigger issue today, he thinks, is finding use cases in the network domain.
A related worry is about the ownership of LLMs. The amAIz product developed by Amdocs would seem to include intellectual property (IP) licensed from OpenAI as well as the company's own. In some cases, and especially when there has been fine-tuning by more than one party, the lines separating one company's IP from another's may start to blur. It is shaping up to be a glorious future opportunity for lawyers.
"My understanding is that when we are using models that are licensed, we are always using them under the license that covers them," said Fahmy. "When we're using OpenAI, we're doing it through Azure. In that case, we create our OpenAI tenant in Azure and under the terms of the agreement anything you do there is yours."
In the meantime, the fine-tuning of LLMs looks set to be fiercely contested by vendors and other stakeholders, according to Roz Roseboro, a principal analyst with Omdia (a Light Reading sister company). Telcos store and generate immense volumes of data that will need cleaning up and putting into consistent and intelligible formats before they can be processed. "This is where the smarts are," said Roseboro. "The decisions being made there determine what comes out and that is the secret sauce."
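The cleanup Roseboro describes is mostly about reconciling records from different systems into one consistent schema. A minimal sketch of the idea, with entirely invented source systems and field names:

```python
# Hypothetical example: call records arrive from two imaginary source
# systems with different field names, timestamp formats and units, and
# are normalized into one schema before any model sees them.
from datetime import datetime, timezone

def normalize_legacy(record: dict) -> dict:
    """Invented legacy billing export: epoch seconds, duration in seconds."""
    return {
        "msisdn": record["phone"].replace("-", ""),
        "started_at": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
        "duration_s": record["secs"],
    }

def normalize_modern(record: dict) -> dict:
    """Invented newer feed: ISO timestamps, duration in minutes."""
    return {
        "msisdn": record["subscriber"],
        "started_at": record["start"],
        "duration_s": int(record["minutes"] * 60),
    }

raw = [
    ("legacy", {"phone": "45-1234-5678", "ts": 1695600000, "secs": 90}),
    ("modern", {"subscriber": "4512345679",
                "start": "2023-09-25T00:05:00+00:00", "minutes": 2.5}),
]

normalizers = {"legacy": normalize_legacy, "modern": normalize_modern}
clean = [normalizers[source](rec) for source, rec in raw]
print(clean)
```

The choices baked into those normalizers, such as which fields survive, how units are reconciled and what gets discarded, are exactly the "secret sauce" decisions that determine what a downstream model can learn.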
Rather than buying from an Amdocs, an operator could always work on this with a professional-services company such as Accenture, said Roseboro. An alternative to that is in-house training or recruitment to support genAI. In some fields, what makes genAI look different and exciting from a skills perspective is the ability to shape it through natural language.
"Analytics has been around forever, but you had to speak the system's language," said Roseboro. "Now you can speak your own and the system will know what you mean. It is like the democratizing of insights. It is intent-based. With genAI, you don't have to tell it how to go about doing something but just the answer you are looking for."
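The intent-based idea can be illustrated with a toy example: the user states the answer they want, and a layer in between (here a trivial rule-based stand-in for an LLM) decides how to compute it. All names and data below are invented.

```python
# Toy illustration of intent-based analytics: the user never specifies
# how to filter or aggregate, only what answer they want. A rule-based
# function stands in for the genAI layer.
calls = [
    {"region": "north", "dropped": True},
    {"region": "north", "dropped": False},
    {"region": "south", "dropped": False},
]

def answer(intent: str) -> float:
    """Map a plain-language intent to a computation over the records."""
    if "drop rate" in intent and "north" in intent:
        north = [c for c in calls if c["region"] == "north"]
        return sum(c["dropped"] for c in north) / len(north)
    raise ValueError("intent not understood")

print(answer("What is the call drop rate in the north region?"))
```

With genAI, the hand-written rules above are replaced by a model that interprets arbitrary phrasing, which is what makes the approach feel like "the democratizing of insights."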
Going deeper into the cloud
To some observers, but not everyone, the use of genAI also implies much greater dependency on the public clouds. This is partly because the training of LLMs seems to require the sort of computing resources found only in a public-cloud facility seasoned liberally with graphics processing units (GPUs), the expensive chips – sold mainly by Nvidia – that have turned out to be ideal for AI. It's a perception some public-cloud spokespeople are keen to reinforce.
"You could run some models on the edge, but you are still processing probably more at the core center," said Monte Hong, the worldwide communications industry business strategy lead at Microsoft. "In general, they do require a lot of compute. It's not necessarily in transmittal of data across a wide network. This is more ingesting data and running the computation on all the values you see – call summarization, running the large language models – which means there is generally a lot of large compute."
Telcos largely concede that during the training phase an LLM would probably have to live inside a facility owned by AWS, Google or Microsoft. "Training needs very bespoke GPUs and a huge number of them and it is going to be very hard to build a business case for having all those GPUs," said Fahmy. "Second, I don't think we have the buying power to get what is such a scarce resource if you're competing with the Googles of the world and the Azures of the world."
Where there is some divergence of opinion is on what comes next. Scott Petty, the chief technology officer of Vodafone, recently played down the business case for hosting LLMs inside Vodafone's own premises. But Fahmy believes some LLMs could eventually reside in a smaller data facility or even a user device. "Training is different from running," he said. What could justify this are use cases where there is a need for low latency, a measure of the round-trip time for a data signal on the network.
Even so, one of the big conversations in telecom is still about growing reliance on public clouds, or at least public cloud providers, to support IT workloads. A service AWS is now pitching would allow an operator to use multiple LLMs for network operations, all hosted on a machine-learning (ML) platform dubbed SageMaker. Such moves are bound to make some telco executives even more nervous about lock-in – the difficulty of moving from one cloud to another.
Totogi's Royston, an AWS and SageMaker customer, thinks telcos have no choice. "Every year there is more technology coming out of the hyperscalers, so are you going to do LLMs and big genAI stuff?" she said. "Will you buy $100,000 Nvidia servers that are sold out till next year? Where do you get the compute for that? Where do you get the ML capability? It is all in the public cloud."
In the run-up to this year's DTW, the Next Generation Mobile Networks Alliance (NGMN), one of those famous telco clubs, published a manifesto about the need for genuinely "cloud-native" software. Partly, that means having software that is not anchored in any single cloud environment. But a huge amount of engineering work is still needed to port workloads between hyperscalers, according to Vodafone's Petty. Telcos are unlikely to be overjoyed if moving around critical LLMs and genAI applications also turns out to be difficult, if not impossible.
"The conversations I'm having with telcos indicate they don't see owning the infrastructure for LLMs as a core competency they need to acquire, so naturally they will be reliant on partners, particularly cloud ones, to support their genAI developments," said IDC's Silberberg in his email.
Omdia's Roseboro agrees that genAI could be yet another thing that binds a telco to the hyperscalers. "They're gaining more and more power and influence and being a more critical path." While porting a genAI application from one hyperscaler platform to another will be possible, it is unlikely to be straightforward, she said. "The software may be optimized to run on AWS infrastructure. Once it is somewhere, it is hard and disruptive to move it."
GenAI has barely advanced into the telecom sector so far. But as telcos continue to experiment with it and consider how it can be deployed, they will need to think carefully about ceding further control of technology development to others. The copilot descriptor clearly downplays the risks, but genAI is unlikely to stay in the assistant's chair forever.