The chatbot that has taken the world by storm is now being assessed for use in a range of telco operations.

Iain Morris, International Editor

February 22, 2023

Be afraid – ChatGPT has caught the eye of telecom

He's an energy hog who makes things up, plagiarizes your work, encourages you to leave your wife and even fantasizes about stealing nuclear codes. Despite all this, the investor community seems to think he's the most exciting thing to emerge from the technology sector since the early days of the Internet. Businesspeople swoon at the mention of his name. On LinkedIn, they gabble about the difference he will make. If he were standing in front of you now, you'd be tempted to punch him.

He is not really a he, of course, but the most talked-about artificial intelligence (AI) ever. Created by Sam Altman and his team of brainiacs at OpenAI, and gifted billions by Microsoft, ChatGPT was first allowed out of its cage late last year. Since then, it has triggered mania. A search on Google this morning generated 725 million results, and the figure surges daily. In a sign of what's to come, it is now being assessed for use in telco operations, Light Reading has learned.

Figure 1: (Source: Pitinan Piyavatin/Alamy Stock Photo)

ChatGPT is best described as a large language model, trained to predict deleted words in a sequence of text through exposure to vast libraries of written data. Do this long enough and the AI can eventually formulate full sentences, paragraphs and even essays. One of OpenAI's innovations was to insert humans into this process as fine-tuners of the model. Staff would both write and answer questions during a training phase before the AI was allowed to function independently. With Microsoft money to spend and resources to use, OpenAI eventually produced a "chatbot" of apparently frightening sophistication.
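For readers curious about the underlying idea, the "predict the missing word" objective can be illustrated with a toy sketch. The snippet below uses a simple bigram counter in Python – nothing remotely like the scale or architecture of ChatGPT, and the tiny corpus is invented purely for illustration – but it captures the principle that a language model learns which words tend to follow which:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the "vast libraries of written data".
corpus = (
    "the network is down the network is slow "
    "the agent resets the router the agent checks the line"
).split()

# Count which word follows each word: a bigram model,
# the simplest possible "language model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("network"))  # "is" - the only word ever seen after "network"
```

Scaled up by many orders of magnitude, with neural networks replacing frequency counts and human feedback shaping the answers, this word-prediction principle is what produces ChatGPT's fluent output.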

Telco sources now believe there are several potential uses for ChatGPT in their operations. The most obvious is in call centers, where it could feasibly "eavesdrop" on conversations between a customer and agent. ChatGPT would probably have no interaction with the customer, instead providing "expert advice" to the agent for passing on.

Hallucinating robots

The reason for that, and for much of the general wariness about ChatGPT and related models, is the phenomenon of "hallucinations." Put simply, ChatGPT has been shown in countless examples to provide wrong information, be inventive or even sound like a cross between Glenn Close's character in Fatal Attraction and Osama bin Laden. Exposing customers to a homewrecking chatbot with terrorist ambitions is probably unwise. The idea is that a human agent can act as a necessary filter.

One telco is also investigating the use of ChatGPT or an alternative large language model by field service technicians. Trained appropriately, it could advise staff in "complex environments" where a technical problem is unusual and only a few people would know the answer. It could also be used for explaining to a programmer what a piece of software code is doing.

But in neither case do existing large language models, ChatGPT included, look sufficiently foolproof. That is no guarantee they will be kept out of operations until they can be fully trusted (assuming that can ever happen). The danger is that some telco under cost pressure or seeking a competitive advantage deploys one of these models and ultimately finds itself in trouble. Regulators seem largely oblivious to the whole field of generative AI, which has materialized faster than Arnold Schwarzenegger's Terminator beaming back from the future.

Doomsday scenarios are dismissed by AI critics who routinely point out the shortcomings of ChatGPT, Google's Bard rival and other models. But this raises a different concern – not of AI outperforming humans but of a willingness to use error-prone AI understood by few people besides its inventors. In his Radio Free Mobile blog, analyst Richard Windsor writes that these systems "are black boxes as their operators have no idea how they work as all they can see are the inputs and the outputs."

AI overlords

Their widespread adoption would carry other problems, too. Automate tasks through AI and people will eventually lose the know-how and skill they needed to carry them out, just as a muscle will atrophy if never used. That could be an issue if technical problems surface, and society already looks dependent on a host of complex technologies built by a few giant firms. Only a handful of Chinese and US Big Tech companies have the resources to develop and train large language models like ChatGPT. While some "open source" alternatives have appeared, they are far less advanced than ChatGPT, according to telco sources.

The training of large language models is hugely expensive, too, which is why only companies with trillion-dollar valuations and state-of-the-art data centers can afford it. Even after this process is complete, ChatGPT remains a power hog, gobbling electricity as it processes queries. This is partly why Google's share price has suffered on news of ChatGPT. The fear is that Microsoft's inclusion of ChatGPT in Bing, its search engine, will force Google to respond with similar functionality. Higher computation needs would then drive down the profit margin Google generates per query.

Where data is stored remains a massive issue for European companies operating in highly regulated markets, which could slow adoption of large language models in the region. There is also telco nervousness about intellectual property infringement – if ChatGPT feasts on copyrighted company information for training purposes, could that information show up in responses to queries from outside the company?

None of these are insurmountable problems, and the attractions of ChatGPT as a helpful tool for customer care and technical support are plain. Right now, though, it is hard to disagree with Windsor's assessment that "these machines are not nearly as capable as their proponents would have us believe." Reliance on them would be a risky affair.

— Iain Morris, International Editor, Light Reading

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
