Ericsson house of AI horrors provokes hope and fear

In its latest ConsumerLab report, Ericsson unearthed some terrifying visions of an AI-powered 2030s.

Iain Morris, International Editor

June 6, 2024

A pixellated dress put together by AI
AI-generated fashion, as imagined by Ericsson. (Source: Ericsson)

It's right to be "hopeful" about artificial intelligence (AI), just as it's right to be hopeful that another Chernobyl won't happen. But it's probably wise to be at least a bit anxious about a technology Russia has now used to create a deepfake Tom Cruise and have him slag off the organizers of the upcoming Olympics, from which Vladimir Putin's empire is banned. The real Cruise probably isn't preparing his legal team for an assault on the Kremlin.

There are far worse and less unintentionally comedic abuses of AI, which has triggered a new gold rush among technology players – meaning just about everyone, these days – despite the obvious ethical and existential concerns. The highest form of intelligence on Earth (us) hasn't exactly been an unalloyed positive for lower forms, excluding pampered poodles and overfed cats. If artificial intelligence eventually lives up to its name, becoming more than just a glorified search engine and serial plagiarist, precedent suggests its main interest won't be the welfare of humanity.

Nobody seriously expects a shotgun-blasting Schwarzenegger cyborg to arrive and demand their clothes, boots and e-scooter anytime soon. But science fiction's overwhelmingly dystopian take on AI – which surely reflects people's worries – makes Ericsson's latest ConsumerLab report look counterintuitive at first glance. It is also about as bizarre as Scandinavia gets.

Ericsson spoke with 6,510 people aged between 15 and 69 to gauge their views about "foundational AI technologies" such as generative AI. Based on feedback, it then divided people into two groups: the "AI hopefuls" and the "AI fearfuls." The hopefuls reckon AI is "fun" and apparently feel "joy, hope or excitement" when they think about it. The fearfuls imagine dole queues, automated dentistry and the subjugation of man. (Actually, Ericsson boringly said they feel "fear or anxiety," but we colored in the picture.) AI hopefuls outnumbered AI fearfuls by 51% to 34%, presumably meaning 15% of respondents were utterly befuddled.

Darwin Award candidates

The big reason for that imbalance is likely to be that Ericsson spoke only with urbanites it classes as "early adopters." These are the guinea pigs for inventors, the people who volunteer for robot surgery and brain implants, who clang into lampposts wearing Apple's VR ski masks, who implode at 3,000 meters below sea level in submersibles made from tennis-racket materials and steered with games consoles. They're the lead candidates for the Darwin Awards, the ones who were queuing up to use ChatGPT, and they account for only about 13.5% of the population, by reliable estimates.

The real takeaway? Even among this group of tech fanatics, only a slim majority fell into the "hopeful" camp, and more than a third were envisaging AI-geddon. Outside the early adopters, a decent wedge of the population lumps AI into the same category as nuclear war, catastrophic climate change and fatal airborne diseases. We might see those. The hope is that we won't. Ericsson, though, sounds all in. "Consumer trends unveiling the AI-powered future," it headlined its report.

Whether intended to be good or bad, the AI examples given by Ericsson often seem to evoke some nightmarish Terry Gilliam movie. "In the 2030s, humans will use plastic surgery to get the right AI-generated beauty standard look, according to six in ten," it said. Who needs a deepfake Tom Cruise when there could be millions of tech-sculpted doppelgangers, vain 60-year-olds who lack the Scientologist's gift of eternal youth but can now chisel his features into their flesh after appropriate AI guidance?

For anyone with the willies ahead of their wedding day, AI can help them decide, according to 50% of respondents, who apparently "think people will simulate their marriages for future changes or divorce." But should the relationship last, don't expect any offspring to win a Pulitzer, because 74% of respondents "foresee AI assistants in parenting boosting children's technical skills but diminish [sic] creative/emotional intelligence."

Fake AI

Of course, none of this is very likely – at least, not by the 2030s. In the same week Nvidia's market cap crested $3 trillion, making it the second most valuable company in the world (behind Microsoft), the global conversation about AI has become as hysterical as a Salem witch trial, even if it has so far left fewer people dangling from branches or crushed beneath rocks.

As then, a few sensible voices can still be heard. One of the most compelling belongs to Richard Windsor, a former Nomura analyst who now has his own telecom-focused analyst firm called Radio Free Mobile. In a series of blogs written since the emergence of ChatGPT in 2022, he has continued to point out that AI's large language models do not understand causality and cannot reason – key traits of human intelligence. Every deep-learning technology Windsor has analyzed since 2016 falls into this category, meaning "they are simply very advanced statistical pattern recognition systems," he wrote in a blog last month.

In other words, anything described as "artificial intelligence" today probably isn't. The vested interests making billions out of AI-branded technologies have tacitly acknowledged this by coining "artificial general intelligence" to describe what would be the real deal. Yet it's pure science fiction. And creating a superior intelligence sounds about as desirable as an alien invasion, which is effectively what it would be. Naked ambition stops Sam Altman and his gang from worrying about such trifles. It's up to the rest of us to stay fearful and hope for the best.

Read more about: AI | Europe | Asia

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

