It is time to stop describing technologies as artificially intelligent when they are no such thing.
During the Internet bubble that inflated at the turn of the millennium, tagging ".com" onto the end of a company's name was guaranteed to boost its valuation by a few zeros. Organizations whose online business plans seemed to have been drafted in a coffee break did so. At a few brick-and-mortar retailers rebranding amid the zeitgeist, the only real web dangled from a dusty corner, catching flies.
But something much worse is happening today with what the world calls artificial intelligence (AI). To people raised in the latter half of the twentieth century on a diet of thought-provoking science fiction, AI was scarily Promethean and ethically questionable. It was the point at which machine became indistinguishable from man unless its inner circuitry were exposed. Harking back to the Frankenstein tradition, Blade Runner gave us a blond-haired giant spouting poetry through a shimmer of tears and a detective unsure if he's human. Terminator gave us Armageddon.
What we have instead ended up with is very advanced pattern recognition that can assemble words, computer code, musical notes and images without understanding what a phrase as basic as "the cat sat on the mat" actually means. It is undoubtedly useful and could result in mass redundancy (there are signs this is already happening in some professions). For that reason, it raises ethical questions that are routinely ignored as the world's almost exclusively male tech bosses accuse anyone with qualms of being risk averse. But it is not the slightest bit intelligent in the human sense, as the phrase "artificially intelligent" would imply.
Dumb and dumber
Nevertheless, investors remain enticed by the charlatan, and everyone suddenly has an AI strategy, just as everyone became a dotcom in 2000. Forty years ago, proof of AI would probably have had to involve a rigorous application of the Turing test, or even some emotional encounter on a rain-drenched rooftop between human and robot. These days, a fake video of Donald Trump dancing the Limbo at Mar-a-Lago will do. How is this significantly different from those old joke photoshopped images splicing your neighbor's head and Arnold Schwarzenegger's body? If a paragraph of text generated by a computer after a human prompt is evidence of AI, was a text-generating Google search in the early 2000s an example of it long before the expression had become everyday currency?
Technologists may argue this is quibbling and that AI is acceptable terminology because the applications in question exhibit some human traits and cognitive abilities. But imprecise language can be dangerously misleading. The advocates are not helped by dystopian visions of AI in novels and movies that cause societal unease when ChatGPT and its brethren are a much lower-grade threat. And by conceiving a modified term for the real thing, the tech gurus have implicitly acknowledged that AI does not describe what it was originally supposed to describe.
Artificial general intelligence (or AGI) now substitutes for AI as the sentient machine that can reason, understand causality and – OpenAI founder Sam Altman hopes – figure out the answers to life, the universe and everything. Its desirability is clearly in doubt. If a machine is that much smarter than people, it might decide people are unnecessary or belong in servitude, treating them much as we treat less intelligent species. What's unlikely, though, is that today's AI is the path to AGI. Returns have diminished with the construction of even bigger large language models. Throwing more processor power at the problem has not led to any breakthroughs. If AGI is achievable, it may demand a radically different approach.
What's most disappointing about today's exceedingly dumb, copycat AI is that it turns out to be good enough at the tasks many people find intellectually stimulating and yet entirely unsuited to routine, menial and physical chores. Generative AI can write a passable school essay, cribbing from the Internet's vast library of knowledge, and fool a concert audience into thinking it is listening to a classical composer. It can easily put together the sort of presentation for which McKinsey would charge thousands of dollars. But the foundation models that underpin it won't clean your desk, substitute for you in a pointless management meeting or even tidy up some Excel data without detailed instructions.
Overusing it for those cognitive tasks could have the same atrophying effect on a person's brain that riding a mobility scooter would eventually have on a healthy person's legs. It's an analogy the fathers of generative AI would obviously resist. For them, it is more like an e-bike that still requires muscle power, or even one of those small single-wheel contraptions that a rider grips between the feet and balances on at highway speeds. It elevates the mind, boosts the brain and frees you up for more rewarding jobs, they insist. But seeing more employees relegated to the role of machine operator for generative AI makes it hard to agree.