CableLabs Futurist: AI Still Has Its Limits

ATLANTA -- SCTE Cable-Tec Expo -- Although he's a big fan and long-time student of artificial intelligence and machine learning, Bernardo Huberman would not trust smart machines to do everything that human beings do right now.

For instance, Huberman, a fellow at CableLabs and VP of the organization's Core Innovation Team, would not let an AI-infused machine pilot a passenger plane by itself, at least not one carrying him. Nor would he let a smart machine drive a bus or train, except possibly under very limited, controlled circumstances.

"You don't want to fly a plane with no one in the cockpit," Huberman said in a fireside chat here with Light Reading Senior Editor Jeff Baumgartner last week. "I like to know there is someone competent in the cockpit when I fly."

Huberman is also not quite ready to trust fully autonomous automobiles. While he likes the concept of self-driving cars and looks forward to seeing them on the road in the future, he thinks many things must still be worked out, including the technology, the roadway system and the regulations, before the cars hit the streets. "The rollout will be a lot slower than predicted," he predicted.

And Huberman is definitely not ready to let AI systems decide questions of nuclear war between the superpowers. He recalled the famous example of the late Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces who averted an all-out nuclear war in September 1983 by overriding his country's satellite early-warning system. In the wake of the Soviets' downing of a Korean passenger plane just three weeks earlier, that warning system reported that the US had launched a nuclear missile toward the Soviet Union, with up to five more on the way. But, rightly judging the report to be a false alarm, Petrov disobeyed orders and called off a retaliatory nuclear strike on the US and its NATO allies.

"He thought it very unlikely it [a US nuclear attack] would happen like this and the US would launch just five nuclear missiles," Huberman said, speaking of Petrov, who died just last year. "That's something a machine couldn't figure out … But he admitted there was a 50% chance he was wrong."

Despite such qualms, Huberman still sees great promise ahead for both AI and machine learning in many other fields and applications, such as in distance education. But even here he sees limits for smart machines. "I totally believe we cannot replace humans in the education process," he said.

— Alan Breznick, Cable/Video Practice Leader, Light Reading
