The telecom industry has been battling robocalls for years. But now, new AI technologies threaten to dramatically raise the stakes.
For example, two days before New Hampshire's 2024 presidential primary, political consultant Steve Kramer used AI to create a deepfake audio recording of President Biden's cloned voice urging voters not to vote in the primary but rather to "save your vote for the November election." The calls were sent out via voice service provider Lingo Telecom using the number of a prominent local political consultant.
In May, the FCC issued Kramer a $6 million fine.
But it could get even worse, according to the FCC and other administration officials.
"Take, for example, the family emergency scam, where an impostor pretends to be a distressed relative. A scammer could clone a voice that sounds just like your loved one," according to a new FCC report. "Scammers could also clone the voice of a CEO or other company executive and then trick employees into transferring large sums of money or to pay a fake invoice."
To address such concerns, FCC Chairwoman Jessica Rosenworcel convened the agency's Consumer Advisory Committee (CAC) to investigate the topic. The CAC comprises a few dozen members drawn from across the telecom industry, including trade groups like Incompas, companies like AT&T and organizations like AARP and the LGBT Technology Institute.
The FCC also concurrently opened a proceeding into the AI robocall topic, with companies including Apple, Twilio, Microsoft and Hiya offering their perspectives.
This week, the CAC released the results of its investigation into the matter: a 21-page report containing 16 recommendations.
The findings
In general, the CAC said its recommendations focus on "greater coordination with other agencies on comprehensive solutions to protect consumers from the use of AI for malicious calling purposes, promoting innovative uses of AI to enhance call-blocking technologies, and consideration of the privacy risks associated with the use of AI-based robocall identification and blocking tools."
Specifically, it recommends a "comprehensive solution" across all government agencies and the White House and clear enforcement guidelines at the FCC to restrict fraudulent AI-generated robocalls and robotexts.
The report also details some of the technological tools that may prevent AI robocalls and texts, specifically YouMail, Microsoft's Azure Operator Call Protection and call-scanning services using Google's Gemini Nano.
But many of the committee's recommendations focus on tracking the problem and educating consumers. For example, it suggests creating another advisory committee to write a report "that would examine specific topics related to the implementation of AI by industry stakeholders and other relevant participants to help consumers avoid unwanted robocalls and robotexts."
And it offers detailed guidelines on "national public education and engagement efforts" to warn consumers about possible AI robocalls and texts. That campaign could include everything from direct mailings to in-person events to "producing primetime specials on national networks about the evolution of AI, scams and their societal impact."
The political winds
To be clear, it's unlikely that the incoming Trump administration will put much effort into implementing the report's recommendations. Broadly, a Trump FCC would likely work on reducing regulations on telecom businesses rather than increasing them.
However, FCC Commissioner Brendan Carr is widely expected to be Trump's nominee to lead the agency, and he has voiced some support for general guidelines on AI in the telecom industry.
"I don't believe in 'no regulation' of AI but at the same time believe there is a risk of overdoing it early on," he said in August. He said he supports new FCC rules that would in part require callers to disclose that they're using AI-generated calls and text messages.
Carr said he prefers AI regulations that stem from actual harms showing up in the real world, rather than theoretical problems. And he wants guidelines narrowly targeted to AI and not the broader industry.
The Trump administration is expected to apply a permissive, anything-goes approach to AI. That contrasts with the Biden administration, which engaged in a round of AI policymaking, including an AI executive order aimed at safety and security.
The industry's response
In the FCC's proceeding on AI robocalls, a number of telecom players argued against weighty federal regulations on the use of AI in telecommunications.
Apple, for example, said it's already using AI technology in some of its services for customers with disabilities. "With Personal Voice and Live Speech, AI is leveraged to allow users to preserve and continue using their voice if they are in danger of losing their ability to speak. In making these features available, Apple has put in place safeguards against their potential misuse to perpetrate fraud or other deceptions, including in the context of robocalls," the company wrote.
Twilio, meanwhile, argued against AI-specific regulations in general. "The commission's efforts to promote industry adoption of trust and verification solutions, regardless of the technology used to generate those communications, would be far more effective at protecting consumers from bad actors that use AI for illegal calls," the company wrote.
And the USTelecom trade association offered statements largely in line with the FCC's actions on the topic. For example, it urged the commission to "require that a caller disclose that a call is made with AI in scenarios where an AI-calling agent mimics a human agent, a scenario where it arguably would be deceptive to fail to make such disclosure."