
How telecom is working to influence AI policy

Telecom firms are weighing in on federal policy matters related to artificial intelligence (AI), advocating for 'light touch' regulations, a nuanced approach to 'risk' and more investment.

Nicole Ferraro

November 21, 2023

President Biden delivering remarks before signing an executive order on AI in October. (Source: White House Photo/Alamy Stock Photo)

As the US federal government makes moves to regulate the development and use of artificial intelligence (AI), telecom firms are lobbying for rules that would spur innovation, citing use cases for AI in spectral efficiency, open RAN deployment and beyond. They're also urging caution and nuance around terms like "high risk" and "critical infrastructure."

With AI developing faster than anticipated, including by those who created it, the US is among several countries trying to update its laws. In recent months, we've seen President Biden release a White House executive order "on the safe, secure, and trustworthy development and use" of AI, plus hearings on Capitol Hill and the introduction last week of a Senate bill on boosting AI "accountability" and "innovation."

Throughout those proceedings, telecom companies have started to weigh in on how AI can best benefit the sector and how the government should enact "light touch" regulations without impeding innovation. Here are three ways the telecom industry is working to influence policymakers' understanding and regulation of AI.

Pushing AI's role in national telecom priorities

As AI catches the eye of regulators, the telecom industry is seeking to demonstrate the technology's value to the communications sector, while setting those uses apart from what it deems higher-risk, consumer-facing applications.


At a hearing last week, held by the House subcommittee on communications and technology, industry representatives pitched the importance of AI in spurring key national telecom priorities, like facilitating spectrum sharing and network infrastructure deployment.

Dr. Sameh Yamany, CTO at Viavi, referred to his company's "digital twin" technology – which creates virtual replicas of physical infrastructure and devices – as "a potent instrument to ensure the efficiency and efficacy of 5G and open RAN networks." It allows network operators to "build an ORAN network alongside a digital representation of their existing network and see how they will work together – all before the first component is added," he said. That matters particularly as the US seeks to execute its somewhat-stalled "rip-and-replace" strategy to swap out equipment supplied by Chinese vendors Huawei and ZTE.

Another area of interest to Congress, spectrum use, was also highlighted as a key job for AI at last week's hearing. According to Courtney Lang, vice president of policy, trust, data and technology at the Information Technology Industry Council (ITI), increasing spectral efficiency is a "high value use case" of AI.


"Initial demonstrations of using AI to adapt the modulation and coding scheme have shown gains in spectral efficiency of roughly 10% depending on the scenario," said Lang.

"AI can be used to optimize spectrum allocation by determining both the environmental conditions and where the demand is," she added.

AI can also improve broadband mapping, said Lang, as AI tools can "convert satellite images into real-world features to develop a map of broadband serviceable locations that can help identify communities that previous mapping models missed."

Urging a nuanced approach to 'high risk' and 'critical infrastructure'

With those and other use cases in mind, Viavi's Yamany told the House committee that Congress should understand "telco AI" technology as "low-risk, high value" AI systems, with minimal interaction with consumer data. Indeed, the industry seems concerned about having its technology fall into a "high risk" bucket, legislatively speaking.

"As the Committee continues its pivotal work on the subject of AI, we urge you to embrace the nuanced nature of AI systems, particularly those exemplified by Viavi's offerings," said Yamany. "Low-risk, high-value AI systems like ours – which we term Telco AI – represent a new frontier in enhancing network security, resiliency, and efficiency, and we believe that AI-driven solutions, especially those with limited to no interaction with consumer data – like ours – are crucial for our future."

ITI's Lang offered a more direct take on how Congress should consider "risk" in its policymaking.

"In crafting specific policy, we discourage classifying entire sectors as high-risk. Blanketing entire sectors with requirements is not proportionate and misses important nuance," said Lang in her opening testimony before the House subcommittee. 

"For example, there has been interest in designating AI used in 'critical infrastructure' as high-risk, but this would be too broad and complicate the ability of critical infrastructure owners and operators, including in the communications sector, to apply AI in many low-risk use cases. It is more appropriate to designate particular AI components used for safety functions in critical infrastructure as high-risk, than to classify entire critical infrastructure sectors as high-risk," she said.

ITI shared similar sentiments in response to a Senate bill on AI introduced last week. That bill, a bipartisan effort, aims to regulate the "highest-impact applications" of AI. At present, the language of the legislation refers to "critical-impact" AI organizations and systems as those involved with "the direct management and operation of critical infrastructure."

Responding to that Senate bill, ITI President and CEO Jason Oxman said in a statement that the organization welcomed the "risk-based approach and common-sense provisions" of the legislation, but added: "As lawmakers consider this measure, we encourage them to ensure that high-risk AI categorizations focus on the uses of the technology, not the sector."

Another firm using the "risk-based approach" language with policymakers is USTelecom. In a statement responding to the White House executive order on regulating the development and use of AI last month, USTelecom CEO Jonathan Spalter said the organization's membership is "committed to working with the Biden Administration, and all stakeholders, to advance a risk-based approach to AI governance, prioritizing partnership over regulation, and ensuring effective international coordination and harmonization."

Encouraging federal investment

In addition to making the case for AI's critical role in communications, and nuanced, light-touch legislation as a result, telecom firms testifying before Congress have also made the case for additional federal resources to go toward AI research and development (R&D).

"While a significant part of the policy conversation has been focused on addressing risks, commensurate attention should be given to the ways in which policy levers can support innovation, advance helpful applications of AI, and progress the research and development needed to implement risk management practices," said ITI's Lang during last week's House hearing. 

"In order to supplement contributions by the private sector, Congress and the U.S. government should provide the necessary resources and incentives for R&D activity, including that taking place at National Labs, in the private sector, and beyond."

To that end, the White House in its executive order (EO) on October 30 called for the establishment of "at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order" within 540 days of the EO's publication. Seven of those 25 were funded earlier this year with $140 million from multiple federal agencies, as well as IBM, as part of "a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks," said the National Science Foundation (NSF) in a press release. That effort overall has received close to half a billion dollars in funding thus far from the NSF and its partners, the agency said.

About the Author(s)

Nicole Ferraro

Editor, host of 'The Divide' podcast

Nicole covers broadband, policy and the digital divide. She hosts The Divide on the Light Reading Podcast and tracks broadband builds in The Buildout column. Some* call her the Broadband Broad (*nobody).
