European standards body ETSI has announced the formation of what one leading industry analyst calls an "exceptionally important specification group."
The new collaborative effort is ETSI's Industry Specification Group on Securing Artificial Intelligence (ISG SAI) -- the industry analyst is Patrick Donegan, founder and principal analyst at HardenStance, who has long been focused on network security developments.
The group will "develop technical specifications to mitigate threats arising from the deployment of AI throughout multiple ICT-related industries. This includes threats to artificial intelligence systems from both conventional sources and other AIs," notes the standards body, which is well known for other groups that have tackled specifications development related to NFV, multi-access edge computing and more.
The new specs are needed because once artificial intelligence-based systems become involved in automated decisions related to network configurations and/or actions, those decisions might result in security threats, either as a result of the AI-enabled system's design or because of malicious activity. "The conventional cycle of networks risk analysis and countermeasure deployment represented by the Identify-Protect-Detect-Respond cycle needs to be re-assessed when an autonomous machine is involved," notes ETSI.
The group -- which counts BT, Cadzow Communications, Huawei Technologies, the UK's NCSC (National Cyber Security Centre) and Telefónica as its founder members -- will address three aspects of AI in the standards domain:
- Securing AI from attack (e.g., where AI is a component in the system that needs defending)
- Mitigating against AI (e.g., where AI is the problem or is used to improve and enhance other more conventional attack vectors)
- Using AI to enhance security measures against attack from other things (e.g., AI is part of the solution or is used to improve and enhance more conventional countermeasures)
For more details, see this press release.
Why this matters
AI-based tools are becoming more pervasive in the telecom ecosystem. While there is much hype about AI, and many companies dubiously claim to have AI-based products, there are genuine AI-based tools already being used by communications service providers (CSPs), particularly in customer care-related operations.
But, particularly as operators invest in the systems they need to support their 5G strategies (including AI and Next-Gen Analytics), AI tools will impact an increasingly broad range of applications. AI-focused research house Tractica expects CSPs to spend $11.2 billion annually on AI-driven software solutions across eight use cases by 2025, up from $419 million in 2018.
Those CSPs will be hoping for greater automation, reduced costs and new business opportunities arising from more intelligent analysis of their data sets: excitement levels are high. But AI will also bring multiple challenges, including security.
"Throughout the telecom and other tech sectors, people from senior management down are busily talking about how they're going to deploy AI here, there and everywhere," notes Donegan in emailed comments to Light Reading.
"They are paying nowhere near enough attention to how they are going to minimize the substantial risks associated with these systems: How they're going to ensure the quality of training data that's used; how they are going to protect AI algorithms themselves from being hacked; how they are going to measure the level of autonomy of an AI instance," he adds.
So the analyst is pleased to see ETSI addressing these critical issues. "Kudos to all the founding members of this exceptionally important specification group," adds Donegan.
— Ray Le Maistre, Editor-in-Chief, Light Reading