Earlier this month, Orange announced the creation of a Data and AI Ethics Council made up of 11 independent, recognized experts, chaired by Orange's chairman and CEO, Stéphane Richard. According to the press release, the Council will "draw up ethics guidelines for the responsible use of data and AI at Orange, and monitor its implementation within the Group's entities." The Council will be involved in a large variety of topics, "for example, to ensure that AI-based systems developed by the Group have incorporated the principles of non-discrimination and equality in their design, or that they do not run the risk of invading privacy."
The kinds of unethical recommendations and ethically unrestrained outcomes that AI can generate are well known. When applied to facial recognition, for example, AI tends to be a lot more accurate at recognizing light-skinned faces than dark-skinned ones. It's also somewhat better at recognizing males than females. AI-driven chatbots that are exposed to the Internet have no qualms about picking up and using offensive slurs for all to see or hear (and wince at). In the context of specific telco use cases, Orange's announcement singled out the risk of invading privacy when a telco uses AI to analyze network data to detect the causes of a failure in a video-over-fiber service.
For reasons like these, AI ethics has been a major preoccupation of the high-tech industry for many years, and there are dozens of industry groups and associations researching, publishing and lobbying on AI ethics and AI ethics standards to show for it. So it's certainly good to see Orange take this initiative. Considered only in isolation, though, deliberations and standards around AI ethics have a fundamental flaw: they tend to assume that control over ethical outcomes is determined largely or even entirely by ethics-related design and implementation factors. That's true right up until the point when the AI gets hacked and re-programmed to behave unethically. At that point, all the due diligence on ethics counts for nothing.
Finding solutions to protect against AI being hacked isn't the domain of AI ethics; it's the related but fundamentally different field of AI security. Here the contrast between the huge amount of industry collaboration going into AI ethics and the much smaller amount going into AI security is striking. All the AIs that are commercially deployed today are either protected against cyberattacks by proprietary security mechanisms of varying quality or not secured at all. And, by the way, that includes all those cybersecurity products out there that are positioned as "AI-driven."
The biggest collaborative effort in defining standards for AI security is being undertaken by the European Telecommunications Standards Institute (ETSI). At the end of 2019, ETSI created the Securing AI (SAI) Industry Specification Group (ISG). The group's work is organized around five "Phase 1" Work Items, focused on the Problem Statement; AI Threat Ontology; Secure AI Supply Chain; Security Testing AI; and Mitigation Strategy. Phase 2 has recently got underway, focusing initially on the role of hardware in AI security. Why is the telecom sector leading in AI security standards? For one thing, it has a comparatively good track record of collaborating in defining standards as well as adhering to them. Telcos are also best placed, on behalf of all sectors of industry and society, to shut down large volumes of connected devices that have gone rogue, whether because an AI has been hacked or for any other reason.
The SAI ISG now has close to 50 members. It's chaired by BT, and both Telefonica and Deutsche Telekom are members. Orange is now arguably Europe's leading cybersecurity provider, a position built up on the back of key acquisitions in the UK and the Netherlands by its cybersecurity business, Orange Cyberdefense. If Orange wants to be a European leader in addressing the risks as well as the opportunities of AI holistically, participating in this key ISG would complement the work of the new Council very nicely.
— Patrick Donegan, Principal Analyst, HardenStance