LONDON -- Managed Services World Congress -- Existing contract laws affecting telecom operators are failing to address the questions posed by the rapid development of artificial intelligence and automation technologies, an outsourcing lawyer said at the event here today.
Amanda Pilkington, a partner with law firm DLA Piper, said current legislation was "out of kilter" with the rate of technological progress, leaving companies facing huge uncertainties when signing deals with customers and suppliers.
Telecom operators' use of AI in areas including customer services as well as network maintenance, design and rollout poses a conundrum for lawyers developing service contracts, Pilkington told conference attendees.
"The problem is that contracts and laws have [previously] been developed on the basis that a human is at fault," she said. "We never envisaged that a machine would have the ability for independent thought, and we need to figure out who is responsible for the act of an intelligent machine."
Within the legal community, there is also now growing concern about the ramifications of AI for service level agreements in the telecom industry, according to Pilkington.
"When we introduce AI into the mix, the preponderance of low-grade issues should in theory reduce," she said. "Does that mean that a success rate of 100% is automatically assumed? Because that is not something that a supplier wants."
As employees on outsourcing contracts are replaced by AI and automation technologies, there will also be questions about the ownership of data and intellectual property that contract lawyers will have to address, said Pilkington.
Ethical considerations surrounding AI and machine learning also appear to be a major worry for lawyers in the technology industry.
Pilkington cited the example of the "trolley conundrum," whereby an out-of-control trolley will collide with five people if it continues on its present course, but only one person if it can be redirected.
"If a robot is in control of making that switch there is an ethical issue as to who is responsible for the decision-making of the robot," said Pilkington. "That is the kind of ethical liability issue that the European Commission is grappling with."
As a starting point, European authorities are currently looking into ways of ensuring that a standard definition of a "smart robot" is applied across member states. That uniformity should make it easier for lawmakers in different countries to harmonize legislation regarding AI and automation technologies.
"There is a big push in the European Union to ensure open standards for interoperability and harmonization, and there will be a requirement for entities using robotics to disclose they are doing so," said Pilkington.
On the ethics front, another concern to have recently surfaced is that AI systems could be inherently racist or sexist due to the "unconscious bias" of their developers.
"If the coding is done by a white, middle-class English person, say, and the tool is for a recruitment process, there is a risk that the way in which it is coded means it has a propensity to select candidates similar to the coder," said Pilkington.
She acknowledged that European Union legislative efforts could stifle innovation, but downplayed that risk.
"If we follow precedent then you could say that it could stifle innovation, but one of the advantages of European legislation is that it leaves implementation open to each individual country," she said. "So in the UK telecom market you don't need a license to be a communications service provider, whereas the position in France is very different."
By contrast, US authorities are lagging on the legislative front when it comes to these revolutionary technologies, according to Pilkington. "They are looking at it, but they are behind because they generally are less concerned about it," she said. "The European Union is more advanced in its thinking."
— Iain Morris, News Editor, Light Reading