Google Launches Fight Club for AI Security

As AI emerges as a leading tool for both security and network attacks, Google is setting artificial intelligences to fight each other to toughen them up.

The competition focuses on the specialized realm of image recognition. Google Brain, Google's machine learning research division, is using Kaggle, a platform for data science competitions, to host a contest on hardening image classifiers against adversarial inputs -- images subtly altered by an attacker to fool a trained model. The abstract describing the competition lays out the problem:

"Most existing machine learning classifiers are highly vulnerable to adversarial examples," the abstract explains. "An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake."

The competition will proceed along three tracks: two seeking the best attack systems for tricking a machine learning classifier (one track for targeted attacks, which aim to force a specific wrong label, and one for non-targeted attacks), and a third seeking the classifier that best defends against those attacks.
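
On the defense side, one standard countermeasure likely to appear in such a contest is adversarial training: generating attack images during training and teaching the classifier to label them correctly anyway. A minimal sketch, assuming a PyTorch model and the hypothetical fgsm_attack helper from the earlier example:

```python
# Sketch: adversarial training, a common defense against adversarial
# examples. Assumes the hypothetical fgsm_attack helper sketched above.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """Train on a mix of clean and adversarially perturbed batches,
    so the classifier learns to resist small malicious perturbations."""
    model.train()
    # Generate adversarial counterparts of this batch on the fly.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()  # clear gradients left over from the attack
    # Average the loss over clean and perturbed inputs.
    loss = (F.cross_entropy(model(images), labels) +
            F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```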

Tricking machine learning systems is nothing new. Spammers evade spam filters by figuring out what patterns the filters' algorithms have been trained to identify. More recently, researchers have shown "that even the smartest algorithms can sometimes be misled in surprising ways. For example, deep-learning algorithms with near-human skill at recognizing objects in images can be fooled by seemingly abstract or random images that exploit the low-level patterns these algorithms look for," according to Technology Review.

"Adversarial machine learning is more difficult to study than conventional machine learning -- it's hard to tell if your attack is strong or if your defense is actually weak," Google Brain researcher Ian Goodfellow tells Technology Review.

The implications of the contest go beyond image recognition.

"Computer security is definitely moving toward machine learning," Goodfellow tells Technology Review. "The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend."

"In theory, criminals might also bamboozle voice- and face-recognition systems, or even put up posters to fool the vision systems in self-driving cars, causing them to crash," Technology Review says.

Google announced in March that it plans to acquire Kaggle.

— Mitch Wagner, Editor, Enterprise Cloud News
