Automation, Machine Learning Key to YouTube Clean-Up

Aditya Kishore
8/3/2017

Responding to concerns from advertisers and politicians, YouTube has added new measures and strengthened existing ones to better regulate objectionable content uploaded to its platform. The company detailed the changes in a blog post, chief among them the use of machine learning and automation to remove objectionable content from the site and to limit access to content that falls into a gray area.

YouTube has been under pressure to clean up hate speech and terrorist-related content on its site. Earlier this year, major global media buyer Havas Media pulled all advertising off Google (Nasdaq: GOOG) and YouTube. Havas is estimated to spend about £175 million ($230 million) every year on behalf of its clients in the UK. The move followed similar pullouts by other major advertisers, including the Guardian, the BBC and Transport for London. Google was even summoned by government ministers to explain why government advertising was being placed next to extremist content on YouTube.

The Internet giant promised to improve its ad placement and, a few months ago, announced a four-step strategy to combat extremist content: better detection and faster removal driven by machine learning; more experts to identify objectionable content; tougher standards for "borderline" videos that are controversial but don't violate YouTube's stated policies; and more counter-terrorism efforts.

The challenge for Google/YouTube is that digital advertising is increasingly sold programmatically. This term refers to the automated trading of advertising online. Media companies make advertising slots available via a programmatic system and advertisers and media buyers bid on these slots. The entire process is conducted using digital trading desks that match advertising to buyers using various demographic and contextual criteria.
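The mechanics can be illustrated with a toy sketch. All names, prices and topic labels below are hypothetical, and the auction shown is a simplified second-price rule rather than any exchange's actual implementation: each slot carries contextual labels inferred for the video, and the highest bid whose targeting overlaps those labels wins, paying the runner-up's price.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float       # bid price for the impression
    target_topics: set  # contextual criteria the buyer wants

@dataclass
class AdSlot:
    video_topics: set   # topics the system believes describe the video

def run_auction(slot, bids):
    """Match a slot to the highest bid whose targeting overlaps the
    video's inferred topics -- a toy second-price auction."""
    eligible = [b for b in bids if b.target_topics & slot.video_topics]
    if not eligible:
        return None, 0.0
    eligible.sort(key=lambda b: b.amount, reverse=True)
    winner = eligible[0]
    # Winner pays the runner-up's price (second-price rule).
    price = eligible[1].amount if len(eligible) > 1 else winner.amount
    return winner.advertiser, price

slot = AdSlot(video_topics={"gaming", "teens"})
bids = [Bid("ToyCo", 2.50, {"kids", "teens"}),
        Bid("BankCorp", 4.00, {"finance"}),
        Bid("SodaInc", 1.75, {"teens", "music"})]
print(run_auction(slot, bids))  # ToyCo wins, paying SodaInc's 1.75
```

The sketch also shows where things go wrong: no human sees the match, so if the inferred `video_topics` mislabel an extremist or adult video, a perfectly targeted bid still lands on it.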

Unfortunately, this can sometimes result in advertising showing up next to exactly the wrong video. A toy manufacturer might find its advertisement placed in an adult video, for example, or a government agency could have its message inserted into a video from a hate preacher. This is a huge concern for advertisers: prior reports found that messages from advertisers such as UK broadcasters Channel 4 and the BBC, retailer Argos and cosmetics brand L'Oréal were slotted into extremist content on Google and YouTube.

Google says it has already removed nearly 2 billion inappropriate advertisements from its platforms, dropped more than 100,000 publishers from its AdSense program and blocked ads from more than 300 million YouTube videos. Examples of removed content include videos from American white nationalists and extremist Islamist preachers.

Following the theory that problems created by automation can also be solved by automation, YouTube has invested in machine learning to try to regulate the content uploaded to the site. The scale of the service makes a human-only solution impossible anyway: 400 hours of video are uploaded to YouTube every minute, and 5 billion videos are viewed daily.

The key to Google's approach is better detection and faster removal driven by machine learning. It has developed new machine learning technology in-house specifically to identify and remove violent extremism and terrorism-related content "in a scalable way." These tools have now been rolled out, and the company says it is already seeing "some positive progress."

It cites improvements in speed and efficiency: more than 75% of the videos removed for violent extremism in the previous month were pulled automatically, before being flagged by a single human. The system is also more accurate; Google claims that in many cases the new technology has proven better than humans at flagging objectionable videos. Lastly, given the massive volumes of video uploaded to the site every day, rooting through them to find the problematic ones is a significant challenge, yet over the past month the new machine learning technology has more than doubled both the number of videos removed and the rate at which they have been taken down.
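A triage system like the one described can be pictured roughly as follows. The thresholds, scores and video IDs here are invented for illustration and are not Google's: a classifier assigns each upload a confidence score, very high scores trigger automated removal, and a gray band goes to human reviewers.

```python
# Hypothetical triage thresholds; a score in [0, 1] stands in for the
# classifier's confidence that a video contains violent extremism.
AUTO_REMOVE = 0.95   # high confidence: pull before any human sees it
HUMAN_REVIEW = 0.60  # gray zone: queue for expert review

def triage(video_id, extremism_score):
    """Route one upload based on the model's confidence score."""
    if extremism_score >= AUTO_REMOVE:
        return (video_id, "removed")       # automated takedown
    if extremism_score >= HUMAN_REVIEW:
        return (video_id, "review_queue")  # flagged for human experts
    return (video_id, "published")

uploads = [("v1", 0.99), ("v2", 0.72), ("v3", 0.10)]
print([triage(v, s) for v, s in uploads])
# [('v1', 'removed'), ('v2', 'review_queue'), ('v3', 'published')]
```

The design choice is the trade-off the article describes: raising the automatic-removal threshold reduces false takedowns but pushes more volume onto human reviewers, which is exactly what doesn't scale at 400 hours of uploads per minute.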

Google is also adding new sources of data and insight to increase the effectiveness of its technology, partnering with various NGOs and institutions through its "Trusted Flagger" program, such as the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue. And it's using the YouTube platform to push anti-extremist messages. When users conduct potentially extremist-related searches on YouTube, they are redirected to a playlist of curated YouTube videos that challenge and debunk messages of extremism and violence.

In addition, Google is targeting videos that users flag as objectionable but that don't cross the line into hate speech or violent extremism. These videos are placed in what Google calls a "limited state": they are not recommended or monetized, and users cannot like or comment on them. This will be rolled out in the coming weeks on desktop and subsequently on mobile.
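One way to picture the "limited state" is as a set of per-video flags that get switched off together while the video itself stays up. This is a hypothetical representation for illustration, not YouTube's actual data model:

```python
from dataclasses import dataclass

@dataclass
class VideoState:
    """Engagement and monetization flags for a single video."""
    recommendable: bool = True     # eligible for recommendations
    monetized: bool = True         # ads can be sold against it
    comments_enabled: bool = True
    likes_enabled: bool = True

def apply_limited_state(state: VideoState) -> VideoState:
    # The video remains viewable, but all engagement surfaces are off.
    return VideoState(recommendable=False, monetized=False,
                      comments_enabled=False, likes_enabled=False)

limited = apply_limited_state(VideoState())
print(limited)
```

The notable design point is that demonetization is bundled in: borderline content stays on the platform but can no longer earn revenue or spread through recommendations.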

While Google appears to be making a significant effort to make YouTube less likely to be misused by hate groups, advertisers and government agencies will probably have to see the results to believe them. Still, this is likely to help alleviate some of the concerns that have been building up.

However, these efforts seem aimed only at hate speech and extremism. Google has done little to alleviate concerns from brands about context and placement outside of extremist content. Advertisers worry about appearing to sponsor content that could damage their brand image even when that content is not hate speech -- a message from a religious group placed in a video featuring a wild, drunken party, for example.

In the recently concluded "upfronts," an annual event where advertisers buy inventory upfront for the year from broadcasters, "brand safety" was an important selling point. NBCUniversal ad sales head Linda Yaccarino pretty much led with that in her address, underscoring the benefits of human ad placement in broadcast advertising.

Google -- and others such as Facebook and Twitter -- will need to develop ways to resolve these advertiser concerns, because the larger brands that control the bulk of advertising expenditure are increasingly worried about where their brands are showing up. If the machine learning technology applied to YouTube is effective, it must be extended to address objectionable content beyond extremist videos. It should also be able to relate advertiser messages to the videos in which they are placed, and create better matches. If Google can do that, it will take away one of the broadcasters' most effective selling points, and help shift ad spend toward online video even faster.

— Aditya Kishore, Practice Leader, Video Transformation, Telco Transformation

242ak, User Rank: Moderator
8/4/2017 | 5:25:07 AM
What software can make, other software can fake.
If I remember correctly, some years ago the Guardian tried to programmatically buy ad slots on its own site as an experiment, and found that about a third of the inventory it bought didn't exist -- someone was auctioning off space on the Guardian site without actually having any.

To some extent, this has always been part of the advertising ecosystem, though. Digital just allows you to measure traffic and audience more accurately than other media can, so issues with measurement and delivery are coming to light.
242ak, User Rank: Moderator
8/4/2017 | 5:14:33 AM
Re: Bad, and worse
Probably the very argument made by the agency executive who had to break the news to the client...!
mendyk, User Rank: Light Sabre
8/3/2017 | 11:09:31 AM
Re: You Get What You Pay For
Absolutely, Joe. So far, this has been a big money grab from the start, with the main grabbers being the "agencies" and the content overlords. There's still a ton of money to be made in nickels and dimes.
Joe Stanganelli, User Rank: Light Sabre
8/3/2017 | 11:08:19 AM
Bad, and worse
> For example, a toy manufacturer might find its advertisement placed in an adult video

Well, that's not too bad... Certainly not as bad as the other way around -- a kids video about toys bearing an advertisement for an adult-video website.
Joe Stanganelli, User Rank: Light Sabre
8/3/2017 | 11:06:33 AM
Re: You Get What You Pay For
I blame the vendors/agencies taking advantage of businesses who have no idea about anything related to digital-marketing. There are SOOOOOOO many agencies out there (including (and even especially) the so-called "reputable" ones) that are nothing more than charlatans -- charging thousands of dollars a month in commissions to do no more than post irrelevant and/or just-plain bad content to social while buying up ad-space indiscriminately and bragging about getting millions of impressions (from people not in the target audience/demographics/geographies).
mendyk, User Rank: Light Sabre
8/3/2017 | 10:07:15 AM
You Get What You Pay For
Some of this is pretty funny. "Advertisers" buy blind placements because, first and foremost, they are cheap. If they want to control their brand associations, the best way to do that is to place ads with specific content providers. But of course that's more expensive.