The countdown to connect with viewers begins the moment they open a video streaming app.
Whether the app comes from an OTT giant such as Netflix or Hulu or from one of the multitude of smaller players, the accepted rule of thumb is that the service provider has about 90 seconds to grab the viewer's attention with appealing content. Fail to engage in that window, and the viewer typically gives up and peels off to something else.
That's not only a challenge for the legions of pure OTT services. It also applies to pay-TV providers and programmers that have branched out with their own apps for TV-connected streaming boxes, smartphones and tablets.
Quickly connecting viewers to content is not getting easier, given the rising number of streaming options that are available, not to mention the sheer size of their content libraries. The Paradox of Choice is in full swing in today's OTT video marketplace.
But few are standing idle, content to use static recommendation engines or simply let viewers drift and bob in this roiling sea of content.
To forge a faster connection with viewers, user interface developers and video service providers have begun to turn to AI techniques and machine learning systems that attempt to make the experience more personalized and engaging.
Some recent work in this area has centered on using AI to sift through the cover images and other artwork associated with a TV show or movie. These images can be plugged into the user interface and do a better job of grabbing viewers' attention than the old way: having human editors pore over those images themselves and try to guess which ones will resonate.
And there's a good reason for video services and apps to lavish their attention on the cover art images and thumbnails being presented to the viewer. A Netflix Inc. (Nasdaq: NFLX) study published in 2016 found that 82% of a Netflix viewer's focus is on the artwork while browsing, and that the average user spends 1.8 seconds evaluating a thumbnail image.
Among those kicking the tires on how AI can play a bigger role is UI company Accedo, which has developed an AI-driven prototype in partnership with Amazon Web Services Inc. (using the AWS "Rekognition" AI engine) and British broadcaster ITV plc (London: ITV).
For the early trials, run on tablets with a test group, the prototype takes source video content from ITV, which defines the "engagement tags," and runs it through the AWS system to pull out and propose cover art images that, if everything is working properly, should forge an emotional connection with the viewer. A human editor then selects which one to plug into the cloud-based Accedo One platform, which works in conjunction with an A/B testing system used to refine the user experience.
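The article doesn't detail how the A/B testing system decides between artwork variants, but assuming the platform logs impressions and clicks per variant, the comparison could be as simple as a two-proportion z-test on click-through rates. A minimal sketch (the function name, variant labels and numbers are hypothetical):

```python
import math

def ab_winner(a_clicks, a_impr, b_clicks, b_impr, z_threshold=1.96):
    """Compare click-through rates of two cover-art variants.

    Returns "A", "B", or None if the difference is not significant
    at roughly the 95% confidence level (two-proportion z-test).
    """
    p_a = a_clicks / a_impr
    p_b = b_clicks / b_impr
    # Pooled click rate under the null hypothesis that both variants
    # perform equally well.
    pooled = (a_clicks + b_clicks) / (a_impr + b_impr)
    se = math.sqrt(pooled * (1 - pooled) * (1 / a_impr + 1 / b_impr))
    if se == 0:
        return None
    z = (p_a - p_b) / se
    if abs(z) < z_threshold:
        return None
    return "A" if z > 0 else "B"

# Hypothetical numbers: variant B's artwork clearly outperforms A's.
print(ab_winner(120, 10000, 180, 10000))  # -> B
```

In practice a production system would likely run this continuously and feed the winning artwork back into the UI, but the core decision is just this kind of rate comparison.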
This goes well beyond the studio or programmer supplying the thumbnail images, the content and the underlying metadata. The theory is that the AI engine is learning every second and providing conclusions back to the media companies about what types of images and artwork are the most effective at triggering consumer emotions.
"We're not guessing anymore," Fredrik Andersson, SVP of strategy and solutions at Accedo, said.
The tests got underway in July, and the project partners are still finalizing their conclusions concerning what kind of images get consumers clicking and engaging with content.
But early findings suggest that consumers tend to engage with images that show facial emotions conveying tone, or that display recognizable characters, such as a villain. Consumers are also less engaged by images with large groups of people, preferring those with three or fewer characters. Images that are humorous or even weird or strange have also done well.
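These findings map fairly directly onto attributes that the AWS Rekognition `DetectFaces` response already exposes, such as the number of detected faces and a per-face list of emotions with confidence scores. As a hedged sketch of how candidate thumbnails might be ranked on those attributes (the weights are invented for illustration and are not Accedo's actual model; the input dictionaries mimic Rekognition's `FaceDetails` shape):

```python
def score_artwork(face_details):
    """Score a candidate image from Rekognition-style face detections.

    `face_details` mimics the FaceDetails list returned by the
    Rekognition DetectFaces API: each entry carries an Emotions list
    of Type/Confidence pairs. The weights below are illustrative.
    """
    n_faces = len(face_details)
    if n_faces == 0:
        return 0.0
    # Reward the strongest detected emotion: expressive faces convey
    # tone, which the early findings suggest drives engagement.
    emotion_score = max(
        (e["Confidence"] for face in face_details for e in face["Emotions"]),
        default=0.0,
    ) / 100.0
    # Penalize crowds: images with more than three people engaged less.
    crowd_penalty = 0.5 if n_faces > 3 else 0.0
    return emotion_score - crowd_penalty

# Two hypothetical candidates: an expressive close-up and a crowd shot.
close_up = [{"Emotions": [{"Type": "ANGRY", "Confidence": 92.0}]}]
crowd = [{"Emotions": [{"Type": "CALM", "Confidence": 60.0}]} for _ in range(6)]
best = max([close_up, crowd], key=score_artwork)
```

A ranking like this would only shortlist candidates; per the workflow described above, a human editor still makes the final pick.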
"AI, these days, is smart enough to tell the engine to look for these characteristics," Andersson said. "Technology democratizes this opportunity. You can have the machine do this for you. And the more tests we're running and the more data we have, we're learning more and more about how people prefer to be engaged."
If the prototype, which has stitched the Accedo One platform to the AWS AI engine, gets the intended results and validates this use of AI and machine learning, the hope is to put it into full-scale production, he added.
Andersson is confident that the use of AI in video apps and services is poised to move from a differentiator to a table stakes capability.
"In a few years' time, most of the multiscreen companies and service providers will use something like this to be competitive," he predicted.
Accedo and its partners will be showing off their AI handiwork at next month's IBC show in Amsterdam.
— Jeff Baumgartner, Senior Editor, Light Reading