
2 posts tagged with "aws_rekognition_video_labels"


Redenbacher POC

· 7 min read
Scott Havird
Engineer

Overview

Popcorn (details below) is a new app concept from WarnerMedia that is being tested in Q2 2020. The current version involves humans curating “channels” of content, which is sourced through a combination of automated services, crowdsourcing, and editorially selected clips. Project Redenbacher takes the basic Popcorn concept, but fully automates the content/channel selection process through an AI engine.

Popcorn Overview

Popcorn is a new experience, most easily described as “TikTok meets HBO Max”. WarnerMedia premium content is “microsliced” into small clips (“kernels”) and organized into classes (“channels”). Users can easily browse through channels by swiping left/right, and watch and browse within a channel by swiping up/down. Kernels are short (under a minute) and can be Favorited or Shared directly from the app. Channels are organized thematically, for example “Cool Car Chases”, “Huge Explosions”, or “Underwater Action Scenes”. Channel content comes from the HBO editorial team, from popularity/most-watched metrics, and from AI/algorithms.

Demo

removed

Redenbacher Variant

Basic Logic

In Redenbacher, instead of selecting a specific channel, the user starts with a random clip. The AI selects a Tag associated with that clip and automatically queues up the next piece of content that shares the same Tag. This continues until the user interacts with the stream.
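
The sketch below illustrates this queueing logic in Python, assuming each clip carries a set of tags (for example, labels produced by Rekognition video label detection). The clip IDs, tags, and catalog structure are hypothetical placeholders, not the POC's actual data model.

```python
import random

# Illustrative catalog: each clip maps to a set of tags (e.g., Rekognition
# video labels). These IDs and tags are placeholders, not real Popcorn data.
CATALOG = {
    "clip-001": {"Car", "Chase", "Night"},
    "clip-002": {"Explosion", "Car"},
    "clip-003": {"Underwater", "Diving"},
}

def next_clip(current_id):
    """Pick one tag from the current clip, then queue another clip sharing it."""
    tag = random.choice(sorted(CATALOG[current_id]))
    candidates = [
        clip_id
        for clip_id, tags in CATALOG.items()
        if clip_id != current_id and tag in tags
    ]
    return random.choice(candidates) if candidates else None

# The stream keeps chaining clips this way until the user interacts.
print(next_clip("clip-001"))  # may print "clip-002" via the shared "Car" tag
```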

Scene Finder POC

· 6 min read
Scott Havird
Engineer

Goal

We began this project as an exploration of up-leveling the capabilities of “assistants” (AI, chatbot, virtual, etc.) in the specific field of media & entertainment. As voice (and other) assistant platforms increase in capability, we believe they will focus on more generic features, which gives us the opportunity to specialize in the entertainment vertical. For this phase of work, we are exploring what an “Entertainment Assistant” might do and how it might function.

One such function would be, for example, a voice-driven search where the user doesn’t know exactly what they are looking for: “Show me the first time we see Jon Snow in Game of Thrones”, “Show me dance scenes from classic movies”, or “Show me that scene from Friends where they say ‘we were on a break!’”.

In other words: respond to a voice-based command, extract the relevant keywords, and deliver to the user all the matching scene-based results.
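
Here is a minimal sketch of that flow, assuming the voice command has already been transcribed to text. The stop-word list, scene index, and overlap-based matching rule are illustrative assumptions, not the POC's actual implementation.

```python
# Keywords are whatever survives a simple stop-word filter; scenes match when
# their labels (e.g., Rekognition video labels) overlap with those keywords.
STOP_WORDS = {"show", "me", "the", "that", "from", "a", "of", "where", "scene", "scenes"}

SCENE_INDEX = [
    {"title": "Classic musical number", "labels": {"dance", "classic", "movies"}},
    {"title": "Rooftop chase", "labels": {"chase", "rooftop", "night"}},
]

def search_scenes(command):
    """Extract keywords from a transcribed command and return matching scenes."""
    keywords = {word for word in command.lower().split() if word not in STOP_WORDS}
    return [scene for scene in SCENE_INDEX if keywords & scene["labels"]]

print(search_scenes("Show me dance scenes from classic movies"))
# -> [{'title': 'Classic musical number', 'labels': {'dance', 'classic', 'movies'}}]
```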