Discovering AI-infused video
Series part one recap
In Part One of this four-part series, we introduced you to the basic concepts of a special demonstration project involving IBM Watson® and TED®. If you’re just starting the series, you might want to check it out first.
Are you ready for an interactive exploration of AI-infused video? Here we go…
There are four ways to provide Watson with information to study, each producing a unique set of results:
- Use a suggested question
- Type your own question or phrase
- Provide a Twitter handle (yours or someone else’s)
- Copy and paste text you (or someone else) wrote
OK, let’s try this!
- Follow along by using a browser on your computer, tablet or mobile device and navigate to: watson.ted.com
- On the landing page, you’ll see a brief introductory overlay:
- Action: To proceed, click Dismiss.
- On the next page, you’ll see the hub of interaction methods.
METHOD #1 – USE A SUGGESTED QUESTION
This is where you choose what to give Watson. The first type is the most obvious: Ask Watson a suggested question. The team that created this site provided five examples.
Action: Click “What is the secret to happiness?”
The resulting page is busy and shows a lot of information.
Action: Locate and click Pause. (If you’re following along with your own browser, pause the video so that we can focus on a couple of key areas to start.)
In the following screen captures, I’ve added numbered yellow dots for the key highlights:
1. The search is restricted to five results.
2. Choosing to change your question simply bounces you back to the prior page.
3. The green boxes are a type of keyword metadata; Watson associates these metadata with the topic.
4. If you select one of the top-level metadata keywords, an inserted row of videos appears immediately below. If you follow one of these, the video plays in the center area.
5. Clicking the X closes the inserted panel.
The main focus of the page is the middle:
1. The video player, in which the video autoplays.
2. Metadata keywords specific to the video.
3. The grey bar is a “scrubber”, allowing you to click-select a different point in the video timeline.
4. The cyan block on the scrubber marks the segment Watson found in this video that relates to the concept.
5. “Watson Recommended clips” is a switch that lets you toggle between the clip Watson selected and the entire video.
6. This area shows the other videos that Watson selects for you.
Note that if you select metadata keywords in this area, a secondary panel of video choices opens up further below.
Also, if you select one of the five sidebar videos, it will open a TED.com site profile containing that video and additional background information.
Method one – recap
We selected a “suggested question” and Watson (IBM’s cognitive discovery technology) responded:
- Watson found videos related to the suggested question (and the specific segments in those videos)
- Watson queued up five videos for us
- Watson provided a variety of related metadata keywords for us to explore
- Watson started playing the first video at the relevant segment
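Conceptually, the results of a query can be pictured as a small data structure. The sketch below is purely illustrative; the class names and fields are assumptions, not the site’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One recommended video plus the segment Watson highlights."""
    title: str
    start_seconds: int   # where the cyan block begins on the scrubber
    keywords: list       # metadata keywords specific to this video

@dataclass
class WatsonResponse:
    """Hypothetical shape of one query's results (up to five clips)."""
    question: str
    clips: list = field(default_factory=list)

resp = WatsonResponse(
    question="What is the secret to happiness?",
    clips=[Clip("The surprising science of happiness", 312,
                ["happiness", "psychology"])],
)
print(resp.clips[0].title)
```

Every interaction method described below produces the same kind of result: a handful of videos, each with a highlighted segment and its own set of keywords.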
METHOD #2 – TYPE YOUR OWN QUESTION OR PHRASE
Next, let’s get more personal and more interesting.
Action: In the box with the text “Type your question”, type your own phrase, question or even a single word. Click Ask Watson.
If your question is personally meaningful, you might see TED videos that you’ve previously watched. Do you?
Important note
If you type a short phrase, Watson may interpret the words individually – i.e., select results specific to one of the words. As a quick demonstration, try asking Watson about “Conga line”. The first TED video to come up may be Robert Hammond’s 2011 “Building a park in the sky”. In the segment that Watson selects for you, the phrase “conga line” is not mentioned. But the word “line” comes up several times during Robert’s narrative about New York’s High Line.
In other words, you can simplify your query all the way down to a single word that you’d like to hear in TED videos.
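The “Conga line” behaviour can be illustrated with a toy word-matching sketch. This is not Watson’s actual algorithm, just a crude illustration of why a segment that mentions only “line” can still score as a match:

```python
import re
from collections import Counter

def score_segments(query, segments):
    """Count how often the individual words of a query appear in
    each transcript segment (a crude stand-in for Watson's far
    more sophisticated semantic matching)."""
    query_words = set(re.findall(r"[a-z]+", query.lower()))
    scores = []
    for segment in segments:
        segment_words = Counter(re.findall(r"[a-z]+", segment.lower()))
        # Total occurrences of any query word in this segment.
        scores.append(sum(segment_words[w] for w in query_words))
    return scores

segments = [
    "We built a park on an old elevated rail line in New York.",
    "The line of people stretched around the block, a line of hope.",
    "Happiness comes from gratitude and connection.",
]
print(score_segments("conga line", segments))  # prints [1, 2, 0]
```

Even though “conga” never appears, the first two segments still score because they contain “line”.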
When you’ve finished exploring your own questions, phrases or words… let’s look at the next method.
METHOD #3 – PROVIDE YOUR TWITTER HANDLE
If you have a Twitter® account, use your own Twitter profile.
Action: Click the Twitter symbol.
The screen changes to the Twitter profile page.
Enter your Twitter handle (if you have one).
Note that Watson will then read your Twitter posts and the links you’ve shared – in real time.
Action: Click Analyze
Different graphic patterns represent Watson at work…
Next…
Action: Click the forward arrow after “Watch now”.
And what do you see? How well did Watson interpret what you’ve written?
Action: Type @elon in the Twitter handle box.
It’s fascinating to see the list of videos that Watson selects after reviewing Elon’s Twitter account. Watch them and listen closely to the selected segments, then relate them back to Elon’s philosophy and comments.
You can also enter other accounts in the field to have some fun. For example, let’s try CNET author Rick Broida’s Twitter feed. He scours the web for interesting deals, and his posts are always fun.
Action: Type @cheapskateblog in the Twitter handle box.
When you watch the results, it’s easy to see how Watson’s recommended videos correspond to Rick’s themes of buying and using technology at the lowest possible price.
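As a rough mental model of what “reading your Twitter posts” might involve, here is a toy sketch. The real Watson service extracts concepts and context rather than counting words; the function name and word lists below are assumptions for illustration only:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
             "is", "for", "on", "it", "at", "how", "with"}

def profile_keywords(tweets, top_n=5):
    """Merge a user's recent tweets and return the most frequent
    meaningful words, a toy stand-in for the theme extraction
    Watson performs on a Twitter timeline."""
    text = " ".join(tweets).lower()
    words = [w for w in re.findall(r"[a-z']+", text)
             if w not in STOPWORDS and len(w) > 2]
    return [word for word, _ in Counter(words).most_common(top_n)]

tweets = [
    "Great deal today: a budget laptop under $300.",
    "Another deal alert! Cheap wireless earbuds.",
    "How to save money on tech: wait for deals.",
]
print(profile_keywords(tweets, top_n=3))
```

The overall flow is similar, though: gather the user’s text, extract its dominant themes, then match those themes against the TED library.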
METHOD #4 – COPY AND PASTE
In this last method, you can copy and paste a block of text. For best results, paste up to 2,000 words so Watson has something substantial to analyze. It could be an article you’ve written, a long letter, or a blog post.
Summary
In Part Two of this series, we’ve seen that IBM Watson can interact in a variety of ways to produce amazing results using the TED library of videos. With any of the four methods, Watson analyzes the request, then serves up five recommended TED videos along with metadata that gives you access to many more thematically related videos.
But how does IBM Watson do it? In the upcoming Part Three of this series, we will explore what Watson is doing behind the scenes when we interact with TED: how it interprets meaning and context in videos, all the while cognitively learning more about what the user wants and needs.
Contact Bob Swift-Hill at 403-461-6227 or swifthillb@saas-ssi.com
Graphic and web design by Virgil Smith