A formula to boost ratings for a long-running quiz show and advertise at the same time: two men, one supercomputer. The machine talks in a monotone and correctly answers in the form of a question: who is Lady Gaga? Will it sell? Perhaps this blog and many other publications give that indication, along with the ratings for Part 2 of IBM’s Jeopardy! Challenge. I checked out some of my old colleagues’ Facebook pages hoping to get a reaction to the Jeopardy! event, but one of my old comp-sci professors summed it up best in his status, in three words: cool yet scary.
I’ve been following Watson’s development closely. Watson’s ability to play is an astounding and monumental accomplishment. My father-in-law mentioned how IBM won against the world’s top chess player many years ago, but that pales in comparison to what happened today. Knowing the difference between knowledge and language was the competitive edge of the best human Jeopardy! contestants. You can have all the knowledge on the internet and still not understand the language; it’s that hard of a problem. My father-in-law could not understand this.
Jeopardy! category names are tricky because they only weakly suggest the expected answer, so Watson tends to downgrade the significance of the category name when calculating its answer. If the clue itself had included “U.S. city,” Watson would have given U.S. cities more weight in its database.
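To make the idea concrete, here is a toy sketch of that kind of evidence weighting. This is not IBM’s actual DeepQA code – the candidate answers, evidence sources, scores, and weights below are all made up for illustration – but it shows how giving the category match a deliberately low weight lets stronger evidence override a category that points the wrong way.

```python
# Toy illustration of evidence weighting -- NOT IBM's DeepQA implementation.
# Each candidate answer gets scores from several hypothetical evidence
# sources; "category_match" carries a low weight because Jeopardy!
# category names only weakly predict the expected answer.

# Hypothetical per-source scores for two candidate answers.
evidence = {
    "Toronto": {"passage_support": 0.55, "answer_type": 0.30, "category_match": 0.90},
    "Chicago": {"passage_support": 0.60, "answer_type": 0.70, "category_match": 0.40},
}

# Assumed weights: category_match is downgraded relative to other evidence.
weights = {"passage_support": 0.6, "answer_type": 0.3, "category_match": 0.1}

def combined_score(scores):
    """Weighted sum of a candidate's evidence scores."""
    return sum(weights[src] * val for src, val in scores.items())

# Rank candidates by combined evidence; the category-heavy candidate loses.
ranked = sorted(evidence, key=lambda c: combined_score(evidence[c]), reverse=True)
print(ranked[0])  # prints "Chicago"
```

Even though “Toronto” matches the category far better, the stronger passage and answer-type evidence for “Chicago” wins once the category weight is small – which is the trade-off the paragraph above describes.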
In the first game, Watson was marvelous at coming up with cut-and-dried answers: questions pertaining to song lyrics or historical facts. It seemed to falter, though, at those nuanced questions so prevalent in Jeopardy! – the ones where the answer takes a bit of creative thinking and is often not so apparent. Take for instance one clue: “From the Latin for end, this is where trains can also originate.” The correct answer, “terminus,” was given by Jennings. Watson gave an incorrect answer, though it technically got the “from the Latin for end” part right with its response, “finis.” It’s these types of subtleties that Watson was unable to grasp.
But on the second day, the engineers and researchers at IBM made an adjustment, and Watson became the know-it-all who turns up at a party and insists on telling everyone just how much he knows. And he knows a lot. Despite blowing a Final Jeopardy! question about airports, IBM’s computer dismantled its human rivals and finished the first game of the two-game match with $35,734 in winnings, far ahead of runner-up Brad Rutter, who earned $10,000, and Ken Jennings with $4,800.
IBM Watson is one very interesting project. Although the challenge isn’t over yet, it’s a definite head-shaker to see how far processing has come: interpreting and learning the nuances of language and providing actual answers in real time. Yet there are still others who don’t understand how such a project could benefit real-life problems – very many, actually. Imagine paving the way for more interactive robots in health care, help desks, defense, and maybe even education. Ever had an issue with a product at home, or needed to make an adjustment to your flight, but hated calling support? With a highly evolved IBM Watson at the helm, those long-winded voice-recognition menus could be over.
Below is a video from Engadget interviewing one of IBM’s researchers, breaking down Watson’s innards and thought process.