Has the age of computer overlords begun?
"I
for one welcome our new computer overlords," quipped Ken Jennings as he
and Brad Rutter, up to that point the world's leading contestants on the quiz
show "Jeopardy!", went
down to defeat by Watson, a robotic system developed by IBM.
Has it really come to this? Is humanity now doomed to subservient status, while robots increasingly displace us from jobs that were once considered the exclusive domain of thinking beings?
Can a machine think?
This question was posed more than 60 years ago, at the dawn of the age of electronic computers. Alan Turing, a pioneering computer scientist, proposed a test by which it could be judged whether a machine was intelligent. The test, which he called the Imitation Game but which is now universally referred to as the Turing Test, basically requires that the machine be able to carry on a conversation (using a text interface similar to a chat system, to remove cues such as physical appearance from consideration) that would fool a human judge into believing it was human.
Although the test has its critics on philosophical as well as practical grounds, it has stood as a grand challenge for workers in the field of artificial intelligence (AI). Turing believed that a computer would pass the test by the year 2000, but the task has clearly turned out to be harder than he expected.
Despite decades of effort by many very smart people, the goal remains seemingly as far out of reach as ever. This fact has only been made more apparent by an actual attempt to carry out the Turing Test in practice. The Loebner Prize contest, sponsored by a businessman with no background in computer science, annually pits the best conversational computer programs, called chatbots, against each other and against a selection of humans to compete for the prize of Most Human.
AI researchers generally regard the competition as a farce and a distraction. Even the best chatbots are hollow when you look inside them, using various gimmicks and rules of thumb to produce plausible but essentially meaningless dialog. Meanwhile, efforts to produce computer programs that can genuinely understand language and display common sense have made only slow progress.
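To see what those "gimmicks and rules of thumb" look like in practice, here is a minimal, purely illustrative sketch in the ELIZA tradition. The patterns and canned replies are invented for this example and are not taken from any actual contest entry; the point is that surface-level pattern matching can produce plausible-sounding replies with no understanding behind them.

```python
# Hypothetical sketch of rule-of-thumb chatbot dialog: match surface patterns,
# echo fragments back, and fall through to a stock reply. Illustrative only.
import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r".*\?$", "What do you think?"),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."  # fallback keeps the dialog moving

print(reply("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to my ideas?"  (plausible, but empty)
```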
Now a new contender has suddenly burst onto the scene, one that actually looks like a possible candidate for the status of a thinking machine. Has IBM finally done it? Have its scientists produced true artificial intelligence?
Watson could not pass, or even enter, the Turing Test, since it is not a conversationalist. It only answers questions, one after another, and cannot carry on a dialog. So we would have to set aside Turing's standard for judging intelligence. But if we look at Watson's performance simply on its own merits, was the machine thinking?
Despite all the hype that appeared in the wake of Watson's victory on "Jeopardy!" in the match televised February 14, 15, and 16, the answer is: definitely not.
Watson unquestionably represents a new level of ability in responding to questions given in natural language, a notoriously difficult problem for AI. Human language is full of ambiguity, words have multiple meanings, and context makes a great deal of difference to what sort of answer is expected. Compounding these difficulties, "Jeopardy!" clues often involve puns or other types of witty word play that are very difficult for computers to deal with.
The IBM researchers who built and programmed Watson are to be commended for a real tour de force, combining sophisticated processing of vast quantities of unstructured textual data harvested from the Internet with powerful statistical-association methods that allow various facts and concepts to be related to one another. They have built on the results of years of AI research to achieve an impressively capable question-answering system.
But Watson does not think, at least not in the usual meaning of the word, nor as Turing intended it to be understood. Watson does not "understand" the content of either the questions or its own answers. It works by applying search-engine techniques to the clues to generate hypotheses, and then deciding which hypothesis has the highest likelihood of being correct. Watson's lack of true understanding is revealed occasionally when it makes an elementary mistake, such as giving "Toronto" as an answer when a US city was clearly required.
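To make the "generate hypotheses, then pick the most likely one" pattern concrete, here is a minimal sketch in Python. The clue, the candidate answers, the scores, and the weighting are all invented for illustration; this is not IBM's actual question-answering pipeline, only the overall shape of the approach described above.

```python
# Hypothetical sketch: search produces candidate answers, each candidate is
# scored, and the highest-scoring one is returned. All numbers are made up.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    answer: str
    evidence_score: float  # how strongly retrieved text supports the answer
    type_score: float      # how well it matches the expected type (e.g. "US city")

def generate_hypotheses(clue: str) -> list[Hypothesis]:
    # A real system would run search queries over a large text corpus and
    # extract candidates from the results; here we hard-code two candidates.
    return [
        Hypothesis("Chicago", evidence_score=0.82, type_score=0.95),
        Hypothesis("Toronto", evidence_score=0.85, type_score=0.20),
    ]

def confidence(h: Hypothesis) -> float:
    # Combine the scores into one likelihood estimate; real systems weight
    # many features learned from training data.
    return 0.6 * h.evidence_score + 0.4 * h.type_score

def answer(clue: str) -> Hypothesis:
    return max(generate_hypotheses(clue), key=confidence)

best = answer("This US city's largest airport is named for a war hero")
print(best.answer)  # "Chicago" wins here, but weak type checking is exactly
                    # how an answer like "Toronto" can slip through.
```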
Watson is a remarkable achievement. More importantly, the methods that were used in programming it are not tied specifically to the task of answering trivia questions. It would be straightforward to re-program it to answer questions about medical diagnosis, for example. In fact, IBM has already announced plans to work with a hospital to develop a physician's assistant based on Watson technology. Other possible application areas, such as sales assistance or computer troubleshooting, come readily to mind.
But Watson is too expensive for widespread adoption in the near term. The system that was used to play "Jeopardy!" packs a cluster of 90 high-end servers into a rack the size of ten refrigerators, and must have cost well over $10 million. However, if computer hardware continues its historical trend of steadily increasing performance per unit cost, we can expect similarly capable systems to become affordable for moderate-sized businesses in a decade or so.
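As a rough, back-of-the-envelope illustration of that extrapolation, assuming (hypothetically) that performance per dollar doubles about every two years, a $10 million system would fall to a few hundred thousand dollars in roughly a decade:

```python
# Back-of-the-envelope extrapolation; the doubling period is an assumption,
# roughly in line with historical hardware trends, not a measured figure.
initial_cost = 10_000_000       # rough cost of the Jeopardy! system, in dollars
doubling_period_years = 2

for years in (4, 8, 10, 12):
    cost = initial_cost / 2 ** (years / doubling_period_years)
    print(f"after {years:2d} years: ~${cost:,.0f}")
# After 10 years the same capability costs on the order of $300,000,
# within reach of a moderate-sized business.
```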
When that happens, we can expect to see many companies attempt to use this type of technology as a complete replacement for human workers such as sales clerks or help-desk staff. However, I believe that they, and more importantly their customers, will find the results less than satisfactory. Susan Feldman, an analyst for the consulting firm IDC, has written insightfully:
"What Watson is not is a substitute for a human. It answers questions. That's what it was designed to do, and in information-rich areas like healthcare, finance, government intelligence, or call centres, it will be a boon to overloaded workers who need quick, accurate answers that they can trust. But Watson cannot carry on a conversation. It has no real world experience. It has no emotions."
It is also important to note that Watson is not creative. It cannot come up with any new ideas of its own. If asked for a solution to a problem, it can search for and locate a solution, if someone somewhere has already come up with one. But it is not equipped to be original.
This means that a better role for Watson is as a quick, knowledgeable aide to human specialists, helping them to manage large volumes of facts that would be too overwhelming for any one person to assimilate. This is a development to look forward to, not to dread.
After all, we need to keep clearly in focus that we build computers and other tools to help humans achieve their goals, not the other way around. Efficiency and lowering costs are means, not ends.
In "R.U.R.," the classic science-fiction play about a world run by robots, the general manager of the robot factory is asked what makes the best kind of worker: is it honesty, dedication? "No," he answers, "it's the one that's the cheapest."
But ultimately labour and production are not about economics and doing things the cheapest way. They are about fulfilling human needs, one of which is to be useful, to be creative, to participate in the great task of being stewards of creation and making the earth a better place. Computers are not yet our overlords or even our peers in this enterprise, and they are not likely to be anytime soon.
Associate Professor Bob Moniot teaches computer
science at Fordham University in New York.