Merging man and machine
“There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until--in a visible future--the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
This astonishing statement was even more astonishing at the time it was made, by Herbert Simon in 1957. Astonishing, and as we now know, wrong, or at least premature. The “visible future” alluded to is no more visible now than it was then. Simon (who was later awarded the Nobel Prize in economics for his research on decision-making within organisations) clearly let optimism get the better of him. This optimism was based on promising early results in artificial intelligence (AI), in which many seemingly difficult problems were solved rather quickly, and the way to the grand goal of fully human-level machine intelligence seemed clear.
It soon became apparent, however, that the remaining problems were extremely difficult. Simon's statement has become something of an embarrassment to most researchers in AI, who have since generally tried to distance themselves from it and to speak only in vague terms of when this holy grail will be attained.
At least one researcher, however, has no such qualms. In his recently published book, The Singularity Is Near (Viking, 2005), Ray Kurzweil predicts that a “singularity” will be reached in the next couple of decades, when computers will first equal and then soon far surpass human abilities in all fields of thought. Humans will not be left behind, however. We will augment our physical, mental and sensory abilities with increasingly sophisticated implants and plug-ins, so that the distinction between human and machine will fade away and become irrelevant. In this vision, often called “transhumanism”, we will merge with our computers and become a new species of thinking being.
Who is Ray Kurzweil, and why do some very smart people take him seriously? He is no armchair futurologist. He has worked for many years in the field of artificial intelligence and has numerous inventions to his credit, including a print-to-speech reader for the blind, a music synthesiser, and a speech recognition system. So he is well acquainted with what computers can and can't do at present. Another quality that sets his predictions apart from most others is his use of quantitative estimates. The dates he predicts for such events as a US$1,000 laptop equalling the computing power of the human brain are found by extrapolating charts of existing trends, and are not mere dreamy surmise.
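To see the flavour of such an extrapolation, here is a toy calculation in Python. The starting capacity, the brain estimate and the doubling time are illustrative assumptions for the sake of the sketch, not figures taken from the book:

```python
import math

# Toy trend extrapolation: how long until $1,000 of computing reaches
# a brain-scale estimate?  All figures are illustrative assumptions.
def year_reached(start_year, start_ops, target_ops, doubling_years=2.0):
    doublings = math.log2(target_ops / start_ops)
    return start_year + doublings * doubling_years

# Assume ~1e11 ops/sec per $1,000 in 2005, a brain estimate of
# ~1e16 ops/sec, and capacity doubling roughly every two years.
print(round(year_reached(2005, 1e11, 1e16)))  # -> 2038
```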
Kurzweil is also an unsinkable optimist. He is confident that the rapid pace of improvement that we have seen in both computer hardware and software in recent decades will continue indefinitely, and even increase. At the heart of this argument is what he calls the “Law of Accelerating Returns”. In simplest terms, it posits that technology feeds back into its own development, so that the rate of progress, the rate at which new inventions and new capabilities are introduced, continually increases. One can find this law operating even in primitive technologies. For instance, the control of fire made possible the metallurgy of bronze and iron, which in turn enabled the invention of new and more sophisticated machines. But Kurzweil argues that this feedback phenomenon is especially marked with information technology. For instance, chip design software running on powerful computers enables the design of even more powerful chips. Thus each generation of computing technology is crucial to the design of the next.
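A toy simulation makes the effect of this feedback vivid: with a fixed rate of improvement, growth is merely exponential, but if the rate itself is fed by the technology, growth becomes super-exponential. The numbers below are arbitrary; only the shapes of the two curves matter.

```python
# Toy model of the feedback loop: progress raises the rate of progress.
def simulate(years, feedback=False):
    level, rate = 1.0, 0.05            # arbitrary starting values
    for _ in range(years):
        if feedback:
            rate *= 1.02               # better tools speed up tool-making
        level *= 1 + rate              # compound growth at the current rate
    return level

print(round(simulate(100)))                 # fixed rate: about 130-fold
print(round(simulate(100, feedback=True)))  # with feedback: over a million-fold
```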
Kurzweil concludes that in the near future this feedback process will cross a threshold where the technology becomes, in a sense, independent of its creators. Human designers will no longer need to play a role in the process, and computers (or more precisely, transhumans) will be able to continue their evolution on their own. This will free them (us) from the limits set by the fixed capacity of human nature.
Artificial intelligence can’t match the real McCoy
What is wrong with this picture? Kurzweil is correct that technology has tremendous potential for further growth before it reaches fundamental limits set by the laws of physics. In a famous talk at Caltech in 1959 titled “Plenty of Room at the Bottom”, Richard Feynman showed that the amount of information and computational capacity that can in principle be packed into a small space is vastly greater than the technology of his day allowed. In the nearly half-century since that talk, great progress has been made in closing this gap, but there is still much room for improvement. Kurzweil may be too sanguine when he posits that subatomic particles will provide a basis for computation using features far smaller than atoms (most physicists do not foresee this happening anywhere outside of a neutron star), but he is correct that clever new ideas and steady engineering progress may some day produce a Pentium-class computer no larger than a grain of dust, or a tiny robot the size of a human cell.
No, the problem with his prediction that human-level machine intelligence is right around the corner lies not with the hardware but with the software. Kurzweil has fallen into the same error as Simon: assuming that all that is needed to attain true intelligence in machines is to scale up existing systems. In fact what is needed is a qualitative, not merely a quantitative, leap. And no one at present knows how this can be done, or even whether it can be done. If true AI is possible, achieving it will require profound new insights into the nature of intelligence. Such insights cannot come from machines that have not already achieved true intelligence: they will have to come from us plain old-fashioned humans. The technological feedback loop that Kurzweil relies on will not get us past this obstacle.
Research in AI has indeed yielded many remarkable results, but always in circumscribed realms and with limited aims. Getting a computer to play chess at the grandmaster level was once regarded as a benchmark that would prove it could think. Now that computers do play chess at that level, no one considers this a benchmark for intelligence any more. Computers do not play chess by understanding the game, but primarily by searching massively among the possible moves and counter-moves to find the most promising one. Even for a seemingly simple task like reading text from a page image (optical character recognition, or OCR), computers have far higher error rates than grade-school children.
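The difference between searching and understanding can be made concrete with a toy example. The sketch below plays the simple game of Nim by the same brute-force minimax look-ahead that, enormously scaled up and optimised, underlies chess programs; nothing in it could be called comprehension of the game.

```python
# A toy minimax player for Nim (take 1-3 sticks; whoever takes the
# last stick wins), standing in for chess.  The program "plays" by
# exhaustive look-ahead over the game tree, not by understanding.
def minimax(sticks, maximizing):
    if sticks == 0:
        # The player who just moved took the last stick and won, so
        # this position is a loss for the side now to move.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Choose the move with the best guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))

print(best_move(10))  # -> 2: leaves 8 sticks, a losing position for the opponent
```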
Modelling human thought
Philosophers such as Hubert Dreyfus, and even many AI researchers, notably Rodney Brooks, have argued that when humans tackle many, perhaps most, real-world cognitive tasks, they do not construct any formal theories or representations in their minds, but simply cope using low-level reactions to stimuli together with a store of implicit knowledge. If this is the case, then efforts to replicate human thinking by constructing computer models of representations of reality are doomed to failure. These efforts constitute the physical symbol-system approach to AI begun in the 1950s by researchers such as Simon, Allen Newell, John McCarthy and others.
Kurzweil recognises that the symbol-system approach has its limitations and that other strategies may be needed to achieve his goal. An alternative approach is the connectionist style of problem solving. A prime example of this style is the artificial neural network, a system of simplified simulated neurons connected together in a fashion that captures some of the features of biological nervous systems. Neural networks have proven remarkably successful in tackling pattern-matching problems like OCR that have resisted the symbol-system approach. However, they are unsatisfying to advocates of human-level AI for two reasons. First, although they are effective, they become so by a learning process that ends up obscuring how they actually achieve their results. Second, it is not clear how these sorts of systems can be adapted to more sophisticated tasks such as understanding language.
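For readers who want a concrete picture, here is a miniature example of the connectionist style: a two-layer network trained by gradient descent to compute the XOR function. The architecture and constants are arbitrary illustrative choices. Note how, even when it succeeds, the trained weights are just a table of numbers; inspecting them reveals nothing a human would recognise as an explanation.

```python
import numpy as np

# A miniature connectionist system: a two-layer network trained by
# plain gradient descent to compute XOR.  Illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):                 # batch gradient descent
    h = sigmoid(X @ W1 + b1)           # hidden layer activations
    out = sigmoid(h @ W2 + b2)         # network output
    d_out = (out - y) * out * (1 - out)      # backpropagated error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

# Typically prints values near [0, 1, 1, 0]; the learned weights,
# read directly, explain nothing a human would call "how".
print(out.ravel().round(2))
```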
Kurzweil's favourite idea for making an end-run around the software problem is the brain simulator. After all, he argues, we have one working example of a design for an intelligent machine right between our ears. If we can reverse-engineer this design the way computer engineers figure out how their competitors' products work, we can build a working model, and then improve on it. Some progress on this project has already been made. Neurophysiologists have analysed the neural circuitry responsible for some low-level aspects of aural and visual perception, and have constructed computer models that reproduce the main features of these processes, including even illusions. However, this task will become progressively more complex as it works towards higher-level cognitive processes involving ever larger numbers of neurons cooperating in ever subtler interactions. A complete understanding of how the brain works is clearly not a near-term possibility, if it is possible at all.
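To give a flavour of what such models look like at the lowest level, here is a minimal “leaky integrate-and-fire” neuron, the kind of simplified unit used in computational models of neural circuits. The parameters are illustrative; real circuit models are vastly more detailed.

```python
# A minimal leaky integrate-and-fire neuron (illustrative parameters).
dt, tau = 0.1, 10.0                                # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # membrane voltages (mV)
current = 20.0                                     # steady input drive (arbitrary units)

v, spike_times = v_rest, []
for step in range(1000):                           # simulate 100 ms
    v += dt * ((v_rest - v) + current) / tau       # leak toward rest, plus input
    if v >= v_thresh:                              # threshold crossed: fire and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms")
```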
In the end, one comes away from reading Kurzweil with the suspicion that he hopes mind will somehow simply emerge from the mechanism once the mechanism is sufficiently speedy and complex. Nearly 50 years after Simon's astonishing and embarrassing predictions, the secret of human intelligence and creativity remains as mysterious as ever, and as elusive to capture in a machine.
Associate Professor Bob Moniot teaches computer science at Fordham University in New York.