

An Unachievable Goal?

Published at 8:07pm on 23 Jul 2007

According to Steve Wozniak (co-founder of Apple), "we never will see a robot that makes a cup of coffee". Could he be right? Is it possible that AI is an unachievable dream? That we will never see robots that can even perform basic household chores, let alone outsmart us?

In a recent interview with Brazilian tech magazine IDG Now (reproduced in English here), Steve Wozniak, co-founder of Apple Computer and a tech legend in his own right, voiced some surprising opinions about the future of technological progress.

Are we going to use our voices, are we going to use our minds to communicate instead of the computer? My answer is: I don't think so. Computers have a keyboard input for you to write down your message to get the computer to work. And the keyboard is operated by the fingers. Why would that change?

To clarify, this man was there when Apple Computer shipped the first commercially successful desktop computer to use mouse-based interaction alongside the keyboard. Apple proved beyond any reasonable doubt that, for many purposes, the average user prefers to interact with a computer using a device other than a keyboard.

Apparently Steve missed it.

It is certainly true that nothing has yet replaced the keyboard for reliable input of large quantities of text. But the non-trivial success of products such as IBM's ViaVoice (speech recognition software) suggests that people are still looking for an alternative. There is no question that most people can speak faster than they can type, so if the speech-to-text problem can be cracked reliably, there will almost certainly be a big market for it.

Because Wozniak phrases his dismissal as a rhetorical question, he gives away little of his reasoning. The implication is that the public don't want to be offered new input methods, but going on what he says next, and reading between the lines a little, it becomes clear that what he really thinks is not that there's no reason to do it, but that we will never succeed at doing it.

Setting aside the issue of mind control, which is mostly a neuroscience problem, and not one we are necessarily on the cusp of solving (although some recent reports suggest otherwise), speech-to-text software has always failed to satisfy because it fundamentally isn't good enough at guessing what we are trying to say. Most speech is intended for human consumption, and humans, in addition to being very adept at parsing speech, also have the benefit of understanding the context. If a human hears the word "shoe", for example, context makes them unlikely to mistake it for "shoo" or "shoot"; a machine may not be able to make that distinction. In other words, to improve on speech input technology we need to make more progress in the field of AI.
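
To see just how much work "context" is doing here, consider a minimal sketch (in Python, with an invented vocabulary and made-up scores) of how a recogniser might use the preceding word to choose between acoustically similar candidates:

    # Toy disambiguation: given acoustically similar candidates, pick the
    # one that best fits the previous word. The bigram table is hand-made
    # for this sketch; a real system would train one on a huge corpus.
    BIGRAM_SCORES = {
        ("leather", "shoe"): 0.9,
        ("leather", "shoot"): 0.05,
        ("leather", "shoo"): 0.01,
    }

    def disambiguate(previous_word, candidates):
        """Return the candidate that is most plausible after previous_word."""
        return max(candidates,
                   key=lambda w: BIGRAM_SCORES.get((previous_word, w), 0.0))

    print(disambiguate("leather", ["shoe", "shoo", "shoot"]))  # -> shoe

The principle - score each candidate against its context - is the same in real systems; the hard part, and the part that needs AI, is building a model of context rich enough to rival a human listener's.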

And Wozniak's opinion on AI? In answer to the question "do you think the age of intelligent machines is coming?", he replies:

It is coming, but it's coming very slowly. These machines that seem to walk really have a special requirement. The way a human being walks is still almost impossible to copy. ... Every one of these robots will kind of do one thing well, but we never will see a robot that makes a cup of coffee, never. I don't believe we will ever see it.

This is a very strange thing to say. Setting aside for a moment that he contradicts himself by first saying that AI is coming and then that we will never see it, he chooses making coffee as his example of something that won't ever be achieved by machines. Ever? Granted, a bipedal robot servitor that can go into a kitchen, get beans out of a cupboard, put a kettle on to boil, and then make coffee from scratch is well beyond our present level of technology, but it is hardly outside the bounds of possibility! I find it much harder to believe that in a hundred years, or a thousand, a coffee-making robot will still defy our best engineers and programmers.

Wozniak claims to have an interest in robotics, but his arguments display some distinctly woolly thinking when it comes to the field of AI:

Think of the steps that a human being has to do to make a cup of coffee and you have covered basically 10, 20 years of your lifetime just to learn it. So for a computer to do it the same way, it has to go through the same learning, walking to a house using some kind of optical with a vision system, stepping around and opening the door properly, going down the wrong way, going back, finding the kitchen, detecting what might be a coffee machine...

Wozniak doesn't seem to understand the difference between learning and problem-solving. In his example he states that it takes 20 years to learn to make a cup of coffee (dubious - most 3-year-olds could probably boil a kettle if they were tall enough to reach) and then gives an example of a "learning process" the robot must go through. But the process he describes (finding your way around an unfamiliar house) is something that any human can figure out in seconds. It's not as if every time we visit a new friend it takes us 20 years to find the kitchen.
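
To make the distinction concrete, here is a toy sketch (with an entirely made-up floor plan) of "finding the kitchen" treated as a problem to be solved by search, rather than something to be learned over decades:

    # Finding a room in an unfamiliar house as a search problem.
    # Breadth-first search finds the shortest route in microseconds -
    # no 20-year apprenticeship required.
    from collections import deque

    FLOOR_PLAN = {  # hypothetical house: room -> adjoining rooms
        "hall": ["lounge", "stairs"],
        "lounge": ["hall", "dining room"],
        "dining room": ["lounge", "kitchen"],
        "stairs": ["hall"],
        "kitchen": ["dining room"],
    }

    def find_room(start, goal):
        """Return the shortest route from start to goal."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for room in FLOOR_PLAN[path[-1]]:
                if room not in visited:
                    visited.add(room)
                    queue.append(path + [room])
        return None

    print(find_room("hall", "kitchen"))
    # -> ['hall', 'lounge', 'dining room', 'kitchen']

The route-finding itself is trivial to program - which is exactly the point: the years of human learning go into the underlying motor and perceptual skills, not into the problem-solving.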

I presume that what Wozniak is trying to say is that it takes 10-20 years for a human to learn to solve problems such as navigating around unfamiliar houses, which they do by trial and error. But most of that time for a human is taken up by learning to walk and then learning to navigate - essentially basic motor functions. Once those are mastered, tasks such as making coffee can be learned almost instantly if someone demonstrates them, or in many cases figured out without any help, just by logical analysis of the situation. I don't think we will have robots that can do that any time soon, but that is not necessary in order to make coffee - that's just how humans do it.

Humans are very general-purpose machines. We can make coffee, but we can also make rocket ships, or surgical implements. It takes a human a year or more to learn to walk because walking is a complex task that a human must learn from scratch. But a specially built robot can be programmed to walk, and it will begin doing so instantly. Humans have to learn to walk because our genes don't store instructions for doing that (as opposed to, say, breathing, which comes built-in). But there are no limits to what we can pre-program into a robot. The limitation is our own understanding. The reason robots can't walk as well as humans yet is simply that humans don't yet understand walking well enough. Our knowledge of walking is inexplicit - our subconscious knows how to move our legs and react to obstacles, but our conscious minds don't understand that process well enough to transcribe it into a set of instructions for a computer to follow. So it's not true that a robot must learn to walk - a human must learn the logic of walking, but once that is done we can program a robot to do it quite easily.
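
As a crude illustration of what "programming it in" means, here is a sketch of a pre-scripted gait for an imaginary two-legged robot. It is nothing like a workable controller - real bipedal walking needs balance feedback - but it shows the shape of the idea: once the logic is explicit, it can simply be written down:

    import math

    # Hand-written gait for a hypothetical biped: joint angles follow
    # fixed sinusoidal trajectories, with the legs half a cycle apart.
    # All the constants are invented for this sketch.
    STEP_PERIOD = 1.0   # seconds per stride
    HIP_SWING = 20.0    # degrees of hip swing
    KNEE_BEND = 30.0    # degrees of knee bend

    def joint_angles(t):
        """Target hip and knee angles (degrees) for both legs at time t."""
        phase = 2 * math.pi * t / STEP_PERIOD
        return {
            "left_hip": HIP_SWING * math.sin(phase),
            "left_knee": KNEE_BEND * max(0.0, math.sin(phase)),
            "right_hip": HIP_SWING * math.sin(phase + math.pi),
            "right_knee": KNEE_BEND * max(0.0, math.sin(phase + math.pi)),
        }

    for step in range(4):  # sample the gait at quarter-stride intervals
        print(joint_angles(step * 0.25))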

You can't program these things, you have to learn it, and you have to watch how other people make coffee. ... This is a kind of logic that the human brain does just to make a cup of coffee.

Wozniak makes a huge logical leap here. From "this task is very complex" we suddenly get to "this task can only be learned, not programmed". It just doesn't follow. Deep Blue did not have to learn how to play chess - Kasparov spent his whole life learning to play, and the computer beat him after just a few games. The learning was done by Deep Blue's programmers: they learned from its mistakes and then programmed in the improvements. This mechanism is not AI in the Terminator sense, but it is a practical approach that can be used to solve any arbitrarily complex task, as long as we can fully understand the problem in advance.
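
For the curious, the decision procedure at the heart of chess programs like Deep Blue is minimax search (plus a great many refinements). It can be sketched in a few lines - the game tree and scores below are invented for illustration; a real engine generates the tree from the rules of chess and evaluates positions with a hand-tuned function:

    # Minimax over a tiny hand-made game tree: score each position by
    # assuming both players play perfectly from there on.
    GAME_TREE = {  # hypothetical position -> follow-on positions
        "start": ["a", "b"],
        "a": ["a1", "a2"],
        "b": ["b1", "b2"],
    }
    LEAF_SCORES = {"a1": 3, "a2": -2, "b1": 1, "b2": 5}  # from our side's view

    def minimax(position, maximising):
        """Value of a position, alternating between our turn and theirs."""
        if position in LEAF_SCORES:
            return LEAF_SCORES[position]
        scores = [minimax(p, not maximising) for p in GAME_TREE[position]]
        return max(scores) if maximising else min(scores)

    # Pick our move: the opponent (minimising) replies to whatever we play.
    best = max(GAME_TREE["start"], key=lambda p: minimax(p, False))
    print(best, minimax(best, False))  # -> b 1

None of this "learns" anything at run time; all the cleverness was put there in advance by the programmers, which is precisely the point.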

We will never ever have artificial intelligence. Your pet, for example, your pet is smarter than any computer.

Wozniak ends his argument here with what he clearly considers a crushing blow. Yes, your dog is more sophisticated at solving problems relating to real-world interaction than any robot or computer built to date. To be honest, so is the average ant, or mosquito, or dust mite. But any chess computer is better at playing chess than a dog, and ELIZA, the software psychiatrist written in 1966, can conduct a more coherent conversation than a dog.

What does this tell us? It tells us that the "program" a dog runs is much better at solving certain types of problem than any software we have yet written. And yet a dog is entirely unable to ever learn to solve much simpler problems like playing chess. This is because a dog is not intelligent in the same way that a human is - it is a specialised problem-solving machine, not a general-purpose one.

And this is very good news for AI. We have no idea how to program sentience, i.e. the kind of intelligence that a human has. A human can create new knowledge - can learn things that were not known before. We don't have the faintest clue how to make a computer do that. In fact, if Steve Wozniak had merely said that this task was impossible, I would have much more sympathy with him. I'd still disagree, because humans obviously have this form of intelligence, and since I do not believe in any magical, supernatural quality to humanity, I must assume that if biology can evolve a machine to do something, it must be possible to replicate it using technology. (That doesn't mean it isn't damned difficult, though!)

But a dog isn't sentient. A dog comes pre-programmed with some abilities, and it learns a few more after it is born, but it is not creating new knowledge, it's just re-discovering what a thousand generations of dog before it already knew - in other words, it is following a program.

And that means that we can, in due course, expect to build machines that can do anything a dog can do - machines that can navigate the streets and locate and recognise people and places - and we can do all of that without having to solve the really tricky problem of genuine AI.

 
