
Singularity-Shmarity (or How I Learned to Stop Worrying About a Robot Apocalypse)

Published at 10:02am on 04 Jan 2010

AI isn't just around the corner and it isn't going to bring about the end of humanity.

17 years ago, Vernor Vinge, acclaimed science fiction writer and author of the rather good A Fire Upon the Deep, wrote this essay about the technological singularity.

It's quite long, but it's well-summarised by the abstract:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

It is not uncommon for people to make wild predictions about technology, particularly science fiction authors. It makes for a good read and it allows for some interesting hypothetical polemical debates. It also opens the author up to ridicule when their predictions spectacularly fail to come to pass, which is why it was both brave and foolhardy for Vinge to set the timescale for his predictions within his own lifetime.

It may seem unfair to ridicule someone for an article they wrote 17 years ago, particularly since Vinge technically still has another 20 years to be proven correct (later in the article he revises his estimate to between 2005 and 2030). But my intention is not to pick on Vinge so much as the specific combination of misconceptions about the nature of AI that he harbours (or harboured in 1993), as his views seem to be quite prevalent amongst futurologists, forward-thinkers and even AI researchers (who, of all people, really ought to know better).

This is going to be a long article, so I'll borrow Vinge's approach with a concise abstract of my own:

  1. Human-level-or-better AI will almost certainly not be synthesised in our lifespans, or our children's lifespans, or even our children's children's lifespans, unless of course we figure out a way to extend our lifespans beyond 100-odd years.
  2. Super-intelligent AI is inevitable, and when it happens it will not bring about the end of humanity, but will instead herald a necessary and desirable next-step in our evolution.
  3. No one will be unwillingly subjugated by artificial life, and the human biological form will continue to exist for as long as the universe can sustain life, though it may be that in a few thousand years it will exist only as a curiosity in a museum rather than as the preferred vessel for any human's mind.

...

Popular Myths About AI

To justify my outlandish assertion that all the AI researchers are wrong and we're nowhere near creating a sentient AI, I'm going to start by debunking a few myths about the nature of the problem.

Myth 1: Modern computers are x% as powerful as the human brain, once they reach 100% we'll be able to build AIs

It's an easy excuse to say that the reason we haven't achieved machine sentience is that our hardware isn't powerful enough yet. It's an argument that makes sense to most people - after all, the latest version of Word or Windows may not run on a 3-year-old PC that seemed state-of-the-art at the time it was bought, so it makes sense that something as sophisticated as AI would need even more powerful hardware than we can make right now.

But this is actually a complete misconception about the way that computation works. Faster hardware can do the same calculations faster, but there is no fundamental difference in the nature of what a fast PC is doing as compared to a slow one. Alan Turing (father of the electronic computer) conjectured that any universal computer (or "Turing Machine") can perform any calculation given enough time and storage space (this was later proved mathematically by physicist David Deutsch using quantum theory). That means that anything a fast PC can do, a slow PC can do slower.

There is no algorithm or calculation performed by modern machines that could not be done by a much slower computer or even a person with a pencil and paper. Certainly, calculating the lighting of pixels or projectile collision vectors for a single frame of a modern first-person shooter game would probably take a human several days or even weeks to compute by hand, but they could still do it. New software is artificially restricted to not run on old PCs because it'd be too slow to use (making the software look bad), or in some cases simply because the hardware retailers have made a deal with the software writers to persuade you to buy an upgrade.

A related myth is that AI requires a different computer architecture to the one we currently use. Certainly, the hardware/software of modern digital computers is wildly different in arrangement to that of the human brain, which uses something more similar to an analogue neural network. But again, this simply boils down to a performance issue. Remember how Turing said that any universal computer can perform any computation? Well that also means that any computer architecture can emulate any other. It is perfectly possible to simulate a neural network in a digital computer - it's not as fast as using specially designed hardware, but it does the same job more slowly.
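To make this concrete, here is a minimal sketch (in Python, chosen purely for illustration) of a tiny neural network being simulated on an ordinary digital computer: three hand-weighted "neurons" computing XOR. Dedicated analogue hardware would evaluate the neurons in parallel, whereas a CPU grinds through them one at a time - slower, but the same computation.

```python
import math

def sigmoid(x):
    # Smooth, analogue-style activation approximated with floating point.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum followed by a non-linear activation. A serial CPU steps
    # through this one multiplication at a time; dedicated hardware would
    # do it in parallel, but the result is identical.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    # A two-layer network computing XOR, with hand-picked weights.
    h1 = neuron([a, b], [20, 20], -10)    # roughly "a OR b"
    h2 = neuron([a, b], [-20, -20], 30)   # roughly "NOT (a AND b)"
    return neuron([h1, h2], [20, 20], -30)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))
```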

By saying that the secret to AI is faster hardware, AI researchers make it sound like they have already worked out exactly how to write a sentient program, but that when they tried running it on a modern PC it was too slow to be usable. But clearly this is not the case, or it would be much bigger news and scientists would be scrabbling to build a supercomputer capable of running this revolutionary software. The reality is that these researchers are trying a scattergun approach to solving the problem by running lots of complex, time-consuming processes such as evolutionary algorithms (more on these later) that they hope will magically result in a working AI, but so far the programs are not yielding any useful results. With more hardware they could simply try more of these theories out in a shorter space of time.

I do not believe that more computing power is a substitute for understanding the nature of the problem. No other complex programming problem has ever been solved this way. The key to creating AI is not throwing more hardware at researchers who don't know how to solve the problem. What we need is a solid philosophical understanding of the nature of the thing we are trying to create.

Also, though it's neither here nor there, I strongly suspect that modern computing power is actually already much more powerful than the grey goo between our ears in terms of its ability to quickly and accurately perform calculations, or to store information with perfect fidelity. What makes our brains special is not that they are powerful computers, but rather that the software that runs on them is incredibly good at working around the limitations of the hardware. The brain is not a general-purpose computer in quite the same sense that a PC is, so it's very much faster and better at performing some tasks than others (e.g. pattern recognition), but given an arbitrary clearly-defined computing task, the human brain is basically always slower and less accurate than a PC programmed to do the same job.

Myth 2: We can create a smart AI by starting with a dumb AI and giving it more information so it becomes smarter.

This is a good example of what I mean when I say we need a better philosophical understanding of the problem before we can solve it. I hear AI researchers saying things like "we're aiming to create a robot that learns and develops like a three year old child".

What they mean by this is that they can't figure out how to make a robot as "smart" as an adult, so they're aiming lower and trying to make one as "smart" as a child. This isn't merely a cheat, it's missing the whole point. It's based on the assumption that by aping the behaviour of infant humans we are somehow recreating the mental process behind their actions. This is similar to how a chess computer works: Deep Blue (the machine that first beat grandmaster Garry Kasparov) may have behaved in a similar fashion to a human in terms of moves, but we know that the thought process was completely different. Kasparov used his intellect to choose the best move, whereas Deep Blue just calculated the result of every possible move and picked the one with the highest number of successful outcomes (that's a simplification, as Deep Blue also had a lot of clever tricks programmed in by a chess master, but at its heart it still relies on being able to out-predict a human with brute force).
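Deep Blue's real search was far more sophisticated (alpha-beta pruning, hand-tuned evaluation, custom hardware), but the brute-force core of any such game player is roughly the minimax procedure sketched below in Python. The moves, apply_move and evaluate callbacks are placeholders for whatever game you plug in; the point is that nothing here resembles thought, only exhaustive calculation.

```python
def minimax(state, depth, maximising, moves, apply_move, evaluate):
    # Exhaustively score every line of play to the given depth and return
    # the best score achievable - calculation, not thought.
    if depth == 0 or not moves(state):
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximising,
                      moves, apply_move, evaluate)
              for m in moves(state)]
    return max(scores) if maximising else min(scores)

def best_move(state, depth, moves, apply_move, evaluate):
    # Pick whichever legal move leads to the best-scoring subtree.
    return max(moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     moves, apply_move, evaluate))
```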

This approach (pre-programming a solution for every possible situation) works for chess, but breaks down when the range of behaviours you are trying to simulate is not a simple set like the moves on a chess board, but instead the complete range of actions of a human. Of course a young child can't do as many things as an adult so these researchers figure that they have a better shot at pulling it off. This is largely pointless though. What makes the human brain unique is that it doesn't have an upper limit in terms of what it can learn. Making a robot that can learn to walk but not learn to talk is not going to get us any closer to true AI.

It's also deeply insulting to 3-year-olds - what they have effectively built is a non-sentient robot that has superficially similar behaviour and level of functionality to a child. But 3-year-olds can imagine new things and communicate, and it is evident to anyone who meets them that they have human-level intelligence. No robot I have seen is even vaguely as capable as a 3-year-old child, and most of its capabilities were programmed in from birth, which is cheating. What these researchers have in fact created is not a robot child but a robot retard - a machine that is not simply low down on the learning slope like a child but rather permanently stuck at a fixed low level of intellectual ability.

These researchers hope that if they feed their childish robot more information then it will learn to become smarter and maybe eventually achieve "adulthood" like a human child does. But they are confusing information with knowledge - humans can learn new skills because they understand the implications of information they absorb and build an internal mental understanding of the universe. But to machines information is just data to be stored. They don't understand the meaning and they can't draw any conclusions that they aren't programmed to (see deductive reasoning below). A computer doesn't learn like a human just because you feed it information slowly over time instead of programming it all in to begin with.

AI is quite a broad term, so I think it may be worth clarifying the different meanings. Much confusion about our progress in AI stems from a lack of understanding of the definitions, and an inability to distinguish between them.

Animals, humans and the natural world in general perform many amazing computing tasks that are beyond our current ability to replicate in a computer. Whilst these computations are all impressive in their own way, they are largely independent, and our ability to replicate them artificially varies wildly.

  1. Deductive reasoning - this is not thought in the normal sense. This is logic - like if a=b and b=c then a=c. This is the kind of thinking that a chess computer does to work out the best move. The nature of deduction is that the answer is unambiguous and can be completely derived from known facts. It doesn't require creativity, it simply requires calculation - and computers are very good at this. We've pretty much nailed this task - our best chess computers can consistently beat our best chess grandmasters, not because they out-think them but because they out-compute them. Predicting every outcome for every possible move and picking the move with the best outcome is not really AI at all; it's just computation. Of course, because of limited computing power it's not always possible to predict every possible outcome in a reasonable space of time. And sometimes we want computers to make decisions about non-deterministic systems, or ones where they don't have all the facts. For this we rely on heuristics, and so-called fuzzy logic.
  2. Fuzzy logic - this is the process of making decisions when you don't have all the facts, or don't have time to take them all into consideration. Fuzzy logic is still not really thinking in the sense that humans do it, but it is part of the puzzle. Fuzzy logic works by allowing the possibility that things can be neither true nor false but somewhere in between. In essence you treat truth as a sliding scale and keep adding in more information until your confidence in the answer reaches some threshold (see the short sketch after this list). With deduction we might say that something is true, whereas with fuzzy logic we would say that it is 75% probable that it's true. Fuzzy logic is harder to program than classical binary logic, and it can be made easier and faster by special hardware such as analogue computers and neural networks. It's not true AI, but it does allow us to program systems that behave a little more like natural systems. For example, fuzzy logic and heuristics play a big part in pattern recognition, where the aim is often to find the "best fit" rather than an exact match.

  3. Pattern recognition - this is the field of AI in which we have probably made the most progress. I believe it is plausible that we will have achieved human-level pattern recognition within 30 years, though there are actually creatures in the animal kingdom that do this much better than a human. Pattern recognition means identifying meaningful shapes, smells and sounds in an uncontrolled environment. A human can look at a crowd of people and know which one is their friend, for example. Believe it or not, most of this computation is actually performed in the eyeball before the signals even reach what we would consider to be the brain. This is a purely computational task - a very tricky one that involves much research and fiddling with technology like neural networks, but it is by-and-large something we know how to do. We have cameras that can pick out number plates, and toy robots that can recognise their owner's face. We're getting there. We're making progress. And yes, this kind of AI gets better with more computing power (or rather, it gets faster).
  4. Learning - at first this seems like a no-brainer. If we copy an encyclopaedia into a computer then it's "learned" everything, right? Wrong. This is a bucket-theory of learning - the idea that the mind is like a bucket into which knowledge can be poured. It doesn't work that way. Learning is not just the acquisition of new facts, it is the understanding of those facts. For example, if you have learned mathematics then you can solve any sum, but if you have simply been told that 2+2=4 then you won't know how to solve 1+3 or 5+7. Given enough examples you may be able to use deduction to solve other cases, but ultimately that still isn't the same as having learned mathematics. We don't know much about how to make machines that learn. Most attempts at learning machines have just been a case of programming in a bunch of algorithms and facts and then relying on deduction to do the rest. This results in what are called "expert systems", and whilst the results can be impressive (check out Wolfram Alpha), ultimately these programs are just interactive encyclopaedias or manuals; they can't learn anything new. The ability to absorb new facts and understand them requires creativity.
  5. Creativity - here's the rub. This is the core ability of humans (and possibly some higher primates, although they may just be using fuzzy logic and clever heuristics) that sets them apart from the rest of the animal kingdom. Creativity not only allows us to learn new things and understand concepts that aren't programmed into our DNA, it allows us to create brand new knowledge out of thin air. Creativity is more than just deduction; it allows humans to take the available facts and come up with something that simply cannot be derived logically. We have imagination. We can imagine things that never were and make them real. A truly smart AI needs to have creativity and, believe me, we have absolutely no idea where to even begin thinking about writing software that has an imagination. This is the reason why we won't have AI in 30 years. Sadly, most AI researchers not only have no idea how to program an imagination - they mostly don't even realise that it is the key missing component. But wait, there's more.
  6. Consciousness - sometimes called self-awareness, but this is a bit of a confusing term. Consciousness is more than just the knowledge that you exist; after all, you could program a machine to "know it exists" and go around telling everyone, but it wouldn't be conscious. Consciousness is the sense of self - it's the ability to say "I like the colour blue" and actually mean it. Computers cannot like the colour blue for two reasons: 1) they have no feelings associated with the colour - they might recognise its exact RGB components and be able to recite a pre-programmed opinion about its aesthetic merits, but they can't experience any sensation as a result of seeing it, and 2) they have no emotions. A machine cannot like something or dislike it - all it can do is compare its qualities against a database of "good" and "bad" characteristics and then make a deductive judgement about its merit, which is not the same thing at all. Like creativity, we have no idea how to program consciousness, but worse, unlike creativity we lack even the words to concisely explain what consciousness is. I mean, if you met a person who wasn't conscious, how would you know? If they said they liked blue, how would you tell whether this was a genuine feeling or just a preprogrammed response? Creativity is fairly easy to identify (if it wasn't we'd already have computers that could pass the Turing Test) but consciousness? It's something that we all know we have, but we just have to take it on faith that everybody else does too. If we were all living in private computer simulations populated by intelligent zombies with creativity but no consciousness, we'd actually have no way of knowing it. Scary! Of course consciousness may not be a prerequisite for AI - it may be possible to build a sentient AI that has creativity but no consciousness. Or it may be that consciousness arises naturally as a side effect of creativity.
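As promised above, here is a tiny fuzzy-logic sketch in Python. The clues, values and threshold are entirely made up for illustration; the point is only that each piece of evidence is a degree of truth between 0 and 1, the degrees are combined, and a decision is taken once confidence crosses a threshold.

```python
def fuzzy_and(a, b):
    return min(a, b)        # a common fuzzy conjunction

def fuzzy_or(a, b):
    return max(a, b)        # a common fuzzy disjunction

# Hypothetical decision: is that silhouette across the street our friend?
# Each clue is a degree of truth between 0.0 (false) and 1.0 (true).
right_height  = 0.8
right_hair    = 0.6
familiar_coat = 0.9

confidence = fuzzy_and(right_height, fuzzy_or(right_hair, familiar_coat))

THRESHOLD = 0.75
print(f"confidence {confidence:.2f}:",
      "probably them" if confidence >= THRESHOLD else "keep looking")
```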

Aside: Notably missing from the list above is inductive reasoning. This is the process of algorithmically creating new knowledge by combining existing facts (e.g. 3+5=8 and eight is an even number. Therefore, an odd number added to another odd number will result in an even number). Unfortunately induction doesn't really work and is a red herring that has wasted the time of many an AI researcher and philosopher. What people describe as induction is really just theorising (aka using one's imagination) to come up with an explanation or idea. Often the reasoning is so simple and "obvious" given the presented facts that people assume it can be done mechanically. But it can't. Without some kind of intelligent filtering, induction just produces endless non-sequiturs, e.g. A hat is a piece of clothing therefore all clothes are hats.

So in other words, the problem of true AI can be split into several sub-problems, some of which we've already solved or are making good progress with, and others which we simply have no idea where to begin solving. AI researchers mostly divide their time between making incremental improvements to the areas we understand and making hopeless arm-waving gestures about the ones we don't. Many AI researchers are simply in denial about creativity and consciousness - they just pretend that they are non-problems that will somehow come out in the wash - a kind of "if we build it, they will come" attitude. They hope that if they build a robot with pattern recognition and deductive reasoning capability and then start stuffing encyclopaedias into it, then eventually it will "wake up" and become intelligent without them ever having to figure out why.

The root of this hope seems to stem from the way that babies appear to develop. When a baby is first born it appears to be non-sentient, but slowly over time it acquires new skills and new knowledge until it magically becomes an intelligent, creative being (yes, even Reality TV contestants are remarkably smart compared to a newborn - not using your intellect is not the same thing as not having any).

But this is an illusion. A baby is not just a fully sentient adult that hasn't learned to walk or talk yet. The human brain continues to develop long after we are born. One of the ways that the human brain differs from conventional computers is that it rewrites its own software (and hardware - the line is a bit blurry when it comes to biological brains) in response to stimuli (learning). We have no reason to think that an adult is simply a baby with more data in its brain. It is far more likely that the brain starts out as a barely-sentient "seed" computer that reconfigures itself into a sentient adult brain as it learns, just as the rest of the body reconfigures itself into an adult form. Note that I am not suggesting for a moment that children are non-sentient prior to puberty (if anything, as we pass puberty we become dumber and less able to learn new things). The brain probably reaches sentience within a few months of birth, but the brain's program is a constantly changing and evolving thing throughout the course of its life, unlike most computer programs, which are designed to have a fixed set of functionality that doesn't change during use.

Emulating a 3-year-old child is actually well beyond our current capabilities, but emulating a much younger infant - whilst technically possible - is not much of an achievement, and it is certainly not a stepping stone to creating human-level intelligence. That robot-child will never rewrite its own software to make it into an adult. We don't know how to create a non-intelligent program that can rewrite itself into an intelligent program any more than we'd know how to create an intelligent program in the first place. It's almost certainly not necessary anyway - nature performs all kinds of optimisations and compression in order to cram the full instructions for a human into a few kilobytes of DNA, which is probably the only reason that humans don't spring fully developed from the womb, ready to hunt down the nearest tiger. We aren't subject to the same restrictions.

Myth 3: The Internet might one day "wake up" because of all the knowledge coursing through it

Again this is just wishful thinking - hoping that if we don't solve the problem it will magically solve itself without us having to do anything. The brain is not special because it's big and complicated - there isn't that much space in our DNA for the encoding of intelligence, and if you factor out the parts of our DNA that are identical to a banana plant there's really only a few kilobytes left to encode the parts that make us unique, and most of that probably deals with things like blue eyes and nose hair. I wouldn't be surprised if, when we do finally create an AI, it can fit on an old-fashioned 800KB floppy disk.

Don't get me wrong, once the first AI wakes up I fully expect that its thirst for knowledge will rapidly consume a thousand internets' worth of information, but without that sentient seed to start it off, it's not just going to happen spontaneously.

Myth 4: We can create AI by simulating evolution in a computer

It's a nice idea - after all, evolution produced intelligence the first time round, so why not use evolution to create it again, right? There are just two major flaws:

  1. We have no idea how likely intelligence is to happen spontaneously as a result of evolution, and we can't direct the evolution because we aren't sure what the goal conditions are. As far as we know, evolution has only once resulted in intelligence (if we discount Neanderthals, which happened to more-or-less the same species at more-or-less the same time) and that took hundreds of millions of years.
  2. We have no idea how to simulate evolution in a computer. The field of Artificial Life is just as stuck as AI. Despite having a much better understanding of evolution than we do of intelligence we have failed to simulate the conditions required for unbounded evolution. We have succeeded in creating bounded evolution, namely setting some end condition like "able to walk" and then programming a computer to achieve that goal through random change and nonrandom selection until it achieves the goal, but that isn't useful unless we know the goal conditions for intelligence, and arguably if we knew those it would be simpler just to code a solution ourselves rather than try to evolve one.

So-called evolutionary algorithms make use of the random alteration/nonrandom selection process of evolution to arrive at a solution that we don't know how to get to, or to find an unexpected and hopefully superior solution to an existing problem.
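The classic toy demonstration is Dawkins' "weasel" program, sketched below in Python: random mutation plus non-random selection converges on a target phrase surprisingly quickly. But it only works because the fitness function already knows exactly what "success" looks like - which is precisely the luxury we don't have with intelligence.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the goal we must already know
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Non-random selection only works because we can score "success".
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Random alteration: occasionally swap a character for a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    # Breed a brood of mutants and keep the fittest as the next parent.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    generations += 1

print(f"matched the target after {generations} generations")
```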

This may seem like a great alternative to having to solve tricky design or programming problems ourselves, but really they are of very limited utility. A six-legged robot that can rapidly learn to walk again with five legs after one gets blown off is pretty cool for military applications, and designing a radio circuit that runs at half the power consumption by using simulated evolution to optimise an existing design is quite cool too. But then again, it's not that difficult to simply pre-program all the optimal walking patterns for 1-6 legs into a robot in advance, and a radio circuit that runs really efficiently but is an incomprehensible rat's nest of wires that nobody can understand or fix if it goes wrong is no good for real-world applications.

More importantly though, to use evolution in this way we need to program the computer to recognise "success", otherwise it will just keep producing useless random variations and may never arrive at anything useful. In biological evolution, success is determined by reproduction. Rather inaccurately described as "survival of the fittest", evolution favours variations that are beneficial (or at least non-harmful) to a creature's ability to reproduce. In nature this leads to sharper teeth or bigger brains (chicks dig smart guys), but only because these are survival traits, and survival is a prerequisite to reproduction. Nobody had to tell nature that intelligence was a success condition - it arose logically from the situation (spears beat claws). Guided evolution on the other hand requires you to recognise when a program is heading towards intelligence, and of course since we don't know how to define intelligence programmatically (even the Turing test requires a human adjudicator), we don't know how to guide evolution towards it.

So to create intelligence we can't use guided evolution; we need to use unbounded evolution but reward variations that lead towards intelligence. In theory we could just copy nature: simulate artificial lifeforms and pit them against each other in a virtual environment, hoping that the smarter lifeforms will triumph. But there's a catch. We don't actually know how to simulate unbounded evolution. Natural evolution works because DNA seems to be especially well optimised to create a variety of viable lifeforms given small random modifications to the strand. We've not figured out how to re-create this in a computer yet.

To illustrate how hard a problem this is, take a typical computer program and imagine the likely effect of randomly changing a bit here or there. In 99% of cases the program would crash, and in the other 1% of cases there would simply be no noticeable effect. There is absolutely no possibility that randomly changing a few bits in Microsoft Word would magically add a new feature, or even turn it into Photoshop. That's not the way software works.
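You can watch this happen with a few lines of Python: take a perfectly good scrap of source code, flip one random bit, and see whether the result still runs. (The toy program and the crude "still runs" test are illustrative assumptions, but the ratio speaks for itself.)

```python
import random

SOURCE = "def area(w, h):\n    return w * h\n"

broken = survived = 0
for _ in range(1000):
    # Flip a single random bit in the program text.
    data = bytearray(SOURCE, "ascii")
    data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    try:
        namespace = {}
        exec(compile(data.decode("ascii"), "<mutant>", "exec"), namespace)
        namespace["area"](3, 4)   # "survived" just means no error was raised
        survived += 1
    except Exception:
        broken += 1

print(f"{broken} mutants broke, {survived} still ran")
```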

Crude evolution programs like Conway's Game of Life do seem to be able to achieve quite complex (albeit 2-dimensional) organisms that are even capable of reproduction, but so far all such systems have failed to achieve the exponential, unbounded growth that nature managed to pull off with DNA. They inevitably end up getting stuck in a loop where nothing interesting happens and the same "lifeforms" appear and die out without any further improvement.
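For reference, the entire "physics" of Conway's Game of Life fits in a few lines; the Python sketch below steps a glider (one of the simple self-propagating patterns) across an unbounded grid. Elegant as it is, the menagerie of patterns it produces settles into repetition rather than open-ended improvement.

```python
from collections import Counter

def step(live_cells):
    # One generation of Conway's Game of Life; live_cells is a set of (x, y).
    neighbours = Counter((x + dx, y + dy)
                         for x, y in live_cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A live cell survives with 2 or 3 neighbours; an empty cell is born with 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: after four steps the same shape reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```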

And then of course there's the problem of creating a suitable environment. The world is immensely complex and non-deterministic. Even if we could simulate DNA's special properties in a computer, how would we know which aspects of our environment we need to simulate in order to spur evolution into creating intelligence?

Besides all that, if we somehow lucked out and recreated exactly the right conditions, once we had evolved artificial intelligence (though it would be an amazing achievement), we would be no closer to understanding the nature of intelligence. The evolved AI would be just as random and incomprehensible in its design as the natural intelligence we have already. We wouldn't be able to tailor it to a particular purpose, or boost its intelligence level. It wouldn't really be artificial intelligence at all; it would be natural intelligence, just living in a simulated environment. There would no doubt be benefits to having minds that lived in a computer rather than in the fragile, squishy bodies that we currently inhabit, but it would still only be half the problem solved.

And Now, The Good News...

So now that I've hopefully convinced you that AI is a sufficiently hard and poorly understood problem that it won't be solved by next Tuesday, I can move onto the slightly more positive side of this story.

Though I'm very sceptical about the progress we're making towards the goal right now, I've accepted it as a given that we will eventually achieve human-level artificial intelligence. Earlier in the article I predicted this achievement as being between 300 and 500 years away, but really that's just plucking numbers from the air. We may need to solve a philosophical problem that has troubled us for thousands of years (the nature of consciousness) before we can solve the technical problem of actually implementing it, and that could be solved tomorrow or it could take another thousand years - it's not a technological problem so we've really no frame of reference to predict how long it will take.

I don't believe we'll solve this problem soon because I don't think we're going about it the right way, and I don't think we're going to start going about it the right way until our society has undergone some pretty fundamental changes. I think we'll probably need to have stopped worshipping stupidity and started respecting our own intelligence. I think we'll need to have cast aside our obsession with wasting time on prejudice, western guilt, mysticism and many other largely-pointless activities that pre-occupy the lives of most of us.

We may also need to solve some more pressing problems, such as our own mortality. Right now our lives are fairly short and filled with the constant risk of arbitrary death or suffering from disease, old age or both. Aubrey de Grey is an example of an AI researcher who made the pragmatic decision that AI could wait until he solved the problem of death. After all, once we've stopped worrying about our own mortality we'll have a lot more time to think about other things.

Finally, we'll need to develop a much keener moral and philosophical understanding of the nature of humanity. To re-create ourselves we must first understand ourselves, and right now the vast majority of humanity either isn't interested, doesn't understand, or has an almost-deliberately incorrect understanding of what makes us human. Right now I don't know if we could handle AI as a society; some feel revulsion at the idea of machines being treated as people, and others advocate that we treat them as slaves. If a friendly alien were to come along today and bring us a working AI, I wonder if there might actually be a serious chance that it would be forced to wipe half of us out à la Skynet due to our own ignorance and fear. Conversely, people have a tendency to anthropomorphise and may struggle to distinguish between smart-but-not-sentient machines that can be ethically used as slave labour versus sentient ones that can't. Will we see the equivalent of an animal rights movement that wants to protect the dignity of computerised toasters?

But I do believe the problem of AI is solvable. I'm pretty sure that (weird hardware aside) the brain is not fundamentally different from a desktop PC. There are many who do not share this view, but if we disregard mysticism then we must accept that whatever process leads to intelligence and sentience must involve matter and energy behaving in accordance with the laws of physics, and if so then it can be replicated artificially and/or simulated in a computer.

The Singularity

So what will the future hold for humanity once we have developed AI? Strictly speaking, the technological singularity has nothing to do with AI - on the graph of technological progress (whatever that means) the singularity is the point on the curve beyond which it is impossible to make meaningful predictions about technological progress (when the curve tends to the vertical). In a post-singularity society, the shelf-life of any particular piece of hardware or software will become negligible and people will have to adjust to new tech being obsolete before they've even heard about it. The post-singularity society is defined by technology - tech is the answer to every problem. Instant communication ceases to be a luxury and becomes a necessity. Every manual job is automated; the only profitable job for a human is to work with their mind, and anyone who fails to keep up with the latest developments rapidly becomes as obsolete as yesterday's gadgets.

People who talk about the singularity have wildly differing ideas about what it means or when it will happen (some would say it's already here, triggered by the Internet, others think AI or self-replicating nanotechnology will be the tipping point). Personally I don't think AI is a prerequisite to the singularity. AIs may outpace humanity in terms of the ability to rapidly develop science and technology, but a sufficiently focussed human society with a high population density and good communication and education is quite capable of achieving exponential technical growth without the help of sentient machines. The singularity will be the result of a cultural shift as much as a technological one, and I suspect that AI will be a product of the singularity rather than the cause of it.

Even if it happens post-singularity, AI will potentially bring about another massive change to our culture: whilst manual jobs are rapidly becoming obsolete, humans are still valued anywhere that creativity or imagination is required. Only humans can solve creative problems, and for now there is no danger of these jobs being replaced by machines. But once AI reaches human equivalency, we'll have to compete with AIs for creative work, and whilst (assuming we don't make them our slaves) they will probably expect to be compensated just as well as a human, they may well be far more capable, less susceptible to injury and illness, and able to work in dangerous environments like space or the deep ocean without all the expense of life support, food, etc. AIs will also make fewer mistakes and work faster than humans. We may even be able to produce AIs that do not suffer from aspects of the human condition such as boredom or depression. It seems quite likely that for 90% of jobs, AIs will be the preferred candidate, potentially leaving humanity with a lot of time on its hands.

Of course there are some creative jobs that may not benefit from superior intellect and precision - for which the objective benefits of AI may actually be a drawback. Art, for example, is very much a subjective talent, as are poetry, storytelling, musical composition, etc. It may be that AIs prove to be very poor authors of creative works because they lack an understanding of the human condition. Love is an emotion that is deeply interrelated with our biology and hormones. I don't know if AIs will experience love, but I daresay that if they do it will be a very different thing for them than it is for us. I can imagine that a robot waxing lyrical about the platonic love it has for its fellows might not carry quite the same impact with a human audience as an angsty teenage pop star singing about her broken heart. I could be wrong, but I suspect that as long as humans exist there will be a demand for artworks produced by human artists.

Then again, perhaps it is conceited to think that any aspect of the human condition is so difficult for an external observer to comprehend. After all, we've seen deaf composers produce great musical works and gay authors write successful romance novels and songs for a predominantly heterosexual audience. Perhaps AIs will simply be our superiors at everything we do.

Which raises the question of just how long humanity will be around once AIs arrive. That too depends on a number of factors. If we have not by that time cured most of the ailments of life in a human body (chief amongst them being death), I suspect that we will decide very quickly that raising human children (doomed to a short and possibly unpleasant life and limited to a small subset of career choices) is no longer morally acceptable when the alternative is to raise AI children who will live an unbounded life in complete comfort and safety.

Of course we may well have cured ageing, death and disease by then. We may also have invented technology that allows a human to transfer their mind into a machine, and possibly back again. And if we don't invent it, perhaps AIs will invent it for us. I think in all probability that at least some segment of humanity will retain its biological form well beyond the time when it is no longer absolutely necessary for us to do so. But if and when we do eventually abandon our wetware, I don't think it will constitute the end of humanity in any meaningful sense. We are not defined by our physical form or the gooey grey substrate upon which our software runs.

What makes us human is our culture, and it seems to me that our culture is far too valuable a thing for us to abandon simply because the option arises for us to do so. Either we, our uploaded selves or our silicon descendants will ensure that everything that makes us human continues to exist long after our existence as wobbly, fallible meat-sacks comes to an end.

Ultimately the difference between AI and simply I will be at best a footnote in the history of our development. It is our destiny for our intellect and imagination to outgrow our fragile bodies and under-powered brains. Whether we end up creating new intelligence from scratch, transcribing our own minds to silicon, or doing some combination of both, our descendants will eventually be the product of our own design and ingenuity rather than nature's whim, and this is something to be embraced rather than avoided or feared.

 


Comments

.

I think transferring a human mind into a machine is a task in the field of AI, because we have to design the machine to be a suitable seat for a mind, and understand how to put the mind in there, all of which seems very close to designing an artificial intelligence. I also think that this is the only really significant step that can be taken in life extension. (Perhaps we can squeeze several extra decades out of fleshy bodies by coming up with better ways to repair them, but it seems a losing battle.) So unfortunately AI has to come before this major step forward in our capacity to work on the problem! We could have brain enhancements, though, along the lines of personal computers integrated into the head, without this requiring AI (or having anything to do with it, really). That might brighten our prospects a great deal.

By the way, "We don't know how to create a non-intelligent program that can rewrite itself into an intelligent program any more than we'd know how to create an intelligent program in the first place. It's almost certainly not necessary anyway -" jars oddly with what you've said about the rewriting ability being an important defining feature of intelligence, and about creativity, and about fitting an AI in 800k, all of which makes it seem likely that an AI is a thing that bootstraps itself. Maybe it would be easier to make one which is already somewhat mature, but then again, maybe it would be harder. More convenient for the parents, arguably, but I imagine it would be very disorientated and might need educating, as it were, sideways rather than upwards, which sounds just as bad.

I also note that there is convincing evidence (at least, I'm told it's convincing) of unborn babies learning elements of language, like patterns of emphasis in sentences, via overheard conversations, so "sentience probably within a few months of birth" seems an arbitrary assumption. Not that this really matters, but it has the amusing effect that a newborn baby might be mentally somewhat Spanish, or German, or whatever.

_
Incidentally the contents of the name field gets blanked when I press preview. I wonder if it will show up when I press submit? And why am I obliged to think of a subject?


Posted by _Felix at 12:29am on 16 Feb 2010

Re: .

"... jars oddly with what you've said about the rewriting ability being an important defining feature of intelligence, and about creativity, and about fitting an AI in 800k, all of which makes it seem likely that an AI is a thing that bootstraps itself"

The distinction, which I apparently failed to make clearly, between naturally occurring intelligence and any future AI we might create is that natural intelligence *evolved* whereas AI will be *designed*.

Yes, bootstrapping appears to be part of how natural intelligence arises, but I think that has more to do with the limitations imposed by nature on how much information can be packed into our DNA. Just because nature creates intelligence through an immensely convoluted compression and unpacking process doesn't mean that we have to go about it the same way.

It might be easier and more efficient to program intelligence in from the start rather than constructing a program that becomes intelligent only as part of some feedback loop with its environment. That's certainly the case for most other computing tasks.


Posted by Nick at 7:28pm on 16 Feb 2010
