
KITT - the sentient car from the 80s TV series Knight Rider

The Drive of Progress

Published at 1:11pm on 17 May 2007

Acclaimed science fiction author Charles Stross has published his predictions for how our future will be shaped by technology. In his books he describes a fabulous utopian future, but his vision of how imminent technologies will change our lives demands a sacrifice of personal freedoms that I find difficult to stomach...

Charles Stross recently posted his predictions for the future evolution of technology. Stross is a first-rate science fiction author, and I have the utmost respect for his attitude to the future of technology as portrayed in his books. Accelerando in particular is the only book I've ever come across that attempts to describe the experiences of humanity as it enters the technological singularity - most science fiction of this type skips straight into a posthuman utopia without bothering to cover how we get there.

An Inexact Science

I've mentioned before that futurology is an inexact science. When one voices an opinion about the future growth of knowledge, one is making a stab in the dark, and to claim otherwise would be hopelessly naive. Stross notes this fact, admitting that...

... people often think that means I spend a lot of time trying to predict possible futures. Actually, that's not the job of the SF writer at all - we're not professional futurologists, and we probably get things wrong as often as anybody else.

If anything I think he is giving too much credit to so-called "professional" futurologists here. In my humble opinion the difference between a professional futurologist and an amateur is only that an amateur admits that they are probably wrong, and a professional makes a living by persuading people that they aren't.

Privacy

I don't agree with the bulk of what Stross suggests will happen. His estimates of technological progress seem plausible enough, but there seems to be an implicit assumption that once we have the technology to do things, we will feel compelled to do so, or even find ourselves forced by law to do so, even though we might ourselves resent the imposition.

One example he gives: he believes that because in a few decades we will have the technology to record the entirety of our lives on camera as a constant video feed, everyone will choose to do so, and that for some reason we will make the unedited footage publicly available via the Internet.

It seems a jump to me to assume that simply because online privacy is currently a difficult problem to solve, we will eventually give up on solving it altogether and stop worrying about it. Unlike copyright and patent laws, privacy is still important to a lot of people, and it is the will of the majority that tends to win out in such situations - at least, that has historically been the case. Stross asks us to...

Meet your descendants. They don't know what it's like to be involuntarily lost, don't understand what we mean by the word "privacy", and will have access (sooner or later) to a historical representation of our species that defies understanding. They live in a world where history has a sharply-drawn start line, and everything they individually do or say will sooner or later be visible to everyone who comes after them, forever.

I ask you to picture instead a society where everyone is issued with a secure private key, stored perhaps on a chip beneath their skin, and used to encrypt all their personal data transactions. It will be upgraded each year to a longer, more secure version to keep up with the processing power available to crack encryption, and will guarantee near-perfect security against identity theft and privacy violation. Smart hardware will actively block or censor recordings if they contain sensitive data. Whilst you will be able to search for anything about anyone, compliant search engines will refuse to index or return data if they are asked not to by the owner of that data (like a smarter version of robots.txt) and non-compliant search engines will be hunted down and DOS'd by Internet-wide anti-spyware applications.
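
To make the "smarter robots.txt" idea concrete, here is a minimal sketch, in Python, of what a per-person indexing policy might look like. Everything in it - the policy format, the field names and the may_index check - is my own illustrative assumption, not a description of any real standard:

# A sketch of a per-person indexing policy - a "smarter robots.txt".
# The format and field names are invented purely for illustration.
ALLOW, DENY = "allow", "deny"
example_policy = {
    "owner": "alice",
    "default": DENY,  # deny indexing unless a rule says otherwise
    "rules": [
        {"path_prefix": "/public/blog/", "action": ALLOW},
        {"path_prefix": "/life-log/", "action": DENY},
    ],
}
def may_index(policy, path):
    """Return True if a compliant search engine may index this item."""
    for rule in policy["rules"]:
        if path.startswith(rule["path_prefix"]):
            return rule["action"] == ALLOW
    return policy["default"] == ALLOW
assert may_index(example_policy, "/public/blog/2007-05-17.html")
assert not may_index(example_policy, "/life-log/2007-05-17.video")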

For the sake of historians, ownership of your life's records will pass on to your descendants, or enter into the public domain a few decades after your death (though I would hope that it will not be long after we have such recording technology that death will cease to play a big role in our lives, and historians will be able to find out about the lives of people who lived a few hundred years earlier by simply asking them).

I don't know if this is a plausible future - at some point the issue of privacy will come to a head and a satisfactory solution will emerge. Perhaps Stross is right, and society will decide that convenience trumps privacy. But I don't see what he's basing that assumption on. Technological progress doesn't work that way - at least, it hasn't historically. It may seem, especially to technophobes, that technology happens whether we want it or not, but if that were true, why haven't we had videophones since the 1970s? The reality is that whilst technology can make us change our perspective and values, and even voluntarily change our way of life, it's not a given.

Road Safety

Another major revolution that Stross predicts is the driverless car. Again, he believes not only that the driverless car will be able to replace human drivers, but that the government will mandate that they do so, and ban humans from driving cars manually:

driverless cars. They're going to redefine our whole concept of personal autonomy. Once autonomous vehicle technology becomes sufficiently reliable, it's fairly likely that human drivers will be forbidden, except under very limited conditions. After all, human drivers are the cause of about 90% of traffic accidents:

This reminds me of Judge Death's argument in the comic book series 2000 AD: since all crime is committed by living beings, life itself should be outlawed. "The crime is life, the sentence is death". It's quite easy to fall into a fallacy when analysing the causes of accidents in terms of percentage statistics. The "let's eliminate the worst X%" arguments don't make much sense. If we were to ban the worst 10% of drivers, for example, then there would be fewer accidents, but there would still be a worst 10% of drivers causing the majority of the remaining accidents - where would you draw the line?

Similarly, if machines drove cars instead of humans then maybe 25% of all traffic accidents would be caused by machines. The total number of accidents would be lower, yes, but we would then be in a position to say "25% of all car accidents are caused by machines - let's put humans in so they can override the machines when necessary and cut down on accidents."
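
To see how both headline percentages can be true at once, here is a quick back-of-the-envelope calculation; the figures are made up purely to illustrate the point, not real accident statistics:

# Hypothetical figures only - invented to illustrate the point above.
accidents_today = 100   # say, 100 accidents per year with human drivers
human_caused = 90       # "90% of accidents are caused by humans"
accidents_with_autopilot = 20  # suppose automation cuts the total to 20
machine_caused = 5             # of which, say, 5 are the machine's fault
machine_share = machine_caused / accidents_with_autopilot
print(f"Machines cause {machine_share:.0%} of accidents")  # 25%
print(f"Total accidents: {accidents_today} -> {accidents_with_autopilot}")

The cause-percentage only tells you who to blame for the accidents that still happen; it tells you nothing about how many accidents there are, which is the number we actually care about.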

It may be true that 90% of accidents are in some way caused by humans, but it's not a very useful statistic, because we don't know how many accidents are prevented by those same humans. To really improve the death toll from driving, we need to figure out why people cause crashes. If it is due to distraction or falling asleep, then an autopilot of some kind might be able to usefully intervene in such cases. But I can't see humans being removed from the loop altogether. To see why this won't happen, we need to look to aircraft. Flying an aircraft is much easier than driving a car. Landing one isn't, but flying one is, because there is nothing to crash into at 30,000 feet. As a result, autopilots have already revolutionised air travel, and commercial passenger planes are now largely flown by computer.

But aircraft still have human pilots. And they aren't just there to take off and land, they are there so that someone can be there to sort things out when something unexpected happens. Because something unexpected always happens eventually. The real world is not a controlled environment - the application of creativity to writing a computer program for solving expected problems will never substitute for having someone on the scene to apply creativity to solving an unexpected problem when it occurs.

The study of safety-critical systems has taught us over the years that one must be extremely careful when taking humans out of the decision process. Machines can make decisions much faster than a man, but if they make the wrong decision that basically just means that they can make, say, fifty fuck-ups per second, which suddenly doesn't seem like such a great advantage.

Modern safety-critical systems are a careful balance between trying to solve problems without confusing the human operator with unnecessary warnings (sensory overload), and at the same time making sure that when things go pear-shaped the human has time to intervene before it's too late. Unfortunately computer programs are not only very bad at solving unexpected problems, they are also very bad at recognising when a problem has strayed outside their ability to solve it, and failing to inform an operator that a problem exists (masking) can be disastrous.
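
As a rough illustration of that balancing act, here is a toy sketch in Python. The thresholds, the idea of a single "confidence" score and the three responses are all invented for illustration, and bear no resemblance to how real safety-critical systems are designed or certified:

# Toy sketch of the warning/masking trade-off - all numbers invented.
def handle_problem(severity, confidence):
    """Decide what a driver-assist system does with a detected problem.
    severity: 0.0 (trivial) to 1.0 (critical)
    confidence: how sure the automation is that it can cope, 0.0 to 1.0
    """
    if confidence > 0.9 and severity < 0.5:
        return "handle silently"            # avoid sensory overload
    if confidence > 0.9:
        return "handle it, but tell the driver"
    # Staying silent here would be masking - the disastrous failure mode.
    return "alert the driver and hand back control"
print(handle_problem(severity=0.2, confidence=0.95))  # handle silently
print(handle_problem(severity=0.8, confidence=0.3))   # alert the driver...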

In the end though, even if the technical barriers to creating a safe automated car are solved, they will still never replace human drivers completely because of the issue of accountability. It may be the case that automated cars will almost never crash, but when they do crash, people will want someone to blame, and car makers won't want that to be them.

We can forgive a human for failing to prevent an accident - to err is human, after all - but we will never forgive a machine. A human can say "it all happened so fast, I didn't have time to react" and that's okay, we get that. But if a car maker tells us that their car failed to react in time, we'll want to know why they didn't splash out an extra $0.50 on a faster processor, and they'd better have a cast-iron explanation, especially when the new Mitsubishi has one... etc.

Even when we eventually have AIs that are able to flawlessly make split-second decisions to avoid crashes, we will still allow humans into the decision-making loop because of the issue of accountability.

Let's say that you are driving along in your expensive new auto-car when a person suddenly steps out into the road ahead of you. The car determines in microseconds that if it swerves you will die, and if it hits them they will die. It's them or you. Now, would you want your $20,000 car to decide to kill you, or the innocent pedestrian in the road ahead of you? Are you comfortable having the manufacturer make that decision? You'd want your car to save you at all costs, right? But then how would the pedestrian's family feel about that?

What if the "pedestrian" was actually a troop of little girls? Maybe in that split second you would have swerved and taken your chances, if you'd been driving. But would you want the car to make that decision on your behalf? "Car heroically kills owner to save children" doesn't quite have the right ring to it, does it?

If humans must risk death and grief, they would rather it be at the hands of someone who understands what death and grief mean, and whose decisions are influenced accordingly. Decision-making machines have an almost limitless potential to reduce accidents when used wisely, but they will not be doing it by removing us from the decision-making loop - not for a long time yet anyway, not until we have thinking machines that rival human intelligence.

An autopilot that could intervene when we screw up, or fall asleep at the wheel, would be a great boon to road safety. Sadly though, we are a long way away from technology that can intervene when humans screw up - all existing safety-critical systems work the other way round. And until we have machines that are not simply faster at making decisions, but better at making them, that's the only way round that makes sense.
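
To spell out the distinction, here are the two arrangements side by side as a hypothetical Python sketch; both functions and the is_clearly_unsafe test are invented purely for illustration:

# Two ways of wiring a human and an autopilot together - illustrative only.
def machine_in_charge(human_command, machine_command, human_overriding):
    """Today's arrangement: the machine acts, the human can take over."""
    return human_command if human_overriding else machine_command

def human_in_charge(human_command, machine_command, is_clearly_unsafe):
    """The hoped-for guardian: the human drives, and the machine steps in
    only when the human's input is clearly about to cause a crash."""
    if is_clearly_unsafe(human_command):
        return machine_command
    return human_command

# e.g. a guardian that only vetoes a steering angle that would leave the road:
steering = human_in_charge(human_command=-0.4, machine_command=0.0,
                           is_clearly_unsafe=lambda angle: abs(angle) > 0.3)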

Conclusion

Stross's view of the future is clearly quite different from mine, and whilst I am sure that we are both wrong, I sincerely hope that he is wronger, because in my view, technology will free us from indignity and oppression, and in his view it seems that the only things that machines will free us from are our privacy, autonomy and freedom.

I think we would both claim to have an optimistic view of the future, and certainly Stross seems optimistic in that he doesn't predict that the future of humanity is for George Bush to wipe us all out with nukes - I applaud him for that. But I think perhaps his predictions stem from a more authoritarian outlook than mine - he thinks we face a future of ever increasing rules and regulations to prevent us abusing the power that technology grants us, and that this is both necessary and beneficial.

I hope that technology will grant us greater independence - that every human will become a self-sufficient entity, no longer reliant on others to feed, clothe, educate and rule them. I hope that technology will finally allow humans to relate to one another on a level playing field by removing the social barriers and abuses caused by one person relying totally on another for their survival (children on their parents, housewives on their husbands, sweatshop workers on their employers, etc). And I hope that we will finally get over our tendency to look for an "authority" - whether it be God, government or driverless cars - to make our decisions for us and accept blame for our own failings.

Here's to the uncertain future.

 


Comments

Food for thought

That food may only be Pickled Onion Space Raiders, but still...

Food for thought.


Posted by Kieran at 2:51pm on 17 May 2007
