Tuesday, 10 August 2010
The Technological Singularity is Bullshit Part 1
Ever since fire was first tamed, perhaps two million years ago, people have worshipped man's inventions – this is called technophilia. Imagine what it was like on the African savannah all those years ago when one of our ancient ancestors worked out how to harness the flame! All of a sudden people had something to scare away the most threatening predators in the dead of night. Until fire was tamed, it was probably the weakest and youngest who disappeared in the night, but not any more: a big fire now kept the big cats at bay. Not only that, our ancestors could preserve foods by smoking, cook meats that were perhaps inedible before, and gain all sorts of other benefits, from hardening spear points to cleaning wounds with ash. It is no surprise to me that these ancient people worshipped fire; it could be argued that without fire no technological progress is possible at all. Thanks to these now-extinct hominids harnessing this energy, our line of great apes has conquered the world. Later this led to smelting metals and building machines, right up to the technology we have today. It's a humbling thought that none of this would be possible without taming the flame. Fire-worship is probably the oldest religion in the human world; it precedes even our own species (Homo sapiens), possibly predating language and culture. I would argue that until people started sitting around the fire there was not much in the way of the cultural exchange that is needed to have a human society.
Today technophilia is alive and well, and it is my argument that today's over-enthusiasm for technology is just the updated form of fire-worship. Today's technophiles call themselves Transhumanists or Singularitarians and have amassed a range of prophets and scriptures which even predict a Christian-style rapture! These people believe that through technology we will better the human race: live indefinite lifespans, banish ageing, achieve immortality and cure all sorts of other human problems. They believe that sometime in the near future there will be a technological "singularity" and some kind of super artificial intelligence will come into being.
Vernor Vinge (the cutting-edge sci-fi writer) is credited with coming up with the idea first – he wrote that within a few decades a superintelligent AI system would be built, and that shortly afterwards the human era would be ended.
What is this supposed to mean exactly – are computers becoming so fast and powerful that very soon all humans will become useless compared to them? Well, partly yes and partly no. The good news is that Vinge and co. see us merging with technology and becoming more than just our biology. The dream of the Singularity, as I understand it, is that augmented humanity, with the aid of a super-AI, will soon be moving so fast (in terms of progress) that any attempt to predict where it may end up is meaningless. Imagine a very clever AI builds a better computer; this computer builds an even better one, and so on until you get to machines which seem almost god-like in their capabilities. After a while there is no rational way to work out the capabilities or goals of these powerful machines, and we have approached a technological singularity.
The idea of the singularity is borrowed from mathematics and astronomy. In astronomy, a singularity lies at the heart of a black hole, beyond the point of no return where normal physics no longer applies and the gravity is so strong that not even light can escape. Nobody knows what happens inside a singularity: the normal rules of physics break down, and it is not possible to work out what may happen there. More esoteric theories about black-hole singularities suggest that they could be gateways to other universes, or even the points where new universes are "budded" off in the multiverse.
So is it reasonable to apply this "singularity" to technological progress? Is the future really some kind of "black hole"? This idea has come, in my opinion, from some rather badly-thought-out diagrams which are mainly based on the computing side of technology. Normally they are based around Moore's Law, popularly stated as computing power doubling every 18 months. This has undoubtedly held true for a number of years and looks likely to keep going for quite some time yet. Ray Kurzweil predicts that computers will be so fast by around the year 2029 that they will surpass human capabilities in almost every field. The question I ask myself when I hear this is: just because you get faster processing, does that mean computers are going to be more intelligent?
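To put rough numbers on Kurzweil's claim, here is a minimal sketch of exponential growth under Moore's Law. The 18-month doubling period and the 2010 baseline are just the figures assumed in this post, not measured values:

```python
# Rough sketch of Moore's-Law-style growth: processing power
# doubling every 18 months, starting from a 2010 baseline.

def relative_power(years, doubling_months=18):
    """Speed-up factor after `years`, relative to the baseline year."""
    return 2 ** (years * 12 / doubling_months)

# From 2010 to Kurzweil's predicted 2029 is 19 years:
factor = relative_power(2029 - 2010)
print(round(factor))  # roughly a 6,500-fold speed-up
```

Note that this is pure arithmetic about speed; it says nothing at all about whether the resulting machines would be any more intelligent, which is exactly the point at issue.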
I was born in 1977, by which time the computer revolution was really gaining momentum. However, being born when computers were already old hat has given me a completely different perspective on them. I see computers as YES/NO machines: they can either do things, or they say no and crash or reboot.
Don't get me wrong, I love my computer. It helps me with my writing (without it I am completely dyslexic and can barely write a sentence), I make my silly techno music on it, and as a communications device it's second to none. But I don't see it as any more than a YES/NO machine: it is incapable of doing anything unless I tell it to. In fact, when you really think about it, it has only incorporated a lot of other machines in software form and is pretty dumb without a human operator. Most people I know only use computers to check their social networking websites, do a bit of banking or eBaying and, of course, look at a bit of hardcore porn!
Ray Kurzweil is about 30 years older than me, and he has seen computers go from very expensive room-sized installations to hand-held devices which are already a billion times faster than his old academic machines. This gives him a longer-term perspective, and as he has already witnessed a billion-fold increase in computing power it is easy for him to assume that this will continue. I have heard him make the analogy in one of his public talks that today's mobile phones are more powerful than the computers used in the Apollo space programme. While this may be true in terms of raw computing power, there is no iPhone app that can successfully fly a rocket to the moon and back.
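To be fair to Kurzweil, the billion-fold figure is at least consistent with Moore's Law arithmetic over a career. A quick back-of-the-envelope check, again assuming the popular 18-month doubling period:

```python
import math

# How many doublings does a billion-fold increase require,
# and how long does that take at one doubling per 18 months?
doublings = math.log2(1e9)   # about 30 doublings for a billion-fold increase
years = doublings * 18 / 12  # at 18 months per doubling

print(round(doublings), round(years))  # 30 doublings, about 45 years
```

So a billion-fold increase over roughly 45 years is exactly what the doubling rule predicts; the disagreement in this post is not with the arithmetic but with what the arithmetic is supposed to imply.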
I will give you another example: recently, supercomputing has become available to desktop users. For about five thousand pounds you can buy an Nvidia Tesla card and build a machine supporting up to 180 GB of RAM, giving you supercomputing capabilities in your own house. But what would the average person really do with all that computing power – are they going to make an evil artificial intelligence and take over the world? Unlikely; most people would still use the Tesla to do the normal stuff they do on the web. I was rather surprised to find out that these machines just run normal operating systems like Windows.
It appears to me that the average user has already reached a point where much more computing power isn't going to make much difference. Generally, most people are happy to use low-powered netbooks to do what they need to do on the web. In Japan, the average user mainly uses a smartphone for most web tasks and only turns to a desktop PC when the phone can't do what they need. I think this is a trend that's likely to continue: people want things that are easy to use and easy to carry around. The idea that the general computer user is going to start writing powerful programs on super-PCs is rather unrealistic.
Another massive flaw in the theory is that software is so difficult to write. Even as processing power keeps growing, software is still improving at a very slow rate in comparison. There's a saying in Silicon Valley – "what Intel giveth, Microsoft taketh away" – which is why PCs still take a few minutes to boot up despite having processors thousands of times faster than a 1990 PC (which also took a few minutes to boot up). Processing power doesn't mean a thing if the software can't be written to utilise it, and software project failures are much more common than you might think. Sometimes, no matter how skilled the programming team is, they can't find the solution – AI is a classic example of this.
So I find the idea of a singularity driven by Moore's Law a rather archaic theory which doesn't really take into account the average human user, or software development for that matter.
Other parts of the Singularity theory seem to be based on the work of Nikolai Kardashev, who made some interesting calculations based on energy consumption to predict the likelihood of alien civilisations. A Type 1 civilisation would be able to harness all the energy of its home planet; its technology might include space elevators, weather control and perhaps large orbital solar collectors. A Type 2 civilisation would be able to harness the entire output of its local star; its technology might include star-encompassing Dyson spheres, heavy space traffic between planets, and colonised star systems – in fiction, the Star Trek Federation would be a Type 2 civilisation. A Type 3 civilisation would be able to harness the whole energy output of a galaxy; its technology might include galactic engineering, black-hole farming and perhaps the ability to blow up planets – a fantastic example would be something like Darth Vader's Empire in the Star Wars trilogy. A rather humbling thought is that humanity is only at about 0.75 on the Kardashev Scale, and it would only take a huge solar flare (which a Type 1 civilisation wouldn't even worry about) to knock us back to pre-industrial technical capabilities (about 0.2 on the scale).
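The fractional values like "0.75" come from Carl Sagan's continuous interpolation of the Kardashev Scale, K = (log10 P − 6) / 10, with P in watts. A small sketch, where the power figures fed in are rough illustrative assumptions rather than measured values:

```python
import math

def kardashev_level(power_watts):
    """Carl Sagan's continuous Kardashev Scale:
    K = (log10(P) - 6) / 10, with P in watts,
    so 10^16 W corresponds to exactly Type 1."""
    return (math.log10(power_watts) - 6) / 10

# Rough assumed figures, for illustration only:
print(round(kardashev_level(2e13), 2))    # humanity around 2010: about 0.73
print(round(kardashev_level(1.7e17), 2))  # all sunlight reaching Earth: about 1.12
print(round(kardashev_level(3.8e26), 2))  # total output of the Sun: about 2.06
```

Because the scale is logarithmic, each whole level is ten billion times the power of the one before it – which is why climbing from 0.75 to 3 is such a staggering jump.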
In my mind, a singularity culture like the one Vinge and Kurzweil envision is going to look like Kardashev's Type 3 civilisation. If we have computers with so-called god-like capabilities, then a little bit of galactic engineering is not going to be a problem for them. If we are only at 0.75 on the scale in the year 2010, how long will it take to get to Type 3 – one million years, perhaps?
More posts about the improbability of the Singularity to come....