The Singularity

This is a paper I wrote for class a few months ago. I think it's important to spread awareness of the issue. Let me know what you think!

To say that there are many unknowns in our daily lives would be an understatement. Yet we still enjoy the pretense of a certain amount of predictability and stability. What really changes in our daily lives? People come and go, we grow a little smarter, perhaps a little more self-aware, technology improves. In the end, tomorrow isn’t usually all that different from today. The day may soon come, however, when this relative predictability will be tossed into a blender, sucked up by a hurricane and strewn across the walls of our stupefied imaginations. No, I’m not talking about the apocalypse – at least, not necessarily. The day of which I speak is known to futurists (people who concern themselves with predicting the future) as “The Singularity”. The term is borrowed from astrophysics, where a singularity is a “point in space-time at which the known laws of physics break down” (TalkTalk), such as in a black hole. To futurists, however, the term is much less well defined, which is fitting, since even believers can’t quite reconcile their theories. For example, the Singularity Institute defines the Singularity as “the technological creation of smarter-than-human intelligence” (Singularity Institute), while leading theorist Ray Kurzweil defines it as the point when artificial intelligence becomes a billion times smarter than the human brain. The important part is that the Singularity marks a point in time when the world as we know it ceases to exist. An astounding notion, to be sure, but at least we won’t have to worry about this happening in our lifetime, right? Think again. Kurzweil predicts that his version of the Singularity will arrive precisely in 2045. In fact, many theorists (I won’t call them believers again, since that term can imply religious fanaticism) agree that computers will reach human intelligence before the year 2030!

My goal is not to convince you of the Singularity’s imminence; that would be a waste of my time – the research has been done. Computing power has followed Moore’s law, which states that it essentially doubles every 18 months, with astounding accuracy, even when traced back to the very first computer (Grossman). The fact is that many in the scientific community acknowledge the Singularity as something that WILL happen, at least eventually. The date of its occurrence is, of course, up for major debate. Whether or not this momentous event will happen, the mere fact that great minds believe in it makes it worthy of attention to me. I will use the same argument that is common in climate-change debates: while it cannot be proven that the Singularity will happen, there are many benefits to preparing for it, such as the development of strong societal moral values. The consequences of ignoring it, however, are dire. The ethical implications of the Singularity probably haven’t escaped you. If this is the first time you’re hearing about it, then I can only imagine what is going on in your head. You would certainly not be alone if images of horror movies, Terminator and the like, came to mind. You might also (especially if you were already knowledgeable about the subject) see the great potential in this event. Therein lies my purpose: to fill in the gap between the biases of optimism and pessimism with the sturdier mortar of rationality and truth.
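To make the exponential arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 18-month doubling period is the Moore’s law figure cited above; the 2012 start year (when this essay was written) and the billion-fold threshold (Kurzweil’s figure from the opening paragraph) are illustrative assumptions of mine, not claims from the sources.

```python
import math

DOUBLING_PERIOD_YEARS = 1.5  # 18 months, the Moore's law figure cited above

def growth_factor(years: float) -> float:
    """Compounded computing-power growth after `years`, doubling every 18 months."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Growth from 2012 (when this essay was written) to Kurzweil's 2045 --
# both endpoints are illustrative assumptions, not figures from the sources.
print(f"2012 -> 2045: roughly {growth_factor(2045 - 2012):,.0f}x the computing power")

# How long would a billion-fold (10^9) increase -- Kurzweil's threshold
# from the opening paragraph -- take at this rate?
doublings = math.log2(1e9)                 # about 29.9 doublings
years = doublings * DOUBLING_PERIOD_YEARS  # about 45 years
print(f"A billion-fold increase needs ~{doublings:.0f} doublings, about {years:.0f} years")
```

The point of the sketch is simply that at a fixed 18-month doubling period, enormous factors accumulate within a single human lifetime: a billion-fold increase takes only about 45 years.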

Let us commence by studying the optimistic viewpoint. Experts such as Kurzweil believe that we should embrace the Singularity with open arms. The premise is that this will be the single greatest event in Earth’s history since the rise of human intelligence. To some, this prospect alone is enough to warrant continuing the research. More important to others is the slew of beneficial technologies that this event could unleash. Super-intelligent machines could bring about a plethora of tangible improvements in our lives. Medical benefits might include curing all disease, reaching immortality, and even being able to reprogram our genes as we see fit. These machines could help us unlock the secrets of the universe (visions of Deep Thought, the super-computer in the book/movie The Hitchhiker’s Guide to the Galaxy, spring to mind). And this is just the beginning – how can we predict benefits which we aren’t smart enough to conceive? Perhaps these computers could solve world hunger, or overpopulation, or our economic woes. In the words of Albert Einstein: “The problems that exist in the world today cannot be solved by the level of thinking that created them” (Singularity Institute).

Even Kurzweil admits that the Singularity could be dangerous. He isn’t worried, however, for several reasons. First of all, he believes it to be inevitable – progress cannot be stopped. There are people who seriously believe in it, and imposing bans and regulations on technology would simply force them to perform their research illegally. This would shrink their resource pool and force them to cut corners, which would increase mistakes, thus incurring a greater chance of an “unsafe” Singularity occurring. Enter the Singularity Institute, an organization with the goal of bringing about a safe Singularity (Sachs). Proponents of the theory believe that this is the best way to maximize humanity’s chances of survival. Many Singularitarians also believe that the dangers of technology lie in the user: why would super-intelligent machines become hostile unless we program them that way? In any case, many futurists believe that we could simply “pull the plug” if machine intelligence became dangerous (Sachs).

The other reason for Kurzweil’s confidence is related to his particular vision of the Singularity, which not all experts share. According to his research, the first smarter-than-human intelligence won’t be a super-computer. He believes that we’ll start by adding cybernetic implants to our own brains, closing the gap between man and machine. Indeed, over 30,000 patients with Parkinson’s disease already have neural implants (Grossman). In his future, humans will become more and more machine-like, until there is no longer a distinction. We will ultimately upload our consciousness into machine bodies and achieve immortality. To him, there’s no reason to fear the Singularity, because we will be changing along with it! Machines won’t turn on us because they will be us. Kurzweil sees the word “Singularity” as synonymous with “Utopia”.

Now let us dive into the dark depths of the pessimistic perspective (cue ominous music). The first argument that comes to my mind is: how can we build safeguards for entities that will be smarter than us? The challenge would at first be akin to a 4-year-old trying to make sure that his parents do exactly what he tells them; later on it would be more like an ant ordering us about. There’s no doubt that there are endless doomsday scenarios for the fertile human mind to imagine. One could say that our society, and especially our media, condition us to fear technological change. Yet even if this is the case, it isn’t necessarily a bad thing. Jeremy Cooperstock, Artificial Intelligence professor at McGill, says: “the potential for [the Singularity] to do harm is such that we should look at these doomsday scenarios and consider them… it’s important that we consider the potential of the technology that we create” (Sachs). It is interesting to note that an analysis of the prospects of life on Earth by William McLaughlin concluded that the decline of humans is very likely within the next 100 years (McLaughlin).

There are many other reasons to be afraid of the Singularity. Bill Joy, co-founder of Sun Microsystems, wrote an article in 2000 titled “Why the future doesn’t need us”, in which he voices his exasperation at Kurzweil and others for how casually they regard the Singularity. The article advocates awareness of the dangers of all technological advancement, but he makes several arguments directly against the Singularity. He contemplates Murphy’s law, which states that anything that can go wrong, will. He draws strong parallels to the problems that our technology-happy society has already created for itself, exemplified by how the overuse of antibiotics has led to new strains of disease that are more deadly and resistant than the originals. He cites a text by the brilliant mathematician (albeit terrorist) Theodore Kaczynski, which observes that once machines become better than humans at everything, humans will probably be forced to cede control to the machines, simply because we won’t be smart enough to manage the changes they are making to the world. If we do somehow manage to retain control, then the world will still be controlled by a few “elite” people, who will have a much easier time exerting this control thanks to these machines. He also recognizes a common evolutionary concept: throughout history, whenever a better-adapted species comes along, it invariably eliminates its lesser competitors. He therefore urges that we proceed with the utmost caution; scientists must take the equivalent of a Hippocratic Oath, and technologies which could lead to the extinction of the human race should be abandoned (Joy).

Obviously, it’s impossible to predict the future (at least until super-intelligent machines exist). However, in the face of the Singularity, it’s important to have some sort of cohesive plan of action. Of course individual opinions will vary, but as a planet we must decide: are we for the Singularity, or against it? It’s imperative that this be a planet-wide decision, for if even one group embraces the Singularity, then all others must, or forever live in fear of domination. Conversely, if but one group is against the Singularity, it will quickly become obsolete. Therein lies the biggest problem with being an anti-Singularitarian: how would one go about stopping it? I agree with the claim that even planet-wide regulation would drive the progress underground, with disastrous results, since it’s obvious that proponents won’t give up. The appeal of the Singularity is too great for some. Kurzweil himself has an ulterior motive: since he sees the Singularity as bringing about immortality, he consumes 200 pills a day to keep himself healthy, maximizing his chances of being alive when this breakthrough occurs (Grossman). I’m not saying this to discredit Kurzweil; I have the utmost respect for him, and I’m sure that the global benefits are more important to him. I’m just trying to reinforce the point that he’s unlikely to let go of his dream.

I agreed with Kurzweil’s theory that if the Singularity arrives as he expects, machines will not overthrow man because we will become one of them. I say “agreed”, because upon further consideration, his Utopia has little chance of going to plan. After all, just because technology’s progress is exponential doesn’t mean that common citizens can follow suit. I only bought the laptop that I’m writing on years after its model came on the market. If brain-augmenting cybernetics come onto the market, they’ll first be exclusive to the super-rich. It will take many years, even decades, for the technology to permeate all levels of society. Segregation would take place, and with good reason! Why shouldn’t technologically augmented people consider themselves superior? Even if humanity survived this trying period without the “cyberhumans” wiping the rest out, what about those who refuse implants? There will undoubtedly be many, from the religious to the skeptical. What will their fate be? They will be considered lesser forms of life, and if not exterminated, might be kept as pets. And this is the good scenario, the one where the Singularity occurs as Kurzweil predicts it. If instead the first hyper-intelligent machines take the form of pure artificial intelligence, the consequences could be much, much worse. After reading Joy’s article, I find myself agreeing with many of his points. I fully support the creation of raw, “dumb” computing power, but to me, combining this power with intelligence is simply a risk not worth taking. The surest method of ensuring our survival is to mandate the integrity of our scientists. As Joy suggests (and as I have in past writings), they must be required to take a kind of Hippocratic Oath for scientists.

The scariest part of the Singularity, however, is that we are getting closer day by day, while the world at large remains unaware. Even those who are creating the various pieces necessary for it to occur often don’t take into account the implications of their work. In the words of Benoit Boulet, director of the McGill Centre for Intelligent Machines: “We’re not really trained to think about it. We’re highly specialized engineers and mathematicians and scientists, and we don’t really reflect too much on the philosophy of what we’re doing.” The problem with being optimistic about the Singularity (and I surprise myself by saying this, since I often consider myself an optimist) is that failure means the extinction of our species. The world has never united as one to oppose a common threat, nor has it faced a situation of this magnitude. Scientists are the horses driving our evolution forward, enticed by the dangling promise of a Utopia. We must prevent them from driving us all off a cliff.

Works Cited

Grossman, Lev. “Sin·Gu·Lar·I·Ty. (Cover Story).” Time 177.7 (2011): 42-49. Canadian Reference Centre. Web. 8 Apr. 2012. <http://dc153.dawsoncollege.qc.ca:2078/ehost/detail?vid=5&hid=11&sid=1e4ca13e-300f-4e67-94aa-b9e5dc9c33a1%40sessionmgr15&bdata=JnNpdGU9ZWhvc3QtbGl2ZSZzY29wZT1zaXRl#db=rch&AN=58585916>

Joy, Bill. “Why the future doesn’t need us.” Wired Magazine. April 2000. Web. 9 Apr. 2012. <http://www.wired.com/wired/archive/8.04/joy.html?pg=1&topic=&topic_set=>

McLaughlin, William. “Evolution in the Age of the Intelligent Machine.” Leonardo 17.4 (1984): 277-287. The MIT Press. JSTOR. Web. <http://www.jstor.org/stable/1575105>

Sachs, David. “Survival of the Machines.” The Gazette [Montreal] 19 Jul. 2008: B1. Web. 8 Apr. 2012. <http://dc153.dawsoncollege.qc.ca:2352/docview/434662575/135FEA97E1649341DBB/2?accountid=27014>

Singularity Institute. Singularity Institute for Artificial Intelligence, Inc. Web. 7 Apr. 2012. <http://singinst.org/overview/whatisthesingularity/>

TalkTalk encyclopedia. “Singularity”. Web. 9 Apr. 2012. <http://www.talktalk.co.uk/reference/encyclopaedia/hutchinson/m0028794.html>
