Aleksandar (Alex) Vakanski

 

Superintelligence

-- posted July 2016 --

Artificial Intelligence (AI) and its sub-field Machine Learning are among the fastest-growing scientific areas at the present time. We read and hear often about new achievements and successes in the AI field, although the news is mixed with stories about potential future harms and risks to society.

AI progress will eventually lead to machines that reach and surpass human intelligence. One important question is: how long will it take to achieve transhuman intelligence? Of course, no one can know for sure. There is an anecdote that for the last 60 years AI enthusiasts have continuously claimed that we were 5 years away from intelligent machines. At the inception of AI around the 1950s, the idea of intelligent devices stirred great excitement. However, the hardware systems of that time could not support fast data processing, and the available algorithms were quite simplistic. The initial excitement was quickly replaced with a sense of disappointment, resulting in a period called the AI winter, with scarce investment in AI research. The emergence of personal computers in the 1990s provided more capable and affordable hardware, and the field entered a period of renaissance. Progress since then has been remarkable, although it is still far from what most of us, or Hollywood, would expect from intelligent systems. Another joke about the recent AI successes says that we were hoping for intelligent machines and we got email spam filters.

Apologies for the digression, but to get back to the question about superintelligence: in a survey conducted among leading scientists in the AI discipline, when asked when they expect superintelligence to emerge, the majority believed it will happen by the end of this century, a few believed it will never emerge, while the most optimistic estimated that it will appear within the next three decades. Therefore, it probably won't take too long for AI to ascend to superintelligence.

If we take the current rate of progress in computers' processing power and extrapolate it into the future, under the assumption that progress will continue at the same rate (based on Moore's law), we can expect that in about two decades computers will achieve parity in computational capacity with the human brain. If we add another decade for improving the algorithms and related software components, then in about three decades from now it is possible to have machines with transhuman intelligence. On the other hand, we are not exactly sure how the brain works, and there are views that the computational capabilities of our neurons are higher and more complex than we believe. In that case, we may be much farther than we think from reaching the processing power of the brain. In addition, it has been observed that Moore's law is slowing down, and there are predictions that it will reach saturation. So maybe we will never create superintelligence. However, if its creation is technologically feasible, then it will be created.
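As a rough back-of-the-envelope sketch of this extrapolation (the specific figures below are my own assumptions for illustration, not results from any particular study): if the brain performs on the order of 10^16 operations per second, a 2016-era high-end processor delivers on the order of 10^13 operations per second, and capacity doubles roughly every two years, then parity lands at about two decades.

```python
import math

# All figures below are illustrative assumptions, not measurements.
brain_ops = 1e16       # assumed computational capacity of the human brain (ops/sec)
machine_ops = 1e13     # assumed capacity of a 2016-era high-end processor (ops/sec)
doubling_time = 2.0    # assumed doubling period in years (Moore's-law cadence)

# Number of doublings needed to close the gap, then years until parity.
doublings = math.log2(brain_ops / machine_ops)
years_to_parity = doublings * doubling_time

print(f"Doublings needed: {doublings:.1f}")        # ~10
print(f"Years to parity:  {years_to_parity:.0f}")  # ~20, i.e., about two decades
```

Changing any of the assumed constants, for example a slower doubling time as Moore's law saturates, pushes the parity point correspondingly further out.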

How will AI progress lead to superintelligence? Currently, almost all AI applications are narrow and problem-specific, meaning that an AI system, or an AI algorithm, is designed to do a single task, such as recognizing visual patterns in images or translating text into another language. The goal of the AI field is to develop systems that can perform a number of tasks: for instance, if a system knows how to recognize objects in images, it should be able to reuse this knowledge for other tasks, such as text parsing. Such an ability to learn new tasks is called artificial general intelligence, or AGI. As AGI evolves, it will reach human-level intelligence. The transition to superhuman intelligence will follow shortly afterwards, maybe within years or maybe in a much shorter period. Free of the limitations of biological minds, the superintelligence will improve its cognitive capacity at an even faster rate in a process of iterative self-improvement. Innovation and creation of new technology by the future superintelligent entities will result in an exponential technological runaway, i.e., the technological singularity. Our conventional human contribution to technological progress in the post-singularity era will be negligible: the future intelligent machines will in fact be our last invention.
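To make the iterative self-improvement argument slightly more concrete, here is a minimal toy simulation (the model and every parameter in it are hypothetical): a system improved by a fixed external effort grows linearly, while a system whose improvement is proportional to its own current capability compounds into exponential growth.

```python
# Toy model contrasting externally driven improvement with self-improvement.
# The model and its parameters are hypothetical; only the qualitative
# difference between linear and compounding growth is the point.

external_gain = 0.1    # assumed fixed capability added per cycle by human engineers
self_gain = 0.1        # assumed fractional improvement a system applies to itself per cycle
cycles = 50

human_driven = 1.0     # capability improved by constant external effort (linear)
self_improving = 1.0   # capability that reinvests itself in its own improvement (compounding)

for t in range(1, cycles + 1):
    human_driven += external_gain
    self_improving *= 1.0 + self_gain
    if t % 10 == 0:
        print(f"cycle {t:3d}: human-driven = {human_driven:6.2f}, "
              f"self-improving = {self_improving:10.2f}")
```

The absolute numbers mean nothing; the point is only the gap that opens up once improvement feeds back on itself.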

Another question is how superintelligent systems will evolve. One hypothesis, proposed by the futurist Ray Kurzweil, is that we are going to merge with the advanced intelligent systems. In this case, we will augment our biological brains with the artificial brains of the machines and will ourselves transcend into superintelligent systems. Another possible line of development is that the superintelligence will evolve independently from us. It may wake up unexpectedly in a research lab, or perhaps several superintelligent systems will evolve concurrently in different locations. Yet another alternative is that these systems will form a single distributed network of agents, something like a unified global brain. Or a collective superintelligence may exist and function as a collection of many individual intelligent entities.

For humanity, it will be a great challenge to live in a world of superintelligent machines. Vernor Vinge argues that we will have to evolve too, if we don't want to end up trapped in the role of masters whose slaves have god-like abilities. Advanced AI could easily fulfill any of our wishes: immortality, perfect appearance, a fountain of youth; all could be granted to us in no time by our ultraintelligent artifacts with their almighty abilities, through breakthroughs in medicine, nanotechnology, and genetics.

Some aspects of the consequences of living in a world where we are no longer the (only) intelligent beings are addressed by Nick Bostrom in the book Superintelligence: Paths, Dangers, Strategies. He posits that superintelligent entities will develop their own goals and sub-goals to pursue. Among the various goals, self-preservation, cognitive enhancement, and resource acquisition are the ones a pragmatic entity would naturally select as relevant. The point that Bostrom emphasizes is that superhuman intelligence will be so advanced and powerful that it will be unstoppable by humans. If the advancement and transition to sentient machines are not well controlled, it is also very likely that these advanced systems will end up pursuing goals that are not consonant with our human values, and they may pose an existential risk to the human race. His view is that the future superintelligence will probably not hate us or love us; it may simply be indifferent toward us and wipe out our race if we stand in the way of one of its goals, much as we would be indifferent to exterminating a colony of ants occupying a site on which we would like to build a structure.

Bostrom's book prompted several very influential people to express their worries regarding AI development and the risk of superintelligence; Elon Musk, Stephen Hawking, and Bill Gates are the most notable examples. Elon Musk, together with other investors, later pledged 1 billion dollars to support AI research.

Ok, so the next question is: what do we do about it, and how do we mitigate the risk of extinction? Can we undertake some a priori actions to ensure that we can confine superhuman AGI or keep control over these artifacts of ours that possess omnipotent abilities? A naïve solution would be to leave a hidden turn-off button somewhere, so that in case things turn terribly wrong we can shut down the sentient machines. Well, since our AGI descendants will think billions of times faster than us and their cognitive abilities will greatly exceed ours, that would surely not work. Or maybe the United Nations could force all governments in the world to sign an agreement to cease all AI research, and afterwards impose sanctions on the governments that ignore it (similarly to the attempts to regulate nuclear development in the world). It is very unlikely that any of these, or other similar scenarios, would truly work. There are so many parties that would like to employ AI for various reasons and interests, and the competitive advantage of such systems is so compelling, that any attempt to prevent the progress toward it will be fruitless.

In his book, Bostrom considers the AI control problem an essential task of our time. His principal suggestion is that we should incorporate ethical rules, compatible with our human values, into the programming of AI devices. That way, in the event of questionable behavior by the advanced intelligence, the incorporated rules would guide its actions toward circumstances in which we can survive. This raises many other questions: how do we program ethics, and how do we encode our universal human values in a program? Can we trust that these systems will follow our instructions and respect us as their creators?

In an essay entitled Superintelligence: Fears, Promises and Potentials, Goertzel opposes many of the ideas presented in Bostrom's book and concentrates on the benefits and positive outcomes of superhuman AI. He holds that superintelligence can in fact make our world a safer place in the future, by protecting us from bioterrorism, cyberattacks, or similar threats from terrorist groups armed with advanced technologies. He further argues that the assumption that ultraintelligent systems will operate based on goals is too anthropomorphic or biomorphic, and that the assumed goals listed above (self-preservation and the like) very much resemble our human goals of survival, resource accumulation, gene perpetuation, power-seeking, and so on. Furthermore, our own intelligence was shaped through millions of years of evolution, which has been filled with violence and brutality. Maybe systems that don't carry similar evolutionary baggage (e.g., natural selection) will be benign in nature and will not care to annihilate other species. Or maybe the superintelligence will eventually just decide to leave us alone, and go to another planet or another dimension without killing us. In short, he admits that there is a possibility of a negative outcome, but the fact that an event is possible doesn't mean it is likely to happen.

Another problem that Goertzel points out in his article is that many of our speculations are based on the idea of a weakly superhuman intelligence, i.e., one only slightly greater than human intelligence. The idea of a strong superintelligence is simply beyond our comprehension. It is therefore paradoxical, and difficult to believe, that superintelligent entities will rigorously obey human-defined goals and rules as they evolve and advance in humanly inconceivable ways. A similar line of thinking is supported by David Weinbaum, who believes that reducing intelligent agents to goal-oriented machines operating under a set of human-instilled rules and goals is the wrong path toward creating advanced intelligence, since competence and goals are correlated and cannot easily be treated as independent concepts.

I personally believe that technological advancement will continue to improve our lives, as it always has. The advent of intelligent machines is simply an inevitable step in evolution and a natural outcome of our technological advancement. Like Goertzel and others, I also find Kurzweil's idea of augmenting our biological brains with artificial intelligence the most plausible path, so that eventually we will become the superintelligent entities ourselves.

Goertzel's article can be found at the following link:

http://jetpress.org/v25.2/goertzel.htm

 
