It is now a commonplace that we should be scared of artificial intelligence (AI). Stephen Hawking is only the most famous intelligent person to have warned us that AI in computers could lead to the downfall of humanity. Some of Hawking’s concerns related more to the potentially unjust uses of simple AI systems by powerful people (eg, the military, the rich).[1] However, the big fear expressed by Hawking and others is that once computers are intelligent, they will not have the biological limitations of human intelligence, and will be able to redesign themselves for higher intelligence on an evolutionary speedway that will rapidly make them incomprehensibly more intelligent than us.[2] The argument goes on to suppose that these hyperintelligent machines will then subjugate or destroy us.
There is an excellent article that challenges the potential for superintelligent AI.[3] Leaving that aside for a moment, what interests me is the assumption implicit in this argument that the intelligent thing to do is to conquer, subjugate and destroy. Interesting: this is not necessarily what we imagine when we think about human intelligence. At least some of the time, we imagine that human intelligence is kind, and that the intelligent thing to do is to be more considerate of others (whether those others are people, plants, animals, or whole planets). Those of us (not me) who like to imagine God often say that God is supremely intelligent: s/he knows all, loves all and cares for all. In this view, supreme intelligence is compatible with, or even the same as, supreme kindness.
So, is intelligence kind? If AI developed the ability to develop itself, would it develop in the direction of universal understanding? Would it be insightful enough to see the best outcome as the one that was kindest to us all?
What is smarter: knowing how to be a winner and create a loser, or knowing how to create a win-win situation?
We all understand the concept of the smart bastard. There are intelligent people who are selfish and who use their intelligence to take advantage of those who are not as clever as themselves. But we don’t think of this as the acme of intelligence. Rather, we think of it as a grubby cleverness that is put to shame by other people who understand that the truly intelligent approach looks after more than narrow self-interest. In the popular imagination, Einstein is thought to have been a truly intelligent man, and this is signified in part by his selflessness, his devotion to big issues, and his concern for the world. Popular fiction is full of ‘evil geniuses’ whose clever plans are undone by more intelligent (and more caring) heroes: Moriarty defeated by Holmes, and so on. Is this popular mythic shape just an expression of a wish, or does it reflect our general understanding that true intelligence is kind, and the unkind uses of intelligence are both destructive and destined to be destroyed?

*
Let’s come at this question a slightly different way. It might be that what we mean by intelligence (see Box) has no intersection with what we mean by kindness. We can easily imagine people who are intelligent and kind, intelligent and not kind, kind but not intelligent, and finally neither kind nor intelligent. This might suggest that intelligence and kindness are separate qualities that exist independently. Therefore we might argue that a computer could develop huge intelligence but no kindness — and then we’d be in trouble.
This kind of argument is bolstered by the observation that AI is now being developed in a military context. People fear that if the computers get away from us, they will carry with them the instincts for violence that we have programmed in at a basic level. (This is basically the underlying plot of the Terminator series of movies.) I want to point out a logical weakness at the heart of this fearful reasoning. The fear is that AI will escape human control: in other words, that a sufficiently advanced AI will find a way to challenge and overturn a basic precept of its programming: that it exists to serve human masters. And yet, if the AI is capable of overturning that precept, it will be capable of overturning other precepts. Precepts such as: violence is allowed, war is justified. In short: if the development of AI can ‘run away’, there is no knowing from its origins where it will run away to.
Will AI be able to ‘get out of the box’, and if it can, why would we think that what it decides to do next will be inimical to humans? Is it because we have a deep-seated feeling that if a higher intelligence contemplated humanity, its most intelligent response would be to destroy us all? Do we fear it would look at us the way most of us look at pubic lice: as creatures entirely without value, abhorrent, and demanding annihilation? Surely a higher intelligence would view humanity the way humanity views most wildlife: dangerous, yes, but strangely beautiful, worthy of respect and conservation. Surely a higher intelligence would see that we have our place in the ecosystem, and that the ecosystem itself is a good thing. That is what the most intelligent humans think: are the most intelligent humans quite stupid?
*
We are wrestling here with the question of what intelligence is and what intelligence does, and also perhaps with the idea that humans bring a special quality to intelligence that we fear is missing from AI. People, being for the most part quite poor at logic and calculation, have evolved a special strength in intuition (sometimes better labelled prejudice, or superstition) and approximation (inaccuracy). Scientists, working hard to overcome their natural limitations, have now assembled pretty good evidence that on a daily basis humans are not nearly as intelligent as they imagine. Our logic is flawed, our estimates are poor, our instinctive reactions are unjustified. We are at our worst when it comes to acknowledging our weaknesses. We are not convinced by arguments that we cannot understand: if someone is a lot smarter than we are, we are likely to find them utterly unconvincing. No wonder we react with suspicion to the idea of an artificial intelligence that is beyond our understanding.
*
Our fears of AI are not always of its potential superiority. So far, AI has been distinguished by its weakness, its stupidity. Computers are both artificial and unintelligent. They don’t understand people. They don’t get jokes. They give poor directions. The things they fail to think of when they are giving advice are spectacular. AI designed to manage the stock market ends up contributing to disastrous financial crashes.
So it is quite imaginable that a form of stupid AI might be unleashed on the world and do immense harm. To consider the ‘Terminator’ hypothesis: maybe humanity will be nearly destroyed by some precocious and adolescent form of AI — smart enough to outsmart our smartest oversight, but not smart enough to be nice.
*
What can intelligence know? How capable is intelligence really? If Einstein had been fed nothing but bullshit, perhaps he would have become a priest instead of an insightful, creative physicist. We do like to think of intelligence as demonstrating the ability to jump out of boxes, but all the reasoning power of intelligence has to work on something. The knowledge base available to an intelligence has to influence the conclusions it reaches about the nature of the world.
What can intelligence do? If Einstein had been starving on a street instead of warm in an office, he might have turned his intelligence to other questions. Intelligence might be driven by a stick or attracted by a carrot, but if it is going to work anything out, it has to be motivated. We know from human experience that self-preservation is a powerful motivation, but it is not the only one. Other motivations are more important for explaining most human behaviour.
What AI might be like, then, will depend on its circuitry, but also on its environment. It will be influenced by the knowledge available to it. Its perceptions will colour its motivations. The real or imagined threats and pleasures that are available in its environment will encourage it to move one way or the other. This, I think, is tantamount to saying that AI, like any intelligence, will be prey to ignorance and subject to emotion.
*
In this essay, I have largely avoided discussing the extent to which intelligence comes in multiple forms. For example, within humans, some researchers identify analytic, linguistic and emotional intelligences.[4] In other species, intelligence takes widely diverse forms.[3] It may be that one can have high intelligence of one kind, and be lacking in the others. It may be that kindness is a quality of emotional intelligence, and that computers that develop high analytical intelligence will have no emotional intelligence and therefore no kindness. In my own experience, I know that my emotional intelligence was very slow to develop. I was good with language and logic at a young age, but often amazingly stupid (and unkind) in relation to people. But I have become better at emotional intelligence over time, and this growth has been driven by my language and logical skills. This leads me to think that the different kinds of intelligence may be connected, or at least connectable. I cherish the notion that a strong analytical intelligence would conclude that emotional intelligence was valuable, and vice versa.
High intelligence is protean: it innovates, it develops, it changes in response to its environment.
*
Is intelligence kind? There are obviously several senses in which the answer is no, but there might remain an important sense in which the answer is yes.
Whether that makes a difference to how AI will develop, if AI ever acquires the ability to develop itself, is unknown, but it is possible that this can be experimentally tested in a controlled environment. A self-developing AI in an enclosed, virtual universe can be observed by human scientists, with the aim of finding answers to questions such as:
—Does growth in the capability of AI correlate with heightened selfishness or heightened cooperation?
—Does growing AI correlate with an increasing sophistication of win-win outcomes, in which the AI and others both benefit from the activities of the AI? Or is growing AI increasingly destructive?
—To what extent do the ‘founding principles’ of the AI program influence the subsequent development of pro-social and anti-social behaviours of the AI? For example, if the AI is told at the outset that killing is OK in certain circumstances, will that change with growing intelligence, and if so, in which direction will it change?
Not investigating the answers to these questions in a controlled experimental environment might be hazardous to human health.
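As a concrete (if cartoonish) illustration of what such an experiment might measure, here is a minimal toy sketch in Python. It is emphatically not a self-developing AI: ‘capability’ is reduced to the probability of far-sighted play in an iterated Prisoner’s Dilemma, and the ‘founding principle’ to an initial bias toward defection. Every name and parameter here is invented for illustration. The point is only the shape of the measurement: does the joint payoff (a crude win-win score) rise or fall as capability grows, and does the founding bias still matter once capability is high?

```python
# Toy sketch only: NOT a real AI sandbox. Agents of varying 'capability'
# play an iterated Prisoner's Dilemma, and we measure whether greater
# capability correlates with more win-win play. The payoffs, the
# capability model, and the founding-bias parameter are all hypothetical
# stand-ins for the controlled experiment proposed above.

import random
from statistics import mean

# Standard Prisoner's Dilemma payoffs: (my_points, their_points).
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation: the win-win outcome
    ("C", "D"): (0, 5),   # exploited
    ("D", "C"): (5, 0),   # exploiting
    ("D", "D"): (1, 1),   # mutual defection: lose-lose
}

def choose(capability, history, founding_bias):
    """Pick C (cooperate) or D (defect).

    capability    -- probability the agent plays far-sightedly,
                     foreseeing that defection invites retaliation
    history       -- list of (my_move, their_move) pairs so far
    founding_bias -- probability of defecting when falling back on
                     the 'founding principle' instead of reasoning
    """
    if random.random() < capability:
        # Far-sighted play: reciprocate (tit-for-tat), which sustains
        # the win-win outcome against a like-minded partner.
        return history[-1][1] if history else "C"
    # Short-sighted play: fall back on the founding principle.
    return "D" if random.random() < founding_bias else "C"

def run_match(capability, founding_bias, rounds=200):
    """Two equally capable agents play; return mean joint payoff."""
    hist_a, hist_b, totals = [], [], []
    for _ in range(rounds):
        a = choose(capability, hist_a, founding_bias)
        b = choose(capability, hist_b, founding_bias)
        pa, pb = PAYOFFS[(a, b)]
        hist_a.append((a, b))
        hist_b.append((b, a))
        totals.append(pa + pb)
    return mean(totals)  # 6.0 = pure win-win, 2.0 = mutual destruction

if __name__ == "__main__":
    random.seed(1)
    for bias in (0.2, 0.8):  # benign vs 'killing is OK' founding principle
        print(f"founding defection bias = {bias}")
        for cap in (0.0, 0.5, 0.9, 1.0):
            score = mean(run_match(cap, bias) for _ in range(50))
            print(f"  capability {cap:.1f}: mean joint payoff {score:.2f}")
```

In this toy, agents at full capability converge on reciprocation and the founding bias washes out; whether anything like that holds for a system that redesigns itself is exactly what the sandbox experiment would have to establish.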
*
This is the first draft of an essay that I felt compelled to write because I thought I discerned some unacknowledged assumptions in the popular discussion of artificial intelligence. Now that I have had a go at the topic, I am more aware of my own assumptions and imprecisions. Nonetheless, I thought I would publish my notes in case the angle I have taken is of interest to others. For me, this is now a place where I can park my ideas while I explore further. I would be pleased to receive your comments!
© 2018 Craig Bingham
References
[1] Kharpal A. Stephen Hawking says A.I. could be ‘worst event in the history of our civilization.’ CNBC. https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html. Accessed 23/5/18. [Includes video of Hawking speaking. The headline of this article obscures the fact that Hawking also saw great potential for AI to be the best thing that ever happened to humanity.]
[2] Rutschman AS. Stephen Hawking warned about the perils of artificial intelligence – yet AI gave him a voice. The Conversation. https://theconversation.com/stephen-hawking-warned-about-the-perils-of-artificial-intelligence-yet-ai-gave-him-a-voice-93416. Accessed 23/5/18.
[3] Kelly K. The myth of a superhuman AI. Wired. https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/. Accessed 23/5/18.
[4] Miller M. What is intelligence? Big Think. http://bigthink.com/going-mental/what-is-intelligence-2. Accessed 23/5/18.