Will artificial intelligence destroy mankind?

In 1968, Stanley Kubrick made the cult film 2001: A Space Odyssey, based on a story by Arthur C. Clarke. It presents a vision of the future through the story of an expedition sent to Jupiter to find evidence that another civilization may exist in space.

Five astronauts and an artificial intelligence, the computer HAL 9000, travel across space. HAL controls all on-board systems. When the computer starts reporting faulty information, the crew decides to shut it down, a decision that proves fatal for them: HAL has judged the goal of the mission more important than human lives.

Since then, many other works have asked whether people could become victims of their own creations once those creations slip out of control. And since artificial intelligence is no longer science fiction, it is legitimate to ask whether we should fear its further development.


Grady Booch saw 2001: A Space Odyssey at the age of 13. He was so fascinated by the film that he has dreamed of building a HAL (without the murderous tendencies) ever since.

Today, he is a scientist, storyteller, and philosopher. He is a leading software engineering scientist at IBM, where he develops intelligent systems capable of cognition, learning, and reasoning.

In his view, the fear that super-intelligent machines will one day kill us all is unfounded. He points out that every new technology arrives accompanied by fear, yet new technologies have proved beneficial, significantly expanding human experience.

Don't worry about superintelligence

In his TED talk entitled Don't fear superintelligent AI, Booch explains that AI machines are designed to embody human values. They are not so much programmed as taught. For example, if we wanted to create an artificial intelligence legal assistant, we would teach it not only a knowledge of law but also a sense of compassion and justice.

"In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained," says Grady Booch.

For a superintelligence to pursue its own objectives in conflict with human needs, and thus become an existential threat to us, it would have to control our whole world, like Skynet in the Terminator films. Skynet, however, was an intelligence able to control human will and devices all around the world.

"We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end we can always unplug them," Booch concludes.

We live in a time that offers amazing opportunities for humans and computers to develop hand in hand. Yes, many social and economic issues need to be addressed, from the future of human work to the possibility of supporting and extending human life. But don't be afraid to use computers to make progress, because we are still only at the beginning.

TED talk

Watch the entire TED talk entitled "Don't fear superintelligent AI" (in English, with subtitles in other languages) or read the transcript.

Article source: TED.com - TED is a nonprofit devoted to "Ideas Worth Spreading".
Read more articles from TED.com