An intelligent machine must first go through a learning phase, during which it learns how to function properly. However, it is impossible to teach machines every situation that may occur in real life. That is why we must always make sure that machines work as they should and that no one misuses them for personal gain, against a group of people, and so on.
Machines will replace soldiers and weapons, and wars will no longer take place only on battlefields. Cybersecurity will become a crucial issue, and not only for military machines: any intelligent machine could be misused against people. Another concern is that machines may turn against us on their own. What if a machine concludes that killing some of us, or even all of us, is the best possible solution to a task?
Humans sit at the top of the food chain and dominate the world thanks to our intelligence and ingenuity. But what happens when machines overtake us in this respect and we no longer have the power to control them? In theory, we might not be able to simply switch the machines off, because they would anticipate the attempt and defend themselves.
Intelligent machines are becoming more and more human-like. When robots become entities able to perceive, feel, and act independently, will they have rights of their own? Will we treat them the way we treat animals of similar intelligence? Will we be moved when these feeling machines suffer?
Artificial intelligence has tremendous potential, and it is up to us to approach it responsibly.
-kk-
Article source: World Economic Forum, organizer of the Davos meeting of political and business leaders