Artificial intelligence is a frightening concept to many people, who imagine that AI would work like ‘Skynet’, the superintelligent program from the popular movie series ‘Terminator’, and bring about the end of the world, or at least of their jobs.
In tech circles, however, AI faces a different set of problems. There are some rules about AI that you should know before venturing into this field.
Many of the proposed laws of robotics and artificial intelligence (terms often used interchangeably) contradict one another. Let’s look at nine of these rules and see how they clash with each other, making it harder to regulate robots and AI.
Asimov’s Three Laws of Robotics
These three laws, formulated by science-fiction author Isaac Asimov, concern robots driven by artificial intelligence. They are widely read in popular culture and analyzed with equal intensity. The three laws are:
Law 1- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law 2- A robot must obey the orders given to it by human beings except for the ones that come in conflict with Law 1.
Law 3- A robot must protect its own existence as long as such protection does not conflict with Law 1 or Law 2.
These laws can complicate how machines decide on their actions. Note that artificial intelligence is commonly divided into three types: narrow intelligence, general intelligence, and super intelligence. As a machine takes on more complicated tasks, these laws begin to conflict with one another. Some critics argue that the laws would confuse a robot, which must protect both itself and all human beings. In a situation where a robot can save either itself or a human, but not both, it is deeply conflicted: Law 3 demands self-preservation, while Law 1 forbids it from allowing a human to come to harm through inaction.
Istvan’s Laws of Transhumanism
Another popular-culture set of AI rules is Zoltan Istvan’s laws of transhumanism. Note that these laws grant AI the status of omnipotence and independence, which aligns with the concept of artificial superintelligence. The three laws are as follows:
Law 1- A transhumanist must safeguard one’s own existence above all else.
Law 2- A transhumanist must strive to achieve omnipotence as expediently as possible—so long as one’s actions do not conflict with the First Law.
Law 3- A transhumanist must safeguard value in the universe—so long as one’s actions do not conflict with the First and Second Laws.
Tilden’s Laws of Robotics
A third popular school of thought governs robotics and the AI that goes into designing these machines. These laws were proposed by one of the pioneers of robotics, Mark W. Tilden. They are as follows:
Law 1- A robot must protect its existence at all costs.
Law 2- A robot must obtain and maintain access to its own power source.
Law 3- A robot must continually search for better power sources.
What the laws proposed by Tilden and Istvan have in common is that they treat robots as a different species or being altogether, one that will act in its own best interests. While these rules may appear frightening to some, it is important to note that friendly AI, designed solely to support humans, is also a growing field. However, figures such as Elon Musk and Stephen Hawking have dismissed the possibility of a benevolent AI.
Could AI really bring about the end of the world?
Not likely as of now.