The year 2017 has seen talk about and advocacy for granting electronic personhood and rights to advanced robots.

Written by Philip Graves, June 6-7, 2017.

The text has been copy-edited for house style by David Graves, Director of GWS Robotics.
 
 

Robot Rights? On the question of rights for robots and artificial intelligence

This year has seen much talk about and advocacy for granting electronic personhood to advanced robots. Some have framed this advocacy in purely legalistic terms, as a device to ensure the correct attribution of legal responsibility for actions taken by robots and to enable insurance against liability for damages caused by those actions. Others, however, have taken it much further, implying that advanced robots should be granted true personhood, characterised by rights in addition to responsibilities.

The notion of granting robots any form of true personhood characterised by rights, when no such status exists in law for non-human animals or other life-forms, could be seen as a rather extreme step.

Perhaps it would be helpful at this stage to attempt to analyse robot rights advocacy from a psychological perspective. What is it that makes people project human-like qualities of experience onto this particular class of non-living machines?

That robots are designed and programmed by humans to respond in real time, and in sophisticated ways, to external inputs and internally stored data is beyond doubt. But they remain digital processors whose output is the predictable product of pre-programmed logic operating on binary data. It would be a stretch of the imagination to say that robots are taking decisions.

Some robots are now being equipped with contact sensors that detect a risk of damage to their physical shells and trigger an emergency response; many others carry visual-processing software that detects a risk of collision with people or other objects and triggers avoidance measures. But none of this alters the wholly digital, emotionless nature of the processing involved.
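
To illustrate the point, a sketch of such a ‘protective reflex’ might look like the following. The sensor readings, thresholds and responses here are hypothetical, invented purely for illustration rather than taken from any particular robot; the behaviour reduces to a deterministic rule, not a felt experience.

    # Hypothetical 'protective reflex': thresholds and responses are invented for illustration.
    CONTACT_FORCE_LIMIT = 50.0   # newtons: assumed damage threshold
    COLLISION_DISTANCE = 0.5     # metres: assumed proximity threshold

    def protective_reflex(contact_force: float, obstacle_distance: float) -> str:
        """Map sensor readings to a pre-programmed response."""
        if contact_force > CONTACT_FORCE_LIMIT:
            return "halt_and_signal"   # emergency stop triggered by the contact sensor
        if obstacle_distance < COLLISION_DISTANCE:
            return "steer_away"        # avoidance measure triggered by visual input
        return "continue"              # no rule fires; carry on as programmed

    # The same inputs always produce the same output: a programmed rule, not a reaction the machine feels.
    print(protective_reflex(contact_force=72.3, obstacle_distance=2.0))  # prints 'halt_and_signal'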

It is perhaps our readiness to project human-like qualities onto objects that appear to behave in recognisably human-like ways (as they have been programmed to do) that leads us into the perceptual trap of attributing something tantamount to conscious and sentient life to robots.

There also appears to be a particular fascination, among a number of robot developers and enthusiasts, with the prospect of ultimately creating true independent consciousness in these machines, albeit from artificial beginnings, through ever-more-sophisticated robot designs and programming. This fascination may give rise to a desire to experiment actively towards that end.

Other voices are driven less by this desire and more by fear that sophisticated robots could develop autonomous consciousness of a kind that ethically requires their being granted rights by society, in order to protect them from various forms of perceived cruelty, such as slavery, confinement, restricted self-determination and freedom, and externally imposed ‘death’, whether through the removal of their power source or their final disassembly.

Some have argued that future robots will be designed to mimic a full range of human emotions. This prospect raises at least three questions:

  • Firstly, whether or not the simulated emotions engender real pleasure and pain at a conscious, experiential level for the robot. To this question, our answer would be almost certainly not, provided that it is a robot, and not some kind of bioengineered hybrid of living tissue and digital processing technology – since robots by themselves are non-living machines, essentially hardware processors of digital code, controlled by programs that drive, and draw data from, mechanical appendages;
  • Secondly, to what degree it is even ethical, and at what point it may become unethically misleading, to set out to create, or to permit in law the creation of, robots that simulate the expression of complex emotions such as physical pain, grief, anger and love in a lifelike and persuasive way. Such a lifelike simulation of emotion could give suggestible human onlookers the illusion that these robots are experiencing real human emotions, and elevate the robots’ imagined status, in those onlookers’ eyes, to that of sentient beings with concomitant rights. Could such a focus unhealthily distract from the granting of due rights to truly sentient beings such as other animals, as well as to the whole of humanity itself?
  • Thirdly, to what degree it would be ethical, in the event that we were able to generate true autonomous and sentient consciousness in artificial creations such as robots, with or without the integration of bioengineering, to subject creations of this kind to the experience of emotions as a product of their engineering and programming. With the generation of the ability to experience pain in an artificially engineered creation of any kind would come a responsibility of care at least matching that inherent in any pet-keeping or animal-husbandry relationship. This consideration by itself raises a host of problematic ethical issues at the level of research and development, before we even consider the practical implications of placing artificially intelligent life forms in the hands of corporate entities or the general public.

 

Advocates of rights for advanced robots contend that these rights should include rights to preservation and autonomy. See, for example, the recently published article by George Dvorsky, in which he reiterates his earlier robot rights manifesto, to be applied to all robots said to ‘pass the personhood threshold’. The rights Dvorsky claims for robots comprise:

  • The right not to be disabled against their will
  • The right fully to know their own source code
  • The right not to have their source code changed against their will
  • The right to self-duplication, or to refuse to be duplicated
  • The right to privacy of their own ‘internal mental states’

 

But to establish such rights for robots could be extremely dangerous for humanity, elevating machines that are constructed and initially programmed by humans to the status of organic beings over which we have no right of control.

We widely control and limit the range of dangerous wild animals in human habitats for our own preservation. But robots can unpredictably be programmed and equipped by humans with all manner of destructive weaponry, posing dangers that exceed those from wild animals, whose behaviours are limited by nature and known to us.

To accord rights of autonomy and preservation to machines that we have built to serve us would be a recipe for chaos. The abuse of robot programming by programmers and operators for violent and criminal ends is just one possible scenario. The dystopian scenarios of science fiction, in which robots are enabled and permitted to rule over human society, should not be given even a foothold for actualisation in reality.

Collectively, we ought to legislate for and regulate robots from the standpoint that they are machines (a point echoed by Jonathan Margolis in the Financial Times last month): under full human control, without independent rights, and with their actions remaining the responsibility of their developers and operators.

We further disagree with Dvorsky’s concluding arguments that granting rights to robots would ‘set an important precedent’ in favour of general social cohesion, justice, the protection of humans against a disastrous ‘AI backlash’, and the protection of ‘other types of emerging persons’. Social cohesion, justice, and the protection of other ‘types of persons’ are ends in themselves that can be pursued on their own merits and approached directly. It is neither inherently necessary nor desirable to accord rights to machines as a precedent for attaining truly worthy social goals.
