Rise of the robots: Mady Delvaux on why their use should be regulated

From drones to medical equipment, robots are increasingly becoming a part of our everyday life. But although some 1.7 million robots already exist worldwide, their use is still not properly regulated. Today MEPs debate a report calling for rules on their use, including the creation of a separate legal status for robots. They will vote on the report tomorrow.

Interview with Mady Delvaux



In January we talked to report author Mady Delvaux, a Luxembourg member of the S&D group. She explained why the Parliament is looking into this issue and stressed that robots will not totally replace humans in the workplace: "There will be a cooperation between both."

What kind of robots are we talking about here? Can you give us some examples?


We are not talking about weapons. We define robots as physical machines, equipped with sensors and interconnected so they can gather data. The next generation of robots will be more and more capable of learning by themselves. The most high-profile ones are self-driving cars, but they also include drones, industrial robots, care robots, entertainment robots, toys, robots in farming...


In your report you discuss whether robots should have a legal status. What would that mean in practice?

 

When self-learning robots arise, different solutions will become necessary and we are asking the Commission to study options. One could be to give robots a limited “e-personality” [comparable to "corporate personality", a legal status which enables firms to sue or be sued], at least where compensation is concerned. It would be similar to what we now have for companies, but it will not happen tomorrow. What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.


So in the meantime who should be responsible in case of damage? The owner, the manufacturer, the designer, or the programmer?

 

We have two options. According to the principle of strict liability, it should be the manufacturer who is liable, because he is best placed to limit the damage and deal with providers. The other option is a risk assessment approach, according to which tests have to be carried out beforehand [to assess the risks] and compensation has to be shared by all stakeholders. We also propose there should be compulsory insurance, at least for the big robots.


You also mention that some vulnerable people can become emotionally attached to their care robots. How can we prevent this happening?

 

We always have to remind people that robots are not human and will never be. Although they might appear to show empathy, they cannot feel it. We do not want robots like they have in Japan, which look like people. We proposed a charter setting out that robots should not make people emotionally dependent on them. You can be dependent on them for physical tasks, but you should never think that a robot loves you or feels your sadness.


Why is Parliament looking into this issue?

 

For once, we would like to set common European principles and a common legal framework before every member state implements its own, different laws. Standardisation is also in the interest of the market: Europe is good at robotics, but if we want to remain a leader, we need common European industry rules.


Regarding liability, customers need to be sure that they will be covered if damage occurs. The big issues are safety and data protection. Robots cannot function without an exchange of data, so there is also the question of who will have access to this data.


People who fear they will lose their jobs are told that robots will actually create new jobs. However, they might only create jobs for highly skilled people and replace low-skilled workers. How can this be solved?

 

I believe this is the biggest challenge to our society and to our educational systems. We do not know what will happen. I believe there will always be low-skilled jobs. Robots will not replace humans; there will be a cooperation between both. We ask the Commission to monitor how things evolve and what kinds of tasks will be taken over by robots. It can be a good thing if they are used for hard work, for example if you have to carry heavy goods or if the job is dangerous. We have to monitor what is happening and then we have to be prepared for every scenario.


The report also deals with the issue of whether we should change our social security systems and think about a universal basic income, because if there are many unemployed people we have to ensure they can have a decent life. It also calls on member states to reflect on this, because these competences lie with them rather than with the EU.


This article was first published on 12 January 2017.
