Irrespective of where you stand on Brexit, an interesting issue cropped up last year following the publication in May of a draft report on robotics issued by the European Parliament. You may have thought that the EU parliament was staffed by a bunch of apparatchiks without a shred of romance in them, but it’s difficult not to like a report that begins thus…
“whereas from Mary Shelley's Frankenstein's Monster to the classical myth of Pygmalion, through the story of Prague's Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features”
Unfortunately, having got off to such an entertaining start, the report then descends into more typical bureaucratese…
“Calls for the creation of a European Agency for robotics and artificial intelligence in order to provide the technical, ethical and regulatory expertise needed to support the relevant public actors, at both EU and Member State level, in their efforts to ensure a timely and well-informed response to the new opportunities and challenges arising from the technological development of robotics.”
You don’t need to be a Daily Mail reader to roll your eyes in despair at the prospect of a new European Agency, but in reality this all sort of makes sense - no matter what happens to the UK after we leave. Essentially, the report considers what might happen as robots become more sophisticated and, as a result, whether and how they might become liable for their actions. Asimov’s famous Laws* are likely to become far better known as AI increases in importance, but liability will also become a major issue. Who should be liable for the actions of a robot or robotic device? At present, in the UK, under the Consumer Rights Act 2015 (which replaced the Sale of Goods Act for consumer purchases), if you buy anything (including digital data) then you are protected if what you have bought is not “of satisfactory quality, fit for a particular purpose or as described by the seller”, and it’s the person who sold you the item/data/equipment who has to make reparations. But if the robot you buy is deemed capable of “thought”, then who is liable – the robot, the person who designed it, the person who made it, or the person who sold it?
The EU report considers these and many more fascinating problems that will undoubtedly arise. For example, at present we have intellectual property rights, but perhaps, as the report suggests, there is a need for “the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots.” This conjures visions of robots suing robots for copyright infringement!
What about the new field of medical robots? With chatbots already being touted as a partial solution to NHS England’s understaffed call centres, and computers assisting with brain surgery, who is responsible if something goes wrong?
In case you still think this is all pie in the sky nonsense, in June last year the EU backed the draft plan for “robots to be classed as ‘electronic persons’ under European Law.” This has led to calls for these ‘electronic persons’ to pay social security charges. Moreover, as robots take over more of our jobs, fewer humans will be working and paying tax – and income taxes are where some governments get most of their money from (in the UK 34% of the tax take comes from income/payroll taxes). The likely consequence is that we’ll see increasing demands for robots to be taxed. Indeed, some left-wing pressure groups are already calling for this to happen.
Even if we weren’t leaving the EU, this is going to be a very difficult area to legislate upon: the fact that we are leaving may make it easier or more difficult, but as with everything about Brexit we just don’t know at present.
One thing we do know is that it’s a minefield. That said, there is one group of people who look likely to prosper from it, namely the lawyers … if they haven’t been replaced by robots in the meantime!
Michael Phair, Be-IT Resourcing
* One of the most famous science fiction writers of all time, Isaac Asimov devised his Laws as follows: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (See “Runaround”, I. Asimov, 1942.) He later added a Zeroth Law: (0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.