ETHICAL REGULATIONS ON ROBOTICS IN EUROPE

Michael Nagenborg - Rafael Capurro - Jutta Weber - Christoph Pingel

This paper was part of the research done within the framework of the EU project ETHICBOTS: Emerging Technoethics of Human Interaction with Communication, Bionic and Robotic Systems (2005-2008) (SAS 6 - 017759).
Published in: AI & Society (2008) 22, 349-366. DOI 10.1007/s00146-007-0153-y




Abstract 

There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations that may be applied. We will focus on ethical issues with regard to "responsibility and autonomous robots", "machines as a replacement for humans", and "tele-presence". Furthermore, we will examine examples from special fields of application (medicine and healthcare, the armed forces, and entertainment). We do not claim to present a complete list of ethical issues or of regulations in the field of robotics, but we will demonstrate that there are legal challenges with regard to these issues.


1 Introduction

The subject of this paper is the existing ethical regulations concerning the integration of artificial agents into human society. Although only a few regulations deal explicitly with the subject at the moment, these stand in contrast to the vast number of regulations that may be applied.
    Considering existing regulations and allowing for analogical inferences, one could raise the objection that these new technologies might result in a fundamental change in our idea of the human and of society, and that existing regulations must be criticized for being too "human centred". In answer to this we must say that, firstly, this paper is about the status quo and that, secondly, even those who predict a fundamental change foresee it only for a distant future.
    Levy (2006, pp. 393-423), for example, argues that we will need a new legal branch, "robotic law", to be able to do justice to an expected change in attitude towards robots, which after some decades will be found in almost every household. But today this is far from being the case. In the context of this paper we therefore assume that artificial entities are not persons and not the bearers of individual, much less civil, rights. This does not imply that in a legal and ethical respect we could not grant a special status to robots, nor that the development of artificial persons can be ruled out in principle. However, for the present and the near future we do not see the necessity to demand a fundamental change of our conception of legality. Thus, we choose a human-centred approach.
    At the European level, the "Charter of Fundamental Rights of the European Union" (2000) is the appropriate frame for thoughts on techno-ethical regulations. The preamble to the Charter expresses guiding thoughts, which "should not be underestimated for the interpretation of the Charter's fundamental rights and for the way of understanding them" (Rengeling/Szczekalla 2004, 13f). One essential statement in the preamble is that "the Union is founded on the indivisible, universal values of human dignity, freedom, equality and solidarity". The essential position of human dignity is emphasized again in Par. 1. In this context, the "Explanations" (Charter 4473/00, 3) on Par. 1 of the Charter point out, while referring to the "Universal Declaration of Human Rights" (1948), that human dignity "constitutes the real basis of fundamental rights" (cf. Rengeling/Szczekalla 2004, 323ff.). This outstanding position given to the concept of "human dignity" in the context of the Charter makes it improbable that accepting robots as "artificial humans" or software agents as "artificial individuals" will happen without considerable resistance.
    Rengeling/Szczekalla (2004, p. 137) also emphasize that fundamental rights serve primarily to protect the citizen from interventions by public authority. Thus, guaranteeing fundamental rights is about restricting the authoritative power of all Community authorities and institutions in the fields of legislation, execution, administration, and the dispensation of justice. The question of how far the Charter is binding for the member states as well as for third parties counts among "the most difficult ones of all of the Community's protection of fundamental rights" (Rengeling/Szczekalla 2004, p. 133). Recently, respect for fundamental rights in the field of research has been confirmed by the signatories of the "Code of Conduct for the Recruitment of Researchers" in the context of the "European Charter for Researchers". (1) There (p. 10) it says:

Researchers, as well as employers and funders, who adhere to this Charter will also be respecting the fundamental rights and observe the principles recognised by the Charter of Fundamental Rights of the European Union.

It is remarkable that academic institutions and third parties commit themselves to observing the fundamental rights and principles of the Charter. (2)
    In the course of the debate on the necessity and possibility to regulate the development and use of future technologies various authors have stated and still state that this is a useless undertaking. Rodney Brooks (2001, p. 63) writes with respect to integrating artificial entities into the human body:

People may just say no, we do not want it. On the other hand, the technologies are almost here already, and for those that are ill and rich, there will be real desires to use them. There will be many leaky places throughout the world where regulations and enforcements of regulations for these sorts of technologies will not be a high priority. These technologies will flourish.

Using the same argument, however, we might just as well argue in favour of giving up on regulating drugs, weapons, and so on. Most importantly, this argument misses the status quo, because there are already a number of regulations which concern our subject or may be applied to it. Even if no regulations were to be added, there is the question of whether the existing regulations restrict possible developments too much.
    As already emphasized, this paper is concerned with current developments and the near future. However, we would like to take a long-term and maybe fundamental change into account when looking for a possibility to control the development of, e.g., "autonomous service robots" in such a way that their positive potential can be used.
    The first demand in this respect is for a long-term solution that will provide the legal security necessary to develop new technologies that meet the formulated standards. The second demand is that regulations must be flexible enough to allow for possible but unforeseeable developments. In this context, thought should be given, taking the case of "autonomous robots" as an example, to a kind of meta-regulation, which might establish a framework of self-control for developers and producers within which it can be decided which steps must be taken to make a responsible development possible.
    In the following we will consider different ethical issues in robotics which are, or which might become, the subject of regulations, especially in Europe. We do not claim to present a complete list of ethical issues or of regulations in the field of robotics. However, we will demonstrate that there are many legal challenges with regard to robots, and we hope to provide solid background information for the necessary discussion of the subject.

2 Regulations on robotics

In the context of this article we define robots as complex sensomotoric machines that extend the human capability to act (Christaller et al. 2001). Furthermore, we define "autonomous machines" functionally by the principle of delegation. The problem of responsibility for the unpredictable actions of these machines will be discussed below.
    Service robots may be defined as machines that are not used primarily in the field of (industrial) production. Accordingly, in the following we will focus on the extension of the human capability to act as well as on the new fields of application other than industrial production.


3 Responsibility and "autonomous robots"

This section consists of two parts: first we will discuss the guidelines on machine safety in general, in order then to investigate the more particular aspect of responsibility for more complex machines. We will not discuss the possibility and necessity of "robo rights" or "civil rights for robots" here. However, later we will sketch to what extent the positive potential of such machines might or should be used for the self-control of "autonomous robots".

3.1 Machine safety

In general, the directives on product liability and product safety are valid for robots, in particular the following:
  • Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), and
  • Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety.
Given the member states' obligations to achieve a high level of consumer protection, as expressed in the EU's Charter of Fundamental Rights (Par. 38), we expect that the use of machines, which might endanger humans, animals, or the environment, would be strictly limited.
    In the field of man-machine interaction the regulations on occupational safety are particularly instructive, most of all the
  • Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast).

Directive 2006/42/EC must be implemented at the national level by the member states by 29 December 2009, and replaces

  • Directive 98/37/EC of the European Parliament and the Council of 22 June 1998 on the approximation of the laws of the Member States relating to machinery.

Christaller et al. (2001, p. 164) state that a new approach is necessary to protect employees, because the existing regulations of the safety institutions are impractical or unnecessary: "Man is not supposed to get in touch with the machine (the robot), or only if there are special protection measures. However, in some cases this [is] impossible". If we consider this criticism in relation to paragraph 1.3 of the annex to Directive 98/37/EC, we see that the new version of this paragraph in Directive 2006/42/EC pursues an analogous approach. This leads us to ask if the existing regulations are appropriate for "mixed human-machine teams".

Without further discussion of the specific regulations that have resulted from the two aforementioned directives, we must point out two aspects which are emphasized both by the old and the new directive:

    1. The principles of safety integration (Annex I, 1.2.2), and
    2. The extensive obligations to inform.

In Annex I, 1.2.2, of Directive 2006/42/EC, it says:

Machinery must be designed and constructed so that it is fitted for its function, and can be operated, adjusted and maintained without putting persons at risk when these operations are carried out under the conditions foreseen but also taking into account any reasonably foreseeable misuse thereof.

Hence, there is a requirement that machines be designed and constructed in such a way that they will not be a risk to people. If we relate this obligation to avoid or minimize risk to Par. 3 of the "Charter of Fundamental Rights of the EU" (Right to Freedom from Bodily Harm), we need to ask if the integration of this kind of protection into the design and construction of machines should not also be a requirement for other fundamental rights, such as the protection of privacy (Par. 7). (3)

    Furthermore, the paragraph on the principles of safety integration emphasises, in paragraph (c), that the operating instructions must name any remaining risks. Paragraph 1.7.4 of Annex I then determines that

All machinery must be accompanied by instructions in the official Community language or languages of the Member State in which it is placed on the market and/or put into service.

However, the obligations to inform are not restricted to the operating instructions but also apply to an appropriate design of the human-machine interface. Furthermore, for machines used by "non-professional operators" the "level of general education" (1.7.4.1) must be taken into account.

    Obligations to inform are also of essential importance in the

  • Council Directive of 12 June 1989 on the introduction of measures to encourage improvements in the safety and health of workers at work (89/391/EEC)

For example, the "provision of information and training" is among the "general obligations on employers" (Par. 6), described in more detail by Par. 20 (worker information) and Par. 12 (training of workers).
    Directive 2006/42/EC, which highlights the "level of general education" and the obligation to inform, and Directive 89/391/EEC show that dealing with robots outside the workplace also requires an appropriate level of education. Although it might still be valid that robots, being highly complex machines, cannot be understood by the common citizen (Christaller et al. 2001, p. 147), we must ask how educational measures may help citizens to develop an appropriate behaviour towards robots. Also, there must be a demand that robots supply people with sufficient information to make, e.g., their behaviour foreseeable (Christaller et al. 2001, p. 145).


3.2 Responsibility for complex machines

Due to the complexity of robots and software agents, there is the question of to whom the responsibility for the consequences of the use of artificial agents must be attributed. Is it possible to take responsibility for the use of machines that are capable of learning?
    The topic of being responsible for the development and marketing of products must be taken seriously because of its crucial role in the way professionals see themselves. This becomes obvious when we look at the relevant "Codes of Ethics" of professional associations.
    An outstanding example of this is provided by the "Code of Ethics" of the Institute of Electrical and Electronics Engineers (IEEE), with 370,000 members in 160 countries. It starts with this self-obligation:

We, the members of the IEEE, ... do hereby commit ourselves to the highest ethical and professional conduct and agree:
1. to accept responsibility in making decisions consistent with the safety, health and welfare of the public, ... (italics by the authors)

Another example is the "Code of Ethics" of the Association for Computing Machinery (ACM), where Section 1, Paragraph 1 emphasizes:

When designing or implementing systems, computing professionals must attempt to ensure that the products of their efforts will be used in socially responsible ways, will meet social needs, and will avoid harmful effects to health and welfare. (italics by the authors)

Accordingly, the idea that in the case of highly complex machines, such as robots, the responsibility for the product can no longer be attributed to developers and producers would mean a serious break with the way professionals define themselves.

    Also, it is not acceptable, in principle, that responsibility for the possible misbehaviour of a machine should not (at least partly) be attributed to developers or producers. It may be claimed that from the point of view of most users even a simple webbot already seems to be an autonomous entity and hence may be held accountable for morally illegitimate behaviour (Floridi/Sanders 2004). However, the fact that something appears as an "autonomous object" in the eyes of many people cannot be the basis for not attributing the responsibility for damage to the producer, provider, or user. In practice it may be difficult to attribute responsibility precisely, and we know of cases where attribution seems doubtful; but this is not a justification for giving up on attributing responsibility, particularly when faced with dangerous and risky products. More importantly, there is the question of in which way responsibility is to be ascribed, and to whom.
    Here, we would like to propose the option of meta-regulation. If anybody or anything should suffer damage caused by a robot that is capable of learning, there must be a demand that the burden of adducing evidence lies with the robot's keeper, who must prove her or his innocence (Christaller et al. 2001, p. 149). For example, somebody who acted according to the producer's operating instructions may be considered innocent. In this case the producer would need to be held responsible for the damage.
    Furthermore, developers and producers of robots could accept their responsibility by contributing to the analysis of a robot's behaviour in a case of damage. This could happen by, for instance, creating an appropriate self-control institution. For example, it may be possible to supply robots with a "black box", which could then be checked by this institution.
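    To make the "black box" proposal more tangible, the following minimal sketch (our illustration only; Christaller et al. suggest no specific design, and all names here are our assumptions) shows how such a recorder might be built in Python: an append-only event log in which every entry is chained to its predecessor by a cryptographic hash, so that an examining institution can detect any later alteration of the record.

    import hashlib
    import json
    import time

    class BlackBox:
        """Append-only event recorder: each entry carries a hash chained to
        the previous entry, so later tampering with the log is detectable."""

        GENESIS = "0" * 64  # starting value of the hash chain

        def __init__(self):
            self.entries = []
            self._last_hash = self.GENESIS

        def record(self, event: dict) -> None:
            # Serialize deterministically and chain to the previous hash.
            payload = json.dumps(event, sort_keys=True)
            entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
            self.entries.append({"time": time.time(), "event": event, "hash": entry_hash})
            self._last_hash = entry_hash

        def verify(self) -> bool:
            # Recompute the whole chain; False means the record was altered.
            prev = self.GENESIS
            for entry in self.entries:
                payload = json.dumps(entry["event"], sort_keys=True)
                if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                    return False
                prev = entry["hash"]
            return True

    box = BlackBox()
    box.record({"sensor": "lidar", "obstacle_m": 0.4, "action": "stop"})
    box.record({"sensor": "lidar", "obstacle_m": 2.0, "action": "resume"})
    print(box.verify())  # True as long as the log is intact

    Such a record would not by itself settle who is responsible, but it would give the expert institution mentioned above a trustworthy basis for reconstructing what the robot sensed and did.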
    In this context, account must also be taken of damage possibly being caused indirectly by a robot. For example, according to German law the keepers of dogs are also responsible for road accidents if they fail in their obligatory supervision of their dogs' behaviour and this behaviour causes irritation for road users. It is plausible to apply this analogously to the case of robots. The reason for such irritating behaviour may, e.g., be examined by an appropriate group of experts, as mentioned above, and this should be done particularly if, despite the appropriate behaviour of the user, the robot could not be controlled to the degree necessary.
    Christaller et al. (2001, p. 144) have questioned whether the liability of animal keepers could be used as a model for the liability of robot keepers. However, the national regulations concerning dogs have become much more detailed, and in the context of our discussion they should definitely be taken into account.
    Furthermore, the example of dogs indirectly causing road accidents shows how important it is for citizens to know about the possible (mis)behaviour of robots, in order to enable them to react appropriately to artificial entities.

3.3 Prospect: roboethics and machine ethics

Future technologies are not only a source of danger but may also contribute to preventing or reducing risks. There is currently discussion on whether and how ethical norms could become part of the self-control and steering capabilities of (future) robots and software agents.
    Unfortunately, the subject is discussed only partly and mostly with regard to very spectacular cases, such as the one cited by Allen/Wallach/Smit (2006, p. 12) in their essay "Why Machine Ethics?":

A runaway trolley is approaching a fork in the tracks. If the trolley runs on its current track, it will kill a work crew of five. If the driver steers the train down the other branch, the trolley will kill a lone worker. If you were driving the trolley, what would you do? What would a computer or  robot do?

However, this dramatic example is not helpful for a discussion on "ethical regulations". There needs to be a requirement that every possible step be taken to prevent the situation described above. For example, on 15 February 2006 the German constitutional court declared Par. 14 Sect. 3 of the German Air Security Act to be a violation of the constitution. This paragraph was supposed to allow the authorities "to shoot down an airplane by immediate use of weapons if it shall be used against the lives of humans" (1BvR 357/05). The constitutional court said that this was not in accordance with the Right to Life (Par. 2, Basic Law) if "people on board are not involved in the deed" (Christaller et al. 2001, p. 144).
    This verdict is interesting for our context because the constitutional court expressly refers to Section 1, Paragraph 1 of the German Basic Law ("Man's dignity is inviolable. Every state power must respect and protect it"), which is topically equivalent to Par. 1 of the "Charter of Fundamental Rights of the EU", as the basis for the judgement that the state must not question the human status of crew and passengers. An authorization to shoot the airplane down

... disregards those concerned, who are subjects of their own dignity and own inalienable rights. By using their death as a means to save others they are made objects and at the same time they are deprived of their rights; by the state one-sidedly deciding about their lives the passengers of the airplane, who, being victims, are themselves in need of protection, are denied the value, which man has just by himself. (1BvR 357/05, par. 124)

Likewise we may conclude that there can be no legal regulation which determines in principle that the few may be made victims for the many of the trolley example (or vice versa).
    This does not mean that the potential for self-control should not be used to oblige autonomous systems to behaviour which is in keeping with norms, especially if this serves the safety of human beings. Even if one might intuitively agree with the statement that grave decisions should only be made by humans, we must not overlook that in legal practice this is not always seen to be the case. As early as 1975 and 1981, US courts decided that a pilot who fails to resort to the autopilot in a crisis situation may be considered to be acting negligently (Freitas 1985).
    Thus, there is the question of how far regulations may contribute to opening up leeway for using this potential without opening the door to an over-hasty delegation of responsibility to artificial agents. The development of appropriate agents needs further interdisciplinary research, and this can and should be supported by appropriate research policy insofar, and as long, as this approach promises success. The possibility of using agents for enforcing legal norms should not be judged uncritically in certain fields. It is advisable, however, to distinguish the legally conforming behaviour of agents from the problem of an appropriate setting of norms.


4 Machines as a replacement for humans

Although robots are discussed here as an extension of the human ability to act, robots can also replace humans. This has often been seen as a major ethical issue, but sometimes, from the point of view of human rights, replacing humans by robots may be seen as a positive option. One prominent example of this is the use of robots in the United Arab Emirates and other countries as jockeys for camel races, instead of children. This development was positively emphasized, e.g., by the "Concluding observations: Qatar" (4) of the "Committee on the Rights of the Child" of the United Nations. In this case, replacing humans by robots served the goals of the

  • Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution and child pornography (2000), and thus the
  • Convention on the Rights of the Child (1989)

of the United Nations. Surely, other cases can be imagined where child labour and the sale of children can be avoided by the use of robots. Furthermore, robots can do work that would not be acceptable for human workers, e.g., because of health hazards.

    However, to stay with this example, not every kind of replacement would be unproblematic, e.g., in the case of child prostitution, as Par. 2 of the

  • Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution and child pornography

defines "child pornography" as follows:

Child pornography means any representation, by whatever means, of a child engaged in real or simulated explicit sexual activities or any representation of sexual parts of a child for primarily sexual purposes.

Thus, robots looking like children and serving sexual purposes might well be included in the prohibition of child pornography.
    In general, it cannot be ruled out that in the future humans may lose their jobs due to the use of robots. We must here think particularly of workers with a low level of qualification. Given the fact that this group of workers is already at a higher risk of unemployment than people with a higher level of qualification, we should look carefully at what kinds of jobs are going to be delegated to machines.
    Although there is also the opinion that the use of industrial robots must be considered an alternative to moving production to foreign countries, and that insofar it secures jobs in the countries of production (5), this is only true from a restricted, local point of view, which places more value on jobs in one's own country than on jobs in other countries. The effects of the increasing use of robots in the world of work (particularly in opening up new fields of action for service robots) cannot be judged only by looking at those countries where these robots are used. One must also ask about the effects upon other countries.
    In this context we consider that the work which may be delegated to agents is not of the kind from which humans derive meaning. One may even argue that robots could take over most of the inhumane work. However, we must be very careful with legitimating the delegation of work to machines by the inhumane nature of certain kinds of work. In this case, a "robotic divide" between rich and poor countries would not only mean that in some countries certain tasks are taken over by robots but also that - according to this way of arguing - workers in other countries are expected to do inhumane work.

5 Tele-presence

Those effects on other countries must also be taken into account when talking about the possibilities of tele-presence, which counts among the most remarkable extensions of human possibilities of action.
    Here, tele-presence means the possibility to act within the world with the help of agents, although the person who controls the agent (direct tele-presence) or on whose behalf the agent acts (indirect tele-presence) is not at the place concerned. The reasons for a person not being at that place may vary: e.g., the environment in which the agent is acting may be hostile to life and not accessible to humans. Examples of this are found in space travel or deep-sea research, but also in the fields of nuclear technology or war. But tele-presence may also serve to make it possible for certain humans to work at places where they themselves do not want to or cannot be. Here a wide spectrum can be imagined, which includes both the tele-presence of the expert, whose skills and knowledge are made useful at a far-away place, e.g., in the field of tele-medicine, and tele-work, which is done at far-away places for low wages with the help of high tech. For example, Brooks (2002) describes the possibility of creating jobs in low-wage countries with the help of appropriate service robots. But tele-presence may also give rise to xenophobia if this technology is used for staying away from people. Thus, we have to ask if this will result in establishing societal developments and social forms of exclusion which are lamented elsewhere.
    From the legal point of view, the possibility of tele-presence is particularly challenging, as the human actor may be in a different country than the tool he uses; accordingly, at the place where the robot is used, other legal regulations may be valid than at the place where the control unit is located. Also, e.g., in the field of tele-medicine, it may be imagined that the robot is used in a country whose laws allow operations which are allowed neither in the patient's nor in the physician's home country (Dickens/Cook 2006, pp. 74-75). This challenge was emphasized by the

  • World Medical Association Statement on Accountability, Responsibilities and Ethical Guidelines in the Practice of Telemedicine. Adopted by the 51st World Medical Assembly, Tel Aviv, Israel, October 1999.

However, it was annulled at the WMA General Assembly 2006 (Pilanesberg, South Africa). According to information from the WMA, a new version of the guideline may be expected this year. Paragraph (3) of the old "Statement" says:

The World Medical Association recognizes that, in addition to the positive consequences of telemedicine, there are many ethical and legal issues arising from these new practices. Notably, by eliminating a common site and face-to-face consultation, telemedicine disrupts some of the traditional principles which govern the physician-patient relationship. Therefore, there are certain ethical guidelines and principles that must be followed by physicians involved in telemedicine.

It becomes obvious that the "Codes of Ethics" of international professional associations must be taken into account in the field of "ethical regulations", even if the formulation of "certain ethical guidelines and principles" in this document is considered vague. With the

  • World Medical Association International Code of Medical Ethics

passed for the first time in 1949 and newly adopted in 2006, the WMA is provided with a basis for developing appropriate guidelines. According to Dickens/Cook (2006, p. 77), the WMA in its statement from 1999 emphasizes that

... regardless of the telemedicine system under which the physician is operating, the principles of medical ethics globally binding upon the medical profession must never be compromised. These include such matters as ensuring confidentiality, reliability of equipment, the offering of opinions only when possessing necessary information, and contemporaneous record-keeping.

It cannot be expected that the new version will be different in this respect. Dickens/Cook (2006, p. 77) also point to the risk "that these technologies may aggravate migration of medical specialists from low-resource areas, by affording them means to serve the countries or areas they leave, by electronic and robotic technologies". The possibilities of tele-presence must therefore be judged also with regard to their effect on (potential) brain drain.
    Of course, this challenge does not only exist in the field of medicine. But the challenges posed by tele-presence in the field of medicine are an appropriate topic for discussion here, as the possibilities it opens up are generally judged positively. Hence, possible conflicts are addressed much more clearly than in the case of a possible use which is in any case regarded with reservation. Here, Dickens and Cook (2006, pp. 74, 78) give the examples of "procedures that terminate pregnancy", "methods of medically assisted reproduction ... such as preimplantation genetic diagnosis and using sex-selection techniques" as well as "female genital cutting", which in respect of the possibility of tele-presence may at least cause legal doubts. Again, these special examples can be generalized. It may be questioned whether, in a company located in the EU, an EU citizen is allowed to control a robot in a country whose safety requirements are not appropriate to European standards of occupational safety, or whether, by using a robot, a European researcher is allowed to carry out experiments outside the EU which are not allowed within the EU.
    Another question is whether it must be obvious to third parties that an agent is tele-operated. There is the general requirement that humans having contact with machines should know which behaviour is to be expected from them. Further, it must be made clear which information the agent or its provider must offer. Is it sufficient to know that control is (partly) taken over by a human? Or must additional information be offered, such as the country from where the machine is controlled? The latter is relevant in respect of valid regulations on data protection.
    Even if cross-border data transfer is not taken into account, particularly in the case of direct tele-presence, i.e., when an agent is under the direct control of a human or a group of humans, there are obvious challenges with regard to the possibility of far-reaching interventions into the protected zone of the private sphere.


6 Special fields of application

The "Charter of Fundamental Rights of the EU" can be used for judging legally on the purpose of robots. It is of decisive importance if a possible use may be considered an intervention into the fundamental rights. In the fields of memdicine, armed forces, and entertainment the use of robots shall be examined.

6.1 Medicine and healthcare

In general, for the use of robots in the field of medicine the

  • Council Directive 93/42/EEC of 14 June 1993 concerning medical devices

is of essential significance; according to Par. 1 Section 5 it must not be applied to

  • active implantable devices covered by the Council Directive of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices (90/385/EEC).

There is currently discussion on how far the existing directives on "medical devices" must be revised and adjusted to each other. (6) At the time of writing this article the result of this debate was still open.
    Baxter et al. (2004, p. 250) point to the fact that Directive 93/42/EEC is vague in its definition of "medical devices": "... one can claim that if the technology is sometimes used by people without disease, injury or handicap then it is not primarily intended for 'diagnosis, prevention, monitoring, treatment or alleviation' of those afflictions and so the regulation does not apply". This, they say, is problematic, as keeping the standards for "medical devices" is connected to high costs. Thus, companies were tempted to avoid the existing regulations by using machines which were developed for other purposes. But these were not always appropriate to the needs of those persons who are supposed to be helped by them. This might be of concern, for example, with regard to the use of service robots in the field of nursing.
    In general, the extension of human possibilities to act in medicine and nursing must surely be judged positively. From the point of view of surgery, Diodato et al. (2004, p. 802) conclude:

The introduction of robotics technology into the operating room has the potential to transform our profession. For the first time in history, surgeons will not be confined by their inherent physical limitations. These systems have the potential not only to improve the performance of traditional surgery, but also to open entirely new realms of technical achievement previously impossible.

Similarly to Directive 2006/42/EC, Directive 93/42/EEC names extensive obligations to inform (particularly Annex I, Par. 13). Diodato et al. (2004, p. 804) must be taken very seriously when they point out that, due to the increasing use of robots,

... surgeons will need to become lifelong learners, since there will be almost continuous evolution of our surgical techniques as our technical ability becomes more coupled to increasing computer power. As surgeons it will be our duty to direct this progress in close partnership with engineers, computer scientists, and industry to advance the surgical treatment of diseases. Most important, we must provide ethical and moral direction to the application of this technology to enhance both the art and the science of our profession.

Thus, not only is the physicians' self-obligation to the ethos of their profession addressed but also there is a demand for close co-operation between developers and users.
In the field of medicine there is a particular obligation to inform the patient. The expert report by Schräder (2004, p. 59) on the assessment of methods by the example of Robodoc® is of special interest here, because patients took legal action against the use of the robot in Germany after it had become known that such an operation was more risky. Even though the action for compensation was finally rejected by the Federal Supreme Court of Justice (Germany) on 13 June 2006 (VI ZR 323/04), the court pointed to a "lack of information".
    In our opinion, challenges occur most of all where humans might become dependent on the machine (physicians and nurses as well as the patient or the nursed person may be concerned) and where the machine replaces a human. Thus, in respect of the "Charter of Fundamental Rights" it should be questioned whether replacing human nurses by machines can be justified where the contact with nurses is one of the last possibilities left for someone who is old and/or ill to interact and communicate with other humans. Here, according to Par. 26 (Integration of disabled people), there might be the requirement that nursing by machines needs special justification. Also, one can ask if companies and perhaps the state might have a special obligation to support users with maintenance.

    Finally, we must ask how to deal with the fact that in the context of using artificial entities for the nursing of elderly people there is the possibility of violating the right to respect for private and family life (Par. 7 of the Charter of Fundamental Rights). Paragraph 25 emphasizes the right of elderly people to a life of dignity, which indeed includes the right to privacy (Par. 7). There are analogous regulations concerning children (Par. 24) and disabled people (Par. 26). The latter demand for "respect of privacy" is also emphasized at the international level in Par. 22 of the United Nations'
  • Convention on the Rights of Persons with Disabilities. Adopted on 13 December 2006 during the 61st session of the General Assembly by resolution A/RES/61/106.


6.2 Armed forces

According to Par. 1 Section 2, Directive 2006/42/EC is not valid for "weapons, including firearms" as well as "machinery specially designed and constructed for military or police purposes". Seemingly, a common regulation analogous to the above-mentioned directive does not exist at the European level.
    However, robots are included in the "Common Military List of the European Union" (2007/197/CFSP), which serves export control in the context of the

  • European Union Code of Conduct on Arms Exports (1998),

where the member states are obliged not to allow any export that violates the criteria of this code, which include "respect of human rights in the country of final destination" (Criterion 2):

Having assessed the recipient country's attitude towards relevant principles established by international human rights instruments, Member States will:
a. Not issue an export licence if there is a clear risk that the proposed export might be used for internal repression.
b. Exercise special caution and vigilance in issuing licences, on a case-by-case basis and taking account of the nature of the equipment, to countries where serious violations of human rights have been established by the competent bodies of the UN, the Council of Europe or by the EU;

Robots "specially designed for military use" are explicitly included into this obligation.
    Robots that are able to kill or hurt humans have attracted much attention, as shown by the example of armed surveillance robots which are supposed to be used by South Korea at the border with North Korea. In this context, German commentators were reminded of the so-called "auto-fire systems" which were used at the border of the German Democratic Republic. The German Federal Supreme Court of Justice has repeatedly criticized these "blind killing automats" for being a grave violation of human rights (e.g., the verdict from 26 April 2001 - AZ 4 StR 30/01). However, from the legal point of view two aspects must be taken into account:

1. Unlike the so-called "auto-fire systems" of type SM-70, today's systems are not "blind". And one might argue that the new technologies are even better able to fulfil these tasks than humans.
2. From the technological point of view, the SM-70 was an anti-personnel weapon and not a complex machine. The SM-70 and comparable technologies thus count among the topical field of the

  • United Nations Convention on prohibitions or restrictions on the use of certain conventional weapons which may be deemed to be excessively injurious or to have indiscriminate effects (1980), particularly the
  • Protocol on prohibitions or restrictions on the use of mines, booby-traps and other devices as amended on 3 May 1996 (Protocol II to the 1980 Convention as amended on 3 May 1996), as well as the
  • Convention on the prohibition of the use, stockpiling, production and transfer of anti-personnel mines and on their destruction, 18 September 1997.

Thus, clarification is needed about whether robots count among the topical field of these conventions.
    Another challenge for export control consists in so-called "dual use", i.e., the possibility of using civil technologies for the purposes of war. Robots developed for military purposes, however, may also be considered an example of "bi-directional dual use". Here a challenge exists, e.g., regarding the question of if and how machines which were developed for military purposes are to be regulated and used for police purposes. This challenge is even bigger these days, particularly in the context of foreign missions, where armed forces often take over police tasks (e.g., riot control).
    Finally, there is a general challenge regarding the question of how we shall deal with documents which are produced by using robots or could be produced this way. The challenge regarding their use in war, but also in police and rescue actions, is how we shall deal with the video and audio recordings as well as further data which are recorded by artificial agents on site or, in the case of tele-presence, at the control unit. On the one hand, these data open up possibilities of control, e.g., of whether regulations of international law are observed. On the other hand, new possibilities of manipulation are opened up that could undermine this control. Additionally, we must take into account that in the case of a conflict between two warring parties between which there is a "robotic divide", a kind of media or informational superiority may develop on the side that is provided with the appropriate technology.


6.3 Entertainment

The problem of child pornography has already shown that the use of robots in certain fields of "entertainment" may be judged critically. However, concerning this there is no common legal practice within the European Union, whereas in Germany the

  • Interstate Treaty on the Protection of Human Dignity and Youth Protection in Radio and Television Media from 10-27 September 2002, last amended by the Eighth Interstate Treaty on Changes of the Broadcasting System from 8/15 October 2004, expressly equates virtual depictions with real pictures, and in Italy by the
  • Provisions on the fight against sexual exploitation of children and on child pornography on the internet (6 February 2006)
"virtual pornography" is also punished. The legal situation in other member states does not seem to be as clear, for example, in the Netherlands there is an attempt to create certainty of justice by help of an exemplary case (Reuters, agency report from 21 February 2007).
    The example of "virtual child pornography" in online offers such as "Second Life" shows that similar regulations must be expected also for humanoid robots if they, being media products, are not included into the appropriate laws. In general, we must assume that humanoid robots, as far as they represent specific individuals, are not allowed to violate the personal rights of those depicted, and that as far as no personal rights are at stake they are allowed to be produced and used only within the frame of valid laws. Concerning this, Par. 1 of the Charter of Fundamental Rights (Human Dignity) may be supposed to be a point of reference, as it can be found, e.g., in the

  • Recommendation of the European Parliament and of the Council of 20 December 2006 on the protection of minors and human dignity and on the right of reply in relation to the competitiveness of the European audiovisual and on-line information services industry (2006/952/EC).
    Furthermore, it is important to point out that with respect to robots in the field of "entertainment" there already exist the challenges mentioned in the section on "Tele-presence". For example, in Germany the sale of the "Teddycam" (7) was prohibited, as a combination of covert surveillance technology with an object of daily use is not allowed according to the German Act on Telecommunication.
    However, we must emphasize that we do not intend to give the impression that the use of robots for entertainment purposes should be restricted more than other entertainment products. Yet in the context of this article, which aims at presenting the status quo, there is a need to point out existing regulations. Indeed, the example of the robot jockey has already been mentioned, and it is useful in considering the use of artificial agents in this field.


7 Conclusions

In this paper we have presented some of the existing regulations which might be applied to robotic agents. Starting with the "Charter of Fundamental Rights of the European Union", we pointed out that the outstanding position of "human dignity" in the context of the Charter makes it improbable that accepting robots as "artificial humans" or software agents as "artificial individuals" will happen without considerable resistance. Thus, we chose a human-centred approach.
    In particular, we addressed the challenges that come along with tele-presence. Here, we made the point that the effects of the increasing use of robots in the world of work cannot be judged only by looking at those countries where these robots are used. One must also ask about the effects on other countries (brain drain, loss of jobs, etc.) and about the relationship between countries that might be affected by what we call the "robotic divide".
    Finally, we took a look at some fields of use, including medicine and healthcare, warfare applications, and entertainment, where we found a broad range of regulations as well as open questions.
    It is important to bear in mind that this paper is about the status quo, and there is indeed a question about whether the existing regulations restrict possible developments too much. For that reason we proposed the option of meta-regulation: establishing, within a fixed legal framework, a body of self-control for developers and producers, in which it can be decided which steps must be taken to make a responsible development possible.


Notes

1. Commission Recommendation of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers.

2. The "Charter for Researchers" has meanwhile been signed by more than 70 institutions from 18 nations (Austria, Belgium, Cyprus, Czech Republic, France, Germany, Greece, Hungary, Ireland, Israel, Italy, Lithuania, Norway, Poland, Romania, Slovak Republic, Spain, and Switzerland) as well as by the international EIROforum.

3. Software agents, for example, may support users in purposefully releasing or hiding information. Beyond this, Allen/Wallach/Smit (2006) suggest developing agents that are able to recognize private situations and to react appropriately. One may also be reminded of the suggestion by Rosen (2004) to build "blob machines" instead of "naked machines". Such thoughts are also found, e.g., around the "Semantic Web", where, in the context of the "Platform for Privacy Preferences" (P3P) Project (www.w3.org/P3P), there is an attempt to describe the collection and use of data in a machine-readable way and in this way to control the flow of these data. In a general sense, developments towards the "Policy-Aware Web" (Kolovski et al. 2005) must also be taken into account here. However, together with Borking (2006) we must state: "Building privacy rules set down in the Directive 95/46/EC and 2002/58/EC into information systems for protecting personal data poses a great challenge for the architects."
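To illustrate the kind of machine-readable privacy control alluded to in this note, the following is a minimal sketch in Python (our illustration; the vocabulary of purposes and retention values is a simplified stand-in, not the actual P3P XML format): a user agent compares a site's declared data practices with the user's preferences and refuses the data flow when the declaration exceeds them.

    # Toy, P3P-inspired policy matching; the vocabulary below is invented.
    SITE_POLICY = {
        "data": "clickstream",
        "purposes": {"admin", "develop"},   # declared uses of the collected data
        "retention": "stated-purpose",      # declared retention practice
    }

    USER_PREFERENCES = {
        "clickstream": {
            "allowed_purposes": {"admin"},
            "allowed_retention": {"no-retention", "stated-purpose"},
        },
    }

    def policy_acceptable(policy: dict, prefs: dict) -> bool:
        """True if the declared policy stays within the user's preferences."""
        rule = prefs.get(policy["data"])
        if rule is None:
            return False  # unknown data category: refuse by default
        return (policy["purposes"] <= rule["allowed_purposes"]
                and policy["retention"] in rule["allowed_retention"])

    # "develop" is not among the allowed purposes: the agent would block this flow.
    print(policy_acceptable(SITE_POLICY, USER_PREFERENCES))  # prints False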

4. Consideration of Reports submitted by States Parties under Article 12 (1) of the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography (CRC/C/OPSC/QAT/CO/1) (2 June 2006).

5. See the statements by Jean-Francois Germain (Ichbiah 2005, p. 247).

6. Proposal for a Directive of the European Parliament and of the Council amending Council Directives 90/385/EEC and 93/42/EEC and Directive 98/8/EC of the European Parliament and the Council as regards the review of the medical device directives (22.12.2005).

7. http://www.smarthome.com/7853.html

References


Allen C, Wallach W, Smit I (2006) Why machine ethics? In: IEEE Intelligent Systems, July/August 2006, pp. 12-17

Baxter GD, Monk AF, Doughty K, Blythe M, Dewsbury G (2004) Standards and the dependability of electronic assistive technology. In: Keates S, Clarkson J, Langdon P, Robinson P (eds) Designing a more inclusive world. Springer, London, pp. 247-256

Borking J (2006) Privacy rules - a steeple chase for system architects. Position paper, W3C Workshop on Languages for Privacy Policy Negotiation and Semantics-Driven Enforcement, 17 and 18 October 2006, Ispra, Italy. http://www.w3.org/2006/07/privacy-ws/papers/04-borking-rules/

Brooks R (2001) Flesh and machines. In: Denning PJ (ed) The invisible future. McGraw-Hill, New York, pp. 53-63

Brooks R (2002) Flesh and machines. Pantheon, New York

Christaller T, Decker M, Gilsbach JM, Hirzinger G, Lauterbach KW, Schweighofer E, Schweitzer G, Sturma D (2001) Robotik. Perspektiven für menschliches Handeln in der zukünftigen Gesellschaft. Springer, Berlin

Dickens BM, Cook RJ (2006) Legal and ethical issues in telemedicine and robotics. Int J Gynecol Obstet 94:73-78

Diodato MD, Prasad SM, Klingensmith ME, Damiano RJ (2004) Robotics in surgery. Curr Probl Surg 41(9):752-810

Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349-379

Freitas RA (1985) The legal rights of robots. Student Lawyer 13 (Jan 1985):54-56. Web version: http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm

Ichbiah D (2005) Roboter. Geschichte - Technik - Entwicklung. Knesebeck, Munich

Kolovski V, Katz Y, Hendler J, Weitzner D, Berners-Lee T (2005) Towards a policy-aware web. In: Semantic Web and Policy Workshop, 2005

Levy D (2006) Robots unlimited. A K Peters, Wellesley, MA

Rengeling HW, Szczekalla P (2004) Grundrechte in der Europäischen Union. Charta der Grundrechte und Allgemeine Rechtsgrundsätze. Heymanns, Cologne

Rosen J (2004) The naked crowd. Random House, New York

Schräder P (2004) Roboterunterstützte Fräsverfahren am coxalen Femur bei Hüftgelenkstotalendoprothesenimplantation. Methodenbewertung am Beispiel "Robodoc®". http://informed.mds-ev.de/


