ETHICS AND ROBOTICS

Rafael Capurro
 

 

 
This paper was a contribution to the workshop "L'uomo e la macchina. Passato e presente (Pisa 1967-2007)" organized by the Università di Pisa, Dipartimento di Filosofia, May 17-18, 2007. Published in Italian: Etica e robotica. I robot, maschere del desiderio umano, in: I quaderni di Athenet. La rivista dell'Università di Pisa, No. 20, July 2007, 9-13. Spanish translation in: Revista interdisciplinaria de bioética, Puebla (México), Enero-Junio, 1 (1), 2010.
Published in: Rafael Capurro and Michael Nagenborg (eds.). Ethics and Robotics. Heidelberg: Akademische Verlagsgesellschaft 2009, 117-123. See the Introduction to this volume.
The ideas developed in this paper were discussed at several meetings of the EU Project ETHICBOTS.



 
  

CONTENTS


Introduction
 

1. Epistemological, Ontological, and Psychoanalytic Implications
2. Ethical Aspects of Man-Machine Relations

Conclusion

Acknowledgments

References
 
 


 

INTRODUCTION


From which standpoint do we – as ethicists – speak? And for whom?
What are the consequences and what is the (potential) field of application of an ethics of human interaction with communication, bionic and robotic systems (in the following: techno-ethics)? An important part of it should be an ethics of technology design and production. Techno-ethics should support strong, contestatory democratic practice and citizen involvement in the creation of techno-scientific artifacts. The leading question is how to design an interdisciplinary process that also involves engineers and technology designers in the ongoing discussion.

A second question is whether or how it is possible (and desirable) to develop a general ethics for any kind of robots and agents. In which case(s) do we need a differentiation of fields of application and types of robots/agents with regard to ethical concerns? On a socio-technical level robots are described as “sensorimotor machines which expand the human ability to move. They consist of mechatronic components, sensors and computing-based control and steering functions. The complexity of a robot is greater than that of other machines because of its higher degrees of freedom and the multiplicity and range of its behaviours” (Christaller et al. 2001, transl. Jutta Weber). A relevant question to discuss is whether there is a qualitative difference between classical machines, trans-classical machines, and autonomous systems.

A third question is “cui bono?” For whom and by whom are robots developed? Who fits the standards that robots and robotic devices like AIBO, Pino, Paro, Kismet etc. embody? Do they contribute to deeper equality, keener appreciation of heterogeneous multiplicity, and stronger accountability for livable worlds?

Beyond that, a reflection on the socio-cultural context of the debate on robots and agents is needed. What kind of societal conflicts and power relations are intertwined in the production and usage of agents and robots? How does the fusion of science, technology, industry and politics come into play? What about the military interest in robotics and agents?

Last but not least, a central task for techno-ethics is to learn the lessons from the discussion on bioethics. For example: we should avoid abstract discussions of the agency or intentionality of agents and robots and reflect on whether such discussions help to work through the controversy over the future development and use of agents and robots.

The massive use of robots will probably change society in a similar way as cars and airplanes (and, in earlier times, ships etc.) did – and it has already changed it: think of industrial robots in the workplace, which are an important factor with regard to growing unemployment in Europe. This broad view of societal changes, and consequently of the view(s) we have of ourselves, including our (moral) values, is fundamental. There may be a re-definition of what it means to be human. For instance, the EU Charter of Fundamental Rights is human-centered. The massive use of robots may challenge this anthropocentric perspective.

Why do we want to live with robots? What do we live with robots for? There are different levels of reflection when answering these questions, starting with the trivial one that robots can be very useful and indeed indispensable, for instance in today’s industrial production or when dealing with situations in which the dangers for humans are too great. But before reflecting in this direction, let us take the perspective of what René Girard calls “mimetic desire” (Girard 1972).

1. Epistemological, Ontological, and Psychoanalytic Implications 

The relation between humans and robots can be conceived as an envy relation in which humans either envy robots for what they are or envy other humans for having robots that they themselves do not have. In the first case, envy can be positive, when the robot is considered as a model to be imitated, or negative, when the relationship degenerates into rivalry. The latter possibility is exemplified in many science fiction movies and novels in which robots and humans are supposed to compete. Robots are then often represented as emotion-free androids, lacking moral sense and therefore worth less than humans. Counter-examples are, for instance, 2001: A Space Odyssey (Stanley Kubrick 1968) or Stanislaw Lem’s novel “Golem XIV” (Lem 1981). The mimetic conflict arises not only from imitating what a robot can do but, more basically, from imitating what ‘it’ is supposed to desire. But a robot’s desires are paradoxically our own, since we are its creators. The positive and negative views of robots shine back into human self-understanding, leading to the idea of enhancing human capabilities, for instance by implanting artificial devices in the human body. When robots are used by humans for different tasks, this creates a situation in which the “mimetic desire” is articulated either as a question of justice (a future robot divide) or as a new kind of envy. This time the object of envy is not the robot itself but the other human using/having it. The foundational ethical dilemma with regard to robots is thus not just the question of their good or bad use but the question of our relation to our own desire with all its creative and destructive mimetic dynamism, which includes not only figures such as envy, rivalry and the model but also the trivial use of robots as tools, which eventually turns out to be a question of social justice.

Robots can be seen as masks of human desire. Our “mimetic desire” might influence (but how far?) the exchange value they get in the market place. Our love affair with them opens a double-bind relationship that includes the whole range of human passions, from indifference through idealization to rivalry and violence, although this might not yet be the case with regard to the contemporary state of the art in robotics, since robots still lack intelligence and unpredictable behaviour. It is the task of ethical reflection to go beyond the economic dimension in order to discover the mechanisms that make possible the invention, production, and use of robots of all kinds. These mechanisms are based on the human mimetic passion(s) on an individual as well as on a societal scale. In a mythical sense, robots are experienced by our secularized and technological society as scapegoats for what is conceived as the humanness of humanity, whose highest and most global expression is the Universal Declaration of Human Rights. From this mythical perspective, robots are the bad and the good conscience of ourselves. They give us the possibility of a moral discourse on ourselves, but at the same time they draw our attention away from the intolerable infringement of these rights with regard to real human beings. In other words, an ethical reflection on robots must be aware of these pitfalls, particularly when considering the dangers of the mimetic desire with regard to human dignity, autonomy or data protection. It must reflect the double-bind relationship between humans and robots. If robots mirror our mimetic desire, we should develop individual and social strategies in order to unmask the unattainable object we strive for, which turns into a danger when it looks like a fulfilment in view of which everything, including ourselves, should be regarded as a means to an end. The concept of human dignity is a hallmark above and beyond our own desire. It is a hallmark of self-transcendence independent of technological and/or religious promises. It allows us to avoid ideological or fundamentalist blockades while at the same time regulating the dynamics of mimetic desire.

The concept of robot is ambiguous. According to Karel Čapek, who coined the term, a robot is a human-like artificial device, an android, able to perform different kinds of tasks autonomously, i.e., without permanent human guidance, particularly in the field of industrial production. Anthropomorphic robots, but also artificial devices imitating different kinds of living beings, have a long tradition. Today’s industrial robots are often not human-like. There is a tension between technoid and naturoid artificial products (Negrotti 1995, 1999, 2002). The concept of artificiality itself is related to something produced by nature and imitated by man. Creating something similar but not identical to a natural product points to the fact that anything to be qualified as artificial should make a difference with regard to the natural or the “original” (Negrotti). Robots are mostly conceived as physical agents. With the rise of information technology, softbots or software agents have been developed that also have an impact on the physical world, so that it is difficult to draw a clear border. This is also the case with regard to the hybridization between humans and robots (cyborgs). In fact, not only individuals but society as a whole is caught up in a process of cyborgization.

What are robots? They are products of human dreams (Brun 1992, Capurro 1995). Every robotic idea entails the hidden object of our desire. Robots are thus like the images of the gods (Greek: agalma) inside the mask of a satyr. According to Jacques Lacan’s psychoanalytic interpretation (Lacan 1991), following the Platonic narrative of the love encounter between Socrates and Alcibiades in the “Symposium” (Symp. 222), such “small objects” are the unattainable and impossible goal of human desire. In the “Timaeus”, Plato describes the work of the demiurge, who shapes the world as a resemblance (agalma) of the divine, as a work of joy and therefore as an incentive to make the copy more similar to the original (parádeigma) (Tim. 37c).

In sum, our values, or the goal of our desire, are embedded in all our technological devices and particularly in the kind of products that mimic our human identity. Therefore, the question is not only which values we are trying to realize through them but why we are doing this. Robots are a mirror of shared cultural values that shows to us and to others who we want to be. We redefine ourselves in comparison with robots in a similar way as we redefine ourselves in comparison with animals or with gods. These redefinitions have far-reaching economic and cultural implications.

But who is the “we” of this kind of psychoanalytic discourse? What about the engineering culture which is mostly involved in the development and design of robots? Gender approaches claim that this “we” stands for a masculine culture of technology production. So do all people have the same kind of double-bind relationship to robots? And what about cultural differences?

2. Ethical Aspects of Man-Machine Relations

How do we live in a technological environment? What is the impact of robots on society?  How do we (as users) handle robots? What methods and means are used today to model the interface between man and machine?

What should we think about the mimicry of emotions and stereotypes of social norms? What kind of language/rhetoric is used in describing the problem of agents and bots – and which one do we want to use? In AI and robotics we can often find a sloppy usage of language which supports the anthropomorphising of agents. This language often implies the intentionality and autonomy of agents – for example when researchers speak of the learning, experience, emotion, decision making (and so on) of agents. How are we going to handle this problem in science and in our social practices?

Robots are not ready-made products of engineers and computer scientists but devices and emerging technologies in the making:

  • What are the consequences of the fact that today ICT devices are developed by computer scientists and engineers only?
  • What is the meaning of the master-slave relation with regard to robots?
  • What is the meaning of robot as a partner in different settings?

Recent research on social robots is focussing on the creation of interactive systems that are able to recognise others, interpret gestures and verbal expressions, recognise and express emotions, and engage in social learning. A central question concerning social robotics is how "building such technologies shapes our self-understanding, and how these technologies impact society" (Breazeal 2002, 5).

To understand the implications of these developments it is important to analyse central concepts of social robotics like the social, sociality, human nature and human-style interactions. Main questions are: What concepts of sociality are translated into action by social robotics? How is social behaviour conceptualised, shaped, or instantiated in software implementation processes? And what kind of social behaviours do we want to shape and implement into artefacts? 

There is a tendency to develop robots that model some aspects of human behavior rather than to develop a full android (Arnall 2003). Relative autonomy is a goal for physical robots as well as for softbots. What is the meaning of the concept of autonomy in robotics? What are the affinities and differences between the robotic discourse and the philosophical discourse? Obviously, we can observe a strong bidirectional migration of the concept of autonomy (as well as of sociality, emotion and intelligence) between very diverse discourses and disciplines. How does this transfer of concepts between the disciplines – and especially the strong impact of robotics – change the traditional meanings of concepts like autonomy, sociality, emotion and intelligence?

Having regard to the EU Charter of Fundamental Rights (Art. 1, 3, 6, 8, 25, 26), the following questions arise:

(a) Who is responsible for undesired results of actions carried out by human-robot hybrid teams?

(b) How is the monitoring and processing of personal data by AI agents to be regulated?

(c) Can bionic implants be used to enhance, rather than restore, physical and intellectual capabilities?

All three questions address possibilities that have an immediate impact on single human beings, since

  • responsibility is traditionally attributed to single actors (which include individuals),
  • the human right to privacy protects the ability to live autonomously, and
  • enhancements are for the benefit of a single person.

But the importance of robot-human integration goes beyond the level of the single individual and addresses the question of what a society or community in which bots are integrated could and should look like. Probably only certain members of a society or community will interact with certain kinds of bots, for instance entertainment bots for rich people, service bots for elderly or ill people etc. This kind of interaction with bots may also give rise to new forms of communities. Close attention should be paid to which groups of individuals are likely to interact with certain kinds of bots in a given context, while at the same time keeping in view the impact of these specific interactions on the communities and societies in which they take place.

All three forms of human-bot integration may involve the violation as well as the fostering of human rights and dignity. It cannot even be ruled out that one and the same technology may have both positive and negative effects. Surveillance infrastructures may be considered harmful with regard to privacy, but they may also enable us to create new kinds of communities.

CONCLUSION

Certain forms of human-bot integration may bring about benefits as well as harm. How are the resulting conflicts to be resolved, especially if there is a conflict between the individual perspective and the perspective of a society or community? Enhancements such as bionic implants might be considered a benefit to an individual, but they also raise new questions, such as whether only an elite might be able to transform themselves into cyborgs or – in a worst-case scenario – whether the unemployed would be forced to have some sort of implant to enable them to do certain jobs.

At present, there is no need to address the issue of whether bots should be seen as persons. Present ethical questions point to human responsibility as a fundamental issue to be addressed in an ethical enquiry on techno-ethics. This includes questions such as:

(a) Who should ascribe responsibility to whom, how and according to which principles, in cases that involve human-bot integration? And what should be the consequences of such an ascription?

(b) Who is responsible for designing and maintaining an infrastructure in which information about persons is collected and processed?

(c) How does the possibility of invasive human-bot integration influence the concept of responsibility? This includes:

(i) Does the fact that a human being is enhanced lead to a special kind of responsibility?

(ii) What are the consequences for those who are responsible for providing the technology used for enhancement?

When addressing the question of responsibility, we should take into account that there are different levels of responsibility even when ascribing it to an individual, who might be held responsible for something with regard to her/his personal well-being, to the social environment (friends, family, community), to his/her specific (professional or private) role, as a citizen responsible to the society or the state she/he lives in, or simply as a human being. Furthermore, this includes the question of whether and how responsibility might be delegated and whether institutions might be morally responsible with regard to robots.

Robots are less our slaves – which is a projection of the mimetic desire of societies in which slavery was permitted and/or promoted – than tools for human interaction. This raises questions of privacy and trust (Arnall 2003, 59), but also of the way we define ourselves as workers in industry, services and entertainment. This concerns the different cultural approaches to robots in Europe and in other cultures, which may have a different impact in a global world. Different cultures have different views on autonomy and human dignity.

ACKNOWLEDGEMENTS

Thanks to Guglielmo Tamburrini (University of Naples), Michael Nagenborg (University of Karlsruhe), Jutta Weber (University of Duisburg-Essen) and Christoph Pingel (Karlsruhe Center for Art and Media) for ongoing discussions on the relationship between ethics and robotics within the framework of the EU Project ETHICBOTS.

REFERENCES

Adam, Alison (1998). Artificial Knowing. Gender and the Thinking Machine. London

Arnall, Alexander Huw (2003). Future Technologies, Today’s Choices. Nanotechnology, Artificial Intelligence and Robotics. A technical, political and institutional map of emerging technologies. A report for the Greenpeace Environmental Trust.

Becker, Barbara (1992). Künstliche Intelligenz: Konzepte, Systeme, Verheißungen. Frankfurt a.M. / New York

Becker, Barbara (2000). Cyborgs, Robots und Transhumanisten. Anmerkungen über die Widerständigkeit eigener und fremder Materialität. In: dies. / Irmela Schneider (Hg.): Was vom Körper übrig bleibt. Körperlichkeit - Identität - Medien. Frankfurt a.M. / New York, 41-70

Bowker, Geoffrey C., Star, Susan Leigh (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press

Breazeal, Cynthia (2002). Designing Sociable Robots. Cambridge, MA

Brooks, Rodney (1986). Achieving Intelligence Through Building Robots. A.I. Memo 899.

Brooks, Rodney (1991). Intelligence without Representation. In: Artificial Intelligence, 47, 139-160.

Brooks, Rodney (2002). Flesh and Machines. New York: Pantheon Books

Brun, Jean (1992). Le rêve et la machine. Technique et Existence, Paris.

Caporael, Linda R. (1995). ‘Sociality: Coordinating Bodies, Minds and Groups,’ Psycoloquy 6(01), Groupselection 1.

Capurro, Rafael (2005). Philosophical Presuppositions of Producing and Patenting Organic Life. In Andrzej Wierciński (ed.): Between Description and Interpretation. The Hermeneutic Turn in Phenomenology. Toronto: The Hermeneutic Press, 571-581.

Capurro, Rafael (1995). Leben im Informationszeitalter. Berlin.

Capurro, Rafael (1995). On Artificiality. IMES (Istituto Metodologico Economico Statistico, Università di Urbino), IMES-LCA WP-15, November.

Capurro, Rafael (1993). Ein Grinsen ohne Katze. Von der Vergleichbarkeit zwischen 'künstlicher Intelligenz' und 'getrennten Intelligenzen'. In: Zeitschrift für philosophische Forschung, 47, 93-102.

Capurro, Rafael (1990). Ethik und Informatik. In Informatik-Spektrum, 13, 311-320.

Christaller, Thomas / Decker, Michael / Gilsbach, Joachim-Michael / Hirzinger, Gerd / Lauterbach, Karl / Schweighofer, Erich / Schweitzer, Gerhard / Sturma, Dieter (2001). Robotik. Perspektiven für menschliches Handeln in der zukünftigen Gesellschaft. Berlin et al.

Christaller, Thomas / Wehner, Josef (2003). Autonome Maschinen. Wiesbaden

Crutzen, Cecile (2003). ‘ICT-Representations as transformative critical rooms,’ in: Gabriele Kreutzner & Heidi Schelhowe (eds.), Agents of Change. Virtuality, Gender and the Challenge to the Traditional University, Leske + Budrich, Opladen, pp. 87-106.

Fong, Terence / Dautenhahn, Kerstin / Nourbakhsh, Illah (2003). A Survey of Socially Interactive Robots. In: Robotics and Autonomous Systems, 42, 143-166.

Girard, René (1972). La Violence et le sacré. Paris: Grasset.

Haraway, Donna J. (1985 / 1991). Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the Late Twentieth Century, in Haraway, Donna (1991) Simians, Cyborgs, and Women: the Reinvention of Nature. London: Routledge (originally printed in Socialist Review 80, 1985)

Hayles, Katherine (2003). Computing the Human. In: Jutta Weber / Corinna Bath (Hg.): Turbulente Körper und soziale Maschinen. Feministische Studien zur Technowissenschaftskultur. Opladen: Leske & Budrich

Hayles, N. Katherine (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago / London: Chicago University Press

Lacan, Jacques (1991). Le séminaire. Livre VIII.  Le transfert. Paris: Seuil.

Lem, Stanislaw (1981). Golem XIV. Cracow.

Negrotti, Massimo (1995). Artificialia. La dimensione artificiale della natura umana. Bologna: CLUEB.

Negrotti, Massimo (1999). The Theory of the Artificial. Exeter: intellect.

Negrotti, Massimo (2002). Naturoids. On the Nature of the Artificial. New Jersey: World Scientific.

Pfeifer, Rolf / Scheier, Christian (1999). Understanding Intelligence. Cambridge, MA

Plato (1973). Opera. Ed. I. Burnet. Oxford.

Star, Susan Leigh (1991). Power, Technology and the Phenomenology of Conventions: on Being Allergic to Onions, in John Law (ed.) A Sociology of Monsters. Essays on Power, Technology and Domination. London / New York: Routledge, pp.26-56.

Suchman, Lucy (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge.

Suchman, Lucy (2003). Human / Machine Reconsidered.

Weber, Jutta (2005). Helpless Machines and True Loving Caregivers. A Feminist Critique of Recent Trends in Human-Robot Interaction. In: Journal of Information, Communication and Ethics in Society. Vol. 3, Issue 4, Paper 6

Weber, Jutta (2005). Ontological and Anthropological Dimensions of Social Robotics. In: Proceedings of the Symposium on Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction. AISB 2005 Convention Social Intelligence and Interaction in Animals, Robots and Agents at the University of Hertfordshire, Hatfield, UK, 12-15th April 2005.


 

 