Updated contribution to the Workshop organized by Cybernics, University of Tsukuba (Japan), September 30, 2009 (PowerPoint). Published in: Cybernics Technical Reports. Special Issue on Roboethics. University of Tsukuba 2011, pp. 39-59 (CYB-2011-001 - CYB-2011-008, March 2011).
Also presented at the IEEE Symposium on Technology and Society (IEEE ISTAS 2010), June 7-10, 2010, University of Wollongong, New South Wales, Australia (PowerPoint). See Video.
See this video on Ethical Issues of ICT Implants in the Human Body (See EGE Opinion No. 20).
See: An Intercultural Dialogue on Roboethics by Makoto Nakada and Rafael Capurro. In: Makoto Nakada and Rafael Capurro (eds.): The Quest for Information Ethics and Roboethics in East and West. Research Report on trends in information ethics and roboethics in Japan and the West. ReGIS (Research Group on the Information Society) and ICIE (International Center for Information Ethics), March 31, 2013, 13-22.
See: TEDxUWollongong 2012 presentation by Katina Michael, Associate Professor, School of Information Systems and Technology, University of Wollongong. The talk was part of a series of talks on medical bionics. More info here.
See: The Cloud - Uberveillance: a narrative video (3 parts) from Nothing at All. Song written by Martin Hale & Greg Barnett & Katina Michael, 2015: https://www.youtube.com/watch?v=qZaLHrmSfl0
Living with Online Robots (2015)
Robotic Natives. Leben mit Robotern im 21. Jahrhundert (2016)
as well as:
Intercultural Roboethics for a Robot Age. In: Makoto Nakada, Rafael Capurro and Koetsu Sato (Eds.): Critical Review of Information Ethics and Roboethics in East and West. Master's and Doctoral Program in International and Advanced Japanese Studies, Research Group for "Ethics and Technology in the Information Era", University of Tsukuba 2017 (ISSN 2432-5414), 13-18.
See: Tanya Lewis: Don't Let Artificial Intelligence Take Over, Top Scientists Warn. In: livescience, January 12, 2015.
See this comprehensive bibliography on Robot Ethics by Vincent C. Müller (University of Leeds, Anatolia College/ACT)
See Oxford Internet Institute
See Joanna J. Bryson
- The meaning of the EPSRC principles of robotics (2016)
- AI Ethics: Artificial Intelligence, Robot, and Society (2017)
euRobotics 'ethical legal and social issues' (ELS): 3 Workshops at the European Robotics Forum 2017 (Organisation: Vincent C. Müller), 22-23 March 2017, Edinburgh:
ELS 1: Ethics
ELS 2: Legal
ELS 3: Economic
Last update: April 22, 2017
I. Recent Research in Roboethics
Ethics and robotics are two academic disciplines, one dealing with the moral norms and values underlying, implicitly or explicitly, human behaviour, and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators (Capurro and Nagenborg 2009). Since the first robots arrived on the stage in Karel Čapek's play (1921), visions of a world inhabited by humans and robots have given rise to countless utopian and dystopian stories, songs, movies, and video games.
Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious but practically more important than the possibility of the creation of moral machines that would be more than machines with an ethical code. The term 'roboethics' was coined by the engineer Gianmarco Veruggio (Veruggio 2006).
The aim of this paper is to give a brief account of subjects, projects, groups and authors dealing with the ethical aspects of robots. I start with recent research on roboethics in two EU projects, namely ETHICBOTS (2005-2008) and ETICA (2009-2011). I then report on the activities of Roboethics.org, and particularly of the Technical Committee (TC) on Roboethics of the IEEE, and list some ethical issues and principles currently under discussion. I also report briefly on the Machine Ethics Consortium.
In the second part I present some views on robotics and robots as discussed particularly in Japan, leading to what I call intercultural roboethics, i.e. an in-depth analysis of the way(s) in which robots are perceived in different cultures with different social and moral backgrounds, values and principles. An intercultural ethical analysis should make it possible to become aware of these differences as a basis for a comparative normative ethics of robots (genitivus objectivus) that is still in its infancy (Capurro and Nagenborg 2009). In the present debate this difference between thinking ethically about robots (genitivus objectivus) and trying to make robots think ethically (genitivus subjectivus) is sometimes blurred or remains implicit. This is, for instance, the case in discussions about 'moral machines'. See: Coby MacDonald: The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be "Moral". In: California Magazine, UC Berkeley, June 4, 2015, as well as Constanze Kurz: Wie man Robotern ethisches Verhalten beibringt [How to teach robots ethical behaviour]. In: Netzpolitik (June 8, 2015), with my comments. For a critical epistemological view on robots see Massimo Negrotti: The Reality of the Artificial. Nature, Technology and Naturoids (Heidelberg and Berlin 2012).
In the third part I briefly discuss the relationship between roboethics and digital ontology. In the conclusion I point to some topics and questions for a future agenda of intercultural roboethics. In the Annexes I list recent conferences and publications in the field. I refer to the Korean Robot Ethics Charter as well as to Asimov's Laws of Robotics.
How do we regulate roboethics? Interview with Joanna Bryson by Yueh-Hsuan Weng, 2016.
One being for two Origins: A new perspective on roboethics. Interview with Hiroko Kamide by Yueh-Hsuan Weng, 2016.
On the ethics of research on robotics. Interview with Raja Chatila by Yueh-Hsuan Weng, 2015.
Yueh-Hsuan Weng, Peking University, Yusuke Sugahara, Tokyo Institute of Technology, Kenji Hashimoto, Waseda University, Atsuo Takanishi, Waseda University: Intersection of “Tokku” Special Zone, Robots, and the Law: A Case Study on Legal Impacts to Humanoid Robots. In: International Journal of Social Robotics (2015).
Yueh-Hsuan Weng, NCA, Ministry of the Interior, Republic of China, Chien-Hsun Chen, National Nano Device Laboratories, Chuen-Tsai Sun, National Chiao Tung University: Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots. In: International Journal of Social Robotics (2009).
The Quest for Roboethics: an Interview with Rafael Capurro, February 14, 2017.
TCL (Tech and Law Center) Interviews
Spyros G. Tzafestas: Roboethics: A Navigating Overview
1 Introductory Concepts and Outline of the Book
2 Ethics: Fundamental Elements
3 Artificial Intelligence
4 The World of Robots
5 Roboethics: A Branch of Applied Ethics
5.2 General Discussion of Roboethics
5.3 Top-Down Roboethics Approach
5.4 Bottom-Up Roboethics Approach
5.5 Ethics in Human-Robot Symbiosis
5.6 Robot Rights
5.7 Concluding Remarks
6 Medical Roboethics
7 Assistive Roboethics
8 Socialized Roboethics
9 War Roboethics
10 Japanese Roboethics, Intercultural, and Legislation Issues
10.2 Japanese Ethics and Culture
10.3 Japanese Roboethics
10.4 Intercultural Philosophy
10.5 Intercultural Issues of Infoethics and Roboethics
10.6 Robot Legislation
10.7 Further Issues and Concluding Remarks
11 Additional Roboethics Issues
11.2 Autonomous Cars Issues
11.3 Cyborg Technology Issues
11.4 Privacy Roboethics Issues
11.5 Concluding Remarks
12 Mental Robots
1. EU Project ETHICBOTS (2005-2008)
Emerging Technoethics of Human Interaction with Communication, Bionic and Robotic Systems (2005-2008).
The project aimed at identifying crucial ethical issues in these areas such as
- the preservation of human identity and integrity
- applications of precautionary principles
- economic and social discrimination
- artificial system autonomy and accountability
- responsibilities for (possibly unintended) warfare applications
- nature and impact of human-machine cognitive and affective bonds on individuals and society.
For an analysis of some epistemological, ontological and psychoanalytic implications of robots as a contribution to this project see my Ethics and Robotics as well as the complete documentation on the ETHICBOTS website.
The following issues were analysed:
- Human-softbot integration, as achieved by AI research on information and communication technologies;
- Human-robot, non-invasive integration, as achieved by robotic research on autonomous systems inhabiting human environments;
- Physical, invasive integration, as achieved by bionic research.
2. International Review of Information Ethics (IRIE)
Vol. 6 - December 2006
Ethics and Robotics
edited by Daniela Cerqui, Jutta Weber, Karsten Weber
pdf-fulltext (1.341 KB)
Editorial: On IRIE Vol. 6
pdf-fulltext (17 KB)
Roboethics: a Bottom-up Interdisciplinary Discourse in the Field of Applied Ethics in Robotics
by Gianmarco Veruggio and Fiorella Operto
abstract: This paper deals with the birth of Roboethics. Roboethics is the ethics inspiring the design, development and employment of Intelligent Machines. Roboethics shares many 'sensitive areas' with Computer Ethics, Information Ethics and Bioethics. It investigates the social and ethical problems due to the effects of the Second and Third Industrial Revolutions in the domain of human/machine interaction. Urged by the responsibilities involved in their professions, an increasing number of roboticists from all over the world have started - in cross-cultural collaboration with scholars of the Humanities - to thoroughly develop Roboethics, the applied ethics that should inspire the design, manufacturing and use of robots. The result is the Roboethics Roadmap.
pdf-fulltext (78 KB)
What Should We Want From a Robot Ethic?
by Peter M. Asaro
abstract: There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies between amoral and fully autonomous moral agents. Thus, robots might move gradually along this continuum as they acquire greater capabilities and ethical sophistication. It also argues that many of the issues regarding the distribution of responsibility in complex socio-technical systems might best be addressed by looking to legal theory, rather than moral theory. This is because our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.
pdf-fulltext (95 KB)
Neo-Rawlsian Co-ordinates: Notes on A Theory of Justice for the Information Age
by Alistair S. Duff
abstract: The ideas of philosopher John Rawls should be appropriated for the information age. A literature review identifies previous contributions in fields such as communication and library and information science. The article postulates the following neo-Rawlsian propositions as co-ordinates for the development of a normative theory of the information society: that political philosophy should be incorporated into information society studies; that social and technological circumstances define the limits of progressive politics; that the right is prior to the good in social morality; that the nation state should remain in sharp focus, despite globalization; that liberty, the first principle of social justice, requires updating to deal with the growth of surveillance and other challenges; that social wellbeing is a function of equal opportunities plus limited inequalities of outcome, in information as well as material resources; and that political stability depends upon an overlapping consensus accommodating both religion and secularism. Although incomplete, such co-ordinates can help to guide policy-makers in the twenty-first century.
pdf-fulltext (76 KB)
When Is a Robot a Moral Agent?
by John P. Sullins
abstract: In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot's behavior only by ascribing to it some predisposition or 'intention' to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots with all of these criteria will have moral rights as well as responsibilities regardless of their status as persons.
pdf-fulltext (116 KB)
Fundamental Issues in Social Robotics
by Brian R. Duffy
abstract: Man and machine are rife with fundamental differences. Formal research in artificial intelligence and robotics has for half a century aimed to cross this divide, whether from the perspective of understanding man by building models, or of building machines which could be as intelligent and versatile as humans. Inevitably, our sources of inspiration come from what exists around us, but to what extent should a machine's conception be sourced from such biological references as ourselves? Machines designed to be capable of explicit social interaction with people necessitate employing the human frame of reference to a certain extent. However, there is also a fear that once this man-machine boundary is crossed, machines will cause the extinction of mankind. The following paper briefly discusses a number of fundamental distinctions between humans and machines in the field of social robotics, situating these issues with a view to understanding how to address them.
pdf-fulltext (84 KB)
Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction
by Barbara Becker
abstract: The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialogue partners will have a positive effect on human-machine communication. In this contribution I put up for discussion whether the visions of scientists in this field are plausible and which problems might arise in the realization of such projects.
pdf-fulltext (94 KB)
Learning Robots and Human Responsibility
by Dante Marino and Guglielmo Tamburrini
abstract: Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - can enter cognitive interactions that human beings have not experienced with any other non-human system.
pdf-fulltext (85 KB)
Invisibility and the Meaning of Ambient Intelligence
by C. K. M. Crutzen
abstract: A vision of future daily life is explored in Ambient Intelligence (AmI). It contains the assumption that intelligent technology should disappear into our environment to bring humans an easy and entertaining life. The mental, physical, methodical invisibility of AmI will have an effect on the relation between design and use activities of both users and designers. Especially the ethics discussions of AmI, privacy, identity and security are moved into the foreground. However in the process of using AmI, it will go beyond these themes. The infiltration of AmI will cause the construction of new meanings of privacy, identity and security because the "visible" acting of people will be preceded, accompanied and followed by the invisible and visible acting of the AmI technology and their producers.
A question in this paper is: How is it possible to create critical transformative rooms in which doubting will be possible under the circumstances that autonomous 'intelligent agents' surround humans? Are humans in danger of becoming just objects of artificially intelligent conversations? Probably the relation between the mental, physical and methodical invisibility and visibility of AmI could give answers.
pdf-fulltext (117 KB)
On the Anticipation of Ethical Conflicts between Humans and Robots in Japanese Mangas
by Stefan Krebs
abstract: The following contribution examines the influence of mangas and animes on the social perception and cultural understanding of robots in Japan. Part of this is the close interaction between pop culture and Japanese robotics: some examples shall serve to illustrate spill-over effects between popular robot stories and the recent development of robot technologies in Japan. The example of the famous Astro Boy comics will be used to help investigate the ethical conflicts between humans and robots thematised in Japanese mangas. With a view to ethical problems, the stories shall be subsumed under different categorical aspects.
pdf-fulltext (74 KB)
In Between Companion and Cyborg: The Double Diffracted Being Elsewhere of a Robodog
by Maren Kraehling
abstract: Aibo, Sony's robodog, questions the relations between nature, technology, and society and directs the attention to the difficult and changing triad between machines, humans and animals. Located at the boundaries between entertainment robot, dog, and companion, Aibo evokes the question of what relationship humans and Aibo can have and which ethical issues are thereby addressed. Since Aibo is promoted by Sony as a 'best friend', it is useful to analyze it within the theoretical framework of feminist philosopher and biologist Donna Haraway, who develops alternative approaches to companionships between humans and dogs. Therefore, I am going to ask how Aibo challenges the human understanding of other life forms and how concepts of friendship are at stake. Ethical questions about human perceptions of dogs in the age of doglike robots must be approached. However, Aibo itself follows no predefined category. Aibo lives neither in a merely mechanistic 'elsewhere' nor in the 'elsewhere' of animals but in an intermediate space, in a double diffracted 'elsewhere'.
pdf-fulltext (90 KB)
'Rinri': An Incitement towards the Existence of Robots in Japanese Society
by Naho Kitano
abstract: Known as the "Robot Kingdom", Japan has launched, backed by outstanding governmental budgets, a new strategic plan to create new markets for the RT (Robot Technology) industry. Now that the social structure has been greatly modernized and a high social functionality achieved, robots are taking on a popular role for Japanese people. The motivation for such great high-tech developments has to be sought in how human relations work, as well as in the customs and psychology of the Japanese. Examining the background of the Japanese affirmativeness toward robots, this paper reveals the Animism and the Japanese ethics, "Rinri", that benefit Japanese robotics. The introduction first describes the Japanese social context, which serves to illustrate the term "Rinri". The meaning of Japanese Animism is then explained in order to understand why Rinri is to be considered an incitement for Japanese social robotics.
pdf-fulltext (83 KB)
Robotics and Development of Intellectual Abilities in Children
by Miguel Angel Pérez Alvarez
abstract: We need to transform educational experiences in the classroom so that they favour the development of the intellectual abilities of children and teenagers, and we must take advantage of the new opportunities offered by information technologies to organize learning environments that favour those experiences. We consider that building and programming robots, of the kind of LEGO Mindstorms or the so-called "crickets" developed by M. Resnick at MIT, as a means for children and young people to live experiences that favour the development of their intellectual abilities, is a powerful alternative to traditional educational systems. These are tasks that urgently require reflective work in pedagogy and epistemology. Robotics could become a proper instrument for the development of intelligence because it works like a mirror for the intellectual processes of each individual and their abilities as epistemologist, and is therefore useful for fostering those processes in the classroom.
pdf-fulltext (58 KB)
On Designing Machines and Technologies in the 21st Century. An Interdisciplinary Dialogue.
by Dirk Söffker and Jutta Weber
abstract: Is an autonomous robot, designed to communicate and take decisions in a human way, still a machine? On which concepts, ideas and values is the design of such machines to be based? How do they relate back to our everyday life? And finally, to what extent are social demands the guideline for the development of such innovative technologies? Using the form of a dialogue, theoretical, ethical and socio-political questions concerning the design of interactive machines are discussed, especially with regard to the accelerated mechanization of our professional and private life. Developed out of an email dialogue and further elaborated, the discourse spans from engineering to research in the field of science and technology and deals with the question of whether the man-machine relationship is changing.
pdf-fulltext (128 KB)
3. EU Project ETICA (2009-2011)
“The ETICA project will identify emerging Information and Communication Technologies (ICTs) and their potential application areas in order to analyse and evaluate ethical issues arising from these. By including a variety of stakeholders and disciplinary perspectives, it will grade and rank foreseeable ethical risks. Based on the study of governance arrangements currently used to address ICT ethics in Europe, ETICA will recommend concrete governance structures to address the most salient ethical issues identified. These recommendations will form the basis of more general policy recommendations aimed at addressing ethical issues in emerging ICTs before or as they arise.
Taking an inclusive and interdisciplinary approach will ensure that ethical issues are identified early, recommendations will be viable and acceptable, and relevant policy suggestions will be developed. This will contribute to the larger aims of the Science in Society programme by developing democratic and open governance of ICT. Given the high importance of ICT for furthering a number of European policy goals, it is important that ethical issues are identified and addressed early. The provision of viable policy suggestions will have an impact well beyond the scientific community. Ethical issues have the potential to jeopardise the success of individual technical solutions. The acceptance of the scientific-technological basis of modern society requires that ethical questions are addressed openly and transparently. The ETICA project is therefore a contribution to the European Research Area and also to the quality of life of European citizens. Furthermore, ethical awareness can help the European ICT industry gain a competitive advantage over less sensitive competitors, thus contributing to the economic well-being of Europe.”
ETICA is funded by the European Commission under the 7th framework programme.
"The Institute of Electrical and Electronics Engineers (IEEE), a non-profit organization, is the world's leading professional association for the advancement of technology.
The IEEE Robotics and Automation Society (RAS) is interested in both applied and theoretical issues in robotics and automation.
The IEEE-RAS Technical Committee (TC) on Roboethics aims to provide the IEEE-RAS with a framework for analyzing the ethical implications of robotics research, by promoting the discussion among researchers, philosophers, ethicists, and manufacturers, but also by supporting the establishment of shared tools for managing ethical issues in this context.
The IEEE-RAS Technical Committee on Roboethics was founded in 2004.
Co-Chairs: Gianmarco Veruggio, Ronald Arkin, Atsuo Takanishi.
Founding Co-Chairs: Paolo Dario, Ronald Arkin, Kazuo Tanie."
Scope of the TC:
"The focus of the TC includes the unintended warfare uses of robotics research results, the preservation of human integrity in the interaction with robotic (even bionic) systems, and the study and development of the robot-ethics concept. The TC pursues its objectives by organizing focussed events and publications at RAS-sponsored conferences and elsewhere."
- Advanced production systems
- Adaptive robot servants and intelligent homes
- Network Robotics
- Outdoor Robotics
- Health Care and Life Quality
- Military Robotics
See more here
Ethical issues shared by Roboethics and Information Ethics:
- Dual-use technology
- Anthropomorphization of the Machines
- Humanisation of the Human/Machine relationship
- Technology Addiction
- Digital Divide
- Fair access to technological resources
- Effects of technology on the global distribution of wealth and power
- Environmental impact of technology
See more here.
Ethical Principles to be followed in Roboethics:
- Human Dignity and Human Rights
- Equality, Justice and Equity
- Benefit and Harm
- Respect for Cultural Diversity and Pluralism
- Non-Discrimination and Non-Stigmatization
- Autonomy and Individual Responsibility
- Informed Consent
- Solidarity and Cooperation
- Social Responsibility
- Sharing of Benefits
- Responsibility towards the Biosphere
See more here.
During the third day, organised by EURON, the workshops were targeted more towards academia, but were nevertheless of interest for the EUROP community as well. In particular, the sessions on Ethical, Legal and Societal issues / non-technical constraints, and on State-of-the-art robotics products and R&D challenges, were organised by EUROP members.
Ethical, Legal and Social Issues: non-technical constraints.
“Machine Ethics is concerned with the behavior of machines towards human users and other machines. Allowing machine intelligence to effect change in the world can be dangerous without some restraint. Machine Ethics involves adding an ethical dimension to machines to achieve this restraint. Further, machine intelligence can be harnessed to develop and test the very theory needed to build machines that will be ethically sensitive. Thus, machine ethics has the additional benefits of assisting human beings in ethical decision-making and, more generally, advancing the development of ethical theory.”
Implementing Ethical Advisors
"In order to add an ethical dimension to machines, we need to have an ethical theory that can be implemented. Looking to Philosophy for guidance, we find that ethical decision-making is not an easy task. It requires finding a single principle or set of principles to guide our behavior with which experts in Ethics are satisfied and will likely involve generalizing from intuitions about particular cases, testing those generalizations on other cases and, above all, making sure that principles generated are consistent with one another.
We are developing prototype systems based upon action-based ethical theories that provide guidance in ethical decision-making according to the precepts of their respective theories: Jeremy, based upon Bentham's Hedonistic Act Utilitarianism; W.D., based upon Ross' Theory of Prima Facie Duties; and MedEthEx, based upon Beauchamp's and Childress' Principles of Biomedical Ethics. MedEthEx (see online demo) uses an ethical principle discovered via machine learning techniques to give advice in a particular type of ethical dilemma in medical ethics."
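To make concrete what "implementing" an action-based theory involves, a hedonistic act-utilitarian decision procedure of the kind underlying a system like Jeremy can be sketched as follows. This is only an illustrative sketch under my own assumptions, not the authors' implementation; the action names and utility figures are invented:

```python
# Hedonistic act utilitarianism, sketched: score each candidate action by the
# net pleasure it is expected to produce for everyone affected, then pick the
# action with the highest total. All numbers below are illustrative.

def net_pleasure(effects):
    """effects: list of (intensity, duration, probability) tuples, one per
    person affected. Intensity is negative for displeasure; the product
    follows Bentham's felicific-calculus idea of weighting pleasure by its
    duration and likelihood."""
    return sum(intensity * duration * probability
               for intensity, duration, probability in effects)

def best_action(actions):
    """actions: dict mapping action name -> list of per-person effects.
    Returns the action maximizing total expected net pleasure."""
    return max(actions, key=lambda name: net_pleasure(actions[name]))

if __name__ == "__main__":
    # Hypothetical dilemma: tell a painful truth vs. a comforting lie.
    actions = {
        "tell truth": [(-2, 1, 0.9), (3, 5, 0.8)],  # short pain, lasting benefit
        "tell lie":   [(2, 1, 0.9), (-3, 5, 0.6)],  # short comfort, lasting harm
    }
    print(best_action(actions))  # "tell truth": lasting benefit outweighs short pain
```

The point of the sketch is merely that such a theory reduces the ethical decision to an optimization over quantified consequences, which is what makes it directly programmable.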
Machine Ethics Research Group
"We are working on advancing Ethical Theory by making ethics precise enough to be programmed. We are, also, working on the problem of developing a decision procedure for determining the correct action in a multiple duty ethical theory such as W.D. Ross' Theory of Prima Facie Duties. Since we believe that such a decision procedure will come from abstracting from intuitions about particular cases, we are developing a database of ethical dilemmas and analyzing them according to Ross' theory."
Living with Robots and Interactive Companions (FP 7)
Preparatory Studies and Ethics for Companion Design (LIREC Deliverable D10.1)
The research questions in WP10 are:
- How can we understand how to design for long-term human companions?
- How can methods and techniques from interaction design inform human companion design?
- What are the ethics involved in designing companions?
9. Fourth Workshop on Roboethics: "Better Robots, Better Life"
Shanghai, China, 9-13 May, 2011
Robot Ethics (Decision procedures/algorithms for moral behavior)
Technical Dependability (Availability; Reliability; Safety; Security)
Military application of robotics (Acceptability, Advantages and Risks, Codes)
Health (Robotics in surgery; Robotics in health care, assistance, prosthetics and therapy)
Service (Social robotics, Personal assistants, Companions)
Economy (Replacing humans in the workplace; Robotics and the job market)
Psychology (Position of humans in the control hierarchy; Robots and children)
Law (Robots and liability; Deployment of autonomously acting robots)
Environment (Sustainable exploitation of resources; Cleaning nuclear and toxic waste)
10. Ethics & IT: Special Issue: Robots and Ethics
Vol. 14, Issue 1, March 2012
The reality of friendship within immersive virtual worlds
Nicholas John Munn
Resolving the gamer’s dilemma
An ethical framework in information systems decision making using normative theories of business ethics
Granny and the robots: ethical issues in robot care for the elderly
Amanda Sharkey, Noel Sharkey
Robots and reality: a reply to Robert Sparrow
Can we trust robots?
Robots: ethical by design
Gordana Dodig-Crnkovic, Baran Çürüklü
11. Mathias Gutmann, Michael Decker, Julia Knifka (Eds.):
Evolutionary Robotics, Organic Computing and Adaptive Ambience.
Epistemological and Ethical Implications of Technomorphic Descriptions of Technologies.
Vienna: LIT, 2015
Life and Other Functions
Klaus Mainzer: Life as Machine? From Life Science to Cyberphysical Systems.
Herman T. Tavani & Jeff Buechner: Autonomy and Trust in the Context of Artificial Agents.
Mathias Gutmann & Julia Knifka: Biomorphic and Technomorphic Metaphors.
Agency and Its Implications
Rafael Capurro: Toward a Comparative Theory of Agents.
Klaus Wiegerling: Artificial Bodies and Embodiment of Autonomous Systems.
Karsten Weber: Is there Anybody out there? On Our Disposition and the (Pretended) Inevitableness to Anthropomorphize Machines.
Ethics and Applications
Florian Nafz, Hella Seebach, Jan-Philipp Steghöfer & Wolfgang Reif: Controlling Software-Induced Self-Organizing Behavior.
Jorge Solis & Atsuo Takanishi: Human-Friendly Robots for Entertainment Purposes and Their Possible Implications.
António B. Moniz: Robots and Humans as Co-Workers? The Human-Centred Perspective of Work with Autonomous Systems.
Michael Decker: Technology Is Getting Closer. Preliminary Technology Assessment of Adaptive Systems.
Bernd Carsten Stahl & Job Timmermans: Ethical Aspects of Autonomous Systems: Foresight and Governance.
12. Raising Robotic Natives
Artefacts for generations growing up with robots.
"Why do future visions of robotics incite discomfort in our generation? Could robots truly render us obsolete or is it our fear of losing control? And are these fears conditioned or instinctive?
Raising Robotic Natives explores interactions between children and robots that could raise them as the first generation of robotic natives.
Just like digital natives grow up in the digital world, robotic natives are born into an environment that is adapting to robots. As a result of unbiased, childlike enthusiasm, they are socialized with the technology early on. Through constant robotic interactions and formalized education, robotic natives get to think differently about robots than we do. It will be their responsibility to shape the future of robotics, not ours—besides we’re robotic immigrants, after all.
While the media portrays robots as two-legged sentient humanoids, the reality is that robotics is just not there yet. This project is about the near future: about what we might encounter on the way to implement science fiction.
Apart from appliances like Roombas sweeping our floors, we imagine that post-industrial robots could soon find their way into our homes—in fact, they pop up on Kickstarter already. We’re seeing a similar development as with 3D printers in the last years: prices for industrial robots drop while performance increases. The availability of these technologies has invited a whole new ecosystem of people to hack, tinker, and dream up new applications.
We assume that this ecosystem of people — creatives, early adopters, etc — will be the innovators helping their children become robotic natives.
Raising Robotic Natives presents four objects that each stand for one influencing factor, a condition or step towards becoming a robotic native."
PT-AI.org. (Philosophy & Theory of Artificial Intelligence)
See: Vincent C. Müller: Robot Ethics, PhilPapers
See my: Living with Online Robots (2015)
There have been recent discussions about the social and ethical dimensions of robotics and robots in different societies and cultures. I take as an example the case of Japan.
1. Robots and Roboethics in Japan:
"Robots that look human tend to be a big hit with young children and the elderly," Hiroshi Kobayashi, Tokyo University of Science professor and Saya's developer, said yesterday. "Children even start crying when they are scolded."
"Simply turning our grandparents over to teams of robots abrogates our society's responsibility to each other, and encourages a loss of touch with reality for this already mentally and physically challenged population," said Noel Sharkey, robotics expert and professor at the University of Sheffield, who believes robots can serve as an educational aid in inspiring interest in science, but that they cannot replace humans.
Kobayashi says Saya is just meant to help people and warns against getting hopes up too high for its possibilities. "The robot has no intelligence. It has no ability to learn. It has no identity," he said. "It is just a tool." Source here.
TOKYO (Reuters) - Robots could fill the jobs of 3.5 million people in graying Japan by 2025, a thinktank says, helping to avert worker shortages as the country's population shrinks.
Japan faces a 16 percent slide in the size of its workforce by 2030 while the number of elderly will mushroom, the government estimates, raising worries about who will do the work in a country unused to, and unwilling to contemplate, large-scale immigration.
The thinktank, the Machine Industry Memorial Foundation, says robots could help fill the gaps, ranging from microsized capsules that detect lesions to high-tech vacuum cleaners.
"Rather than each robot replacing one person, the foundation said in a report that robots could make time for people to focus on more important things." (Source here.) What kind of "more important things"? This is a question for intercultural roboethics.
Japan could save 2.1 trillion yen ($21 billion) of elderly insurance payments in 2025 by using robots that monitor the health of older people, so they don't have to rely on human nursing care, the foundation said in its report.
What are the consequences of relying on robot nursing? This is a question for intercultural roboethics.
"Caregivers would save more than an hour a day if robots helped look after children, older people and did some housework, it added. Robotic duties could include reading books out loud or helping bathe the elderly.“
How will children and the elderly react to robots taking "care" of them? This is a question for intercultural roboethics.
"Seniors are pushing back their retirement until they are 65 years old, day care centers are being built so that more women can work during the day, and there is a move to increase the quota of foreign laborers. But none of these can beat the shrinking workforce," said Takao Kobayashi, who worked on the study.
"Robots are important because they could help in some ways to alleviate such shortage of the labor force."
How far will they alleviate this labor shortage? And with what consequences? This is a question for intercultural roboethics.
"Kobayashi said change was still needed for robots to make a big impact on the workforce."
"There's the expensive price tag, the functions of the robots still need to improve, and then there are the mindsets of people," he said.
"People need to have the will to use the robots."
The "mindsets of people" and their interplay with robots: this is a question for intercultural roboethics.
2. Further Contributions
Workshops and conferences at the University of Tsukuba:
- Cybernics, University of Tsukuba, September 30, 2009 (PowerPoint)
Friedrich-Ebert-Foundation and University of Tsukuba Joint Symposium: Robo-Ethics and "Mind-Body-Schema" of Human and Robot - Challenges for a Better Quality of Life, University of Tsukuba (Japan), Keynote: Robo-Ethics, January 23, 2015. See my: Living with Online Robots (2015)
See: Rafael Capurro - Makoto Nakada: An Intercultural Dialogue on Roboethics (2013)
See also the contributions to the Asia-Pacific Computing and Philosophy conference (AP-CAP 2009)
Keynote: Hiroshi Ishiguro: Developing androids and understanding humans
- Carl Shulman, Nick Tarleton, and Henrik Jonsson: Which Consequentialism? Machine Ethics and Moral Divergence
- Kimura Takeshi: Introducing Roboethics to Japanese Society: A Proposal
- Soraj Hongladarom: An Ethical Theory for Autonomous and Conscious Robots
- Carl Shulman, Henrik Jonsson, and Nick Tarleton: Machine Ethics and Superintelligence
- Keith Miller, Frances Grodzinsky, Marty Wolf: Why Turing Shouldn't Have to Guess
- Gene Rohrbaugh: On the Design of Moral and Amoral Agents
The relation between humans and robots can be understood as a relation between rationality and freedom or between the digital and the existential casting of Being. In a recent article the Australian philosopher Michael Eldred writes:
"For example, a computer-controlled robot on a production line can bring the robot's arm into a precisely precalculated position, which is always a rational number or an n-tuple thereof. The robot's arm, however, will always be in a real, physical position, no matter how accurate the rational position calculated by the computer is. There is therefore always an /indeterminacy/ in the computer-calculated position, a certain /quivering/ between a rational position and an infinity of irrational, but real positions. An irrational, real position can never be calculated by a computer, but only approximated, only approached. This signals the /ontological/ limit to the calculability of physical reality for mathematical science. It is not an experimental result, but is obtained from phenomenological, ontological considerations. We must conclude: /physical reality is irrational/. "
"Hence the state of any real physical being is always an indeterminate quivering around a rationally calculable state. Physical reality, even on a banal macroscopic level, therefore always exceeds what can be logically, mathematically, rationally calculated. This holds true all the more for those physical beings — ourselves — whose essential hallmark is spontaneous, /free/ movement.
Let me end therefore with a quote from Goethe: "Es waren verständige, geistreiche, lebhafte Menschen, die wohl einsahen, daß die Summe unserer Existenz, durch Vernunft dividiert, niemals rein aufgehe, sondern daß immer ein wunderlicher Bruch übrig bleibe." ("They were rational, clever, lively people who saw very well that the sum of our existence, divided by reason, never goes evenly, but always leaves the remainder of a queer fraction.") (Wilhelm Meisters Lehrjahre, 4. Buch, 18. Kap.)" (Eldred 2010)
There is not only a tension but an abyss, a "queer fraction" ("ein wunderlicher Bruch"), between the human mode of existence and the mode of being of robots. Humans die; robots break down or go kaput. This insight is important not only for philosophers but also for roboticists, in order to avoid wasting time trying, for instance, to build a robot "like" a human, or developing a theory in which robots are to be considered moral beings.
This analysis of the ontological difference between the modes of being of robots and humans presupposes what I call, in accordance with Eldred, digital ontology (Capurro 2005). The consequence of this analysis is that robotics should be founded in the difference and not in the similarity between the modes of being of humans and robots. This analysis takes a critical stance against some perennial myths regarding the idea of robots becoming "like" humans.
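Eldred's claim that a computer-calculated position is always a rational number approximating a real, irrational position can be illustrated with a toy snippet (purely illustrative; the example and its numbers are mine, not part of Eldred's text):

```python
from fractions import Fraction
import math

# Any machine representation of the "position" sqrt(2) is a rational number.
target = math.sqrt(2)                      # the float is itself a rational (binary) approximation
approx = Fraction(target).limit_denominator(10**6)

print(approx)                              # some rational n/m near sqrt(2)
print(approx * approx == 2)                # prints False: no rational ever equals the irrational
```

However fine the denominator limit is made, the printed comparison stays False: the calculated rational only ever approaches, and never reaches, the real position.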
Cosima Wagner: Robotopia Nipponica - Recherchen zur Akzeptanz von Robotern in Japan. Marburg: Tectum Verlag 2013.
"[...] on the one hand, as a Japanese Studies research topic, "social robots" illustrate the "negotiation character of the creation and use of technological artefacts" (Hörning), which for example includes the rejection of military applications of robot technology in Japan. On the other hand, as a cultural topos, they mirror dreams, desires and needs of human beings at a certain time and therefore have to be interpreted as political objects as well.
As a source for a Japanese history of objects "social" robots exemplify the cultural meaning of robots, the expectations of the Japanese state and economy, the mentality of Japanese engineers and scientists and last but not least the socio-cultural change, which the ageing Japanese society is about to face."
Fahrer entlasten, nicht ersetzen ("Relieve drivers, don't replace them"). Interview for the magazine Flotte.de, 1/2017, 76-77 (pdf)
The Quest for Roboethics: an Interview with Yueh-Hsuan Weng:
- Robohub, Feb. 14, 2017
- Tech & Law Center, Feb. 3, 2017
Verband der Automobilindustrie (VDA): dialogue series "Mobilität von morgen" (Mobility of Tomorrow). Conversation with the CEO of Continental AG, Dr. Elmar Degenhart. Moderated by Ines Arland, Berlin, November 29, 2016. See also: Continental AG. English translation: 2025 AD, The Year of Automatic Driving
Congress: auto motor und sport i-Mobility: Mobility of the Future. Philosophical interjection: "Erst tun, dann denken: Das geht nicht" ("Acting first, thinking later: that won't do"), Rafael Capurro and Ralph Alex (editor-in-chief), Stuttgart, April 9, 2015. See my: Living with Online Robots (2015)
IROS: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg 2015:
Björn Giesler: Video: Robots, Politics, and Ethics: How Autonomous Driving Transforms our Way of Thinking About Machines.
George A. Bekey, Univ. of Southern California: Robot Ethics: Video: Robot Ethics in the Era of Self Driving Automobiles.
1) Patrick Lin: Here is a terrible idea: robot cars with adjustable ethics settings. In: Wired, 8.18.2014.
"Did your robot car make the right decision? This scene, of course, is based on the infamous "trolley problem" that many folks are now talking about in AI ethics. It's a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.
The thought-experiment is a moral dilemma, because there's no clearly right way to go. It's generally better to harm fewer people than more, to have one person die instead of five. But the car manufacturer creates liability for itself in following that rule, sensible as it may be. Swerving the car directly results in that one person's death: this is an act of killing. Had it done nothing, the five people would have died, but the car would not have killed them; it would merely have let them die.
The point is this: Even with an ethics setting adjusted by you, an accident victim hit by your robot car could potentially sue the car manufacturer for (1) creating an algorithm that makes her a target and (2) allowing you the option of running that algorithm when someone like her—someone on the losing end of the algorithm—would predictably be a victim under a certain set of circumstances.
Punting Responsibility to Customers
Even if an ethics setting lets the company off the hook, guess what? We, the users, may then be solely responsible for injury or death in an unavoidable accident. At best, an ethics setting merely punts responsibility from manufacturer to customer, but it still doesn’t make progress toward that responsibility. The customer would still need to undergo soul-searching and philosophical studies to think carefully about which ethical code he or she can live with, and all that it implies.
And it implies a lot. In an important sense, any injury that results from our ethics setting may be premeditated if it’s foreseen. By valuing our lives over others, we know that others would be targeted first in a no-win scenario where someone will be struck. We mean for that to happen. This premeditation is the difference between manslaughter and murder, a much more serious offense.
In a non-automated car today, though, we could be excused for making an unfortunate knee-jerk reaction to save ourselves instead of a child or even a crowd of people. Without much time to think about it, we can only make snap decisions, if they’re even true decisions at all, as opposed to merely involuntary reflexes.
Deus in Machina
So, an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to legal protections for vaccine makers, since immunizations are essential for a healthy society, too. Or if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job—but industry should want to try first.
With robot cars, we’re trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice’s wonderland, we don’t know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random—when we put God in the machine—we create new responsibility for ourselves to get it right."
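The "adjustable ethics setting" Lin criticizes can be pictured, as a purely illustrative toy (no real vehicle works this way, and all names and numbers below are invented for this sketch), as a single user-adjustable weight in a crash-mitigation chooser:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver in an unavoidable-crash scenario."""
    name: str
    occupant_harm: float   # expected harm to the car's occupants (0..1)
    external_harm: float   # expected harm to other road users (0..1)

def choose_maneuver(outcomes, self_preference: float):
    """Pick the outcome minimizing a weighted harm score.

    self_preference in [0, 1]: 0 weights only harm to others,
    1 weights only harm to the occupants. This single dial is the
    "ethics setting" of the thought experiment.
    """
    def score(o: Outcome) -> float:
        return self_preference * o.occupant_harm + (1 - self_preference) * o.external_harm
    return min(outcomes, key=score)

options = [
    Outcome("swerve into barrier", occupant_harm=0.6, external_harm=0.0),
    Outcome("brake straight ahead", occupant_harm=0.1, external_harm=0.8),
]

# A self-protective setting targets others; an altruistic one targets the occupants.
print(choose_maneuver(options, self_preference=0.9).name)  # brake straight ahead
print(choose_maneuver(options, self_preference=0.1).name)  # swerve into barrier
```

The sketch makes Lin's point concrete: whoever sets self_preference has foreseen, and in that sense premeditated, who ends up on the losing end of the algorithm.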
2) Noah J. Goodall: Vehicle Automation and the Duty to Act. In: 21st World Congress on Intelligent Transport Systems, Detroit, Sept. 2014
"The act of driving always carries some level of risk. With the introduction of vehicle automation, it is probable that computer-driven vehicles will assess this changing level of risk while driving, and make decisions as to the allowable risk for itself and other road users. In certain situations, an automated vehicle may be forced to select whether to expose itself and its passengers to a small risk in order to protect other road users from an equal or greater amount of cumulative risk. In legal literature, this is known as the duty to act. The moral and legal responsibilities of an automated vehicle to act on the behalf of other road users are explored.
Understandably, the owner of an automated vehicle may have a strong incentive to maximize the safety of his own vehicle and its occupants over the safety of other roadway users. Some may go further, and reject any unnecessary risk to their vehicle. Even if laws are introduced specifying under what conditions an automated vehicle has a duty to rescue, an automaker could design the moral component of the software to be more cautious and risk-averse. Automakers might be unable to advertise or demonstrate their ability to self-protect, but vehicle models may develop reputations for being more risk-averse than others. Due to the complexity of the underlying software, and the difficulty to trace back the decision process behind a vehicle's actions, it may be difficult to ever prove that a vehicle is intentionally avoiding risk. Testing may be needed to ensure compliance, performed either by government or through industry self-regulation.
The automation of road vehicles introduces several new problems, including the need for some type of moral reasoning, either by engineers when developing crash avoidance strategies, or encoded directly in the vehicle's own path planning algorithms. A particularly difficult moral problem is determining when an automated vehicle must subject itself (and its passengers) to a small risk in order to greatly reduce the risk of others. We have shown that an automated vehicle programmed to foremost protect the safety of its own passengers can produce morally unacceptable results. Common law does not require intervention in an emergency in most cases, even if there is no risk for the potential rescuer. The ethics literature provides a language for discussing these problems, and several possible solutions. Should society decide that advanced automated vehicles should occasionally subject their occupants to small levels of avoidable risk in order to protect other users, regulation is needed to ensure that industry does not hide excessive self-protection tendencies within complex software. This article has defined this problem and discussed initial directions."
3) Jason Millar: You should have a say in your robot's car code of ethics. In: Wired, 9.2.2014.
Recently, writing in WIRED, ethicist Patrick Lin argued that building a programmable ethics button into future autonomous cars is not the right approach to dealing with the moral nuance of this new technology. But isn't a car that ignores your moral choices worse? There is a middle path, and we need only look to modern healthcare to find it.
A Solution Already Exists
In healthcare, when moral choices must be made it is standard practice for nurses and physicians to inform patients of their reasonable treatment options, and let patients make informed decisions that align with personal preferences. This process of informed consent is based on the idea that individuals have the right to make decisions about their own bodies. Informed consent is ethically and legally entrenched in healthcare, such that failing to obtain informed consent exposes a healthcare professional to claims of professional negligence.
You could also argue that informed consent merely punts responsibility to the user. Critics argue that informed consent unfairly burdens individuals with difficult, often troubling, choices that they are ill prepared to make.
Yet, despite the challenges and complexity introduced by informed consent, it is hard to imagine that people would accept a return to a healthcare system where doctors and nurses could make difficult moral decisions about their treatment without first seeking their consent.
Why, then, would we accept designers and engineers making deeply moral decisions on our behalf, the kind represented by the tunnel problem, without first obtaining our explicit consent? One solution to this ethical problem is to adopt the same approach in engineering that has been tried and tested in healthcare: a robust standard of informed consent. Of course, one way to accomplish this in practice (there are likely others) is to build reasonable ethics settings into robot cars.
Is It Time to Rethink Robot Liability?
It’s a safe bet that lawyers will continue to sue people no matter what design approach roboticists adopt. It is also entirely possible that if we stick to a traditional model of product liability, the introduction of ethics settings could expose users and manufacturers to complicated new kinds of liability suits, just as informed consent requirements have in healthcare.
However, there is a growing belief that autonomous cars and other robots require a significant legal rethinking if we are to regulate them appropriately.
We Must Embrace Complexity
If we embrace robust informed consent practices in engineering the sky will not fall. There are some obvious limits to the kinds of ethics settings we should allow in our robot cars. It would be absurd to design a car that allows users to choose to continue straight only when a woman is blocking the road. At the same time, it seems perfectly reasonable to allow a person to sacrifice himself to save a child if doing so aligns with his moral convictions. We can identify limits, even if the task is complex.
Robots, and the ethical issues they raise, are immensely complex. But they require our thoughtful attention if we are to shift our thinking about the ethics of design and engineering, and respond to the burgeoning robotics industry appropriately. Part of this shift in thinking will require us to embrace moral and legal complexity where complexity is required. Unfortunately, bringing order to the chaos does not always result in a simpler world."
4) Alexander Mankowsky: "Harmonie ist gefährlich" ("Harmony is dangerous"). An interview with the futurologist Alexander Mankowsky. In: dasfilter.com, 9.7.2014.
"What role do cars play in this context?
The question is: what will a car be tomorrow? One could say it is a sphere that is mobile. Especially if you look at the growing density of city centers, a mobile private sphere will have enormous value. It will not solve every problem, but it will be something people want. For me this goes hand in hand with the further development of the car.
But we already have "connected cars" today, with internet access and a black box that records journeys. Google and Apple are pushing their software onto the dashboard. How private can such a car be at all?
That will have to be considered, and solutions will have to be worked on. Private here means, first of all, that we are dealing with a physical shell. But the private sphere itself will also change. Concepts such as fairness play a role here. Do I consider the service I receive fair if I hand over my data in return? Naively, I imagine a kind of fairness slider that can be adjusted at any time. People will want something like that, especially after the debate about big data and surveillance. People want to regain control over the information they pass on to companies. Everyone has their own idea of what that means, and one must be able to respect it."
5) Wolfgang Bernhard (member of the Daimler board) in conversation with Georg Meck: "Die Maschine fährt sicherer als der Mensch" ("The machine drives more safely than the human"). In: Frankfurter Allgemeine Sonntagszeitung, July 26, 2015, No. 30, 24-25.
"Your CEO Dieter Zetsche has called for an ethical debate on driverless driving: for whom should the computer brake in case of doubt? For the child on the bicycle or for the grandmother on the sidewalk? Whose life is worth more?
For such situations the software needs a great deal of computing time. Even if that succeeds, valuable time is lost for evasive maneuvers, and that is what counts. Our approach is to build in defensive mechanisms so that nothing happens to anyone in the first place. Besides, we should not forget one thing: the computer increases safety considerably; 97 percent of accidents are due to human factors. Overall, the machine drives more safely than the human.
But it does not replace the human. So why should a haulier invest in a driverless truck if he cannot do without the driver anyway?
The haulier knows that the system lets him improve his fuel consumption. In addition, as I said, he increases the safety of his drivers and his vehicles. Nor does the machine get tired. We have measured it: with a semi-autonomous system, drivers tire 25 percent more slowly.
Which means: they can drive longer, so that the whole thing pays off?
In America that is an incentive; there the hauliers saw it at once: if the drivers stay fit longer, they can also drive longer. And because drivers' pay depends on the kilometers driven, they too would gladly drive longer, since they earn more money that way. For one hour of autonomous driving, perhaps five minutes of additional permitted driving time would be possible. In Europe this is viewed rather critically. I can imagine a discussion about such rules.
Our truck delivers data on temperature, humidity, road conditions, weather. Through the headlights it registers whether it is dark or light. All these data are available. A truck is a data gold mine.
Granted, that is a great deal of data, but how does it turn into gold? Who benefits from this mass of information? Who will put money on the table for it?
First the data have to go into the cloud, we need the connection to the internet, and then people who write apps that generate value from these data. We are only beginning to understand how far this can lead. How much value there is in it when 100,000 trucks drive through Germany, permanently connected to the network, generating data: for us as a manufacturer, that is an eye-opener."
See: Georg Meck: Daimler schickt selbstfahrende Trucks auf die Autobahn ("Daimler sends self-driving trucks onto the autobahn"), FAZ, July 25, 2015.
6) "Eine Maschine vor Gericht? Vielleicht eine gute Idee" ("A machine in court? Perhaps a good idea"). In: Süddeutsche Zeitung, April 22, 2014.
"Confidently, the carmakers have therefore already laid down a kind of roadmap for disempowering human drivers:
Level 0: Manual driving
Level 1: Assisted driving, with distance sensors, parking aids, lane-keeping systems.
Level 2: Partially automated driving: in certain situations, for instance on the highway, the driver no longer has to steer, but remains attentive.
Level 3: Highly automated driving: the vehicle finds its way on its own, but the human still sits behind the wheel.
Level 4: Fully automated driving: for certain applications, such as parking and maneuvering in a parking garage, a driver is no longer necessary.
Level 5: Driverless driving: in this "robot taxi" no human sits at the wheel at all.
At the moment the German carmakers are working on fusing all existing driver-assistance systems. "For the next five years we will be moving between levels two and three," says Etemad."
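The six-level roadmap quoted above (which parallels the SAE levels of driving automation) can be sketched as a small data structure; the class and function names below are illustrative inventions for this sketch, not drawn from any carmaker's software:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Levels of driving automation as listed in the roadmap above."""
    MANUAL = 0       # human performs all driving tasks
    ASSISTED = 1     # distance sensors, parking aids, lane keeping
    PARTIAL = 2      # system steers in limited situations; driver stays attentive
    HIGH = 3         # vehicle finds its way alone; human remains behind the wheel
    FULL = 4         # no driver needed for specific uses (e.g. garage parking)
    DRIVERLESS = 5   # "robot taxi": no human at the wheel at all

def driver_required(level: AutomationLevel) -> bool:
    """Up to and including level 3, a human must sit behind the wheel."""
    return level <= AutomationLevel.HIGH

# Current series cars, per the quote, sit between levels 2 and 3:
assert driver_required(AutomationLevel.PARTIAL)
assert not driver_required(AutomationLevel.DRIVERLESS)
```

The ordering of an IntEnum makes the "between level two and three" comparison in the quote directly expressible in code.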
7) Björn Giesler: Robots, Politics, and Ethics: How Autonomous Driving Transforms Our Way of Thinking About Machines. Video at IROS: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg 2015. (See also: Video at the IAA Frankfurt 2015.)
Isaac Asimov: Visit to the World's Fair of 2014. In: The New York Times, August 16, 1964.
"Gadgetry will continue to relieve mankind of tedious jobs. Kitchen units will be devised that will prepare "automeals," heating water and converting it to coffee; toasting bread; frying, poaching or scrambling eggs, grilling bacon, and so on. Breakfasts will be "ordered" the night before to be ready by a specified hour the next morning. Complete lunches and dinners, with the food semiprepared, will be stored in the freezer until ready for processing. I suspect, though, that even in 2014 it will still be advisable to have a small corner in the kitchen unit where the more individual meals can be prepared by hand, especially when company is coming.
Robots will neither be common nor very good in 2014, but they will be in existence. The I.B.M. exhibit at the present fair has no robots but it is dedicated to computers, which are shown in all their amazing complexity, notably in the task of translating Russian into English. If machines are that smart today, what may not be in the works 50 years hence? It will be such computers, much miniaturized, that will serve as the "brains" of robots. In fact, the I.B.M. building at the 2014 World's Fair may have, as one of its prime exhibits, a robot housemaid: large, clumsy, slow-moving but capable of general picking-up, arranging, cleaning and manipulation of various appliances. It will undoubtedly amuse the fairgoers to scatter debris over the floor in order to see the robot lumberingly remove it and classify it into "throw away" and "set aside." (Robots for gardening work will also have made their appearance.)
General Electric at the 2014 World's Fair will be showing 3-D movies of its "Robot of the Future," neat and streamlined, its cleaning appliances built in and performing all tasks briskly. (There will be a three-hour wait in line to see the film, for some things never change.)
Much effort will be put into the designing of vehicles with "Robot-brains": vehicles that can be set for particular destinations and that will then proceed there without interference by the slow reflexes of a human driver. I suspect one of the major attractions of the 2014 fair will be rides on small roboticized cars which will maneuver in crowds at the two-foot level, neatly and automatically avoiding each other.
Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.
Indeed, the most somber speculation I can make about A.D. 2014 is that in a society of enforced leisure, the most glorious single word in the vocabulary will have become work!"
- Matt Novak: Asimov's 2014 Predictions were shockingly conservative for 1964. In: Paleofuture 8.19.2013.
"Fully Automated Driverless "Robot" Cars"
Visions of driverless cars were nothing new in 1964. In fact, they're at least as old as the 1939 New York World's Fair, when the superhighways of tomorrow in GM's Futurama exhibit were shown to have fully automated capabilities. The 1957 print ad above showed how the family of tomorrow would soon enjoy a relaxing board game, rather than have to keep their eyes on the road. The driverless cars of tomorrow (with their "robot brains") were a certainty. Car companies and popular futurists of the 1950s were banking on it."
- Kim Gittleson: World's Fair: Isaac Asimov's predictions 50 years on. In: BBC News, 22 April 2014.
""Robot-brain" surely has a better ring than "self-driving car". Asimov's other transport predictions - while just as catchy - still remain the stuff of dreams. The aquafoils, which "skimmed over the water with a minimum of friction" and impressed World's Fair visitors in 1964, haven't caught on. Neither have their successors - jet packs and hovercraft.
Asimov predicted more - and got more right, or semi-right - than is possible to list here. His fears about population growth and birth control could be the stuff of an entirely separate article. But perhaps his most prescient observation, or warning, was that while technology, both then and now, has the power to transform lives, without efforts towards equal access, it can hurt, rather than help, the goal of "peace through understanding"."
- Erik van Rheenen: 12 Predictions Isaac Asimov Made About 2014 in 1964. In: mental_floss, Jan 2, 2014.
"5. Cars would fly — sort of
Roads and bridges would be rendered all but obsolete: "Jets of compressed air will also lift land vehicles off the highways, which, among other things, will minimize paving problems...cars will be capable of crossing water on their jets, though local ordinances will discourage the practice."
6. There would be robots
But they'd lack in quantity and quality: "Robots will be neither common nor very good in 2014, but they will be in existence." Asimov predicted one Jetsons-ish advancement in robotics with his idea for a General Electric "robot housemaid...large, clumsy, slow-moving but capable of general picking-up, arranging, cleaning, and manipulation of various appliances." Another of Asimov's predictions picked up on by The Jetsons was...
7. Moving sidewalks, raised above traffic
Which Asimov determined would only be functional for "short-range travel." The writer also envisioned that "compressed air tubes will carry goods and materials over local stretches, and the switching devices that will place specific shipments in specific destinations will be one of the city's marvels."
- Rebecca Rosen: In 1964, Isaac Asimov Imagined the World in 2014. In: The Atlantic, Dec 31, 2014.
"His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and "Algae Bars" that taste like turkey and steak ("but," he adds, "there will be considerable psychological resistance to such an innovation").
He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now."
Wikipedia: Autonomous Cars
"An autonomous car, also known as a driverless car, self-driving car and robotic car, is an automated or autonomous vehicle capable of fulfilling the main transportation capabilities of a traditional car. As an autonomous vehicle, it is capable of sensing its environment and navigating without human input. Robotic cars exist mainly as prototypes and demonstration systems. As of 2014, the only self-driving vehicles that are commercially available are open-air shuttles for pedestrian zones that operate at 12.5 miles per hour (20.1 km/h).
Autonomous vehicles sense their surroundings with such techniques as radar, lidar, GPS, and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. By definition, autonomous vehicles are capable of updating their maps based on sensory input, allowing the vehicles to keep track of their position even when conditions change or when they enter uncharted environments.
Some demonstrative systems, precursory to autonomous cars, date back to the 1920s and 30s. The first self-sufficient (and therefore, truly autonomous) cars appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects in 1984 and Mercedes-Benz and Bundeswehr University Munich's EUREKA Prometheus Project in 1987. Since then, numerous major companies and research organizations have developed working prototype autonomous vehicles."
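The sense-interpret-navigate pipeline described in the quotation above can be made concrete with a minimal sketch. This is a deliberately toy illustration, not any real vehicle's control stack: the sensor fields, the assumed braking deceleration of 4 m/s², and the action names are all invented for the example.

```python
from dataclasses import dataclass

# Toy illustration of the sense -> interpret -> navigate pipeline.
# All field names and thresholds are invented for this sketch.

@dataclass
class SensorFrame:
    radar_range_m: float   # distance to nearest obstacle ahead (radar)
    lidar_clear: bool      # lidar reports the planned path free
    gps_position: tuple    # (latitude, longitude)

def plan_action(frame: SensorFrame, speed_mps: float) -> str:
    """Fuse sensor readings into a driving action."""
    # Interpret: is anything obstructing the path within our stopping distance?
    stopping_distance = speed_mps ** 2 / (2 * 4.0)  # assumes ~4 m/s^2 braking
    if not frame.lidar_clear or frame.radar_range_m < stopping_distance:
        return "brake"
    # Navigate: otherwise continue along the mapped route.
    return "follow_route"

# At 14 m/s the stopping distance is ~24.5 m, so an obstacle at 8 m triggers braking.
print(plan_action(SensorFrame(8.0, True, (51.0, 6.9)), speed_mps=14.0))  # -> brake
```

The point of the sketch is only the division of labor the quotation names: sensors produce a frame, a control routine interprets it against the vehicle's state, and a navigation decision follows.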
See also: Wikipedia: History of Autonomous Cars
"Open Roboethics initiative (ORi) is a roboethics think tank that aims to foster active discussions of ethical, legal, and societal issues of robotics (roboethics). Headquartered in Vancouver, Canada, ORi is an interdisciplinary, international group of people passionate about roboethics in general.
Just like how the web has allowed us to produce and maintain contents of Wikipedia and software designs of Linux by means of mass collaboration, ORi is experimenting with the idea that roboethics discussions and robot designs can benefit from the power of mass collaboration as well. By creating a web space where policy makers, engineers/designers, and users and other stakeholders of the technology can freely share and access roboethics related contents, we hope to accelerate roboethics discussions and inform robot designs.
We first presented our idea at the We Robot 2012 conference:
Don’t want to read the whole paper? Check out our presentation from the conference instead: http://tinyurl.com/openroboethics."
International Review of Information Ethics
Vol. 20 - December 2013
edited by Jürgen Altmann, Francesca Vidal
"Cyber warfare - when we planned this issue already some time ago we thought of being once again on the leading edge of reflecting the implications of ICTs on global society and our modern life. And once again we have been surpassed by reality.
At first, if we look at the various physical war zones of today we can see more and more cyber weapons in place and in heavy use as well. Nearly every warring party accuses the other of using means of hacking to conduct sabotage or espionage in the course of the physical acts of war. And yes, you can bomb the power plant of your opponent or 'stuxnet' it – and of course, just as the missile can be misguided, the virus could also infect the IT infrastructure of a hospital instead. No, a cyber war is not a clean war by definition. But then, what is the difference between killing a combatant with a gun or by a click?
Yet, much more attention has been drawn to the debate of cyber warfare where there is no physical war taking place at all.
China and the US, e.g., are not at war with each other (at least in the classical sense of having diplomatically declared it to be so or having crossed each other's borders with armed forces wearing uniforms). But in the cyber sphere they do cross their virtual borders all the time and they do attack each other. Let us not be naïve: it is not that they merely suspect or blame each other of doing so (which they extensively do) – as a matter of fact they are, if not yet at war, at least testing their capabilities and continuously increasing them. Even if the scale is still more comparable to shooting bullets across the border than to deploying heavy artillery, yes, we have entered this new dimension of the digital sphere now also in the area of warfare. And given the rising budgets spent every year to improve the effectiveness as well as the camouflage of the respective techniques, one can easily foresee their growing importance and also assume their probable social dominance one day.
And that leads to what finally makes the debate red-hot at the very moment: the threats of cyber war, or even cyber armament, for civil society also in times and zones of alleged peace. In the name of defending against terrorism and counter-espionage and being prepared for possible physical and cyber attacks, the super powers have launched an unprecedented ICT infrastructure of mass surveillance and control and do not hesitate to use it also against friendly nations, as the NSA scandal made publicly clear. Our privacy is under attack by military forces at this very moment. One could ask if this happens for a greater good. But that only confirms that it happens.
So if cyber war has become a reality, even if on a scale so small that one would not yet call it a war, and if the means of cyber warfare do not stop short of affecting civil society, what is more demanded than an ethical reflection on these developments? For the very interesting yet not calming answers, please see for yourself in this issue - small in size but rich in content.
Ethics of cyber warfare
by Jürgen Altmann, Francesca Vidal
Cyber War: Will it define the Limits to IT Security?
by Ingo Ruhmann
abstract: Cyber warfare exploits the weaknesses in safety and security of IT systems and infrastructures for political and military purposes. Today, not only have various units in the military and secret services become known to engage in attacks on adversary’s IT systems, but even a number of cyber attacks conducted by these units have been identified. Most cyber warfare doctrines aim at a very broad range of potential adversaries, including civilians and allies, thus justifying the involvement of cyber warfare units in various IT security scenarios of non-military origin. Equating IT security with cyber warfare has serious consequences for the civil information society.
Google Glass: On the implications of an advanced military command and control system for civil society
by Ute Bernhardt
abstract: In the early 1990s, the U.S. Army presented the first experimental units of a future soldier's equipment, featuring a soldier with a networked video camera, various sensors, and a connection to the worldwide military command and control network. In June 2012, Google unveiled its prototype Google Glass, a device capable of video and audio capturing with additional augmented reality functions.
In this article, a comparison between these military and civilian augmented reality systems and typical application settings is used to ask about the implications of this kind of technology for civil society. The focus is especially on the consequences for civil safety once the full range of cooperation capabilities available with Google Glass-like devices is employed by organized groups of criminals or terrorists. In conclusion, it is argued that we must assess the implications of this technology and prepare for a new degree of coordination in the activities of groups in the civilian space.
Uma análise sobre a política de informação para a defesa militar do Brasil: algumas implicações éticas
by Bruno M. Nathansohn
abstract: Some ethical implications: This article presents the development of the information policy for the military defense of Brazil, taking into consideration information actions implemented over the course of Brazilian history and in the context of the regions where the country exerts geostrategic influence. The hypothesis is that the Brazilian state faces a dilemma between cooperative international relations, based on a multilateral perspective, and the threats to its critical information infrastructure. Moreover, the cyber infrastructure is technically fragile because of the lack of an appropriate information policy, one which could contribute to the position of Brazil in the international system of power in accordance with its potentialities. These questions imply ethical dilemmas about the threshold between cooperative interchange, on the one hand, and the preservation of sovereignty, on the other, related to what should, or should not, be shared in cyberspace.
Der Moment des Triumphs. E-Mail-Dialog über ein Bild. In: Hans-Arthur Marsiske (Hrsg.): Kriegsmaschinen - Roboter im Militäreinsatz. Hannover: Heise Verlag 2012, 11-30.
Creating a secure cyberspace – Securitization in Internet governance discourses and dispositives in Germany and Russia
by David Gorr, Wolf J. Schünemann
abstract: This article deals with the phenomenon of securitization in the emerging policy field of Internet governance. In essence, it presents a combination of theoretical reflections preparing the grounds for a comparative analysis of respective discourses and so-called dispositives as well as preliminary findings from such a comparative project. In the following sections we firstly present some theoretical reflections on the structural conditions of Internet regulation in general and the role and relevance of securitization in particular. Secondly, we shed light on how securitization is constructed and how it might affect the build-up process of instruments of Internet regulation. How does securitization happen, how does it work in different societies/states? Which discursive elements can be identified in elites’ discourses? And which politico-legal dispositives do emanate from discourse? In a third section we illustrate our reflections with some preliminary findings from a comparison of cybersecurity discourses and dispositives in Germany and Russia.
Wer ist der Mensch? Überlegungen zu einer vergleichenden Theorie der Agenten. In: Hans-Arthur Marsiske (Hrsg.): Kriegsmaschinen - Roboter im Militäreinsatz. Hannover: Heise Verlag 2012, 231-238.
Anna Maria Kellner: Widerstand ist zwecklos. Wie das Militär den Angriff auf unseren freien Willen probt. In: IPG (Internationale Politik und Gesellschaft), 26.20.2015.
Gesellschaft für Informatik, Fachgruppe Informatik und Ethik: Workshop "Verkörperung von Algorithmen: Drohnen", Vortrag: Kriegsmaschinen. Von Körpern, Leibern und Digitalen Agenten (PP). Humboldt-Universität, Berlin, 15.-16. Oktober 2015.
International Conference on Cyberlaw, Cybercrime & Cyber Security, November 19, 2015, New Delhi, India.
Aimee van Wynsberghe, Assistant Professor of Philosophy of Technology at the Department of Philosophy, University of Twente, writes:
"In my thesis, entitled “Designing Robots with Care: Creating an ethical framework for the future design and implementation of care robots”, I addressed robots intended to be designed for nurses in their role as care givers. It is hoped that these robots will help with the increasing care demands placed by society on healthcare systems across the globe. Alongside the foreseen benefits there are a variety of ethical concerns related to this emerging technology. Such issues include: how the standard or quality of care might change when human nurses are no longer the sole care providers, or how this technology might displace care workers from their role as the stewards of care. I do not claim that care robots (robots in healthcare) should be made and used for any care purpose, but I also do not claim that care robots should never be made or used. Instead, my goal has been to explore the ethical limits within which these robots can be made and used. To do this I have created a novel framework for their design and implementation (Care Centered Value Sensitive Design) that relies on the care ethics tradition along with the Value-Sensitive Design approach. The hope is that by steering the design of this technology in a manner that incorporates care values into the technical content of the care robot, robot designers can avoid the majority of negative ethical concerns or risks." (Wynsberghe, Homepage)
Wynsberghe, Aimee van (2016). Healthcare Robots. Ethics, Design and Implementation. London and New York: Routledge.
First degree program in Healthcare Robotics in the USA:
The Healthcare Robotics Lab: http://robotics.gatech.edu/node/373
Yaskawa Motoman, Mitsui, Honda, and Kuka are partners of the Robotics Program at Georgia Tech:
See articles on robots, particularly in the operating room (some 1,400 in the US and 52 in Germany), in the Journal of the American Medical Association and in the British Medical Journal.
IX. ROBOT LAW
Beck, Susanne, Leibniz-Universität Hannover
European Parliament: Working Group on Robotics and Artificial Intelligence.
Hilgendorf, Eric: Forschungsstelle RobotRecht, Universität Würzburg
- Bertolini, Andrea: presentation at the Juri Committee of the European Parliament (2016) (video)
Delvaux-Stehres, Mady: “A European Perspective on Robot Law” TLC Forum for Robots & Society (2016)
The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications. International Summer School. Pisa (Italy), July 11-16, 2016 (Brochure)
RoboLaw: Regulating Emerging Technologies in Europe: Robotics Facing Law and Ethics.
Schriftenreihe Robotik und Recht, E. Hilgendorf, Susanne Beck (Hrsg.), Nomos Verlag.
- Susanne Beck (Hrsg.): Jenseits von Mensch und Maschine. Ethische und rechtliche Fragen zum Umgang mit Robotern, Künstlicher Intelligenz und Cyborgs, 2012.
X. Social Robotics
Research Network for Transdisciplinary Studies in Social Robotics
Univ. of Aarhus, Denmark
What Social Robots Can and Should Do?
Robophilosophy 2016 - TRANSFOR 2016
1. Why is it important to have a conference on the subject of ‘social robotics’ at this moment in time?
-We are at the onset of the "robot revolution"—a technological revolution with more transformative potential than any other so far. According to a recent study by CEVEA, during the next two decades Denmark may lose one third of its current jobs to automatization. This will have profound socio-political and socio-economic consequences. We all know about the importance of being employed—politicians continue to promise "new jobs"—but the current trend towards automatization may be steering a large part of the population into unemployment. However, the primary focus of our conference is not socio-economic change but the equally disruptive socio-cultural consequences on the horizon. Our socio-cultural values are realized in human social interactions - if we manufacture new patterns of social interactions by putting robots into our workplaces and homes, we are engineering cultural change in ways we have never done before. The problem is that this sort of "cultural engineering" is currently undertaken without involving experts on culture, namely researchers in the Humanities. The interdisciplinary field of Human-Robot Interaction Studies currently includes only a few researchers from the Humanities. Neither the public nor policy makers are aware of how urgent it is to include Humanities research in social robotics now.
2. You have succeeded in attracting international researchers with considerable professional expertise in the field – what can they contribute to the conference?
-As in our previous conference in 2014, we have been able to gain as plenary speakers 13 of the most visible international top researchers in the area, who will present the larger trajectories of the current debate, such as: Can and should robots take on social roles? Can and should robots reason ethically and exhibit 'good judgement'? Will the boundaries between humans and robots vanish, e.g., if we use sex robots and machine-enhanced mobility (prosthetics)? What should our ethical relations towards robots be, if any? What are our responsibilities now, at the onset of the "robot revolution"? But we also have 74 talks in sessions and workshops presenting research on many different aspects of social robotics, e.g., discussing whether robots can/should have emotions, and especially empathy, and whether they can/should become agents that act on norms like we do. Many talks focus on methodological questions, including reflections on the role of robot art, and on conceptual and ethical implications of social robotics. The conference will also place particular focus on children-robot interaction—among other talks on the subject, the Center for Children's Speculative Design, founded by Harvard and MIT researchers, will hold a workshop on "Co-Designing Children-Robot Interaction," accompanied by a (free) exhibition on "Children's Imagined Robots" featuring drawings by children from 3 continents.
3. Is the overall aim of your research and the conference to get one step closer to achieving responsible social robotics?
-The aim of our own research project, supported by a Semper Ardens grant of the Carlsberg Foundation, is to create a new paradigm for how to regulate social robotics and their application. By integrating robotic research with empirical and conceptual research anchored in the humanities and social sciences, our project aims to compile a methodological foundation that makes it possible to develop responsible social robotics. The aim of the conference is the same. In many ways, the spread of social robots marks a turning point in human history, which requires us to address the consequences of the technological development. The robotics market is growing very fast and concerned roboticists and computer scientists have begun to call for regulations and value-driven designs. But in order to realize such value-driven design and to arrive at responsible robotics applications, social robotics needs to integrate the expertise of researchers in philosophy, anthropology, linguistics, art, sociology, education and communication science into human-robot Interaction research as currently undertaken by robotics, psychology, and cognitive science. There are currently two global initiatives for ‘responsible robotics’, which both will hold workshops at the conference, and we will discuss how responsible robotics can be implemented in concrete detail.
4. You both come from an academic background in philosophy. Why is it important to integrate humanities research into this field?
-Research on human-robot interaction has shown that due to biological mechanisms people have a strong tendency to interpret their dealings with robots as social interactions. So by designing a robot in a certain way I will elicit this or that likely emotional and conceptual reaction in a human interaction partner. This amounts to an engineering of culture in ways which ethicists find potentially problematic, since it involves a form of manipulation. On the other hand, the specific cognitive effects that robots have on humans may be used in ways that are perfectly in agreement with, or even enhance, the socio-cultural values that we currently endorse, such as justice, self-realization, autonomy etc. It is the business of the Humanities to analyze and describe socio-cultural practices, norms, and values, and their dynamics. To build social robotics, i.e., to engage in cultural engineering, without involving the expertise of the Humanities is not only irresponsible but also imprudent—it may lead to market products that a country's ethical council or the public at large will reject. Vice versa, given the rapid development of social robotics, the Humanities can and should show that they have an indispensable role to play in society. It is an irony - the tragic kind - that the Danish government boosts research and education in engineering and reduces Humanities research and education at a time when they are most needed in engineering.
5. If you were to answer the general question posed in the conference title, how would you answer it? What can and should social robots do?
- There are many applications, whose usefulness and value seem fairly uncontroversial, e.g., in autism therapy, while other applications are more ambiguous, e.g., as dietary coaches or destressing bystanders in patient-doctor conversation. The final answer is very simple: Social robots should help us to realize what is good in a human life. Of course that is an ‘empty’ answer—when it comes to human fulfillment we cannot describe a determinate goal but only how we need to conduct our search. If we conceive of the design of social robotics applications as a process of joint creation, involving roboticists, and, among other scientific disciplines, researchers from the Humanities, we have the best chances to make best use of the positive transformative potential of social robotics.
Pensor. Philosophical and Transdisciplinary Enquiries into Social Robotics
Johanna Seibt, leader of the Pensor project, writes:
"Social robotics is a transdisciplinary research area in the intersection of robotics, cognitive science, psychology, anthropology, and philosophy. The goal of building robots that participate in the space of human social interaction raises technological, empirical, and normative questions. PENSOR has been established as a new transdisciplinary research platform at Aarhus University with the aim especially to address research questions in social robotics that require expertise in various philosophical disciplines, including the new area of intercultural philosophy of technology. Social robotics is not only among the socially most relevant research areas for scholars in the Humanities, it also marks a new relationship between technology and the Humanities--social robotics needs the Humanities to inform the development of technological design, but it also offers new insights on the conditions of human interaction."
"Social robotics is a new field of robot research in the 21st century which aims to develop machines with social intelligence. If this endeavour succeeds, it will amount to a technological revolution leading to significant socio-economic and socio-cultural changes. Which is why we need research into how to handle the new technology in a responsible manner.
"'Social robots' are designed to appear as social agents and to interact with human beings in accordance with social norms. If we start to interact with social robots everywhere, at work and in the home, it is very likely that this will lead to socio-cultural changes. Therefore it's important to invest research funds in integrating humanities research even at the early stage when social robots are designed. The aim is to ensure that their use complies with, or perhaps even amplifies, the ethical and cultural values which we wish to preserve. We believe that socio-cultural preferences should drive the way social robots are used," explains Johanna Seibt, professor with special responsibilities at Aarhus University and the leader of the project.
The project will develop a new approach known as “Integrative Social Robotics” or ISR, which systematically combines research into social robot technology with research in the Humanities, and the Human and Social Sciences, and will be used at the early stage when new social robots are developed."
See my: Living with Online Robots
Aarhus University: Research Unit for Robophilosophy
Spyros G. Tzafestas: An Introduction to Robophilosophy. Cognition, Intelligence, Autonomy, Consciousness, Conscience, and Ethics. Delft: River Publishers 2016.
If our present digital ontology – in case we agree that this view of Being is a pervading one today, as Eldred also remarks – leads us to equate all beings (including humans) as being digitally quantifiable and re-producible, then it is an important philosophical, i.e., critical or ethical, task to question these metaphysical ambitions that blur phenomenological differences. Digital reductionism is not bad per se but only when it becomes dogmatic in theory and/or in practice.
What is the difference between a “program” and an "agent"? Michael Nagenborg writes:
"One major difference between a 'program' and an 'agent' is that programs are designed as tools to be used by human beings, while 'agents' are designed to interact as partners with human beings. […] An AMA [artificial moral agent, RC] is an AA [artificial agent, RC] guided by norms which we as human beings consider to have a moral content. […] Agents may be guided by a set of moral norms, which the agent itself may not change, or they are capable of creating and modifying rules by themselves. […] Thus, there must be questioning about what kind of 'morality' will be fostered by AMAs, especially since norms and values are now to be embedded consciously into the 'ethical subroutines'. Will they be guided by 'universal values', or will they be guided by specific Western or African concepts?" (Nagenborg 2007, 2-3)
The concepts of autonomy, learning, decision etc. are analogies to the human agent deprived of its historical, political, societal, bodily and existential dimensions. A moral code programmed into a microprocessor has nothing in common with the capacity for practical reflection, even in case there is a feedback loop that mimics (human) theoretical and/or practical reason. The evaluations and 'decisions' coming out of such programs ultimately remain dependent on the programmer himself. It is cynical to speculate on, and to spend public funds on, the supposed creation of artificial agents towards whom we would be morally (and legally) responsible (and vice versa!), given the present situation of some six billion human beings on this planet and the lack of such responsibility towards them. We might say that artificial agents are only prima facie agents. They are basically patients of human agency. They will surely break down (Winograd and Flores 1986; Flores Morador 2009).
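The point that a machine's 'moral code' remains the programmer's artifact can be made concrete with a toy sketch. The rule table and action names below are invented for illustration; nothing here is drawn from an actual 'ethical subroutine'.

```python
# A toy "ethical subroutine": a fixed rule table authored by the programmer.
# Whatever the machine "decides" is a lookup over rules a human wrote;
# nothing here resembles practical reflection. All names are invented.

MORAL_RULES = {
    "hand_medication_to_patient": "permitted",
    "restrain_patient": "forbidden",
    "notify_caregiver": "obligatory",
}

def evaluate(action: str) -> str:
    # The agent cannot revise this table; the programmer remains the
    # real locus of every evaluation it produces.
    return MORAL_RULES.get(action, "undefined")

print(evaluate("restrain_patient"))  # -> forbidden
```

Even if the table were updated by a learning procedure, the update rule itself would be the programmer's, which is the dependence the paragraph above describes.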
In contrast, the question of what kind of transformation is being operated in human societies when billions of human beings interact in digital networks that are interwoven with their bodies is highly relevant today and in the future. If roboticists want to create useful robots they have to think about them within the background of different cultures and moralities.
“What is it like to be a robot? Wittgenstein’s famous dictum that “if a lion could speak, we would not understand him” (Wittgenstein, 1984, p. 568) points to the issue, that human language is rooted in what he calls “forms of life.” Humans and lions have orthogonal forms of life, i.e., they construct their reality based on systemic differences. What is it like to be a human?” (Capurro & Nagenborg 2009)
Roboethics means ethics concerning robots (genitivus objectivus), not ethics practiced by robots (genitivus subjectivus).
Anderson, Michael & Anderson, Susan Leigh: Machine ethics: Creating an ethical intelligent agent. AI Magazine | December 22, 2007.
Asimov, Isaac: Visit to the World's Fair of 2014. In: The New York Times, August 16, 1964.
Capurro, Rafael (2015): Living with Online Robots
Capurro, Rafael (2007): Ethics and Robotics
Capurro, Rafael (2005). Towards an ontological foundation of information ethics
Capurro, Rafael and Nagenborg, Michael (Eds.) (2009), Introduction. In: ibid.: Ethics and Robotics. Berlin: Akademische Verlagsgesellschaft.
Cerqui, Daniela; Weber, Jutta; Weber, Karsten (Guest Editors) (2006): Ethics in Robotics, International Review of Information Ethics.
Eldred, Michael (2010). Digital Being, the Real Continuum, the Rational and the Irrational
Gates, Bill (2007). Roboter für jedermann
Gittleson, Kim: World's Fair: Isaac Asimov's predictions 50 years on. In: BBC News, 22 April 2014
Goodall, Noah J.: Vehicle Automation and the Duty to Act. In: 21st World Congress on Intelligent Transport Systems, Detroit, Sept. 2014.
Kurz, Constanze: Wie man Robotern ethisches Verhalten beibringt. In Netzpolitik (June 8, 2015).
Lin, Patrick, Abney Keith and Bekey George A. (eds.): Robot Ethics. The Ethical and Social Implications of Robotics, The MIT Press 2012.
Lin, Patrick: Here is a terrible idea: robot cars with adjustable ethics settings. In: Wired, 8.18.2014.
Flores Morador, Fernando (2009). Broken Technologies. The Humanist as Engineer. University of Lund.
Floridi, L. and Sanders, J.W. (2004). On the Morality of Artificial Agents. In: Minds and Machines, 14, 3, 349-379.
MacDonald, Coby: The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be "Moral". In: California Magazine, UC Berkeley, June 4, 2015.
Millar, Jason: You should have a say in your robot car's code of ethics. In: Wired, 9.2.2014.
Nagenborg, Michael (2007). Artificial moral agents: an intercultural perspective. In: International Review of Information Ethics.
Negrotti, Massimo: The Reality of the Artificial. Nature, Technology and Naturoids. Heidelberg and Berlin 2012.
Novak, Matt: Asimov's 2014 Predictions were shockingly conservative for 1964. In: Paleofuture 8.19.2013.
Rheenen, Erik van: 12 Predictions Isaac Asimov Made About 2014 in 1964. In: mental_floss, Jan 2, 2014.
Rosen, Rebecca: In 1964, Isaac Asimov Imagined the World in 2014. In: The Atlantic, Dec 31, 2014.
Shim, H.B. (2007). Establishing a Korean Robot Ethics Charter.
Veruggio, Gianmarco & Operto, Fiorella: Roboethics: Social and Ethical Implications. In: Bruno Siciliano & Oussama Khatib (Eds.): Handbook of Robotics. Springer 2008, Part G, pp. 1499-1524.
Veruggio, Gianmarco (2006): EURON Roboethics Roadmap.
Wallach, Wendell & Allen, Colin (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.
Winograd, Terry and Flores, Fernando (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex.
“Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines.
Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitates this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines.
We contend that research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.
Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about ethics. We intend to bring together interested participants from a wide variety of disciplines to the end of forging a set of common goals for machine ethics investigation and the research agendas required to accomplish them.”
Topics of interest include, but are not restricted to, the following:
- Improvement of interaction between artificially and naturally intelligent systems through the addition of an ethical dimension to artificially intelligent systems
- Enhancement of machine-machine communication and cooperation through an ethical dimension
- Design of systems that provide expert guidance in ethical matters
- Deeper understanding of ethical theories through computational simulation
- Development of decision procedures for ethical theories that have multiple prima facie duties
- Computability of ethics
- Theoretical and practical objections to machine ethics
- Impact of machine ethics on society
ECAP (European Computing and Philosophy) 2007
European Computing and Philosophy Conference, Enschede, The Netherlands, 2007: Philosophy and Ethics of Robotics
- G. Veruggio: Roboethics: an interdisciplinary approach to the social implications of Robotics
- Ishii Kayoko: Can a Robot Intentionally Conduct Mutual Interactions with Human Beings
- Ronald C. Arkin: On the Ethical Quandaries of a Practicing Roboticist: A First Hand Look
- Jutta Weber: Analysing Material, Semiotic and Socio-Political Dimensions of Artificial Agents
- Daniel Persson: Ethics of Intelligent Systems – Artefacts, Producers and Users
- Merel Noorman: Exploring the Limits to the Autonomy of Artificial Agents
- Susana Nascimento: Autonomous Anthropomorphisms: Robot Narratives and Critical Social Theories
- Peter Asaro: How Just Could A Robot War Be?
- Edward H. Spence: Robot Rights: The Moral Life of Androids
Workshop on Roboethics, ICRA 2009, Kobe, 2009
- Social (Robotics and job market; Cost benefit analysis etc.)
- Psychological (Robots and kids; Robots and elderly, etc.)
- Legal (Robots and liability, Identification of autonomously acting robots etc.)
- Medical (Robots in health care and prosthesis etc.)
- Warfare application of robotics (Responsibility, International Conventions and Laws etc.)
- Environment (Cleaning nuclear and toxic waste, Using renewable energies, etc.)
SPT (Society for Philosophy and Technology) 2009
- Mark Coeckelbergh: Living with Robots
- Aimee van Wynsberghe: What Care Robots say about Care
- Susana Nascimento: Self-operating Machines and (Dis)engagement in Human Technical Actions
- Allan Hanson: Beyond the Skin Bag: On the Moral Responsibility of Extended Agencies
- Scott Sehon: Robots and Free Will
- Peter Asaro: The Convergence of Video Games & Military Robotics
- Martijntje Smits: Social Robots: How to Bridge the Gap Between Fantasies and Practices?
- Helena De Preester: The (Im)possibilities of Reembodiment
- Guido Nicolosi: Restless Creatures
- Gianmarco Veruggio: Ethical, Legal and Societal Issues in the Strategic Agenda for Robotics in
Keynote: Hiroshi Ishiguro: Developing androids and understanding humans
- Carl Shulman, Nick Tarleton, and Henrik Jonsson: Which Consequentialism? Machine Ethics and Moral Divergence
- Kimura Takeshi: Introducing Roboethics to Japanese Society: A Proposal
- Soraj Hongladarom: An Ethical Theory for Autonomous and Conscious Robots
- Carl Shulman, Henrik Jonsson, and Nick Tarleton: Machine Ethics and Superintelligence
- Keith Miller, Frances Grodzinsky, Marty Wolf: Why Turing Shouldn't Have to Guess
- Gene Rohrbaugh: On the Design of Moral and Amoral Agents
During the third day, organised by EURON, the workshops were targeted more towards academia, but were nevertheless of interest for the EUROP community as well. In particular, the sessions on Ethical, Legal and Societal issues / non-technical constraints, and on State-of-the-art robotics products and R&D challenges were organised by EUROP members.
Ethical, Legal and Social issues: non-technical constraints.
Friedrich-Ebert-Foundation and University of Tsukuba Joint Symposium: Robo-Ethics and "Mind-Body-Schema" of Human and Robot - Challenges for a Better Quality of Life, University of Tsukuba (Japan), Keynote: Robo-Ethics, January 23, 2015
We Robot 2015. Fourth Annual Conference on Robotics, Law and Policy. University of Washington School of Law
ICSR 2015 Seventh International Conference on Social Robotics, October 26-30, Paris, France.
The topics of interest include, but are not limited to the following:
Robots with personality
Robots that can adapt to different users
Robots to assist the elderly and persons with disabilities
Socially assistive robots to improve quality of life
Affective and cognitive sciences for socially interactive robots
Personal robots for the home
Social acceptance and impact in the society
Robot ethics in human society
Context awareness, expectation, and intention understanding
Control architectures for social robotics
Socially appealing design methodologies
Safety in robots working in human spaces
Human augmentation, rehabilitation, and medical robots
Robot applications in education, entertainment, and gaming
Björn Giesler: Video: Robots, Politics, and Ethics: How Autonomous Driving Transforms our Way of Thinking About Machines.
University of Miami, School of Law
Shim, H.B. (2007). Establishing a Korean Robot Ethics Charter.
Lovgren, Stefan (2007). Robot Code of Ethics to Prevent Android Abuse, Protect Humans. National Geographic News:
"The government of South Korea is drawing up a code of ethics to prevent human abuse of robots—and vice versa.
The so-called Robot Ethics Charter will cover standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots, South Korea's Ministry of Commerce, Industry and Energy announced last week.
South Korea boasts one of the world's most high-tech societies.
The country's Ministry of Information and Communication is working on plans to put a robot in every South Korean household by 2020.
The new charter is part of an effort to establish ground rules for human interaction with robots in the future.
"Imagine if some people treat androids as if the machines were their wives," Park Hye-Young of the ministry's robot team told the AFP news agency.
Laws of Robotics
Familiar to many science-fiction fans, the laws were first put forward by the late sci-fi author Isaac Asimov in his short story "Runaround" in 1942.
The laws state that robots may not injure humans or, through inaction, allow humans to come to harm; robots must obey human orders unless they conflict with the first law; and robots must protect themselves if this does not conflict with the other laws.
Robot researchers, however, say that Asimov's laws—and the South Korean charter—belong in the realm of science-fiction and are not yet applicable to their field.
"While I applaud the Korean effort to establish a robot ethics charter, I fear it might be premature to use Asimov's laws as a starter," said Mark Tilden, the designer of RoboSapiens, a toylike robot.
"From experience, the problem is that giving robots morals is like teaching an ant to yodel. We're not there yet, and as many of Asimov's stories show, the conundrums robots and humans would face would result in more tragedy than utility," said Tilden, who works for Wow Wee Toys in Hong Kong."
Three Laws of Robotics
"In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which almost all positronic robots appearing in his fiction must obey. Introduced in his 1942 short story "Runaround", although foreshadowed in a few earlier stories, the Laws state the following:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law added
Asimov once added a "Zeroth Law"—so named to continue the pattern of lower-numbered laws superseding in importance the higher-numbered laws—stating that a robot must not merely act in the interests of individual humans, but of all humanity.
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
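The precedence described here — lower-numbered laws always superseding higher-numbered ones — is, formally, a lexicographic ordering. As a purely illustrative toy (not from Asimov or any of the works cited here; the `Action` class, its fields, and the `choose` function are all invented for this sketch), the ordering can be expressed in a few lines:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate robot action, scored against each law (all names invented)."""
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    obeys_order: bool = False
    preserves_self: bool = False

def choose(actions):
    """Pick the action that best respects the laws in strict precedence.

    The key tuple is compared lexicographically, so a violation of a
    lower-numbered law (an earlier tuple slot) always outweighs any
    advantage under a higher-numbered law -- mirroring how the Zeroth
    Law supersedes the First, the First the Second, and so on.
    """
    return min(actions, key=lambda a: (
        a.harms_humanity,      # Zeroth Law: do not harm humanity
        a.harms_human,         # First Law: do not injure a human being
        not a.obeys_order,     # Second Law: obey human orders
        not a.preserves_self,  # Third Law: protect own existence
    ))

# A robot ordered to do something harmful should refuse (disobey an order)
# rather than injure a human, because the First Law outranks the Second:
best = choose([
    Action("comply", harms_human=True, obeys_order=True, preserves_self=True),
    Action("refuse", obeys_order=False, preserves_self=True),
])
print(best.name)  # -> refuse
```

The sketch also makes visible why roboticists quoted below call the Laws premature: the hard part is not the ordering but deciding, in the real world, whether a given action sets `harms_human` to true at all.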
Asimov addresses the problem of humanoid robots ("androids" in later parlance) several times. The novel Robots and Empire and the short stories "Evidence" and "The Tercentenary Incident" describe robots crafted to fool people into believing that the robots are human. On the other hand, "The Bicentennial Man" and "—That Thou art Mindful of Him" explore how the robots may change their interpretation of the Laws as they grow more sophisticated. (Gwendoline Butler writes in A Coffin for the Canary, "Perhaps we are robots. Robots acting out the last Law of Robotics... To tend towards the human.")
"—That Thou art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties, finally uses the Three Laws to conjure up the very Frankenstein scenario they were invented to prevent. It takes as its concept the growing development of robots that mimic non-human living things, and are therefore given programs that mimic simple animal behaviours and do not require the Three Laws. The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws only apply to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".
This story does not fit within the overall sweep of the Robot and Foundation series; if the George robots did take over Earth some time after the story closes, the later stories would be either redundant or impossible. Contradictions of this sort among Asimov's fiction works have led scholars to regard the Robot stories as more like "the Scandinavian sagas or the Greek legends" than a unified whole.
Indeed, Asimov describes "—That Thou art Mindful of Him" and "Bicentennial Man" as two opposite, parallel futures for robots that obviate the Three Laws by robots coming to consider themselves to be humans — one portraying this in a positive light with a robot joining human society, one portraying this in a negative light with robots supplanting humans. Both are to be considered alternatives to the possibility of a robot society that continues to be driven by the Three Laws as portrayed in the Foundation series. Indeed, in the novelization of "Bicentennial Man", Positronic Man, Asimov and his cowriter Robert Silverberg imply that in the future where Andrew Martin exists, his influence causes humanity to abandon the idea of independent, sentient humanlike robots entirely, creating an utterly different future from that of Foundation.
"The Fifth Law of Robotics": A robot must know it is a robot.
The Fifth Law was introduced by Nikola Kesarovski in his short story "The Fifth Law of Robotics". The plot revolves around a murder. The forensic investigation discovers that the victim was killed from a hug by a humaniform robot. The robot violated both the First and the Fourth Laws because it did not establish for itself that it was a robot.
In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy and David D. Woods proposed "The Three Laws of Responsible Robotics":
- A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
- A robot must respond to humans as appropriate for their roles.
- A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws."
See this comprehensive bibliography on Robot Ethics by Vincent C. Müller (University of Leeds, Anatolia College/ACT)
Beavers, Anthony (Guest Editor): Special Issue: Robot Ethics and Human Ethics. In: Ethics and Information Technology, Volume 12, Number 3 / September 2010.
Borenstein, Jason and Keith Miller: Robots and the Internet. Causes for Concern. In IEEE Technology & Society Magazine Spring 2013, 61-65.
Capurro, Rafael: Intercultural Roboethics for a Robot Age. In: Makoto Nakada, Rafael Capurro and Koetsu Sato (Eds.): Critical Review of Information Ethics and Roboethics in East and West. Master's and Doctoral Program in International and Advanced Japanese Studies, Research Group for "Ethics and Technology in the Information Era", University of Tsukuba 2017 (ISSN 2432-5414), 13-18.
Capurro, Rafael: Living with Online Robots (2015)
Capurro, Rafael: Robotic Natives. Leben mit Robotern im 21. Jahrhundert (2016).
Decker, Michael and Gutmann, Mathias (eds.): Robo- and Informationethics. Zürich, Berlin 2011.
Decker, Michael: Roboterethik. In: Jessica Heesen (ed.): Handbuch Medien- und Informationsethik. Stuttgart: Metzler 2016, 351-357.
Kurz, Constanze: Wie man Robotern ethisches Verhalten beibringt [How to teach robots ethical behaviour]. In: Netzpolitik (June 8, 2015).
Lin, Patrick, Abney Keith and Bekey George A. (eds.): Robot Ethics. The Ethical and Social Implications of Robotics, The MIT Press 2012.
MacDonald, Coby: The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be "Moral". In: California Magazine, UC Berkeley, June 4, 2015.
Nørskov, Marco: Social Robots. Boundaries, Potential, Challenges. Routledge 2015.
Salvaggio, Eryk: When robots go to art school. In: nexttrends, 2017.
Tamburrini, Guglielmo: On the ethical framing of research programs in robotics. In: AI & Society, October 2015.
Tzafestas, Spyros G.: Roboethics: A Navigating Overview, Springer 2016.
Wynsberghe, Aimee van: Healthcare Robots. Ethics, Design and Implementation. Farnham, Surrey: Ashgate Publ. 2015.
AJung Moon: Roboethics info Database
Oliver Bendel: Maschinenethik
Capurro, Rafael and Nagenborg, Michael (Eds.): Ethics and Robotics. Heidelberg: Akad. Verlagsgesellschaft 2009 (ISBN 978-3-89838-087-4 (AKA) and 978-1-60750-008-7 (IOS Press))
- P. M. Asaro: What should We Want from a Robot Ethic?
- G. Tamburrini: Robot Ethics: A View from the Philosophy of Science
- B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-machine Interaction
- E. Datteri, G. Tamburrini: Ethical Reflections on Health Care Robotics
- P. Lin, G. Bekey, K. Abney: Robots in War: Issues of Risk and Ethics
- J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles
- J. Weber: Robotic warfare, Human Rights & The Rhetorics of Ethical Machines
- T. Nishida: Towards Robots with Good Will
- R. Capurro: Ethics and Robotics
"This issue is a very special issue. What makes it so special is the fact that we faced some of the issues dealt with in it in the process of creating it: some contributions sent in by email were blocked by the spam mail scanner. They were - of course wrongly - tagged as 'sexually discriminating', but no alert was given by the system. Now: who would have been responsible if we - in fact in an uncomplicated and constructive, thus human, way - had not fixed the problem in time and the authors had not been included in the issue? On what grounds did the software decide to block them, and can it thus be taken as a moral agent? And finally, is the phenomenon of spam forcing us to use such agents in the social communication on which we have to rely in various ways? There we are amidst the subject of our current issue: Ethics in Robotics."
- Gianmarco Veruggio and Fiorella Operto: Roboethics: a Bottom-up Interdisciplinary Discourse in the Field of Applied Ethics in Robotics
- Peter M. Asaro: What Should We Want From a Robot Ethic?
- John P. Sullins: When is a Robot a Moral Agent?
- Brian R. Duffy: Fundamental Issues in Social Robotics
- Barbara Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction
- Dante Marino and Guglielmo Tamburrini: Learning Robots and Human Responsibility
- C.K.M. Crutzen: Invisibility and the Meaning of Ambient Intelligence
- Stefan Krebs: On the Anticipation of Ethical Conflicts between Humans and Robots in Japanese Mangas
- Karen Kraehling: In Between Companion and Cyborg: The Double Diffracted Being Else-where of a Robodog
- Naho Kitano: 'Rinri': An Incitement towards the Existence of Robots in Japanese Society
- Miguel Angel Pérez Alvarez: Robotics and Development of Intellectual Abilities in Children
- Dirk Söffker and Jutta Weber: On Designing Machines and Technologies in the 21st Century. An Interdisciplinary Dialogue
Murphy, Robin and Woods, David: Beyond Asimov: The Three Laws of Responsible Robotics. In: IEEE Intelligent Systems, July/August 2009, Vol. 24, No. 4, pp. 14-20.
"Asimov's Three Laws of Robotics have been inculcated so successfully into our culture that they now appear to shape expectations as to how robots should act around humans. However, there has been little serious discussion as to whether the Laws really do provide a framework for human-robot interactions. Asimov actually used his laws as a literary device to explore the lack of resilience in the interplay between people and robots in a range of situations. This paper briefly reviews some of the practical shortcomings of each of Asimov's Laws for framing the relationships between people and robots, including reminders about what robots can't do. The main focus of the paper is to propose an alternative, parallel set of Laws of Responsible Robotics as a means to stimulate debate about the accountability relationships for robots when their actions can result in harm to people or human interests. The alternative laws emphasize (1) systems safety in terms of the responsibilities of those who develop and deploy robotic systems, (2) robots' responsiveness as they participate in dynamic social and cognitive relationships, and (3) smooth transfer of control as a robot encounters and initially responds to disruptions, impasses, or opportunities in context."
" (...) What would happen if a parent were to leave a child in the safe hands of a future robot caregiver almost exclusively? The truth is that we do not know what the effects of the long-term exposure of infants would be. We cannot conduct controlled experiments on children to find out the consequences of long-term bonding with a robot, but we can get some indication from early psychological work on maternal deprivation and attachment. Studies of early development in monkeys have shown that severe social dysfunction occurs in infant animals allowed to develop attachments only to inanimate surrogates.
Despite these potential problems, no international or national legislation or policy guidelines exist except in terms of negligence, which has not yet been tested in court for robot surrogates and may be difficult to prove in the home (relative to cases of physical abuse). There is no guidance from any international Nanny code of ethics, nor even from the U.N. Convention on the Rights of Children (7) except by inference. There is a vital need for public discussion to decide the limits of robot use before the industry and busy parents make the decision themselves.
At the other end of the age spectrum, the relative increase in many countries in the population of the elderly relative to available younger caregivers has spurred the development of sophisticated elder-care robots. Examples include the Secom "My Spoon" automatic feeding robot, the Sanyo electric bathtub robot that automatically washes and rinses, and the Mitsubishi Wakamura robot for monitoring, delivering messages, and reminding about medicine. These robots can help the elderly to maintain independence in their own homes (8), but their presence could lead to the risk of leaving the elderly in the exclusive care of machines. The elderly need the human contact that is often only provided by caregivers and people performing day-to-day tasks for them (9).
A different set of ethical issues is raised by the use of robots in military applications. Coalition military forces in Iraq and Afghanistan have deployed more than 5000 mobile robots. Most are used for surveillance or bomb disposal, but some, like the Talon SWORD and MAARS, are heavily armed for use in combat, although there have been no reports of lethality yet. The semiautonomous unmanned combat air vehicles, such as the MQ1 Predator and MQ9 Reapers, carry Hellfire missiles and bombs that have been involved in many strikes against insurgent targets that have resulted in the deaths of many innocents, including children.
Robot autonomy is required because one soldier cannot control several robots.
The ethical problems arise because no computational system can discriminate between combatants and innocents in a close-contact encounter. Computer programs require a clear definition of a noncombatant, but none is available.
Robots for care and for war represent just two of many ethically problematic areas that will soon arise from the rapid increase and spreading diversity of robotics applications. Scientists and engineers working in robotics must be mindful of the potential dangers of their work, and public and international discussion is vital in order to set policy guidelines for ethical and safe application before the guidelines set themselves."
Wallach, Wendell and Allen, Colin: Moral Machines. Teaching Robots Right from Wrong. Oxford 2009
“Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs?” (Introd.)
“We take the instrumental approach that while full-blown moral agency may be beyond the current or future technology, there is nevertheless much space between operational morality and ‘genuine’ moral agency. This is the niche we identified as functional morality in chapter 2.” (Introd.)
“The top-down and bottom-up approaches emphasize the importance in ethics of the ability to reason. However, much of the recent empirical literature on moral psychology emphasizes faculties besides rationality.
Emotions, sociability, semantic understanding, and consciousness are all important to human moral decision making, but it remains an open question whether these will be essential to AMAs, and if so, whether they can be implemented in machines.” (Introd.)
“The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (In this book we will use the terms ethics and morality interchangeably.) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners.” (Introd.)