Social life is increasingly ruled by algorithms. What is an algorithm? It is a digital tool that helps to find solutions to problems. IT companies have created powerful algorithms for personalized searches, creating individual and social profiles that are relevant not only for the digital economy but also for political and social purposes, locally and globally. This social dimension of algorithms is absent from the classical technical definition given by Donald E. Knuth in The Art of Computer Programming:

The modern meaning for algorithm is quite similar to that of recipe, process, technique, procedure, routine, except that the word "algorithm" connotes something just a little different. Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features: finiteness, definiteness, input, output, effectiveness. (Knuth 1968/69 apud Ziegenbalg 1996, 23)
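Knuth's five features can be illustrated with Euclid's algorithm for the greatest common divisor, his own opening example in The Art of Computer Programming. The following Python sketch is merely illustrative and not part of Knuth's text:

```python
# Euclid's algorithm, sketched to illustrate Knuth's five features:
#   input:         two non-negative integers m and n (not both zero)
#   output:        their greatest common divisor
#   definiteness:  each step is precisely and unambiguously specified
#   finiteness:    n strictly decreases at each step, so the loop terminates
#   effectiveness: every operation (taking a remainder) is basic and executable
def gcd(m: int, n: int) -> int:
    while n != 0:
        m, n = n, m % n  # replace (m, n) by (n, m mod n)
    return m

print(gcd(544, 119))  # 17
```

The point of the contrast drawn in this paper is precisely that nothing in these five features says anything about the social contexts in which such a procedure is embedded and used.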
What happens when algorithms with such "important features" are at the core of all kinds of social processes, industrial techniques, and everyday routines? After the rise of the internet, the cultural dimension of algorithms has become apparent: at a fundamental level, they are what one can call anthropologically entrenched in us, their creators and users. In other words, there is a "constitutive entanglement" where "it is not only us that make them, they also make us" (Introna and Hayes 2011, 108). Indeed, the problem with such mutual imbrication is that algorithms cannot be fully 'revealed,' but only unpacked to a certain extent. What is more, they always find themselves temporally entrenched, so to speak. They come to life with their own rhythm; to use Shintaro Miyazaki's description in this volume, "they need time and thus they embody time" (p. 129). (Seyfert & Roberge 2016, 2)
Algorithms are implicitly or explicitly designed within the framework of social customs. They are embedded in contexts, not created from scratch. According to the phenomenologist Lucas Introna, creators and users are "impressed" by algorithms (Introna 2016). The "impressionable subject," however, is not the modern subject detached from the so-called outside world, but a plurality of selves sharing a common world that is algorithmically shaped. What is ethically at stake when dealing with algorithms that have become part of human mores? What is the nature of this entanglement between human mores and algorithms? To what extent can it be said that algorithms are, in fact, cultural? Who is responsible for the decisions taken by algorithms? To what extent is this anthropomorphic view on algorithms legitimate in order to understand what algorithms are? These are some foundational questions for dealing with the ethics of algorithms, which is still in an incipient state (Mittelstadt et al. 2016). This paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis (Capurro 2017) opens the possibility of reifying ourselves algorithmically. The main ethical challenge for the oncoming digital age consists in unveiling the ethical difference, particularly when dealing with the nature of algorithms and their ethical and legal regulation.
1. REVEALING AND CONCEALING THE ETHICAL DIFFERENCE
In The Human Condition, Hannah Arendt writes:

In acting and speaking, men show who they are, reveal actively their unique personal identities and thus make their appearance in the human world, while their physical identities appear without any activity of their own in the unique shape of the body and sound of the voice. This disclosure of "who" in contradistinction to "what" somebody is―his qualities, gifts, talents, and shortcomings, which he may display or hide―is implicit in everything somebody says and does. It can be hidden only in complete silence and perfect passivity, but its disclosure can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this "who" in the same manner he has and can dispose of his qualities. On the contrary, it is more than likely that the "who," which appears so clearly and unmistakably to others, remains hidden from the person himself, like the daimōn in Greek religion which accompanies each man throughout his life, always looking over his shoulder from behind and thus visible only to those he encounters. (Arendt 1998, 179-180)
Human whoness is nothing permanent and substantial. It is not the immortal soul, not the Cartesian res cogitans, and not noumenal Kantian personhood. It takes place as an encounter. We conceal and reveal who we are through mutually acknowledging or disacknowledging each other on the basis of shared customs, rules, and practices, i.e., culturally. This concept of whoness (Capurro et al. 2013) echoes the Latin concept of persona, originally the mask or role of theatre players. It also echoes the non-substantial view of the self in Eastern traditions (Elberfeld 2017, 274-327) and, on a different account, David Hume's concept of "personal identity." He writes:

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred. I never can catch myself at any time without a perception, and never can observe anything but the perception. [...] The mind is a kind of theatre, where several perceptions successively make their appearance [...] (Hume 1962, 259)
The reification of our "qualities, gifts, talents and shortcomings" in digital media is deeply ambiguous. It suggests it is the truth about who we are, while in fact it consists of digital profiles of ourselves. An adequate medium to unveil this ambiguity is the theatre. Shakespeare's Hamlet, Prince of Denmark is a paramount example of a plot that does not intend to offer a solution to the problem of life but to unveil it through questioning human existence. Bernardo, an officer, starts the play by asking: "Who's there?" The question is repeated by Horatio, a friend to Hamlet. The Ghost enters, and Bernardo says: "In the same figure, like the king that's dead." Horatio, desperately, says: "Stay! speak, speak! I charge thee, speak!" The Ghost exits. Marcellus, another officer, states: "'Tis gone, and will not answer." (Shakespeare 2010, Hamlet I, 1, 2233-2235). "[M]en’s minds are wild" states Horatio at the end of the play. "Music and the rites of war" should "speak loudly" for the dead Hamlet who is brought to the stage "like a soldier." (Shakespeare 2010, Hamlet, V, 2, 2332) Who is Hamlet? "What is behind a name?" asks Thomas
Ostermeier, in whose staging of Hamlet he, Hamlet, is not present (Ostermeier 2017, 96). "The whole world plays the player" ("Die ganze Welt spielt den Schauspieler") is Ostermeier's German translation of "All the world's a stage" in Shakespeare's As You Like It (Shakespeare 2010, As You Like It, II, 7, 626) (Ostermeier 2017, 98). "To act means to play" ("Handeln heißt spielen") writes Ostermeier (ibid.). Hesitation, the delay in answering, is proper to good acting. A human player, being exposed to "the whole world," i.e., to situations that she cannot foresee or even master in their entirety, has the possibility of taking her time to hesitate before acting. According to Ostermeier, many of the catastrophes of the last century "were due to a lack of time for hesitation that would have given place for thinking and for reflection" (Ostermeier 2017, 99, transl. RC).
Algorithms know nothing about hesitation. In fact, they know nothing at all and they do not learn. They are played: not by the world, but by human designers and users. The question about who we are is about the being of the who. Asking this question means avoiding the confusion between the roles as which we play and the belief that one such possibility is, in fact, the only and true one, making a fixation out of one interpretation. When it comes to human beings, it is a matter of interpretation who one is as a player with other players in the drama of life. The Australian phenomenologist Michael Eldred writes:
Who one is is always a matter of having adopted certain masks of identity reflected from the world as offers of who one could be in the world. Each human being is an origin of his or her own self-movement and has an effect on the surroundings, changing them this way or that, intentionally or unintentionally. [...] The core mask of identity borne by a who (Gr. τίς, Lat. quis) is one's own proper name, around which other masks cluster. (Eldred 2013, 22-23)
An ethics of algorithms deals with making this difference between who and what we are, theoretically and practically, resisting the tendency to confuse or even to identify ourselves (our whoness) with masks that we give to ourselves or that others give to us. This comes to a head when we believe we can attribute moral responsibility to algorithms that are supposed to be a kind of who. Hamlet, Prince of Denmark represents on stage this interplay of masking and unmasking ourselves. It is a key theatre play when it comes to unmasking the ethos of a society driven by algorithms. Algorithms implement digital reifications of who we are and what roles we play in the drama of life. The ethics of algorithms faces the challenge of determining to what extent and under what rules we (who?) want to play a role in the human interplay on the stage that is the world.
2. ENCULTURATING ALGORITHMS
We build our individual and social ways of
being (ethos) through what Hannah
Arendt calls "the ‘web’ of human relationships" (Arendt 1994, 183).
Human interplay is risky because human agents face the contingencies of
past, present and future actions and interpretations and the risks of
power play with others. This marks a difference between human interplay and the interaction between non-human actors. Michael Eldred puts the difference between interplay and interaction in dialogue with Hannah Arendt as follows:
The dimension she is addressing, of ‘people... acting and speaking’ (27:198), through which they show to each other who they are and perhaps attain ‘full appearance [in] the shining brightness we once called glory’ (24:180), is not that of action and reaction, no matter (to employ Arendt's own words) how surprising, unexpected, unpredictable, boundless social interaction may be, but of interplay. It is the play that has to be understood, not the action, and it is no accident that play is also that which takes place on stage, for she understands the dimension of ‘acting and speaking’ as players revealing and disclosing their selves as who they are. On the other hand, interplay takes place also in private: in the interplay of love as a groundlessly grounding way to be who with one another, where speaking easily becomes hollow. (Eldred 2013, 83)
The implicit and explicit moral and legal norms and values of human interplay can today be reified in the digital medium through algorithms that shine back onto the players when personal profiles of whatever kind are at stake. This shining-back on the social players can be a means of promotion or destruction not only of what people produce outside the digital network but also of their own interpretation of who they are or want to be. In between these two poles, namely promotion or destruction, there are many possibilities that should be carefully analyzed, due to the ambiguities inherent in this intertwining between the players' freedoms and their digital reification, with its various kinds of masking and unmasking options and procedures. An example of this ambiguity is the use of algorithms for pre-crime analysis aiming at unmasking potential criminals. From the perspective of algorithmic search we are nothing but a bunch of data. Algorithms can map and track our digital identities. But nobody can guarantee that such a public persona matches me and not someone else. Everyone is under general suspicion, and everything we do on the internet, on any kind of device connected with the internet, leaves a digital footprint that might be used or misused for or against us and others in both the digital and the physical world. The result is a tension between two moods of being-in-the-world, trust and anxiety (Capurro 2005).
Algorithms might strengthen or weaken our "symbolic immune systems" such as moral and legal norms and values (Sloterdijk 2009). As in the case of biological immune systems, we must pay attention to the changing environment not only by using algorithms but by watching them, employing them, paradoxically, for this very purpose (Algorithm Watch 2017). To be digitally observed, or not to be, that is the question. Or, as Lanchester puts it: "You Are the Product" (Lanchester 2017). Protecting ourselves from algorithms means avoiding becoming identified by algorithmic observation everywhere, all the time, and being (ab-)used, for instance, through mobile phones. To resist means learning to reveal and conceal ourselves through a kind of guerrilla tactics that Brunton and Nissenbaum call "obfuscation" (Brunton & Nissenbaum 2015).
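The logic of such obfuscation can be made concrete with a toy sketch in the spirit of tools Brunton and Nissenbaum discuss, such as the search-noise generator TrackMeNot. The decoy queries, batch size, and function name below are illustrative assumptions, not a description of any real tool:

```python
import random

def obfuscate(real_query: str, decoys: list[str], k: int = 4) -> list[str]:
    """Hide the real query among k randomly chosen decoy queries.

    An observer logging the outgoing batch sees k + 1 plausible
    queries and cannot tell which interest is genuine."""
    batch = random.sample(decoys, k) + [real_query]
    random.shuffle(batch)  # remove any positional clue
    return batch

# Hypothetical example: one genuine query hidden among four decoys.
batch = obfuscate("symptoms of flu",
                  ["weather berlin", "pasta recipe", "stock prices",
                   "football results", "train schedule"])
print(batch)
```

The tactic does not conceal that we act; it conceals who we are in what we do, a digital variant of the masking discussed above.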
The ethical and legal challenge is about explicitly paying attention to the contexts in which algorithms are created and used, for what, by whom, and for whom. Helen Nissenbaum describes contexts as:

structured social settings characterized by canonical activities, relationships, power structures, norms (or rules), and internal values (goals, ends, purposes). Contexts are ‘essentially rooted in specific times and places’ that reflect the norms and values of a given society. (Nissenbaum 2010)
Algorithms with their "five important features: finiteness, definiteness, input, output, effectiveness" (Donald Knuth) are embedded from the start in contexts, i.e., in social norms and values. Norms and values arise in the three-dimensional temporal in-between as which the human interplay of customs and traditions takes place. Unveiling the temporality shaped by algorithms is a key task for a future phenomenology of algorithms. Understanding algorithms as cultural practices means to critically reflect on the assemblages of institutions, values, and norms to which they are explicitly or implicitly related (Stalder 2016). Algorithms are reified practices whose norms and regulations must be hermeneutically reconsidered in view of the operations and intentions of their users and
producers (Dobusch 2013). Algorithmic decision-making (ADM) is not neutral just because it is logical and executed by a machine. The cultural framework in which algorithms are ethically and legally embedded is not a permanent and unquestionable basis, at least in democratic systems. The weakness of democratic systems becomes problematic when considering, for instance, the Russian influence on the U.S. presidential elections via Facebook, Google and Twitter. In a one-party system like China, in search of a Confucius-based harmonious society, algorithms are a powerful instrument for political control. The Confucian tradition aiming at ruling society can be weakened when considering the Taoist tradition, according to which societal processes are embedded in a larger natural framework. Taoist thinking looks for regulation in the sense of, for instance, regulating the current of a river. Such a kind of regulation is based on the maxim ‘Don't block!', which is a translation of the Taoist concept of wu wei or non-action (Jullien 2005). Wu wei means not acting against the laws of nature as well as paying attention to the changes in social settings. Blockages of different kinds, such as information overload, can arise due to a lack of rules but also due to ways of ruling and regulating information flows (Capurro 2010). These questions are particularly relevant in the present debate on autonomous cars.
3. ENCULTURATING AUTONOMOUS CARS
On December 8, 1926, The Milwaukee Sentinel announced: "'Phantom Auto' will tour city."

A ‘phantom motor car’ will haunt the streets of Milwaukee today. Driverless, it will start its own motor, throw in its clutch, twist its steering wheel, toot its horn, and it may even ‘sass’ the policeman at the corner. The ‘master mind’ that will guide the machine as it prowls in and out of the busy traffic will be a radio set in a car behind. Commanding impulses from the second machine will be caught by a receiving set in the ‘ghost car.’ The tour, conducted by the Achen Motor company, will start at 11.30 from the company's rooms at Oneida and Jackson streets [...] (Quote apud Capurro 2017, 115)
Thirty years later, the US Central Power and Light Company foresaw the future of driverless cars this way:

ELECTRICITY MAY BE THE DRIVER. One day a car may speed along an electric super-highway, its speed and steering automatically controlled by electronic devices embedded in the road. Highways will be made safe – by electricity! No traffic jams ... no collisions ... no driver fatigue. (The Victoria Advocate 1957, quote apud Capurro 2017, 116)
Today it seems, on the one hand, as if in the near future, say, in a decade or so, driverless cars controlled by algorithms will become an obvious option or even, as some experts think (AD 2025), the most successful paradigm for global and local mobility. When it comes to trusting algorithms as car drivers, distrust might diminish or disappear according to the "familiarity principle" (Leonhardt 2017, Capurro 2005). But, on the other hand, nobody can guarantee that algorithms can deal with the complexity and unforeseeability of mobility in situations in which pedestrians, young and old, cars driven by humans, bikes, dogs etc. come into play, following implicit or explicit rules and laws that vary according to cultural contexts, individual preferences, and ad hoc decisions. What is supposed to reduce complexity and diminish the negative consequences of today's mobility systems by relying on algorithms, sensors of all kinds in cars as well as in the roads, GPS control, etc., might become a nightmare if, for instance, hackers misuse the system for terrorist attacks.
That said, the present ethical and legal debate over autonomous cars should be seen as part of the broader issue of the digitalization of society in general and of mobility in particular. The ethics of algorithms with regard to autonomous cars deals so far mainly with questions of accountability, responsibility and so-called distributed morality, where moral responsibility might be ascribed analogically to artificial agents (Mittelstadt et al. 2016, 10-12, Floridi and Sanders 2004). Autonomy as a technical concept concerns the different levels at which cars might be more or less autonomous with regard to the intervention of a driver inside or outside the car or of a whole surveillance and digital tracking system. In this context, spaces of mobility could be designed exclusively for autonomous cars in order to avoid the so-called ethical dilemmas that arise when moral and legal rules implemented in algorithms become a matter of autonomous interpretation in unforeseen situations. This concept of autonomy contrasts with the philosophical concept qualifying human beings whose actions have their origin in themselves.
In their critique of computer systems as moral agents, Deborah Johnson and Keith Miller write:

There are levels of abstraction in which computer behaviour appears autonomous, but the appropriate use of the term ‘autonomous’ at one level of abstraction does not mean that computer systems are, therefore, ‘autonomous’ in some general sense. We should not allow the existence of a particular level of abstraction to determine the outcome of the broader debate about the moral agency of computer systems. (Johnson and Miller 2008)
A comparative theory of agents must address, historically and systematically, different concepts of agents and autonomy in order to avoid misleading analogies and equivocal uses of these and other concepts that arise when the ethical difference between who and what is no longer perceived as a difference (Capurro 2015). What kind of mobility makes sense for a society? What are the options among different kinds of assemblages of means of mobility? What is the trade-off of such assemblages with regard to the environment? How can automated and autonomous driving be embedded in different societal contexts and needs, geographic environments, etc.? In other words, enculturating algorithms is a key issue with regard to autonomous driving. With the invention of the car, we defined ourselves as car drivers, following the long tradition of reflection on the tasks and qualities of steering ships (kybernetes) or the six-thousand-year tradition of learning how to handle horses (Raulff 2015). Different kinds of practices of moving and being moved in the world shine back on ourselves. This is also the case with autonomous driving, when we trust algorithms to steer the movement of a car in view of goals given by ourselves. What is a car in the 21st century? What kind of recasting of the relation between man and world takes place with the invention of autonomous cars? How will this invention be adopted and adapted by different cultures? What kind of new forms of social inclusion and exclusion will this new form of mobility bring to humans in different societies?
Enculturating algorithms is a broad interdisciplinary and intercultural field. In the introduction to their book Understanding Computers and Cognition: A New Foundation for Design, published in 1986, Terry Winograd and Fernando Flores write: "[...] in designing tools we are designing ways of being" (Winograd and Flores 1986, xi). Interpreting algorithms as ways of being means taking a critical stance with regard not only to calculating thinking in general and algorithms in particular, but also to our belief in them, a belief that becomes exacerbated in the 21st century. When such belief becomes predominant, the ethical difference might be perceived either as an anthropocentric prejudice or simply as a myth arising from pre-scientific or anti-technological thinking. In The Science of Logic Hegel writes:
Since calculation is so much of an external and therefore mechanical business, it has been possible to manufacture machines that perform arithmetical operations with complete accuracy. It is enough to know this fact alone about the nature of calculation to decide on the merit of the idea of making it the main instrument of the education of spirit/mind, of stretching spirit/mind on the rack in order to perfect it as a machine. (Hegel 2010, 181-182)
In the last two hundred years the concept of spirit/mind (Geist) has been the object of theoretical and practical critique in such a way that it lost its meaning as dynamis or capacity for changing the relationship between ourselves and the world. The Marxian demand to change the world instead of just interpreting it (Marx 1969) begs the question, since no change of the relationship between ourselves and the world is possible if it does not rest on a previous recasting of it. Geist, our historical mind, is our original and originating capacity for such a recasting. Only if we acknowledge this capacity can we cope with the challenges of an age in which human mental creativity is to be ostensibly perfected as an algorithm. In doing so, we would give up what enables us to originate new castings of ourselves and the world, including the present digital one. The power of calculation might become a source of liberation by enculturating algorithms instead of stretching human mental creativity on the rack of algorithmically controlled computers.
REFERENCES

AD 2025. The Automated Driving Community.
Algorithm Watch (2017).
Arendt, Hannah (1998). The Human Condition. Chicago and London: The University of Chicago Press, 2nd Ed.
Beuth, Patrick (2017).
Feinbild Algorithmus. In:
DIE ZEIT, October 14.
Brunton, Finn and Nissenbaum, Helen (2015). Obfuscation: A User's Guide for Privacy and Protest. Cambridge, Mass.: The MIT Press.
Capurro, Rafael (2017). Homo Digitalis. Beiträge zur Ontologie, Anthropologie und Ethik der digitalen Technik. Wiesbaden: Springer VS.
Capurro, Rafael (2017a). Ethical Issues of Humanoid-Human Interaction. In: Prahlad Vadakkepat, Ambarish Goswami, Kim (eds.): Handbook of Humanoids. Springer.
Capurro, Rafael (2017b). Autonomous Zombies are not an option. In: 2025 AD. The Automated Driving Community, June 28.
Capurro, Rafael (2015). Toward a Comparative Theory of Agents. In: Mathias Gutmann, Michael Decker, Julia Knifka (eds.): Evolutionary Robotics, Organic Computing and Adaptive Ambience. Vienna: LIT, 81-96.
Capurro, Rafael (2010). The Dao of the Information Society in China and the Task of Intercultural Information Ethics.
Capurro, Rafael (2005). Between Trust and Anxiety: On the Moods of Information Society. In: Richard Keeble (ed.): Communication Ethics Today. Leicester: Troubadour Publishing Ltd., 187-196.
Capurro, Rafael, Eldred, Michael and Nagel, Daniel (2013). Digital Whoness: Identity, Privacy and Freedom in the Cyberworld. Berlin: De Gruyter.
Dobusch, Leonhard (2013). Algorithm Regulation #4: Algorithm as a Practice. January 14. In: Leonhard Dobusch, Philip Mader and Sigrid Quack: Governance across Borders. Transnational Fields and Transversal Themes. A blogbook.
Elberfeld, Rolf (2017). Philosophieren in einer globalisierten Welt. Wege zu einer transformativen Phänomenologie. Freiburg/München: Alber.
Eldred, Michael (2013). Phenomenology of whoness: identity, privacy, trust and freedom. In: Rafael Capurro, Michael Eldred and Daniel Nagel: Digital Whoness: Identity, Privacy and Freedom in the Cyberworld. Berlin: De Gruyter, 19-59.
Floridi, Luciano and Sanders, Jeff W. (2004). On the Morality of Artificial Agents. Minds and Machines 14 (3).
Hegel, Georg Wilhelm Friedrich (2010). The Science of Logic. Transl. and ed. G. di Giovanni. Cambridge: Cambridge University Press.
Hume, David (1962). A Treatise of Human Nature. In: On Human Nature and the Understanding, ed. A. Flew. New York.
Introna, Lucas (2016). The algorithmic choreography of the impressionable subject. In: Robert Seyfert and Jonathan Roberge (eds.): Algorithmic Cultures: Essays on Meaning, Performance and New Technologies. London and New York: Routledge.
Johnson, Deborah G. and
Miller, Keith W.
(2008). Un-making artificial moral agents. Ethics and Information
Technology 10, 123-133.
Jullien, François (2005). Nourrir sa vie. À l'écart du bonheur. Paris: Seuil.
Knuth, Donald E. (1968/69). The Art of Computer Programming. Reading, Mass.: Addison-Wesley.
Lanchester, John (2017). You Are the Product. In: London Review of Books, Vol. 39, No. 16, 3-10.
Leonhardt, David (2017). Driverless Cars Made Me Nervous. Then I Tried One. In: The New York Times International Weekly, October 22.
Liebert, Juliane (2017).
"Das wird auch auf uns zukommen" Wenn Algorithmen dich zum Verbrecher
stempeln - der Dokumentarfilmer Matthias Heeder über seinen Film
"Pre-Crime". In: Süddeutsche Zeitung, Nr. 235, October 12, 12.
Lischka, Konrad and
Klingel, Anita (2017). Wenn
Maschinen Menschen bewerten. Internationale
Fallbeispiele für Prozesse algorithmischer Entscheidungsfindung -
Arbeitspapier. Gütersloh: Bertelsmann-Stiftung.
Marx, Karl (1969). Thesen
über Feuerbach: In: Marx-Engels Werke, 3, Berlin: Dietz Verlag.
Mittelstadt, Brent Daniel, Allo, Patrick, Taddeo, Mariarosaria, Wachter, Sandra and Floridi, Luciano (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, July–December, 1–21.
Nissenbaum, Helen (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford University Press.
Ostermeier, Thomas (2017). Hamlet in der Mausefalle. In: Lettre International 118, 96-101.
Raulff, Ulrich (2015). Das letzte Jahrhundert der Pferde. Geschichte einer Trennung. München: C.H. Beck.
Seyfert, Robert and Roberge, Jonathan (eds.) (2016). Algorithmic Cultures: Essays on Meaning, Performance and New Technologies. London and New York: Routledge.
Shakespeare, William (2010). Hamlet, Prince of Denmark. In: Sämtliche Werke 2. Frankfurt am Main: Zweitausendeins, 2233-2332.
Sloterdijk, Peter (2009).
Du musst dein Leben ändern. Über Anthropotechnik. Frankfurt
am Main: Suhrkamp.
Stalder, Felix (2016).
Kultur der Digitalität. Berlin: Suhrkamp Verlag.
Winograd, Terry and Flores, Fernando (1986). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex.
Ziegenbalg, Jochen (1996). Algorithmen: von Hammurapi bis Gödel. Heidelberg, Berlin, Oxford: Spektrum Akademischer Verlag.
Last update: March 2, 2018