Rafael Capurro

Keynote paper at the international conference: Artificial Intelligence & Regulation, LUISS (Libera Università Internazionale degli Studi Sociali Guido Carli), Rome, March 2, 2018.


The paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis implies the possibility of projecting who we are as social beings sharing a world into the digital medium, thereby engendering what can be called digital whoness, or a digital reification of ourselves. A main ethical challenge of the unfolding digital age consists in unveiling this ethical difference, particularly when dealing with algorithms and their regulation in the context of human relationships. The paper addresses, by way of example, some issues raised by autonomous cars.


"Who's there?"

W. Shakespeare, Hamlet, Prince of Denmark, I, 1


Social life is increasingly ruled by algorithms. What is an algorithm? It is a digital tool that helps find solutions to problems. IT companies have created powerful algorithms that allow personalized searches, creating individual and social profiles that form the basis not only of the digital economy but also of political and social processes, locally and globally. This social dimension of algorithms is not obvious. There is the classical technical definition by Donald E. Knuth in The Art of Computer Programming:

The modern meaning for algorithm is quite similar to that of recipe, process, technique, procedure, routine, except that the word "algorithm" connotes something just a little different. Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features: finiteness, definiteness, input, output, effectiveness. (Knuth 1968/69 apud Ziegenbalg 1996, 23).
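These five features can be made concrete with Euclid's algorithm for the greatest common divisor, which is Knuth's own opening example in The Art of Computer Programming; the Python sketch below is merely illustrative:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, exhibiting Knuth's five features.

    Input:  two non-negative integers a and b, not both zero.
    Output: their greatest common divisor.
    """
    while b != 0:
        # Definiteness: each step is rigorously and unambiguously specified.
        # Effectiveness: each operation (a remainder) is basic enough to be
        # carried out exactly in a finite amount of time.
        a, b = b, a % b
        # Finiteness: b strictly decreases, so the loop must terminate.
    return a

print(gcd(48, 18))  # → 6
```

The point of the example is not the arithmetic but the contrast the paper goes on to draw: everything about this procedure is fixed in advance, whereas the social processes into which such procedures are embedded are not.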

What happens when algorithms with such "important features" are at the core of all kinds of social processes, industrial techniques, and everyday routines? After the invention of the internet, the cultural dimension of algorithms has become apparent:

On the most fundamental level, they are what one can call anthropologically entrenched in us, their creators and users. In other words, there is a "constitutive entanglement" where "it is not only us that make them, they also make us" (Introna and Hayes 2011, 108). Indeed, the problem with such mutual imbrication is that algorithms cannot be fully 'revealed,' but only unpacked to a certain extent. What is more, they always find themselves temporally entrenched, so to speak. They come to life with their own rhythm, or, to use Shintaro Miyazaki's description in this volume, "they need unfolding, and thus they embody time" (p. 129). (Seyfert & Roberge 2016, 2)

Algorithms are implicitly or explicitly designed within the framework of social customs. They are embedded in cultures from scratch. According to the phenomenologist Lucas Introna, creators and users are "impressed" by algorithms (Introna 2016). The "impressionable subject," however, is not the modern subject detached from a so-called outside world, but a plurality of selves sharing a common world that is algorithmically intertwined. What is ethically at stake when algorithms become part of human mores? What is the nature of this entanglement between human mores and algorithms? To what extent can it be said that algorithms are, in fact, cultural? Who is responsible for the decisions taken by algorithms? To what extent is this anthropomorphic view of algorithms legitimate as a way of understanding what algorithms are? These are some of the foundational questions for an ethics of algorithms, a field still in an incipient state (Mittelstadt et al. 2016). This paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis (Capurro 2017) opens the possibility of reifying ourselves algorithmically. The main ethical challenge of the unfolding digital age consists in unveiling this ethical difference, particularly when addressing the nature of algorithms and their ethical and legal regulation.



In The Human Condition, Hannah Arendt writes:

In acting and speaking, men show who they are, reveal actively their unique personal identities and thus make their appearance in the human world, while their physical identities appear without any activity of their own in the unique shape of the body and sound of the voice. This disclosure of "who" in contradistinction to "what" somebody is―his qualities, gifts, talents, and shortcomings, which he may display or hide―is implicit in everything somebody says and does. It can be hidden only in complete silence and perfect passivity, but its disclosure can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this "who" in the same manner he has and can dispose of his qualities. On the contrary, it is more than likely that the "who," which appears so clearly and unmistakably to others, remains hidden from the person himself, like the daimōn in Greek religion which accompanies each man throughout his life, always looking over his shoulder from behind and thus visible only to those he encounters. (Arendt 1998, 179-180)

Human whoness is nothing permanent and substantial. It is not the immortal soul, not the Cartesian res cogitans, and not noumenal Kantian personhood. It happens as an encounter. We conceal and reveal who we are through mutually acknowledging or disacknowledging each other on the basis of shared customs, rules, values and practices, i.e., culturally. This concept of whoness (Capurro et al. 2013) echoes the Latin concept of persona, the mask or role of theatre players. It also echoes the non-substantial view of the self in Eastern traditions (Elberfeld 2017, 274-327) and, on a different account, David Hume's concept of "personal identity." He writes:

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred. I never can catch myself at any time without a perception, and never can observe anything but the perception. [...] The mind is a kind of theatre, where several perceptions successively make their appearance [...] (Hume 1962, 259)

The reification of our "qualities, gifts, talents and shortcomings" in digital media is deeply ambiguous. It suggests that it conveys the truth about who we are, while in fact it (re-)presents digital profiles of ourselves. An adequate medium for unveiling this ambiguity is, precisely, theatre.

Shakespeare's Hamlet, Prince of Denmark is a paramount example of a plot that does not intend to offer a solution to the problem of life but to unveil it by questioning human identity. Bernardo, an officer, opens the play by asking: "Who's there?" The question is repeated by Horatio, a friend of Hamlet. The Ghost enters. Bernardo says: "In the same figure, like the king that's dead." Horatio, desperately, says: "Stay! speak, speak! I charge thee, speak!" The Ghost exits. Marcellus, another officer, states laconically: "'Tis gone, and will not answer." (Shakespeare 2010, Hamlet I, 1, 2233-2235). "[M]en's minds are wild," states Horatio at the end of the play. "Music and the rites of war" should "speak loudly" for the dead Hamlet, who is brought to the stage "like a soldier." (Shakespeare 2010, Hamlet, V, 2, 2332) Who is Hamlet? "What is behind a name?" asks Thomas Ostermeier, in whose staging of Hamlet the prince himself is not present (Ostermeier 2017, 96). "The whole world plays the player" ("Die ganze Welt spielt den Schauspieler") is Ostermeier's German translation of "All the world's a stage" in Shakespeare's As You Like It (Shakespeare 2010, As You Like It, II, 7, 626) (Ostermeier 2017, 98). "To act means to play" ("Handeln heißt spielen"), writes Ostermeier (ibid.). Hesitation, or delay in answering, is proper to good human interplay: the player is exposed to "the whole world," i.e., to situations that she cannot foresee or master in their entirety, and needs the possibility of taking her time to hesitate before making a decision. According to Ostermeier, many of the catastrophes of the last century "were due to a lack of time for hesitation that would have given place for thinking over and for reflection" (Ostermeier 2017, 99, transl. RC).

Algorithms know nothing about hesitation. In fact, they know nothing at all and they do not learn. They are heteronomous. They are not played by the world, but by human designers and users. The question about who we are is a question about the being of the who. Asking it means avoiding the confusion of the role as which we play with the belief that this possibility is, in fact, the only and true one, thereby making a fixation out of an interpretation. When it comes to human beings, their being is a matter of interpreting who one is as a player among other players in the drama of life. The Australian phenomenologist Michael Eldred writes:

Who one is is always a matter of having adopted certain masks of identity reflected from the world as offers of who one could be in the world. Each human being is an origin of his or her own self-movement and has an effect on the surroundings, changing them this way or that, intentionally or unintentionally. [...] The core mask of identity borne by a who (Gr. τίς, L. quis) is one's own proper name, around which other masks cluster. (Eldred 2013, 22-23)  

An ethics of algorithms deals with drawing this difference between who and what we are, theoretically and practically, by resisting the tendency to confuse or even identify ourselves (our selves) with the masks that we give to ourselves or that others give to us. This confusion comes to a head when we believe we can attribute moral responsibility to algorithms, as if they were a kind of who. Hamlet, Prince of Denmark represents on stage this interplay of masking and unmasking ourselves. It is a key theatre play when it comes to unmasking the ethos of a society driven by algorithms. Algorithms implement digital reifications of who we are and of the roles we play in the drama of life. An ethics of algorithms faces the question of the extent to which, and under what rules, we (who?) want algorithms to play a role in the human interplay on the stage that is the world.




We build our individual and social ways of being (ethos) through what Hannah Arendt calls "the 'web' of human relationships" (Arendt 1998, 183). Human interplay is risky because human agents face the contingencies of their past, present and future actions and interpretations, and the risks of the ongoing power play with others. This distinguishes human interplay from the interaction between non-human actors. Michael Eldred puts the difference between interplay and interaction in dialogue with Hannah Arendt as follows:

The realm or dimension she is addressing, of ‘people... acting and speaking together’ (27:198) through which they show to each other who they are and perhaps come to ‘full appearance  [in] the shining brightness we once called glory’ (24: 180), is not that of action and reaction, no matter (to employ Arendt's own words) how surprising, unexpected, unpredictable, boundless social interaction may be, but of interplay. It is the play that has to be understood, not the action, and it is no accident that play is also that which takes place on a stage, for she understands the dimension of ‘acting and speaking’ (27:199), revealing and disclosing their selves as who they are. On the other hand, interplay takes place also in private: in the interplay of love as a groundlessly grounding way to be who with another, where speaking easily becomes hollow. (Eldred 2013, 83)

The implicit and explicit moral and legal norms and values of human interplay can today be reified in the digital medium through algorithms that shine back onto the players whenever personal digital data of whatever kind are at stake. This shining-back on the social players can be a means of promotion or destruction, not only of what people produce within and outside the digital network, but also of their own interpretation of who they are or want to be. Between these two poles, promotion and destruction, there are many possibilities that should be carefully analyzed and evaluated, given the ambiguities inherent in this intertwining of the interplay of freedoms with its digital reification and its various masking and unmasking options and procedures. An example of this ambiguity is the use of algorithms for pre-crime analysis aiming at unmasking potential criminals. From the perspective of algorithmic search we are nothing but a bunch of data. Algorithms can map and track our digital identities, but nobody can guarantee that such a public persona matches me and not someone else. Everyone is under general suspicion, and everything we do on the internet, on any kind of device connected to it, leaves a digital footprint that might be used or misused for or against us and others in both the digital and the material world. The result is a tension between two moods of being-in-the-world, namely trust and anxiety (Capurro 2005).

Algorithms might strengthen or weaken our "symbolic immune systems," such as moral and legal norms and values (Sloterdijk 2009). As in the case of biological immune systems, we must pay attention to the changing environment not only by using algorithms but by observing them, employing them, paradoxically, for this very purpose (Algorithm Watch 2017). To be digitally observed, or not to be, that is the question. Or, as John Lanchester puts it: "You Are the Product" (Lanchester 2017). Protecting ourselves from algorithms means resisting being identified by algorithmic observation everywhere and all the time, using and being (ab-)used, for instance, through mobile phones. To resist means learning to reveal and conceal ourselves through a kind of guerrilla tactics that Brunton and Nissenbaum call "obfuscation" (Brunton & Nissenbaum 2015).

The ethical and legal challenge is about explicitly enculturating algorithms, paying attention to the contexts in which, for what, by whom, and for whom they are created and used. Helen Nissenbaum writes:

Contexts are structured social settings characterized by canonical activities, roles, relationships, power structures, norms (or rules), and internal values (goals, ends, purposes). Contexts are ‘essentially rooted in specific times and places’ that reflect the norms and values of a given society. (Nissenbaum 2010, 132-133)

Algorithms with their "five important features: finiteness, definiteness, input, output, effectiveness" (Donald Knuth) are embedded from scratch in contexts, i.e., in social norms and values. Norms and values arise in the three-dimensional temporal in-between as which the human interplay of interests and traditions takes place. Unveiling the temporality shaped by algorithms is a key task for a future phenomenology of algorithms. Understanding algorithms as cultural practices means critically reflecting on the assemblages of institutions, values, and norms to which they are explicitly or implicitly related (Stalder 2016). Algorithms are reified social practices whose norms and regulations must be hermeneutically questioned and reconsidered in view of the operations and intentions of their users and producers (Dobusch 2013). Algorithmic decision-making (ADM) is not neutral just because it is logical and executed by a machine.

The cultural framework within which algorithms are ethically and legally embedded is not a permanent and unquestionable basis, at least in democratic systems. The weakness of such systems becomes problematic when considering, for instance, the presumed Russian influence on the U.S. presidential elections via Facebook, Google and Twitter. In a centralized one-party system like China, in search of a Confucius-based harmonious society, algorithms are a powerful instrument for political surveillance. The Confucian tradition of ruling society can be counterbalanced by the Taoist tradition, according to which societal processes are embedded in a larger natural framework. Taoist thinking looks for regulation in the sense of, for instance, regulating the current of a river. Such flow regulation is based on the maxim 'Don't block!', a translation of the Taoist concept of wu wei or non-action (Jullien 2005). Wu wei means not acting against the laws of nature, as well as paying attention to changes in social settings. Blockages of different kinds, such as information overload, can arise from a lack of legal rules but also from certain ways of ruling and regulating information flows (Capurro 2010). These questions are particularly relevant in the present debate over autonomous cars.



On December 8, 1926, The Milwaukee Sentinel announced: "'Phantom Auto' will tour city":

A ‘phantom motor car’ will haunt the streets of Milwaukee today. Driverless, it will start its own motor, throw in its clutch, twist its steering wheel, toot its horn, and it may even ‘sass’ the policeman at the corner. The ‘master mind’ that will guide the machine as it prowls in and out of the busy traffic will be a radio set in a car behind. Commanding waves sent from the second machine will be caught by a receiving set in the ‘ghost car’. The tour, conducted by the Achen Motor company, will start at 11.30 a.m. from the company's rooms at Oneida and Jackson streets [...] (Quote apud Capurro 2017, 115)

Thirty years later, the US Central Power and Light Company foresaw the future of driverless cars this way:

ELECTRICITY MAY BE THE DRIVER. One day a car may speed along an electric super-highway, its speed and steering automatically controlled by electronic devices embedded in the road. Highways will be made safe – by electricity! No traffic jams ... no collisions ... no driver fatigue. (The Victoria Advocate 1957, quote apud Capurro 2017, 116)

Today it seems, on the one hand, as if in the near future, say, in a decade or so, driverless cars controlled by algorithms will become an obvious option or even, as some experts think (AD 2025), the most successful paradigm for global and local mobility. Anxiety about whether or not to trust algorithms as car drivers might diminish or even disappear according to the "familiarity principle" (Leonhardt 2017, Capurro 2005). But, on the other hand, nobody can guarantee that algorithms can deal with the complexity and unforeseeability of mobility in situations in which pedestrians young and old, cars driven by humans, bikes, dogs, etc. come into play, following (or not) implicit or explicit rules and laws that vary according to cultural traditions, individual preferences, and ad hoc decisions. What is supposed to reduce complexity and diminish the lethal consequences of today's mobility systems by relying on algorithms, sensors of all kinds in cars as well as in roads, GPS control, etc., might become a nightmare if, for instance, hackers misuse the system for terrorist attacks (Capurro 2017b).

That said, the present ethical and legal debate over autonomous cars should be addressed as part of the broader issue of the digitalization of society in general and of mobility in particular. The ethics of algorithms with regard to autonomous cars has so far dealt mainly with questions of accountability, responsibility and so-called distributed morality, where moral responsibility might be applied analogically to artificial agents (Mittelstadt et al. 2016, 10-12, Floridi and Sanders 2004). Autonomy as a technical concept concerns the different levels at which cars might be more or less autonomous with regard to the intervention of a driver inside or outside the car, or of a whole surveillance and digital tracking system. In this case, mobility could be designed exclusively for autonomous cars in order to avoid the so-called ethical dilemmas that arise when the moral and legal rules embedded in algorithms become a matter of autonomous interpretation in unforeseeable situations. This concept of autonomy contrasts with the philosophical concept qualifying human beings, whose actions have their origin in themselves. In their critique of computer systems as moral agents, Deborah Johnson and Keith Miller write:

Obviously, there are levels of abstraction in which computer behaviour appears autonomous, but the appropriate use of the term ‘autonomous’ at one level of abstraction does not mean that computer systems are, therefore, ‘autonomous’ in some broad and general sense. We should not allow the existence of a particular level of abstraction to determine the outcome of the broader debate about the moral agency of computer systems. (Johnson and Miller 2008,  132)

A comparative theory of agents must address, historically and systematically, different concepts of agents and autonomy in order to avoid misleading analogies and equivocal uses of this and other concepts, which originate when the ethical difference between who and what is no longer perceived as a difference (Capurro 2015). What kind of mobility makes sense for a society? What are the options among different kinds of assemblages of means of mobility? What is the trade-off of such assemblages with regard to the environment? How can automated and autonomous driving be embedded in different societal customs and needs, geographic environments, etc.? In other words, enculturating algorithms is a key issue with regard to autonomous driving. With the invention of the car, we defined ourselves as car drivers, following the long tradition of reflection on the tasks and qualities of steering ships (kybernetes) or the six-thousand-year tradition of learning how to handle horses (Raulff 2015). Different kinds of practices of moving or being moved in the world shine back on ourselves. This is also the case with autonomous driving, when we trust algorithms to steer the movement of a car in view of goals given by ourselves. What is a car in the 21st century? What kind of recasting of the relation between man and world takes place with the invention of autonomous cars? How will this invention be adopted and adapted by different cultures? What new forms of social inclusion and exclusion will this new form of mobility bring to humans in different societies and environments?




Enculturating algorithms is a broad interdisciplinary and intercultural field. In the introduction to their book Understanding Computers and Cognition: A New Foundation for Design, published in 1986, Terry Winograd and Fernando Flores write: "[...] in designing tools we are designing ways of being" (Winograd and Flores 1986, xi). Interpreting algorithms as ways of being means taking a critical stance not only with regard to calculating thinking in general and algorithms in particular, but also with regard to our belief in them, which becomes exacerbated in the 21st century. When such belief becomes predominant, the ethical difference might be perceived either as an anthropocentric ideology or simply as a myth arising from pre-scientific or anti-technological thinking. In The Science of Logic, Hegel writes:

Since calculation is so much of an external and therefore mechanical business, it has been possible to manufacture machines that perform arithmetical operations with complete accuracy. It is enough to know this fact alone about the nature of calculation to decide on the merit of the idea of making it the main instrument of the education of spirit/mind, of stretching spirit/mind on the rack in order to perfect it as a machine. (Hegel 2010, 181-182)

In the last two hundred years the concept of spirit/mind (Geist) has been the object of theoretical and practical critique in such a way that it has lost its meaning as dynamis, or capacity for changing the relationship between ourselves and the world. The Marxian demand to change the world instead of just interpreting it (Marx 1969) begs the question, since no change of the relationship between ourselves and the world is possible that is not based on a previous recasting of it. Geist, our historical mind, is our original and originating capacity for such recasting. Only if we acknowledge this capacity can we cope with the challenges of an age in which human mental creativity is ostensibly to be perfected as an algorithm. Were we to perfect it in this way, we would give up what enables us to originate different castings of ourselves and the world, including the present digital one. The power of calculation might instead become a source of liberation by enculturating algorithms rather than stretching human mental creativity on the rack of algorithmically controlled computers.



AD 2025. The Automated Driving Community.

Algorithm Watch (2017)

Arendt, Hannah (1998). The Human Condition. Chicago and London: The University of Chicago Press, 2nd Ed.

Beuth, Patrick (2017). Feinbild Algorithmus. In: DIE ZEIT, October 14.

Brunton, Finn, Nissenbaum, Helen (2015). Obfuscation. A user’s guide for privacy and protest. Cambridge, Mass.: The MIT Press.

Capurro, Rafael (2017). Homo Digitalis. Beiträge zur Ontologie, Anthropologie und Ethik der digitalen Technik. Heidelberg: Springer.

Capurro, Rafael (2017a). Ethical Issues of Humanoid-Human Interaction. In: Prahlad Vadakkepat, Ambarish Goswami, Jong-Hwan Kim (eds.): Handbook of Humanoids. Springer 2017.

Capurro, Rafael (2017b). Autonomous Zombies are not an option. In: AD 2025. The Automated Driving Community, June 28.

Capurro, Rafael (2015). Toward a Comparative Theory of Agents. In:  Mathias Gutmann, Michael Decker, Julia Knifka (Eds.): Evolutionary Robotics, Organic Computing and Adaptive Ambience. Vienna: LIT, 2015, 81-96.

Capurro, Rafael (2010). The Dao of the Information Society in China and the Task of Intercultural Information Ethics.

Capurro, Rafael (2005). Between Trust and Anxiety. On the Moods of Information Society. In: Richard Keeble (ed.): Communication Ethics Today. Leicester: Troubadour Publishing Ltd., 2005, 187-196.

Capurro, Rafael, Eldred, Michael and Nagel, Daniel (2013). Digital Whoness. Identity, Privacy and Freedom in the Cyberworld. Berlin: de Gruyter.

Dobusch, Leonhard (2013). Tag Archive: Algorithm Regulation #4: Algorithm as a Practice. January 14. In: Leonhard Dobusch, Philip Mader and Sigrid Quack: governance across borders. transnational fields and transversal themes. a blogbook.

Elberfeld, Rolf (2017). Philosophieren in einer globalisierten Welt. Wege zu einer transformativen Phänomenologie. Freiburg/München: Alber.

Eldred, Michael (2013). Phenomenology of whoness: identity, privacy, trust and freedom. In: Rafael Capurro, Michael Eldred & Daniel Nagel: Digital Whoness: Identity, Privacy and Freedom in the Cyberworld. Berlin: de Gruyter 2013, 19-59.

Floridi, Luciano and Sanders, Jeff W. (2004). On the Morality of Artificial Agents. Minds and Machines 2004, 14 (3), 349-379.

Hegel, Georg Wilhelm Friedrich (2010). The Science of Logic, transl. and ed. G. di Giovanni, Cambridge University Press.

Hume, David (1962). A Treatise of Human Nature. In: On Human Nature And the Understanding, ed. A. Flew. New York: Collier.

Introna, Lucas (2016). The algorithmic choreography of the impressionable subject. In: Robert Seyfert & Jonathan Roberge (eds.): Algorithmic Cultures: Essays on Meaning, Performance and New Technologies. London and New York: Routledge, 26-51.

Johnson, Deborah G. and Miller, Keith W. (2008). Un-making artificial moral agents.  Ethics and Information Technology 10, 123-133.

Jullien, François (2005). Nourrir sa vie. À l'écart du bonheur. Paris: Seuil.

Knuth, Donald (1968/69). The Art of Computer Programming. Reading, Mass.: Addison-Wesley.

Lanchester, John (2017). You Are the Product. In: London Review of Books, Vol. 39 No. 16, 3-10.

Leonhardt, David (2017). Driverless Cars Made me Nervous. Then I Tried One.  In: The New York Times International Weekly, October 22.

Liebert, Juliane (2017). "Das wird auch auf uns zukommen" Wenn Algorithmen dich zum Verbrecher stempeln - der Dokumentarfilmer Matthias Heeder über seinen Film "Pre-Crime". In: Süddeutsche Zeitung, Nr. 235, October 12, 12.

Lischka, Konrad and Klingel, Anita (2017). Wenn Maschinen Menschen bewerten. Internationale Fallbeispiele für Prozesse algorithmischer Entscheidungsfindung - Arbeitspapier. Gütersloh: Bertelsmann-Stiftung.

Marx, Karl (1969). Thesen über Feuerbach. In: Marx-Engels Werke, 3, Berlin: Dietz Verlag.

Mittelstadt, Brent Daniel, Allo, Patrick, Taddeo, Mariarosaria, Wachter, Sandra, and Floridi, Luciano (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, July–December 1–21.

Nissenbaum, Helen (2010). Privacy in Context. Technology, Policy, and the Integrity of Social Life. Stanford: Stanford University Press.

Ostermeier, Thomas (2017). Hamlet in der Mausefalle. In: Lettre International 118, 96-101.

Raulff, Ulrich (2015). Das letzte Jahrhundert der Pferde. Geschichte einer Trennung. München: Beck.

Seyfert, Robert and Roberge, Jonathan (eds.) (2016). Algorithmic Cultures: Essays on Meaning, Performance and New Technologies. London and New York: Routledge.

Shakespeare, William (2010). Hamlet, Prince of Denmark. In: Sämtliche Werke 2. Frankfurt am Main: Zweitausendeins, 2233-2332.

Sloterdijk, Peter (2009). Du musst dein Leben ändern. Über Anthropotechnik. Frankfurt am Main: Suhrkamp.

Stalder, Felix (2016). Kultur der Digitalität. Berlin: Suhrkamp Verlag.

Winograd, Terry, Flores, Fernando (1986). Understanding Computers and Cognition. A New Foundation for Design. Norwood, NJ: Ablex.

Ziegenbalg, Jochen (1996). Algorithmen: von Hammurapi bis Gödel. Heidelberg, Berlin, Oxford: Spektrum Akademischer Verlag.

Last update: March 2, 2018


Copyright © 2018 by Rafael Capurro, all rights reserved. This text may be used and shared in accordance with the fair-use provisions of U.S. and international copyright law, and it may be archived and redistributed in electronic form, provided that the author is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires the consent of the author.

