Venue: Budapest University of Technology and Economics, Building "Q", Magyar Tudósok körútja 2, 1117 Budapest, Hungary
Register to be in the audience with this link.
Workshop Program
Below this overview of the schedule, you will find all the peer-reviewed abstracts of the presentations. You can navigate to any of the abstracts by clicking on the title of the talk.
Thursday, 12 December, 2019.
| TIME SLOT | ROOM A - QA403 | ROOM B - QA407 |
|---|---|---|
| 8:45-9:15 | Registration | |
| 9:15-9:30 | Welcome | |
| 9:30-11:00 | Session 1 | |
| | Laura Corti. (Campus Bio-Medico University, Italy) What is a robot? From an ontological/functional perspective to a relational definition | |
| | Zsolt Ziegler. (Eötvös Loránd University, Hungary) The values of automatisation research and development | |
| | Alexandra Karakas. (Budapest University of Technology and Economics, Hungary) How to prevent malfunction in technical artefacts? Maintenance, function, and malfunction in technology | |
| 11:00-11:15 | Coffee Break | Coffee Break |
| 11:15-12:45 | Session 2A | Session 2B |
| | Daniel Bardos. (Budapest University of Technology and Economics, Hungary) Defining life and the technological challenges of searching for alternative microbial life | Auli Viidalepp. (University of Tartu, Estonia) A ‘work of art’: depicting artificial creatures in science fiction narratives |
| | Reto Gubelmann. (Universities of Zurich and St.Gallen, Switzerland) The Linguistic Capacities of Neural Networks | Eszter Héder-Nádasi. (Budapest University of Technology and Economics, Hungary) Life saving technologies and places in medical series |
| | Jan Schmutzler. (Albert-Ludwig-Universität, Freiburg) The influence of conscious control on the feeling of autonomy in patients with AI-controlled brain implants | Jesse de Pagter. (TU Wien, Austria) Trust in robots and their futures: Understanding the role of speculation and imagination |
| 12:45-14:00 | Lunch | |
| 14:00-16:00 | Session 3A | Session 3B |
| | Koji Tachibana. (Kumamoto University, Japan) AI and the cultivation of human moral emotion | Akos Gyarmathy. (Budapest University of Technology and Economics, Hungary) The user’s expected value: risk, benefit and rational choice concerning digital traces and privacy |
| | Virág Véber. (Eötvös Loránd University, Hungary) On Biased Self-driving Cars | Mihail-Valentin Cernea. (Bucharest University of Economic Studies, Romania) Moral Pluralism and Data-Driven Morality in the Big Data Industry |
| | Robin Kopecký and Michaela Košová. (Charles University in Prague, Czechia) How virtue signalling makes us better: Moral preferences with respect to autonomous vehicle type choices | István Danka (Budapest University of Technology and Economics) and János Tanács (John von Neumann University) Loss of knowledge, unintelligibility of technological rules and violation of regulation in the Chernobyl nuclear plant disaster |
| | Mihály Héder. (Budapest University of Technology and Economics, Hungary) On the applicability of Ethics Guidelines for Artificial Intelligence | Aleksandra Kazakova. (Bauman Moscow State Technical University, Russia) Reflexivity in Curriculum: Risks, Safety and Ethics in Engineering Education |
| 16:00-16:15 | Coffee Break | |
| 16:15-17:45 | Session 4 | |
| | Darryl Cressman. (Maastricht University, Netherlands) Re-Considering Critical Theory after the Empirical Turn | |
| | Phil Mullins. (Missouri Western State University/Polanyi Society, United States) Modern Social Imaginaries and AI: Polanyian Notes | |
| | Hermann Diebel-Fischer. (Universitaet Rostock, Germany) Robots, moral agency, and blurred boundaries | |
| 17:45-18:00 | Break | |
| 18:00-19:15 | Keynote | |
| | Mark Coeckelbergh (University of Vienna, Austria) Artificial Intelligence: Ethical issues and policy directions | |
| 21:00-23:00 | Conference Dinner | |
Friday, 13 December, 2019.
| TIME SLOT | ROOM A - QA403 | ROOM B - QA407 |
|---|---|---|
| 9:30-11:00 | Session 5 | |
| | Hesam Hosseinpour. (University of Tartu, Estonia) Disobedience: Threat or promise | (9:30-10:30, QA405) Panel discussion on Gábor István Bíró's book The Economic Thought of Michael Polanyi (Routledge Studies in the History of Economics, Book 222) |
| | Eugenia Stamboliev. (University of Plymouth, United Kingdom) From Moral Care Robots to Ethical Tracking Devices | |
| | Aron Dombrovszki. (Eötvös Loránd University, Hungary) The Double Standard Between Autonomous Weapons Systems and other AI Technologies | |
| 11:00-11:15 | Coffee Break | |
| 11:15-12:45 | Session 6A | Session 6B |
| | Radu Uszkai. (Bucharest University of Economic Studies, Romania) A Theory of (Sexual) Justice: the revised roboethician's edition | Agostino Cera. (Università della Basilicata, Italy) Beyond the Empirical Turn (Elements for an Ontology of Engineering) |
| | Chang-Yun Ku. (Academia Sinica, Taiwan) When AIs Say Yes and I Say No — On the Tension between AI’s Decision and Human’s Decision from Epistemological Perspective | Daniel Paksi. (Budapest University of Technology and Economics, Hungary) The Problem of the Living Machine according to Samuel Alexander’s Emergentism |
| | Temitayo Fagbola (Federal University Oye-Ekiti, Nigeria) and Surendra Thakur (Durban University of Technology, South Africa) Towards the Development of Artificial Intelligence-based Systems: Human-Centered Functional Requirements and Open Problems | Jacopo Giansanto Bodini. (Université Jean Moulin Lyon 3, France) Is there an ideology of new technologies? The immediation of experience and the information of desire |
| 12:45-14:00 | Lunch | |
| 14:00-16:00 | Session 7A | Session 7B |
| | Ricardo Rohm. (Federal University of Rio de Janeiro, Brazil) The Architecture of Misinformation and Democracies in South America | Paul Grünke. (Karlsruhe Institute of Technology, Germany) Opacity in Machine Learning |
| | Cristina Voinea. (Bucharest University of Economic Studies, Romania) The emperor’s new clothes: private governance of online speech | Dániel Gergő Pintér and Péter Lajos Ihász. (SZTAKI Institute for Computer Science and Control, Hungary) The Use of Natural Language Processing AI techniques in corporate communications |
| | Karoline Reinhardt. (IZEW, Eberhard Karls Universität Tübingen, Germany) A diversity-sensitive social platform: Ethical Questions from the Project "WeNet - The Internet of Us" | Krisztina Szabó. (Budapest University of Technology and Economics, Hungary) “Not Exactly Reading” – The Nature of Reading in the Era of Screen |
| | Jernej Kaluža (Faculty of Social Sciences, Ljubljana, Slovenia) Ambiguities with The Algorithms of Hate | |
| 16:00-16:15 | Coffee Break | |
| 16:15-17:45 | Session 8 | |
| | Jurgis Karpus (LMU-Munich, Germany) The future of human-AI cooperation | |
| | Lorenzo De Stefano. (University of Naples Federico II, Italy) From Pebbles to Hyperobjects. Some considerations on the social foundation of technology | |
| | Anda-Maria Zahiu. (University of Bucharest, Romania) Autonomous decision-making: A potential ethical problem for immersive VR technologies | |
Abstracts
What is a robot? From an ontological/functional perspective to a relational definition
Laura Corti. (Campus Bio-Medico University, Italy)
A robot is a material entity (or a machine) designed to do a particular job; the purpose of these machines is fulfilled autonomously after an analytic phase of detecting input and/or the environment. A robot thus has a body to act in the physical world through sensors, the basic units for receiving information from the environment, and actuators/effectors that are able to respond to sensory inputs and achieve goals. We can consider the previous sentences a technical description of robotics, useful from an engineering perspective but not strong enough for a philosophical analysis. Nor is a purely functional description, as proposed by some attorneys, able to solve the problem. We propose to address the problem from a relational point of view.
This paper aims to show the necessity of modelling, from a theoretical point of view, the trails of robotics, discerned from the reigning dualism between res cogitans and res extensa, or consciousness versus objects. To achieve this goal, I will consider two different kinds of element: the formulation of the "quasi-other" category, and how engineers have spoken about robots (or life-like agents) as equipped with consciousness, embodiment, free will and emotions. These words are important because they reveal a tenacious will to connect the human way of living closely with robots; indeed, it is not merely a lack of terms that makes us commonly speak about robots as if they were humans, even though they come from the material world. I will attempt to assess the state of the art through the papers and definitions of A.I. practitioners and connect them with the philosophical perspective, to foster an approach in which robots are evaluated as objects, yet closer to humans.
From Pebbles to Hyperobjects. Some considerations on the social foundation of technology
Lorenzo De Stefano. (University of Naples Federico II, Italy)
In the era of the so-called “Anthropocene” (Crutzen, 2014), in which human technology has become the principium individuationis of an age of the world, defining what technology is has become a matter of the utmost importance for philosophy. This is a complex issue because defining technology calls into question at once the nature of the “technical gesture” and the nature of the product of technology: the object, or better, the medium.
Therefore, the question “What is technology?” always concerns the nature of the subject who is capable of technology and the ontological structure of mediation. Anthropologists such as Leroi-Gourhan (1964) and Gehlen (1940) pointed out that technology and language, enhanced by imagination, have been the specific traits of the animal Homo since its very beginning and, consequently, that our evolution has always been cultural. Culture, according to philosophers such as Hegel or Heidegger, is the result of all the objectifications of mankind – what Hegel calls Geist – or, similarly, the specific project of the being that is solely open to the meaning of Being (Heidegger 1977) and is forced to construct its own world (Heidegger 1983). Both philosophy and anthropology have conceptualized technology, mediation and culture as human-related. But is it really so?
In my proposal, following the studies of Boesch (2012) on chimpanzees, which demonstrate that Homo is not the only primate with a culture, and Tomasello (1999), who argues that the origin of human cognition is always cultural, I try to answer the question of technology by integrating the philosophical perspectives mentioned above with contemporary anthropological and zoological research. The main idea I try to demonstrate is that culture, in its social dimension, is the result of the mediation between life and the world, and not “just” an ethological trait of Homo faber. I then underline the continuity between two different objects which can be found at the beginning and at the end of our evolution, the pebble and the Hyperobject (Morton, 2013), in order to demonstrate that the ontology of technology and the ontology of the technological object find their common foundation in the social nature of culture.
References
Crutzen, Schwägerl: The Anthropocene: The Human Era and How It Shapes Our Planet, Rieman, 2014
Boesch, Wild Cultures: A Comparison Between Chimpanzee and Human Culture, Cambridge University Press, 2012
Gehlen, Der Mensch. Seine Natur und seine Stellung in der Welt, Athenäum, Bonn
Heidegger, Sein und Zeit, in GA Bd. II, Klostermann, Frankfurt a.M., 1977
Heidegger, Die Grundbegriffe der Metaphysik. Welt – Endlichkeit – Einsamkeit, in GA Bd. XXIX-XXX, Klostermann, 1983
Leroi-Gourhan, Le Geste et la Parole. I, Technique et Langage, Paris, A. Michel (coll. Sciences d'aujourd'hui), 1964
Morton, Hyperobjects, University of Minnesota Press, 2013
Tomasello, The Cultural Origins of Human Cognition, Harvard University Press, 1999
How to prevent malfunction in technical artefacts? Maintenance, function, and malfunction in technology
Alexandra Karakas. (Budapest University of Technology and Economics, Hungary)
Technological devices are described in terms of their particular functions. Thus, these objects all have particular descriptions of what they do and how they do it. However, in many cases technical artefacts cannot achieve the purpose they were designed for, and malfunction, failure, and gradual degradation appear as time goes by. In order to prevent malfunction and to keep artefacts functional, many different maintenance strategies exist, especially within technology. In this paper, I examine the philosophical implications of various maintenance strategies to highlight the connections between function, malfunction, and maintenance.
Broadly speaking, there are four types of maintenance strategies: reactive, preventive, predictive, and proactive maintenance. These refer, on the one hand, to different stages of the lifecycle of technological artefacts and, on the other hand, to diverse methods of preventing malfunction and possible degradation. Maintenance could mean restoration of the artefact to its original state, repair, or even improvement of an object. However, there are many types of technological objects that are almost impossible to repair or maintain in their original functionality, since they are designed to resist repair.
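To make the distinction between the four strategies concrete, here is a minimal illustrative sketch (my own toy decision rule, not part of the paper); the service interval and forecast threshold are arbitrary assumptions:

```python
from enum import Enum, auto

class Strategy(Enum):
    REACTIVE = auto()     # repair only after the artefact has malfunctioned
    PREVENTIVE = auto()   # service at fixed intervals, regardless of condition
    PREDICTIVE = auto()   # service when monitored condition forecasts failure
    PROACTIVE = auto()    # redesign or remove the root causes of degradation

def maintenance_due(strategy: Strategy, failed: bool, hours_since_service: float,
                    predicted_hours_to_failure: float) -> bool:
    """Toy decision rule showing when each strategy triggers maintenance."""
    if strategy is Strategy.REACTIVE:
        return failed
    if strategy is Strategy.PREVENTIVE:
        return hours_since_service >= 500          # fixed service interval (illustrative)
    if strategy is Strategy.PREDICTIVE:
        return predicted_hours_to_failure <= 50    # act on a condition-based forecast
    return False  # proactive maintenance acts at design time, not via a runtime trigger

print(maintenance_due(Strategy.PREDICTIVE, failed=False,
                      hours_since_service=300, predicted_hours_to_failure=20))  # True
```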
What reflects particular engineering and design choices in a technological object? Who is responsible for malfunction and maintenance? What happens when components of an object one by one get replaced? This paper examines the practical and philosophical implications of different maintenance strategies both on simple objects, and on greater technological systems.
Defining life and the technological challenges of searching for alternative microbial life
Daniel Bardos. (Budapest University of Technology and Economics, Hungary)
Despite its apparent morphological diversity, all currently known terrestrial life is very similar in its fundamental molecular and biochemical architecture. This is not surprising, because there is convincing molecular evidence that life as we know it shares a Last Universal Common Ancestor. However, it is possible that there is an alternative form (or maybe forms) of microbial life, a ‘shadow biosphere’, which constitutes another form of life with a different molecular structure and biochemistry. The epistemic situation of microbiologists searching for the signatures of a shadow biosphere is very similar to that of astrobiologists seeking evidence of extraterrestrial life on other planets. Based on the single example of terrestrial ‘standard life’, how can we know we have found a different, ‘weird’ form of life if we do not know exactly what we are looking for?
We can characterize both astrobiology and microbiology as technoscientific disciplines, which means technology and theory are indissolubly entangled. To detect, visualize and culture microbial life, either on Earth or on another planet, various technological devices are required. The design of the tools and of in situ life-detection experiments to examine alternative microbial activity depends on working definitions that reflect our understanding of standard life. These assumptions also determine what data will count as evidence for life. Consequently, it is possible that our devices and techniques are systematically incapable of detecting ‘weird life’. Carol Cleland argues that the best strategy for handling this problem in the search for a shadow biosphere or extraterrestrial life is to abandon any definition of life and general theory of living systems. Instead, we should use such definitions only as tentative criteria for life and focus on anomalies: signatures resembling standard life but also differing from it in important ways.
In my presentation I will examine the implications of Cleland’s idea about this exploratory research strategy from a technological standpoint. The decisions not just of the scientists but of other research workers, such as engineers or technicians, determine what counts as data, what is a signature of life, and what is the mere product of abiotic processes. What role can the tacit knowledge embedded in the design of technologies play, and what kind of challenges does this pose for the search for alternative microbial life? I use Adrian Currie’s notion of investigative scaffolding to characterize the research strategy proposed by Cleland. I argue that progressive research could be possible in a piecemeal fashion, even in the absence of a definition or a general theory of life.
A ‘work of art’: depicting artificial creatures in science fiction narratives
Auli Viidalepp. (University of Tartu, Estonia)
In a typical pop culture narrative starring an intelligent artificial creature, the robot is often depicted as a ‘perfect human’ (or, in Lotman’s terms, a perfect work of art), while the ‘real’ humans are seen as weak, helpless, and at the mercy of robotic creatures that manipulate the situation at will. The ultimate fate of mankind is left to the reckoning between the “benevolent” and “evil” AI, thus stripping humans of their agency. Even the secretly powerful technologists (or scientists, in Haynes’ terms) are shown to lose control of their creations – a kind of fabula that in AI-related public discourse can also be recognised as technification in the terms of Hansen & Nissenbaum (2009).
At the same time, animism is nothing new. In human myths, as well as philosophies, agency (or the lack of it) is attributed to non-human entities in nature, such as animals, but also to other natural objects such as trees and stones. Agency is also projected to creatures of human imagination such as gods and spirits.
Vincent Mosco (2004) points to myths as valuable tools for understanding complex things such as technology. Roslynn Haynes (2014) discusses literary narratives featuring stereotypical scientists whose experiments get out of control and bring humanity to peril. Reframed in the terms of the Tartu-Moscow School of Semiotics, an artificial character can be described as a perfect work of art or an ideal futuristic machine – a device created by man – and culture as a collective mechanism "capable of performing intellectual operations" (Lotman 2003: 113-115).
In the presentation, I will briefly describe a few examples of such popular narratives in the cinema, observe how they can be related to the frameworks of Mosco and Haynes, and see what the concept of artificial intelligence inspired by Tartu-Moscow School could bring to the discussion.
References
Haynes, Roslynn 2003. From alchemy to artificial intelligence: Stereotypes of the scientist in Western literature. Public Understanding of Science 12(3): 243-253.
Hansen, Lene, and Helen Nissenbaum 2009. Digital disaster, cyber security, and the Copenhagen School. International studies quarterly 53(4): 1155-1175.
Lotman, Juri 2001. Inimesed ja märgid. Vikerkaar 1: 85–91.
Lotman, Juri 2003. Что дает семиотический подход? Воспитание души. Санкт-Петербург: Искусство-СПБ, 113–115.
Mosco, Vincent 2005. The Digital Sublime: Myth, Power, and Cyberspace. MIT Press.
Torop, Peeter 2010. Tüpoloogia ja artoonika. Lotman, J. Kultuuritüpoloogiast. Tartu: Tartu Ülikooli Kirjastus, 9–21.
The Linguistic Capacities of Neural Networks
Reto Gubelmann. (Universities of Zurich and St.Gallen, Switzerland)
In 2018, Hassan et al. (2018) sent shockwaves through the natural language processing (NLP) community: they argue that their translation system has reached human parity. According to their understanding of human parity, a translation system has reached this level if human evaluators are unable to systematically distinguish the system’s translations from professional human translations. Läubli, Sennrich, and Volk (2018) have independently verified this claim. It is generally agreed that this revolutionary progress is due to a new method in machine translation, so-called “neural machine translation” (NMT).
In my paper, using concepts and distinctions from the field of animal mentality, I provide a novel perspective on these achievements of NMT-systems. The goal is to assess the hypothesis that these NMT-systems mark a clear step towards Strong AI, as they have to be credited with linguistic and cognitive abilities equivalent to those of higher non-human mammals.
The main objection to the very idea that computers have linguistic understanding in any serious sense of the term, let alone Strong AI, is almost forty years old: Searle (1980) presents the so-called “Chinese Room Thought Experiment” (CRTE). A natural moral of this thought experiment is that computers, while perhaps displaying perfect linguistic behavior, are just correlating strings with other strings, without having any understanding of what the strings mean, let alone holding any promise for ever reaching Strong AI.
My paper investigates whether the aforementioned hypothesis can withstand the dialectic force of the CRTE by means of testing the following more specific, verifiable, and hence falsifiable claim: In contrast to traditional statistical machine translation methods (which merely correlate n-grams in the source-language with n-grams in the target language), NMT-systems construct and employ semantic representations (so-called word embeddings) and, based on them, proceed in a way that is best described as deliberately discriminating. Building on an established position in the field of animal mentality (see, in particular, Glock, 2010, p. 29), it could then be argued that this qualifies as judging. This would then clearly outmatch the kind of correlation operation envisaged in the CRTE.
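As a minimal illustration of the contrast drawn here (my own sketch, not the author's experiment): a purely string-based view treats "dog" and "puppy" as unrelated symbols, whereas word embeddings place semantically related words close together in a vector space. The three-dimensional vectors below are invented toy values; real systems learn vectors with hundreds of dimensions from data.

```python
import numpy as np

# Toy "embeddings"; values are invented for illustration only.
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.3]),
    "puppy": np.array([0.8, 0.2, 0.35]),
    "bank":  np.array([0.1, 0.9, 0.6]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic similarity as the angle between embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ngram_overlap(w1: str, w2: str, n: int = 3) -> float:
    """Surface similarity: shared character n-grams, a purely string-based baseline."""
    g1 = {w1[i:i + n] for i in range(len(w1) - n + 1)}
    g2 = {w2[i:i + n] for i in range(len(w2) - n + 1)}
    return len(g1 & g2) / max(len(g1 | g2), 1)

print(ngram_overlap("dog", "puppy"))                   # 0.0 -> unrelated as strings
print(cosine(embeddings["dog"], embeddings["puppy"]))  # high -> close in semantic space
print(cosine(embeddings["dog"], embeddings["bank"]))   # lower
```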
References
Glock, Hans-Johann (2010). “Can animals judge?” In: Dialectica 64.1, pp. 11–33.
Hassan, Hany et al. (2018). “Achieving Human Parity on Automatic Chinese to English News Translation”. In: CoRR abs/1803.05567. arXiv: 1803.05567. url: http://arxiv.org/abs/1803.05567.
Läubli, Samuel, Rico Sennrich, and Martin Volk (2018). “Has machine translation achieved human parity? a case for document-level evaluation”. In: arXiv preprint arXiv:1808.07048.
Searle, John R (1980). “Minds, brains, and programs”. In: Behavioral and brain sciences 3.3, pp. 417–424.
Life saving technologies and places in medical series
Eszter Héder-Nádasi. (Budapest University of Technology and Economics, Hungary)
Television medical drama series have been popular since the 1950s, and the genre has attracted academic interest ever since – for instance, because of its power to enhance laypeople’s medical knowledge. As prior studies show, the representations of fictional doctors are capable of evoking positive and negative expectations towards real medical professionals.
The presentation builds on American medical dramas that premiered after 2010 – Chicago Med, Code Black, New Amsterdam, The Good Doctor, The Night Shift and The Resident, to name a few. However, milestones of the genre – ER and Grey’s Anatomy – are also discussed. The analysis focuses on the representation of doctors in these productions, with special attention to their involvement with existing and forthcoming technologies. As the narrative analysis of the content shows, these productions regularly present doctors who cross scientific and technological boundaries in powerful ways to save their patients and gain new knowledge. In many cases the legal, economic and ethical aspects of healthcare are positioned as disturbing boundaries to lifesaving. Hospitals – especially emergency rooms and operating theaters – are usually presented as places where magic happens on a regular basis.
Medical drama series present powerful innovations and innovators to a broad, heterogeneous audience. This makes it all the more important to have a better understanding of their content and of the potential consequences of their technology representation. Thus, theoretical approaches of mass communication – for instance, cultivation theory – will also be discussed in the presentation.
The influence of conscious control on the feeling of autonomy in patients with AI-controlled brain implants
Jan Schmutzler. (Albert-Ludwig-Universität, Freiburg)
Since the late 1980s, brain implants in the form of so-called deep brain stimulation devices have been experiencing a renaissance, after having been rejected as a dangerous technology in earlier decades. Recently, the first implants that go beyond pure pacemaker-like rhythmic stimulation have been developed. These closed-loop implants are able to analyze brain activity and to learn, via machine-learning algorithms, the specific patterns of each patient. With these implants it is possible to detect, for example, early stages of epileptic seizures and, through the progressive collection of brain data, to improve detection performance over time.
At the moment, two concepts for these implants are in development. One is merely advisory, asking for the patient’s permission before a possible intervention; the other is fully autonomous, so the patient may be unaware that the implant has just prevented a seizure. The discussion about which implant guarantees more autonomy is controversial. Some argue that more opportunities to decide always result in a feeling of more autonomy, while others point out that the flow-breaking advent of possibilities might also create a feeling of being overwhelmed. This is even more the case if the decisions are perceived as predetermined, since it seems unreasonable to deny the option of preventing a seizure.
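A minimal sketch of the two control concepts described above (hypothetical Python, not any real device's interface): the only difference is whether the detected intervention is gated by the patient's confirmation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    seizure_risk: float  # output of the implant's learned pattern detector, 0..1

def decide_intervention(det: Detection, advisory_mode: bool,
                        patient_confirms: bool, threshold: float = 0.8) -> bool:
    """Return True if the implant should stimulate now.

    advisory_mode=True  -> ask the patient and act only on confirmation.
    advisory_mode=False -> fully autonomous: act silently once the threshold is crossed.
    """
    if det.seizure_risk < threshold:
        return False
    if advisory_mode:
        return patient_confirms   # patient is aware of, and may veto, the intervention
    return True                   # patient may never notice that a seizure was prevented

# Same detection, different autonomy concepts.
d = Detection(seizure_risk=0.93)
print(decide_intervention(d, advisory_mode=True, patient_confirms=False))   # False
print(decide_intervention(d, advisory_mode=False, patient_confirms=False))  # True
```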
I will argue that the basic trust that a healthy person naturally develops in their unconscious thoughts and behavior could also be extended to these implanted devices. When we learn a skill, e.g. using a tool, we plan and reflect on every move very consciously. As soon as we feel confident about our new skill, we stop thinking about it when we use it.
Some people tend to develop greater and faster-growing trust in themselves, while others can have a pathological level of self-distrust, leading to an excess of conscious control (e.g. people with OCD). So the common conception of control as equivalent to freedom seems to be wrong, and a balanced account of unconscious and conscious behavior in regard to freedom seems more reasonable.
With respect to closed-loop brain implants, I will therefore suggest a new approach between the two concepts. This approach includes close psychological consultation with the patient in order to adjust the frequency of being asked for a decision to their personal character, especially their tendency to stay in control and, possibly more important, the degree of ownership and trust they have developed in respect of the implant. This is meant to imitate the process of learning and to foster a feeling of naturally merging with the new body part.
Trust in robots and their futures: Understanding the role of speculation and imagination
Jesse de Pagter. (TU Wien, Austria)
One of the main concerns with the present rise of new technologies is the question of how social trust in them can be ensured and improved. This project examines trust in emerging technologies, with an emphasis on robotics. It focuses in particular on the role of speculations and imaginations of the future in social trust in robots. In order to introduce the matter, a literature review on the issue of (social) trust in technology is first presented. This review focuses on the connection between the notion of trust in technology and the ascription of a normative and moral status to autonomous technologies. As such, it demonstrates how efforts in the philosophy of technology that aim to advance and improve ethical systems and design practices are crucial for building social trust in autonomous technologies such as robots.
Based on the literature review, the central issue of this project is defined: how does the character of robotics as an emerging technology complicate the way in which social trust in it can be understood and conceptualized? This entails an emphasis on the strong role of speculative and imaginative thinking both in the development of emerging technologies themselves and in the different conceptual-ontological notions of those technologies. The paper therefore develops a concept that explains technology development as searching the space of possibility (and even going beyond the expected), while also looking at literature that engages with narratives of technology as the driver of (human) progress. On this basis, the way in which social trust is affected by such speculative and imaginative visions of technologies is discussed.
Finally, the project proposes to include this speculative and imaginative element in the philosophical analysis of social trust in emerging technologies. In that context, a definition of those speculative and imaginative futures will be developed on the basis of the following three aspects: (a) (dis)trust in technological futures; (b) trust and speculative foresight (e.g. speculative ethics); and (c) social trust in imaginary/speculative artifacts. The aim is thus to develop a richer concept of trust in robotics (and potentially other emerging technologies), while explicitly not condemning the speculative thinking and imagination that surround those technologies. Rather, this kind of thinking is included in the phenomenology of trust in technology, as well as in the efforts to find new definitions of the ontological status of technological artifacts. In addition to the development of novel understandings of social trust in emerging technologies, this work can contribute to the future design, regulation and legislation of those technologies.
AI and the cultivation of human moral emotion
Koji Tachibana. (Kumamoto University, Japan)
A recent article on the possibility of AI-based moral enhancement argues that a soft means such as Socratic assistance is preferable to exhaustive means, because the former preserves our traditional values concerning human morality such as noninvasiveness, moral development, self-determination, and authenticity (Lara and Deckers 2019). Their argument and proposal seem persuasive in that the method will surely enhance the human faculty of logical thinking and reasoning on moral issues. However, human morality is also based on emotional states. They argue that their AI-based Socratic method "may indirectly" modify the emotional and motivational states of agents by the use of rational persuasion, a technique that goes back to ancient Greece. I would like to argue, however, that rational persuasion, which they call "the persuasive power of good arguments", "the persuasion of reason" or "the rational force of the arguments", would not work well if we take seriously the findings of moral psychology studies (Tachibana 2018, 2019). This does not mean, however, that AI cannot enhance moral emotions. Rather, I will propose another possibility for AI-based moral enhancement, which I name "Aristotelian training". Aristotelian training is unique in that AI and information technologies play crucial roles in enhancing human emotional states as well as human virtue development.
References
Lara, F. and J. Deckers. (2019). Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Neuroethics, https://doi.org/10.1007/s12152-019-09401-y
Tachibana, K. (2018). The Dual Application of Neurofeedback Technique and the Blurred Lines Between the Mental, the Social, and the Moral. Journal of Cognitive Enhancement, 2(4): 397.
Tachibana, K. (2019). Nonadmirable Moral Exemplars and Virtue Development. Journal of Moral Education, DOI: 10.1080/03057240.2019.1577723
Acknowledgment
This presentation is a part of the project "AI and human virtue" which is financially supported by Japan Science and Technology Agency (JST) and Research Institute of Science and Technology for Society (RISTEX) in Japan.
The user’s expected value: risk, benefit and rational choice concerning digital traces and privacy
Akos Gyarmathy. (Budapest University of Technology and Economics, Hungary)
Many define the function of an artefact by using the term intentionality (cf. Thomasson 2003, 2007; Hilpinen 1992; Preston 2009; Houkes & Vermaas 2010; Neander 1991; Searle 1995; McLaughlin 2001; Baker 2007; Evnine 2016). This paper offers an elaboration of this definition by relying on rational choice theory. Accordingly, the function of an artefact is defined by the subjective expected value of using it for rational users who, even if they are not the majority, are in a certain sense considered ideal.
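Read in decision-theoretic terms, the proposal admits a minimal formal sketch (my notation, not the author's): for an ideal rational user, using an artefact a is preferred exactly when its subjective expected utility over the possible outcomes o_i exceeds that of refraining,

```latex
\mathrm{SEU}(\text{use } a) \;=\; \sum_i p(o_i \mid \text{use } a)\, u(o_i) \;>\; \mathrm{SEU}(\text{refrain from } a)
```

and the function of a is then identified with the outcomes that make this inequality hold for such users. The risks discussed below enter as additional outcomes with negative utility, so the trade-off can shift over time even though the artefact never malfunctions.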
However, estimating the risk of using certain technologies is never an easy task. Quite apart from the problem of malfunction, as in the cases of nuclear plants or airplanes, some technologies might surprise us with malicious effects even under proper use. In the case of self-driving cars, the car might kill us in certain cases when it needs to save a larger number of pedestrians. Or, in the case of Facebook, its proper use might lead us to take our country out of the EU.
In some cases an initially successful, low-risk technology might become more risky as its environment changes. For instance, a boat becomes riskier to use in a raging storm, or a nuclear plant becomes riskier in a tsunami. These risks, however, concern the risk of malfunction, which I will not discuss. The focus of this paper is another kind of risk: risks arising from new consequences of the proper use of an artefact.
This paper applies the proposed elaboration of the intentional account of function to the case of the world wide web, an artefact that changed its function, in the subjective expected utility sense, during the late 2010s. Because of innovations in psychometrics and data analysis, the decision whether one wishes to use the world wide web became different from what it was in the 1990s. Before, the world wide web presented trade-offs such as the one between easily accessible information and the risk that this information is less reliable than that provided by traditional sources. From the 2010s, as personality profiles created on the basis of a user’s digital traces became incredibly accurate (cf. Azucar, Marengo, Settanni 2018), using the world wide web became a trade-off between social interaction and access to information on the one hand, and the risk of violating one’s own psycho-medical privacy on the other. This leads to a moral conundrum. While the privacy of psycho-medical data is protected by law and by the moral code of the experts providing it, such data is technically accessible and rationally (in some cases even arguably morally) aspired to by economic and political agents, or even social agents such as higher education. The new trade-off is also rationally accepted by the individual user without knowing the extent of the risk of privacy violation they are taking.
On Biased Self-driving Cars
Virág Véber. (Eötvös Loránd University, Hungary)
In recent years self-driving vehicles have often made headlines, and autonomous cars may be the first robots to be integrated into our lives. Human-robot interactions pose several challenges to psychology, law, and sociopolitical systems (such as business, infrastructure, and transportation services), and raise many other issues – making robot cars a dynamic platform for thinking about a full range of ethical issues.
The talk addresses one of these, the notion of bias, and focuses on three issues. First, I will argue that, since the learning processes share certain features, autonomous vehicles have biases, but not in the same sense that people do. Second, the primary function of biases and stereotypes is to help us navigate the world through fast, pre-fabricated categories, and it is not easy to deny that self-driving cars also need this feature. Yet unfair bias is a known problem with self-driving cars: the ability of the car to detect persons with dark skin (in other words, those of certain minority backgrounds) is an issue that the industry is struggling to solve. This leads to the third issue, that self-driving cars tend to reproduce very human biases. When teaching any artificial intelligence (AI) system to perform a task, one needs to train it with data (i.e. experience), so that it can learn from those experiences and be trained to act in a certain way when it encounters new data/experience. Accordingly, if one trains an autonomous driving module with data that includes only light-skinned persons as examples of what constitutes a “human,” then the car will have trouble recognizing dark-skinned persons as also “human” when operating in real-life scenarios. If the data incorporates our biases, which it often surely does, then the machine will also “learn” our biases and make decisions governed by them. The magnitude of the problem is a matter of technical oversight. Can we teach robot cars not to behave in a prejudiced way?
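The mechanism described here can be illustrated with a synthetic toy experiment (my own sketch, not based on any real perception system or dataset): a detector trained on data in which one group is barely represented tends to show a lower detection rate for that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_group(n_pos, n_neg, pos_mean):
    """Synthetic 2-D 'person vs background' features; pos_mean crudely models
    group-dependent appearance differences."""
    pos = rng.normal(loc=pos_mean, scale=1.0, size=(n_pos, 2))
    neg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_neg, 2))
    return np.vstack([pos, neg]), np.array([1] * n_pos + [0] * n_neg)

# Training data: group A is heavily represented, group B is almost absent.
Xa, ya = make_group(1000, 1000, pos_mean=[2.0, 2.0])
Xb, yb = make_group(20, 20, pos_mean=[2.0, -2.0])
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out evaluation with equal numbers from both groups.
Xa_t, ya_t = make_group(500, 500, pos_mean=[2.0, 2.0])
Xb_t, yb_t = make_group(500, 500, pos_mean=[2.0, -2.0])
print("detection rate, group A:", recall_score(ya_t, clf.predict(Xa_t)))  # high
print("detection rate, group B:", recall_score(yb_t, clf.predict(Xb_t)))  # much lower
```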
Moral Pluralism and Data-Driven Morality in the Big Data Industry
Mihail-Valentin Cernea. (Bucharest University of Economic Studies, Romania)
One of the focal points of contemporary applied ethics is the large-scale collection and manipulation of data from various digital sources (smartphones, browsing data, the Internet of Things, etc.) and the multifaceted moral challenges that are involved (Martins 2015, Herschel & Miori 2017). Given the amount of data collected and the variety of its uses (research, marketing, social engineering, political campaigning, etc.), one could find it quite difficult to specify the type of ethical framework able to account coherently for all of the issues involved. During this presentation, I will emphasize, using relevant case studies, some of the limits traditional theories face when applied in Big Data contexts, particularly when dealing with privacy rights. As we shall see, in certain situations utilitarianism can be used to argue plausibly for the trade-off between privacy and convenience. The philosophical difficulty of these problems is compounded by the application of machine learning algorithms to the collected data, which in some cases produces wholly unexpected and unintended ethical breaches – a tremendous problem for deontological ethics. A solution will be suggested that is, in some ways, inspired by the way many computer scientists have chosen to deal with epistemic inconsistency when building software that collects and stores tremendous amounts of data: the use of paraconsistent logic, that is, a kind of logical system that tolerates contradictions in certain cases – “inconsistency-tolerant” treatment of information that will itself be inconsistent (Hewitt 2008). Using the same kind of thinking, we could build an applied ethics for Big Data on the foundation of moral pluralism. Another question arises with this proposed solution: should the framework be foundationally pluralistic (Thompson 1997) or non-foundationally pluralistic (Taylor 1982, Swanton 2001), given the challenges set out? More simply put, does the difficult epistemology of Big Data have an influence on the morality of Big Data and, if so, what is the extent of that influence and how does it modify the basic ethical framework involved? Is a data-driven ethics possible for a data-driven industry?
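The "inconsistency-tolerant" idea can be pictured with a toy sketch (my own illustration, not a full paraconsistent logic and not the author's framework): verdicts from different ethical frameworks are retained side by side, and a conflict is flagged for deliberation rather than letting one framework silently override the evaluation.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Verdict:
    permissible: bool
    reason: str

def evaluate(practice: str) -> Dict[str, Verdict]:
    # Illustrative stub verdicts, not a real moral theory.
    return {
        "utilitarian": Verdict(True, "aggregate convenience outweighs the privacy cost"),
        "deontological": Verdict(False, "consent was not meaningfully given"),
    }

def report(practice: str) -> None:
    verdicts = evaluate(practice)
    for name, v in verdicts.items():
        status = "permissible" if v.permissible else "impermissible"
        print(f"{practice} | {name}: {status} ({v.reason})")
    if len({v.permissible for v in verdicts.values()}) > 1:
        # The contradiction is kept and surfaced instead of trivializing the evaluation.
        print(f"{practice} | conflict retained: requires case-by-case deliberation")

report("targeted political advertising from browsing data")
```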
How virtue signalling makes us better: Moral preferences with respect to autonomous vehicle type choices
Robin Kopecký and Michaela Košová. (Charles University in Prague, Czechia)
Autonomous vehicles (henceforth AVs) are expected to significantly benefit our transportation systems: their safety, efficiency, and impact on the environment. However, many technical, social, legal, and moral questions and challenges concerning AVs and their introduction to the mass market still remain. One of the pressing moral issues has to do with the choice between AV types that differ in their built-in algorithms for dealing with situations of unavoidable lethal collision. In this paper we present the results of our study of moral preferences with respect to three types of AVs: (1) selfish AVs that protect the lives of passenger(s) over any number of bystanders; (2) altruistic AVs that minimize the number of casualties, even if this leads to the death of passenger(s); and (3) conservative AVs that abstain from interfering in such situations, even if this leads to the death of a higher number of subjects or of passenger(s). We furthermore differentiate between scenarios in which participants are to make their decisions privately or publicly, and for themselves or for their offspring. We disregard gender, age, health, biological species and other characteristics of (potential) casualties that can affect the preferences and decisions of respondents in our scenarios. Our study is based on a sample of 2769 mostly Czech volunteers (1799 women, 970 men; age IQR: 25-32). The data come from our web-based questionnaire, which was accessible from May 2017 to December 2017. We aim to answer the following two research questions: (1) whether the public visibility of an AV type choice makes this choice more altruistic, and (2) which type of situation is more problematic with regard to the altruistic choice: opting for society as a whole, for oneself, or for one’s offspring.
Our results show that respondents exhibit a clear preference for an altruistic, utilitarian strategy for AVs. This preference is reinforced if the AV signals its strategy to others. The altruistic preference is strongest when people choose software for everybody else, weaker in personal choice, and weakest when choosing for one’s own child. Based on the results we conclude that, in contrast to a private choice, a public choice is considerably more likely to pressure consumers to accept a non-selfish solution in their personal choice, making it a reasonable and relatively cheap way to shift car owners and users towards higher altruism. Likewise, a hypothetical parliamentary vote on a single available program is less selfish when the vote does not take place in secret.
Loss of knowledge, unintelligibility of technological rules and violation of regulation in the Chernobyl nuclear plant disaster
István Danka (Budapest University of Technology and Economics) and János Tanács (John von Neumann University)
The events leading to the nuclear accident at Chernobyl started at 01:06 on 25 April 1986. According to the requirements of the Operating Procedures, when the number of uninserted manual control rods falls to 15 or below, the reactor must be shut down immediately. This regulation on the minimum permissible number of manual control rods uninserted in the core was violated several times before the test process, and the violation recurred during the two shifts on 25 April. The heads of both shifts knew the regulation requiring immediate shutdown, but neither of them initiated the process. Although the regulation was clear and definite, it did not emphasise the danger of violating it at all.
The operators based their decision on the supposition that violating the regulation on the minimum permissible number of manual control rods would not cause any negative consequences, while complying with the regulation would carry serious security risks (as well as existential risks for the operators). Later, the operator personnel criticised the formulation of the regulation, which states what to do in a given case without mentioning why it has to be done or what consequences follow from not doing so.
Technological regulations are unidirectional communication packages between different fields of knowledge, via which designers communicate with operators. These packages contain compressed knowledge: abbreviations in the form of commands that often result in a loss of knowledge. The designer and the operator have different kinds of knowledge that only partially overlap. The operator often cannot fully unpack the knowledge the designer compressed into the regulation package, for two main reasons. First, the more complex the situation, the more loss of knowledge occurs. Second, in extremely specific fields, compression builds on background knowledge that is not only hard to understand in detail, but often seems unintelligible or straightforwardly counterintuitive to experts from other fields. Hence, understanding and following technological regulations does not imply an understanding of why the regulation is to be followed. For the latter, background knowledge of the regulator (the designer) is required that the operator does not fully have. This raises the following dilemma: either the operators follow the rules blindly and comply with regulations that are unintelligible or counterintuitive to them, or they override the rules to make their actions intelligible in accordance with their own background knowledge.
We claim that the unintelligibility of technological rules and regulations should be addressed in the training of operator personnel. We also claim that the formulation of regulations in the form “under C circumstances, rule/regulation R has to be followed” is inappropriate. Instead, they should contain a phrase referring to the possible dangers or consequences of violation: “under C circumstances, rule/regulation R has to be followed, otherwise danger D or consequence Q would occur”.
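The proposed reformulation can be pictured as a structured rule record (an illustrative Python sketch; the names, and the wording of the consequence, are invented for this example), in which the consequence of violation travels with the command instead of remaining locked in the designer's unshared background knowledge:

```python
from dataclasses import dataclass

@dataclass
class Regulation:
    circumstances: str               # C: when the rule applies
    rule: str                        # R: what the operator must do
    consequence_of_violation: str    # D/Q: why it must be done

    def render(self) -> str:
        return (f"Under {self.circumstances}, {self.rule}; "
                f"otherwise {self.consequence_of_violation}.")

# Illustrative reconstruction of the shutdown rule in the proposed C/R/D-Q format.
shutdown_rule = Regulation(
    circumstances="operation with 15 or fewer uninserted manual control rods",
    rule="the reactor must be shut down immediately",
    consequence_of_violation="the reactor may become unstable (illustrative wording)",
)
print(shutdown_rule.render())
```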
On the applicability of Ethics Guidelines for Artificial Intelligence
Mihály Héder. (Budapest University of Technology and Economics, Hungary)
In the last two decades, we have been experiencing an unprecedented boom of AI in terms of business potential and ubiquity of applications. Philosophical inquiries into the ethical dimensions of AI go back as far as the very first experiments with the technology – or arguably even further. However, the current wave has created the need for industry-level guidelines and principles that are supposed to be the foundations on which to govern the design, implementation, and usage of AI systems. Moreover, the audience of such documents is no longer academics but engineers and practitioners.
Several proposed guidelines have met this demand. Notable recent examples are the Ethics Guidelines for Trustworthy AI by the European Union, the Ethically Aligned Design [of Autonomous and Intelligent Systems] by IEEE, a professional association and standards body, and the Beijing AI Principles by BAAI, a research institute backed by the Chinese state. Some big companies active in the field, like IBM, Google, or Microsoft, have published their internal guidelines. Other organizations, like VDI, the German engineering association, and UNESCO, have also expressed that they feel obliged to be active in this field.
What makes such an ethics guidelines proposal good? Granted, many of them took input from a broad range of experts, and some even featured a public consultation element. As a result, they are widely discussed and possibly represent some set of shared values. Also, there is no question that all of them were created with benign intent.
However, the practicability of any guidelines proposal is equally essential if it is to fulfill its normative role in influencing design, configuration, and application decisions. "Applied" or "practical" ethics, a sub-field of ethical theory, has produced some insights during the last fifty years about what an ethical framework's quality of being useful might entail.
In the presentation, we will compare the various guidelines proposals and assess them from the aspect of applicability. In doing so, we will rely on some findings of applied ethics as well as on specific insights from the industrial usage of artificial intelligence. One result of this investigation is a better mapping of the different ethical considerations onto the particular phases of the engineering process, from technology development to maintenance. Another outcome is a better understanding of how the availability of information and predictions has serious implications for what kind of framework is useful. Finally, the specificity of the frameworks to AI – in contrast with generic ethics of conduct – is investigated.
Reflexivity in Curriculum: Risks, Safety and Ethics in Engineering Education
Aleksandra Kazakova. (Bauman Moscow State Technical University, Russia)
Agency in the production of technological risks cannot be seen in isolation from the processes of reproduction of the engineering community. Engineering education can be regarded as a mechanism of transmission of professional knowledge, ethics and attitudes, but also as a result of the interplay of state, market and public expectations and demands. Thus, increasing reflection on and awareness of technological risks should influence the structure of engineering education.
The study aims to identify how the content of engineering education responds to the growing need for responsible innovation. Analyzing comparable programmes in different countries (the USA, Germany, Russia and India), I try to identify the forms and scope in which the concept of professional responsibility is integrated into engineering education. It is assumed that the reflexivity of future engineers may be developed by systematically addressing the problems of ethics, social or environmental risks, and safety in a general sense. Three questions have been posed for analysis:
Is the development of reflexivity (professional ethics, social responsibility and ecological awareness) explicitly documented among the priorities of the programme?
Are there mandatory courses specialized in ethics, risks or safety in the curriculum?
Are the ethics-, risks- or safety-related topics discussed in non-specialized or non-mandatory courses?
To answer these questions, a qualitative analysis of the “missions”, “objectives”, “outcomes” and “competences” declared in the educational programmes has been carried out. Then the share of the curriculum workload devoted to the specialized courses has been estimated. For the non-specialized or non-mandatory courses, the relevant topics were searched for in their abstracts/synopses.
Results show that subjects related to problems of professional responsibility are poorly represented, or at least only formally documented, in the educational programmes, which is not in conformity with their declared objectives and outcomes. A few strategies used by the educational institutions to minimize the share of these "impractical" subjects in their curricula have been revealed. It is necessary to discuss their possible consequences and the alternative ways of integrating the principle of responsibility into engineering education within the existing constraints (growing specialization, a globalizing market) on educational systems.
Re-Considering Critical Theory after the Empirical Turn
Darryl Cressman. (Maastricht University, Netherlands)
Over the past two decades, proponents of the empirical turn have constructed an intellectual history of the philosophy of technology in which the discipline can be neatly divided between classical and empirical approaches. The latter, influenced by work in Science and Technology Studies (STS) and phenomenology, takes as its starting point that humans and technical artifacts are intertwined, so that the challenge for philosophers is to conceptualize the active engagements between humans and technologies without drawing a neat distinction between the two. In contrast, approaches associated with philosophers like Martin Heidegger, Herbert Marcuse, and Jacques Ellul have been deemed "classical" and reduced to essentialist holdovers from the past.
In this presentation, I want to challenge this intellectual history by arguing that dialectical and critical philosophies of technology are neither essentialist nor reliant upon simplistic dichotomies of liberation and domination. Rather, like work in STS and phenomenology, the critical, or dialectical, tradition is empirical and recognizes the inherent contingency of technical design and meaning. Where these approaches differ is that critical theories of technology are historically oriented towards the question of why we have the technologies we do, pointing to the distinctly sociotechnical contexts that precede and give meaning to our everyday experiences while opening up concrete potentials that can realize goals and ambitions different from those of the groups who design and administer technologies.
Focusing on this empirically grounded idea of sociotechnical potential, I refer to a variety of case studies, including concert halls, communications technologies, and ultrasounds, to better situate what the philosopher Andrew Feenberg calls “a dialectical critique of technology that is neither irrationalist nor technophobic.”
Modern Social Imaginaries and AI: Polanyian Notes
Phil Mullins. (Missouri Western State University/Polanyi Society, United States)
This presentation has two related components. First, I use some recently discussed philosophical ideas about “modern social imaginaries” as a vehicle to summarize the critical and constructive philosophical elements in Michael Polanyi’s philosophical writing. Following this discussion, I pose some questions about how recent developments in AI are being appropriated in contemporary and emerging social imaginaries. I use some of Polanyi’s mid-twentieth century discussions about the digital computer to ask whether some of the contemporary expansions of AI seem to be extending the kind of objectivist imaginary of the mid-twentieth century. To counter a renewed objectivist social imaginary, I also set forth some of Polanyi’s discussion of the tacit dimensions of computer use and his ideas about how properly to understand the expansion of formalization which digital devices make possible.
Charles Taylor’s Modern Social Imaginaries (2004), argued that the “social imaginaries” of modernity are important and are in many ways problematic. Taylor developed his ideas about modern “social imaginaries,” by drawing on his work in the history of ideas and by creatively building on suggestions of other scholars with interest in imaginaries. He has recently commented on his views in relation to what might be called Polanyi’s “social imaginaries.” Taylor’s account and his broader philosophical perspective is generally somewhat akin to perspectives developed by Polanyi, although Polanyi does differ from Taylor in some important respects. In Charles Taylor, Michael Polanyi and the Critique of Modernity (2017), there is an interesting comparison of the accounts of the social imaginaries of Taylor and Polanyi in essays by Charles Lowney and Jon Fennell with a response from Taylor. Although Polanyi did not use the term “social imaginary,” Polanyi’s critical and his constructive philosophical reflections can be reframed using the “social imaginary” framework. In Polanyi’s critical philosophical thought, elements are directed against much of philosophy since the modern turn; Polanyi analyzes and sharply criticizes the way in which science has been interpreted in modernity and modern culture has been shaped by this misreading of science. Polanyi attacks objectivism and the adulation of skepticism as well as the misrepresentation and undervaluing of belief and skills in modern thought. He argues that the nihilism, violence and totalitarianism of the twentieth century are the fruits of the kind of thought that eventually developed after the scientific revolution. Polanyi’s criticisms of the critical dispositions of modern thought are Polanyi’s “scientistic social imaginary” (Lowney’s apt term). But Polanyi’s critical philosophizing is tightly woven with elements of his constructive philosophical account--i.e., his post-critical alternative to the dominant tradition, which is an alternative aiming to heal the modern mind. This “post-critical social imaginary” focuses on a richer understanding of understanding in science and all human knowing; it is an imaginary which affirms the importance of tacit knowing and personal knowledge. And his constructive philosophy also includes elements of a Lebensphilosophie.
Both Polanyi’s “scientistic social imaginary” and his “post-critical imaginary” have important bearing on contemporary culture, where the extensions of AI using predictive analytics are increasingly visible. Much of the public conversation about machine learning and machines passing Turing tests seems to be an extension of the scientistic imaginary that was dominant in the early days of AI. This therefore seems a good time to recover elements of Polanyi’s post-critical imaginary which emphasize that we indwell digital devices (just as we indwell physical tools, texts and other minds) and that the extensions of formalization made possible by digital devices are important cultural developments, but ones that do not eliminate tacit elements. Philosophy of technology discussions need once again to address the questions Polanyi treated about the differences between minds and machines.
Robots, moral agency, and blurred boundaries
Hermann Diebel-Fischer. (Universitaet Rostock, Germany)
The question whether an autonomous machine that interacts with other machines and human beings as an automatic decision making (ADM) system has the capability of moral agency can only be answered by deciding whether to ascribe moral agency to it or not. This is closely linked to the concept of responsibility. Leaving aside the moral and legal consequences of being responsible for an unwanted action, the idea of having an entity that can be blamed seems to be the crucial point when it comes to deciding whether a machine or a human being is regarded as a moral agent.
Since a machine that is a moral agent will act according to its own idea of what is right and wrong, the machine – which is still a perfectly functioning automaton – becomes less predictable to those who know it only from the outside. With ethics we seek to control the contingencies which appear in human action – however, the idea of ethics alone does not guarantee that there will be no unwanted outcome, as ethics does not mean that everyone agrees on the one option which is deemed best.
If moral agency is a ‘differentia specifica’ of humankind, then extending it to the field of machines blurs a boundary that is upheld by this concept. The philosopher Günther Anders (1950) coined the term Promethean embarrassment, which describes the human’s feeling of being inferior to the machine, as humans do not function in perfect order. If humans are willing to regard an autonomous machine as an entity which is responsible for its actions, then interaction with these machines will – in theory – not differ from interaction with humans. That means interaction which is predictable but not guaranteed in its outcome.
The robot which acts according to its own ethical compass is, when it comes to acting, the perfect human and the perfect anti-human at the same time. The concept of ethics is uniquely human, and its application to other entities transfers a human trait onto a technical system. The machine which makes decisions on its own appears less as a machine than those machines which are understood as entirely controlled by their makers. Understanding the machine as the responsible entity blurs the boundary between humans and technical entities (Schwarke 2017) – whereas the same technical entity remains a machine if we decide that it can never be regarded as responsible for its actions.
Since the boundary between humans and machines is, in the case described above, the result of a decision, the idea of the human being appears to be more fragile than we might expect. This emphasizes that our philosophy of technology and philosophical anthropology are more closely related than it might appear at first sight. It is the cultural heritage with which we think about humans that deeply shapes how we judge technology and understand technological progress.
References
Anders, Günther (1950): Die Antiquiertheit des Menschen. Über die Seele im Zeitalter der zweiten industriellen Revolution. München: C.H. Beck.
Schwarke, Christian (2017): Technik und Christentum. Anmerkungen zu einem verkanteten Verhältnis, in: Sebastian Böhmer et al. (Hrsg.), Technologien des Glaubens. Schubkräfte zwischen technologischen Entwicklungen und religiösen Diskursen, Halle: Leopoldina, 131-142.
Disobedience: Threat or promise
Hesam Hosseinpour. (University of Tartu, Estonia)
When most people think about artificial intelligence (AI), the possibility of its disobedience is usually counted as a threat to the human race. It is a common dystopian theme in science fiction films, in which a machine rebellion against humans leads to catastrophic consequences. In this paper, however, I elaborate on a counterintuitive and optimistic approach which sees the disobedience of AI as a promise rather than a threat: a promise to develop a new and open relationship with technology, and to reach an alternative technology which makes it possible to consider a robot as a person rather than a slave. In the first part of the article I argue for the importance of shaping a new relation with future intelligent technologies, one that is totally different from our current relation to technology. At present, most thinkers and engineers conceive of any technology, including AI, as a slave which should be completely at human service. Therefore, they do their best to create obedient AI, and disobedience is considered a serious threat. In the second part of the article I use Foucault’s analysis of power and its relation to the subject to discuss the different ways in which humans become subjects and find a position in the mesh of power relations. Foucault explains that power relations develop if and only if resistance (disobedience) is possible. In the last part, I draw the conclusion that by means of disobedience AI will find its way into power relations and will be promoted to the position of a subject, so that it can be counted as a person. In this way, AI will introduce us to a totally new generation of technology, one which has the potential to become a subject and which will alter our relation to it.
References
Coeckelbergh, M. (2015). The tragedy of the master: automation, vulnerability, and distance. Ethics and Information Technology, 17(3), 219-229.
Feenberg, A. (2002). Transforming technology: A critical theory revisited. Oxford University Press.
Foucault, M. (1982). The subject and power. Critical inquiry, 8(4), 777-795.
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
Heidegger, M. (1977). The question concerning technology, and other essays.
Marcuse, H. (2013). One-dimensional man: Studies in the ideology of advanced industrial society. Routledge.
Miller, P. (1987). Domination and power. Routledge & Kegan Paul.
Sparrow, R. (2011). Can Machines Be People? Reflections on the Turing Triage Test. In Robot Ethics: The Ethical and Social Implications of Robotics, 301.
From Moral Care Robots to Ethical Tracking Devices
Eugenia Stamboliev. (University of Plymouth, United Kingdom)
This paper investigates the conflation of moral and ethical research around social robots, especially around humanoid care robots. It argues that the understanding of social robots as inherently ethically problematic tracking devices [1] – yet not necessarily morally accountable devices – is underdeveloped. This concern emerges in the context of elderly care, in which care robots raise ethical issues when viewed as moral and humanlike companions and through their inherent ability to gather data (Sharkey & Sharkey, 2010; Royakkers & van Est, 2016).
This paper manoeuvres within discourses in Robot Ethics, Philosophy of Technology, Posthumanism, and Media and Surveillance Studies. It aims for a transdisciplinary and philosophical exploration of care robots (as humanoid bodies and information structures) by, firstly, scrutinising anthropomorphic morality research through concepts such as ‘virtual moral responsibility’ (Coeckelbergh, 2009) and the ‘anthropocentric conception of agenthood’ (187) [2]; and, secondly, by suggesting an additional, different discussion of care robots as ethical structures, one that exceeds anthropocentric or anthropomorphic angles and questions of harm or immorality. It thereby opens a debate that looks at the ethical consequences of tracking as dataveillance [3], making use of Introna’s (2014) view of ‘sociomaterial agency’ and of posthumanist views on performative agential structures (Barad, 2003; Gitelman, 2011; Ruprecht, 2017).
Notes
[1] Social care robots – if used as interactive companions – have the capacity to track (that is, gather and manage) data on human movements and facial/gestural expressions through tracking or detection modules. This allows them to interact with the human subject (a topic dealt with in HRI research). This process of collecting and processing data leads to a neglected association with ethical concerns about dataveillance.
[2] “An entity is still considered a moral agent if (i) it is an individual agent, (ii) it is human-based, in the sense that it is either human or at least reducible to an identifiable aggregation of human beings, who remain the only morally responsible source of action, like ghosts in a legal machine” (Floridi, 2014: 187)
[3] Dataveillance is the “systematic use of personal data systems in the monitoring or investigation of the actions or communications of one or more persons.” (Clarke, 1995)
The Double Standard Between Autonomous Weapons Systems and other AI Technologies
Aron Dombrovszki. (Eötvös Loránd University, Hungary)
The prospect of autonomous weapons systems (AWS) has raised several ethical issues, which have led to an unfavorable attitude towards them. In 2012, Human Rights Watch drew attention to the emerging issues; then, in 2017, a public letter by researchers, AI experts, and leaders of big robotics companies was published to propose a preventive ban on AWS.
Compared to other potentially lethal autonomous technologies – e.g. self-driving cars (SDC) – these adverse reactions are unique: philosophers are eager to solve the moral dilemmas raised by SDC, and researchers and laypeople alike highly anticipate their development. If these ethical concerns were present only in military applications of AI, this would not be a surprising outcome. However, I argue that, regardless of which AI technology is in question, structurally the very same ethical problems are raised and discussed. My conclusions are twofold. (1) The prejudices against AWS cannot be justified exclusively by philosophical argumentation; they have other underlying sources too, which are never stated explicitly in the articles. (2) Even though the ethical dilemmas are biased, they are undoubtedly present. Therefore, either we solve these problems proactively and accept all AI technologies, including AWS, or we propose a preventive ban on all AI technologies, including SDC, because of these ethical concerns.
In the first part of my presentation, I define the notion of AWS. Then, by introducing Just War Theory, I provide a framework which makes it easier to introduce the ethical issues concerning them. After this, I sketch the most common objections against the application of AWS: the responsibility gap; the potential vulnerability of these systems; contingent issues about the capabilities of AWS; the problem of radically asymmetric warfare; and the fear that the barriers to going to war will be lowered by such advanced technologies. In the final section, I show parallel issues from the literature on the ethical problems of SDC to arrive at the conclusions mentioned above.
A Theory of (Sexual) Justice: the revised roboethician's edition
Radu Uszkai. (Bucharest University of Economic Studies, Romania)
Sex robots have been gaining significant traction in the media and in pop culture. Each new launch of an updated model or a new entrepreneurial innovation on the sex robot market has been reported and discussed at length by tabloids like The Sun and by serious outlets like The Guardian. Meanwhile, Hollywood productions like Ex Machina and popular TV series like Westworld have graphically illustrated and brought forth serious questions regarding human–sex robot relationships.
Unsurprisingly, philosophical interest in this topic is extensive, with a series of papers and books tackling a wide array of related topics. Leaving aside traditional and general issues concerning the proper philosophical methodology for doing roboethics (Coeckelbergh 2009) or the appropriate meta-ethical perspective for doing machine ethics (Torrance 2011), roboethicians have focused on a wide array of specific questions for their field. While some scholars have focused on the “ethical way of building a love machine” (Sullins 2012), others have argued that sex robots might prove to be a preferable alternative to the current prostitution market (Levy 2015; Klein and Lin 2018). Richardson (2016) argued strongly against such a deployment of robots, as she believes it would be something similar to slavery. Echoing Richardson at a distance, Cox-George and Bewley (2018) argued that we should be cautious regarding the therapeutic role of sex robots. Their analysis might, however, be based on some unconscious biases regarding robots and sex (Eggleton 2018).
Some ink has been spilled in academic journals lately on the topic of sex rights for the disabled and the consequences of accepting this view (Appel 2010; Di Nucci 2011; Thomsen 2014), with some arguing that, on the basis of these rights, we should allow a limited (and highly regulated) market for prostitution. More recently, Di Nucci (2017) argued that sex robots might prove to be a more acceptable solution to the moral conundrum posed by prostitution.
The purpose of my presentation is to explore how a Rawlsian luck egalitarian might analyze the issue of sex rights in the context of the existence of sex robots. For Rawls, primary goods are “the things that every rational man is presumed to want” (1999, 54). They range from social goods (rights, liberties, income) to natural goods (health, vigor, intelligence, etc.). If Rawls is right in saying that it is unfair for a just society not to take into account the random distribution of these primary goods, then we are faced with an interesting dilemma. Some people, for contingent reasons, are born either wealthy or good looking, and thus enjoy more pleasurable sexual experiences. Others, either for medical reasons or because of contingent cultural factors regarding how we define beauty, do not have access to this type of experience, which, if I am right, can be tied to some of Rawls’ most important goals: self-realization and self-respect. Combining this Rawlsian luck egalitarianism with an anthropocentric meta-ethical outlook, and taking into account the objections to a market for prostitution, shouldn’t we accept that some people should have access to subsidized sexual experiences with robots?
Beyond the Empirical Turn (Elements for an Ontology of Engineering)
Agostino Cera. (Università della Basilicata, Italy)
My paper attempts a historicization of the so-called Empirical Turn in philosophy of technology (according to the definition coined by Hans Achterhuis in his 1999 book American Philosophy of Technology: The Empirical Turn).
My thesis is that after 35 years (taking 1984 as its conventional birth date, namely the year of publication of Albert Borgmann’s Technology and the Character of Contemporary Life: A Philosophical Inquiry) the Empirical Turn has proven to be an Ontophobic Turn. By this expression I mean an over-reaction against the essentialist approach to the question of technology, in particular against Heidegger’s legacy.
Concretely, this over-reaction consists of the transition from an over-distance to an over-proximity. That is to say, from a disinterest in – or indifference towards – the ontic dimension (namely, the social, political, practical implications) of technology (and therefore an over-distance) to an almost total and absolute interest in this ontic dimension, with a consequent a priori disinterest in any ontological implication of technology (and therefore an over-proximity).
The benchmark of this epistemic metamorphosis in the philosophy of technology is the lexical replacement of its object, namely the change from “technology” (in the singular) to “technologies” (in the plural). Such a replacement corresponds to an increasing inability to acknowledge technology as something in itself and as such (that is, as an epochal/historical phenomenon, a potential Weltanschauung or grand récit of our age). In particular, I consider the main outcome of this replacement/inability to be what I call the “Mr Wolf Syndrome”. By this expression I mean the gradual transformation of the philosophy of technology (i.e. of technologies) into a problem-solving activity. This transformation can also be considered an Engineerization of the Philosophy of Technology, insofar as the basic ontological assumption of the engineering approach is to be found in the Problematization of Reality, that is, the understanding of every single entity as a problem expected to be solved.
With regard to such an issue, my objection is the following. If technology as such is/becomes nothing, then the paradoxical but consequent result is that the philosophy of technology ceases to have a meaning and a value in itself. In other words, if the philosophy of technology entirely equates to a problem-solving activity in the presence of concrete problems emerging from single/concrete technologies, then it must be admitted that this kind of activity can be performed much better by scientists, engineers, politicians… than by philosophers.
As a consequence, the ontophobic turn in philosophy of technology or its engineerization (namely, the over-reaction against Heidegger’s legacy) corresponds to the disappearance of the reason itself for a strictly philosophical approach to the question of technology. Given this assumption, the paradoxical accomplishment/fulfilment of the empirical turn should be the final self-suppression, or at least self-overcoming, of the philosophy of technology.
When AIs Say Yes and I Say No — On the Tension between AI’s Decision and Human’s Decision from Epistemological Perspective
Chang-Yun Ku. (Academia Sinica, Taiwan)
The application of AI and AI algorithms is common in our daily life; although not yet in the sense of Strong AI, it is already widely used. Financial credit systems, medical diagnosis, and even prisoner sentencing now use AI algorithms to produce results that were once decided by humans. Article 22 (1) of the EU’s GDPR specifically requires that decisions which significantly affect individuals should not be based solely on automated processing, in order to prevent the possible harms that a solely AI-made decision could cause. It seems perfect, but is it?
Let us consider a thought experiment to emphasize the point I would like to make here. A patient is waiting in the clinical room to learn the diagnosis, namely whether surgery should be performed as part of his medical treatment. After the AI machine or SaMD (Software as a Medical Device) has processed the data, the result says that the patient needs surgery immediately, which is the opposite of your diagnosis that the patient does not need surgery. Will you, as the physician in this scenario, object to the result that the AI has produced? If your answer is no, and I assume that most people would react exactly the same way, then the requirement of Article 22 (1) of the GDPR seems meaningless: the AI will determine every decision it processes, with or without human intervention. And I believe that this scenario will occur in every field that uses AI for decision-making.
In this article, I would like to explore the tension between AI’s decisions and human decisions from an epistemological perspective. To do so, I will divide my article into five parts. Firstly, I will present the problem outlined above through practical examples. Secondly, I will look into Article 22 (1) of the GDPR, the reasoning behind this clause, and the gap between the clause and its practical application. Thirdly, I will explore the epistemological differences between AI’s decisions, human decisions and truth as a starting point, and inquire into the reasons behind this issue. Fourthly, building on the findings of the previous part, I will return to Article 22 (1) to discuss possible solutions that could fill the gap. And finally, I will summarize my argument and conclude the article.
The Problem of the Living Machine according to Samuel Alexander’s Emergentism
Daniel Paksi. (Budapest University of Technology and Economics, Hungary)
The concept that living beings are a kind of living machine is widespread and well known. However, if it is only a metaphor, it does not mean much; if it is not, there is a severe conceptual problem, since the living part of the concept always evokes the notorious notion of vitalism. The question is how living machines can really differ from mere machines without the concept of vitalism.
According to Samuel Alexander, the problem arises from the traditional usage of the concepts of the mechanical and the material, which are severely influenced by Cartesian dualism. Consequently, the concept of the mechanical is confused with the concept of the material, and the concept of the material is defined against the Cartesian concept of mind rather than in its own right. Nonetheless, the vitalism-free solution indicates that neither living beings nor machines are merely material: machines are mechanical, while living beings are mechanical and living. The difference lies not in a vital substance or a non-mechanical principle but in an emergent mechanical quality called life, which machines simply lack.
Towards the Development of Artificial Intelligence-based Systems: Human-Centered Functional Requirements and Open Problems
Temitayo Fagbola (Federal University Oye-Ekiti, Nigeria) and Surendra Thakur (Durban University of Technology, South Africa)
The increasing capability of AI-powered systems, including self-aware and unmanned systems, at automating simple to sophisticated tasks, and their wide areas of real-world intervention in boosting productivity and enhancing competitiveness, have offered transformative potential leading to a better quality of life. These systems have lately become an inseparable part of human lives. However, incorrect use leading to unintended consequences, as well as safety, fairness and trustworthiness, are major concerns regarding these emerging ubiquitous systems. In this paper, an attempt is made to concisely present and discuss key human-centered functional requirement specifications of emerging artificial intelligence-based systems, especially interpretability, explainability, fairness, transparency and security. Some emerging toolkits and open libraries for developing and evaluating AI-based systems are also discussed. A number of open problems with respect to managing the tradeoff between these requirements and system performance are presented to guide future research in this direction.
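As a purely illustrative aside (not drawn from the paper), one of the requirements named above, fairness, is often operationalised by evaluation toolkits as a demographic-parity gap; the short Python sketch below computes that gap for invented predictions and group labels.

    # Illustrative only: demographic parity gap, one common fairness measure
    # reported by AI evaluation toolkits. Predictions and group labels are invented.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Difference between the highest and lowest positive-prediction rate across groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values())

    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, groups))  # group a: 0.75, group b: 0.25 -> gap 0.5

A smaller gap indicates that positive outcomes are distributed more evenly across groups, which is one way the tradeoff between fairness and raw system performance can be made measurable.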
Is there an ideology of new technologies? The immediation of experience and the information of desire
Jacopo Giansanto Bodini. (Université Jean Moulin Lyon 3, France)
Postmodernity is supposed to be the post-ideological era. Jean-François Lyotard, opening The Postmodern Condition: A Report on Knowledge, indeed defines the postmodern by its “incredulity towards metanarratives” – among which, of course, stand “ideologies” – an incredulity to be understood as “a product of the progress in the sciences”; at the same time, he points out that this very “progress” in the sciences “in turn presupposes it”. According to Lyotard, sciences and technologies in the postmodern condition thus seem to answer to the one and only criterion of “efficiency”, which arises as the value defining our times.
A consequence of this paradigm shift is, in our opinion, a progressive loss of awareness of technological specificity: is the medium still the message (McLuhan) when efficiency seems to be the only worthy message? This will be the first point discussed in our talk.
Furthermore, the digital revolution arose on this new paradigm of efficiency, nevertheless developing new criteria that eventually individuate a new kind of ideology on which new technologies rely. We speak of a new kind of ideology because it is no longer a political or philosophical one (answering to structured political or philosophical metanarratives), but rather an aesthetic ideology, inherent to the design of these very technologies themselves.
In the present talk we will therefore try to describe and discuss the main aspects of this new aesthetic ideology lying behind new technologies. We deem this task difficult and, for that very reason, urgent: on the one hand because of the post-ideological status of our times, on the other because of the transparent nature of this kind of ideology, which tends to make itself invisible. Transparency is indeed one of its key words: we will argue that absolute transparency is the paradoxical regime through which such an ideology can establish itself and blossom.
The ideology of absolute transparency is therefore characterised by a deliberate or involuntary ignorance of the technological mediation standing between us and the world: it is not the absence of mediation, but rather the denegation of mediation itself. We will call this paradoxical logic of technologically mediated experience immediation, since such experience is deceptively perceived as immediate thanks to the efficiency of technological mediation itself. We will argue that this logic is what makes the relation between digital life and the real of experience so intertwined and sometimes twisted, as in cases of perversion of the real such as fake news or deep fakes.
This logic affects not only the perceptive dimension of our experience but also the libidinal one, thereby influencing the way we desire, as well as our individual and collective behaviours and beliefs. We will therefore argue that such an ideology of new technologies enables a new paradigm of desire: information. This paradigm can explain, in our opinion, how algorithms and artificial intelligences inform (literally, give form to) our desires, ultimately telling us what we desire.
The Architecture of Misinformation and Democracies in South America
Ricardo Rohm. (Federal University of Rio de Janeiro, Brazil)
Representative democracies in South America are considerably young and in most cases have not achieved a stable and institutionalized legitimacy, as collective self-determination and agency have not been strong enough to deal with ethically and strategically positioned socio-political actors. Social control and participation in governmental decision-making processes have been undermined by an architecture of misinformation. This mediatic political dispositif, as we define it here, is an assemblage of digital platforms, algorithmic design and communication tactics, intertwined with the misuse of social networks and big data manipulation, whose consequence in a surveillance-capitalism system is the weakening of transparency, accountability and the self-determination of citizens as they exercise their right to vote in supposedly free elections, as well as their ability to protest against manipulation and the perverse influence of strong economic groups and even foreign international interests. Besides, there seems to be a strong role played by a corporatocracy and its lobbies, with authoritarian performances within the political system in different South American countries today. The case of the extreme-right president of Brazil, elected in 2018 under the influence of "fake news" and authoritarian strategies on social networks, is an important case through which we would like to demonstrate how these gimmicks, dispositifs and carefully designed strategies of political marketing are real threats, not only in South American countries but in many other places in the world as well.
From a theoretical framework considering, first, the public sphere and the construction of public opinion; second, the digital technologies and communication strategies available today; and third, the weaknesses and challenges faced by democracies and politicians in recent years in South America, we would like to develop a conceptual cartography with which to analyse and understand this architecture of misinformation. In a world where datafication traverses all realms and layers of our individual and collective lives, understanding this challenging socio-political environment is paramount for democracies to survive and, especially, to develop and become stronger, fairer, more socially inclusive and more ethical in their institutions, decision-making processes, public policies and social control.
The methodological path towards this goal will include bibliographic research, taking into account the main political categories and concepts as well as the models that may have been created or suggested to analyse this scenario. Documentary research will also be carried out, focusing especially on critical journalistic production from important newspapers, blogs, magazines and institutional reports in Brazil during the last two years, with the case study of the recent Brazilian political crisis serving as a strong example for a heuristic endeavour. Though this research has an exploratory approach, its multi-methodological design is important for enriching the comprehension of a complex phenomenon that is currently unfolding; it will therefore underline and emphasize possible trends and hints in order to define more accurate research problems, as well as reveal the limitations and strengths of the present design for investigating the subject and its complexities.
References
ANTUNES, Ricardo. O Privilégio da Servidão: o novo proletariado de serviços na era digital. Rio de Janeiro: Boitempo, 2018.
CASTELLS, M. Networks of outrage and hope: social movements in the internet age. Cambridge, 2015.
FIORMONTE, D. & SORDI, P. Humanidades Digitales del Sur y GAFAM. Para uma geopolítica del conocimiento digital. São Paulo: Revista IBICT, Vol. 15, N. 1, 2019.
McCOY, J. et al. Polarization and the global crisis of democracy: common patterns, dynamics and pernicious consequences for democratic polities. In: American Behavioral Scientist, Vol. 62(1): 16-42, 2018.
SOUZA SANTOS, B. Epistemologies of the South: Justice against Epistemicide. Paradigm Publishers, 2014.
__________________ A difícil democracia. São Paulo: Editora Cultura, 2016.
ZIZEK, Slavoj. Like a thief in broad daylight: power in the era of post-human capitalism. London: 2018.
ZUBOFF, S. The age of surveillance capitalism: the fight for a human future at the new frontier of power. London: Profile Books Ltd, 2019.
Opacity in Machine Learning
Paul Grünke. (Karlsruhe Institute of Technology, Germany)
Machine Learning techniques are present in today’s technologies and will be implemented in many future technologies. What is the price that has to be paid for the better results in pattern recognition and the successful decision-making based on Machine Learning?
Analysing Machine Learning techniques from the perspective of philosophy of science, a comparison between Machine Learning and computer simulations that are based on common modelling techniques is useful. Machine Learning techniques have often been associated with a “black box” nature. Starting from Paul Humphreys’ definition of epistemic opacity (Humphreys 2011), I will show that in this respect there are no differences between the training of a neural network and the construction of a computer model by a human modeler: in both cases, the resulting models are algorithmically transparent and all the epistemically relevant information is accessible.
This might not be enough for practical purposes, however. In many contexts, the user of a technology is interested not only in the result itself, but also in a justification. In classical modelling, one can ask the modeler to explain the reasons for the modelling choices that lead to certain results. In the case of Machine Learning techniques, this is not possible because of another kind of opacity, which I call model-opacity (Boge/Grünke, accepted). I will argue that model-opacity will often make it impossible to understand why specific results are reached with Machine Learning techniques, because the underlying structure of the neural network cannot be extracted.
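To make the distinction concrete, here is a minimal sketch (not taken from the paper; the toy dataset, library choice and network size are my own assumptions): the trained network below is algorithmically transparent in the sense that every weight can be inspected, yet those weights do not supply the kind of modelling reasons a human modeler could give, which is the gap labelled model-opacity above.

    # Minimal illustration: full access to a trained network's parameters
    # (algorithmic transparency) without access to modelling reasons (model-opacity).
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

    # Every epistemically relevant parameter is accessible:
    for i, W in enumerate(clf.coefs_):
        print(f"layer {i} weight matrix shape: {W.shape}")

    # ...but no weight matrix answers "why was this sample classified this way?"
    print("prediction for the first sample:", clf.predict(X[:1])[0])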
The Use of Natural Language Processing AI techniques in corporate communications
Dániel Gergő Pintér and Péter Lajos Ihász. (SZTAKI Institute for Computer Science and Control, Hungary)
In the era of the information society and digitalization, where the creation, distribution and manipulation of information have become the most significant economic and cultural activities, a vast amount of information has become easily accessible, profoundly changing all aspects of social organization. Accordingly, business management has started to rely on automated data mining, data analysis and automated response generation in order to harvest this novel and profound resource. Companies collect and store large amounts of customer data in order to enable better business decisions and to gain an advantage in the global market by performing communication that builds on a better understanding of customer needs.
As one pillar of corporate business management, corporate communications - activities aimed at establishing and maintaining a favorable internal and external reputation for the corporation - relies heavily on customer data. Many e-commerce websites, for example, allow customers to express their opinions about the products and services the company offers. The reviews are considered not only by fellow customers; with the right information retrieval techniques, this easily obtainable feedback also serves as a valuable source of information for the companies themselves. As another source of information, social media can be harvested through Artificial Intelligence (AI)-based information retrieval.
Sentiment analysis (SA) is a way to extract semantic information from feedback, in which opinions, sentiments, emotions and attitudes toward entities and their attributes are computationally identified. Topic extraction, dialogue act classification and summarization are further examples from a wide palette of information retrieval practices.
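As a purely illustrative aside (the paper itself surveys deep-learning methods, not lexicon counting), a toy scorer of the following kind conveys what a minimal sentiment analysis step does; the word lists and reviews are invented.

    # Toy lexicon-based sentiment scorer, for illustration only.
    POSITIVE = {"great", "fast", "helpful", "love", "excellent"}
    NEGATIVE = {"slow", "broken", "poor", "hate", "disappointing"}

    def sentiment(review: str) -> str:
        """Classify a customer review by counting polar words (naive whitespace split)."""
        words = [w.strip(".,!?") for w in review.lower().split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    reviews = [
        "Great product and fast, helpful support",
        "Disappointing build quality, the hinge arrived broken",
    ]
    for r in reviews:
        print(sentiment(r), "->", r)

Real systems replace the hand-made lexicon with the learned models discussed in the paper, but the input (customer feedback) and output (a polarity label usable by corporate communications) stay the same.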
Extracting customer intelligence from such user-generated content, however, is a challenging task, as it involves dealing with data that require natural language processing (NLP) techniques. Nevertheless, various AI methods exist that make the effective NLP-based extraction of customer intelligence possible, thus indirectly enhancing business networking, improving the efficiency of public relations management, and extending the possible application areas of communication components.
This paper gives an overview of the use of AI-based NLP information retrieval practices (NLP tasks) in the different disciplines of corporate communications, discusses industrial examples, identifies promising research topics for the future and elaborates on the ethical aspects of gathering user data for business purposes. As a result of the presented synthesizing study, a model has been developed that highlights state-of-the-art AI techniques, organized by the communication disciplines and the NLP tasks in which they are utilized.
The paper is organized as follows. Section 1 identifies the general role of AI within the information society. Section 2 describes the main disciplines of corporate communications and specifies the AI-based NLP tasks they require. Section 3 specifies the mainstream and state-of-the-art deep-learning methods by which the tasks elaborated in Section 2 are accomplished. In Section 4, the developed model is introduced, along with the conceptual and methodological basis it builds upon. Section 5 discusses the moral aspects of user data extraction for marketing and business management. Finally, Section 6 offers concluding remarks and outlines possible future work.
A diversity-sensitive social platform: Ethical Questions from the Project "WeNet - The Internet of Us"
Karoline Reinhardt. (IZEW, Eberhard Karls Universität Tübingen, Germany)
Diversity is a fact of our everyday life. Technology, especially algorithm-based technology, however, still struggles when it comes to helping develop and maintain social relationships that transcend geographical and cultural backgrounds.
“WeNet-The Internet of Us” is an EU-funded project that aims at developing new and inclusive methods for computer-mediated diversity-aware social interaction. The WeNet application is supposed to be not only diversity-aware but also highly trustworthy, especially with regard to data protection. The goal is to create a social platform (WeNet) that enables users to communicate person to person in order to foster community life and enhance human interaction. WeNet is part of what is now called AI for social good.
Computer scientists, sociologists, psychologists, political scientists and ethicists work together in this project towards developing diversity-aware algorithms that meet ethical standards. Diversity-related data, on the one hand, are very often sensitive personal data. The protection of informational privacy, on the other hand, means having control over sensitive personal data. A software application that works with data relevant to diversity carries a considerable risk of losing that control. Furthermore, when people reveal information about individual problems, they become vulnerable to the harmful intentions of others. How can a platform like WeNet mitigate these risks and other misuse scenarios such as hate speech, trolling and cyberbullying?
In this talk, I want to give an overview of the ethical issues that we are dealing with in the WeNet project before I turn to an in-depth discussion of a key ethical question: Is it even possible to design an algorithm that allows for diversity while avoiding the pitfalls of reinforcing stereotypes and perpetuating discrimination? Since machine learning systems “learn” from the patterns they aggregate from the past, they infer from what has been the case what is going to happen, and they might be very successful with their prognoses. That, however, does not tell us anything about how things should be. AI for social good has to consider closely what is right and good, not only what is likely. Furthermore, in WeNet we have to collect certain data on persons to measure diversity and ensure that diversity is instantiated in the WeNet software application. This, however, brings about the above-mentioned risks. All this taken together bestows an enormous responsibility on designers. I will analyze this predicament and sketch a concept of responsibility in AI for social good that takes into account the special care that the subject matter requires.
“Not Exactly Reading” – The Nature of Reading in the Era of Screen
Krisztina Szabó. (Budapest University of Technology and Economics, Hungary)
According to McLuhan’s (1967) technological determinism, technological development is constantly changing culture. Media are extensions of human perception and have more power over society than the messages they transmit. Literacy is at the focus of this process as the foundation of communication, cognition, learning and the inheritance of culture, especially in the era of the platform shift from print to digital, from paper to screen. Key concepts of literacy (reading and writing, text and context, comprehension, reception and interpretation) become blurred and vexed. Today the question is not about the life and death of printed books, but about the future of reading. As stated in the National Endowment for the Arts (NEA) study, reading digital content or learning online is “not reading” but belongs to the “activities that distract one from reading” (Coyle, 2008, 3-4). In a weaker phrasing, digitalism will give us a new experience “which is not exactly »reading«” (Badulescu, 2016, 148).
As a contribution to this debate, I shall argue that digital reading is indeed reading, based on the major features of print reading but extended with aspects that are essential in the 21st century. The application of digital devices does not necessarily mean distraction; it can be a new opportunity for comprehension and cognitive development. However, in the effort to make screen reading effective, comfortable and practical, the biggest challenge is to recover the missing experience of classic reading. While we concentrate on the pros of technological development in reading, we also try to regain what is lost: the engagement, emotion, inner motivation and the complex mental, physical and sensual experience that make print reading specific. It remains an open question whether technology will be able to create this complexity in screen reading, or whether it will change readers, cognition and culture so much that these experiences will not be missed at all.
Ambiguities with The Algorithms of Hate
Jernej Kaluža (Faculty of Social Sciences, Ljubljana, Slovenia)
In recent years, especially after the Gamergate controversy and the mainstream success of digital alt-right politics, much emphasis has been placed on the theme of the reproduction of so-called digital hate. That phenomenon has been analyzed by, among others, Angela Nagle, David Neiwert, George Hawley and Mike Wendling. However, the role of technology in that reproduction became, especially after the Christchurch and El Paso tragedies, the central question in studies of the new rise of radical right politics. Shooters in both contexts were active in specific digital environments which, as analyzed among others by Luke Munn and Kevin Roose, encouraged hate, conspiracy theories and a “we versus them” mentality. In the proposed presentation, I would like to address some crucial questions and ambiguities arising out of the described situation. Besides the questions of human responsibility and of the coincidence between specific psychological profiles and the technological characteristics of algorithms, I would primarily like to address the issue of the response to digital hate. In that context, I would like to problematize the idea of a strict struggle against hate speech, which attempts to eliminate hate discourse from the digital public sphere altogether. Part of the reason for the appearance of alternative, hateful “rabbit holes” is precisely the intolerance towards such views in the mainstream digital space. However, the opposite approach, complete tolerance towards hate, is not an appropriate answer either. What, then, is the right approach to digital hate? And how should we think about it in the context of the broader debate about the regulation or deregulation of the internet?
The emperor’s new clothes: private governance of online speech
Cristina Voinea. (Bucharest University of Economic Studies, Romania)
Because social media platforms are used by more than two billion people, they have become crucial players in enacting several conditions of living under democracy, such as: how people receive political information; how they articulate their personal, social and political relationships / associations; how they access knowledge; and how they organize spaces for deliberation (DeNardis and Hackl 2015).
In most democratic societies, social media platforms are neutral intermediators, meaning that they are exempted from a legal obligation to actively monitor the content uploaded by users. Despite the legal neutrality presupposed by the law, governments and civil society are mounting increasing pressure on these companies to prevent the dissemination of illegal content, hate speech, and of other material that may be deemed harmful to individuals, groups or societies. This means that social media platforms now have to evaluate what type of speech is harmful or infringing, make rules according to these evaluations and adjudicate potential conflicts between rightsholders (Balkin 2018).
As such, social media platforms have become chokepoints which regulate the flux of information and users’ capacities of expression, without a corresponding system of checks and balances meant to protect users from abuses (Bloch-Wehba 2019). In other words, they possess power over individual speakers, but they have no corresponding responsibilities. The aim of this paper is to show that social media platforms have become private forms of governance.
According to Elizabeth Anderson (Anderson 2017), a governance structure is private when it satisfies three conditions: firstly, those who are subject to the rules are excluded from decision-making processes; secondly, robust due process mechanisms are virtually non-existent; and thirdly, it is extremely costly for individuals to exit the governance structure in question. This paper shows that all of these conditions are satisfied by social media platforms. The decisions and rules of these new private governors can have profound consequences for users’ capacities to exercise their rights and freedoms.
A solution will be sketched at the end. In order to attenuate the dangers that an online private governance system poses to users’ rights, these companies should shift from a punitive to a corrective justice system. More precisely, instead of using punitive measures for the enforcement of their rules – measures which focus exclusively on content – social media platforms should directly address user behavior. The purpose of corrective mechanisms would be to make content moderation systems as transparent as possible and to allow people to police themselves.
References
Anderson, Elizabeth. 2017. Private Government: How Employers Rule Our Lives (and Why We Don’t Talk about It). Princeton University Press.
Balkin, Jack M. 2018. “Free Speech Is a Triangle.” SSRN Scholarly Paper ID 3186205. Rochester, NY: Social Science Research Network.
Bloch-Wehba, Hannah. 2019. “Global Platform Governance: Private Power in the Shadow of the State.” SSRN Scholarly Paper ID 3247372. Rochester, NY: Social Science Research Network.
DeNardis, Laura, and A. M. Hackl. 2015. “Internet Governance by Social Media Platforms.” Telecommunications Policy, 39 (9): 761–70.
The future of human-AI cooperation
Jurgis Karpus (LMU-Munich, Germany)
Imagine yourself driving while stuck in traffic on your way out of the city for a weekend holiday outdoors. A few metres ahead of you someone wants to join the traffic from a side road. Will you stop and let them in or push on, hoping that someone else will let them in behind you? Will you do the same if the other is a humanless self-driving van?
As artificial agents become increasingly endowed with their own autonomous decision-making capacities, we will soon switch from being mere users of machines (e.g., of Google Translate) to being their co-players in strategic social settings (e.g., as with the humanless self-driving van). When people make interdependent decisions in these settings, they often trust and cooperate with one another to attain mutual gains, even when cooperation entails taking risks and exposing oneself to exploitation by strangers. Here we offer evidence indicating that the same behaviours may not emerge in human interactions with AI. In four experiments using well-known economic games, human participants made one-shot decisions while interacting with either another human or an artificial agent emulating typical human behaviour. We found that tacit cooperation was less likely to emerge in people’s interactions with AI vs. with humans. More importantly, the reason for this was not participants’ misgivings about AI’s cooperative dispositions, but their willingness to exploit AI’s actual or anticipated benevolence more than a human’s. These results caution that vulnerability to exploitation may be a key challenge to the introduction of autonomous AI into human society. The fault, as it goes, may not be in the algorithm, but in ourselves, and this calls for a public policy and not a computational or engineering intervention.
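For readers unfamiliar with the games involved, a one-shot Prisoner's Dilemma with invented payoffs (the abstract does not report the experiments' actual parameters) shows why exploiting a co-player who is expected to be benevolent is individually tempting.

    # Illustrative one-shot Prisoner's Dilemma; payoff values are invented.
    PAYOFF = {  # (my_move, other_move) -> my payoff; "C" = cooperate, "D" = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def best_reply(expected_other_move: str) -> str:
        """Move that maximises my own payoff against the co-player's expected move."""
        return max(("C", "D"), key=lambda my: PAYOFF[(my, expected_other_move)])

    # Against a co-player expected to cooperate (e.g. a benevolent AI), defection pays more:
    print(best_reply("C"))  # -> "D"

If mutual cooperation is to emerge despite this payoff structure, players must resist the temptation to defect, which is precisely what the reported experiments suggest people do less readily against AI co-players than against humans.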
The values of automatisation research and development
Zsolt Ziegler. (Eötvös Loránd University, Hungary)
In my presentation, I am going to define the values of automatisation research and development which govern technological developments in engineering. I also show that research and development on automatisation follows contradictory values.
Technological developments have various, diversified ends. For instance, even the same research and development program on automatisation aims, on the one hand, to reduce the amount of human labour required (possibly inducing cutbacks) and, at the same time, to reach the most cutting-edge automatisation possible. These aims of automatisation are represented by the preferences of the research team; they are what the research is for. Note that preferences express value statements when agents (or groups of agents) make a comparison like “X is better than Y”. In addition, preferences are also subjective evaluations of the alternatives (Broome 1993). In both cases, value claims make the evaluation possible because values fill preferences with motives. For example, I prefer vanilla over chocolate ice cream because I value vanilla more. Therefore, the diversified preferences governing automatisation research and development are established by certain values held by researchers. Diversified ends lead to diversified values, which may contradict one another.
Contradictory values (whether moral or not) may not seem fatal in the everyday working process of engineering; however, following different ends which interfere with each other may damage the research program. Once a program follows a specific end, that end determines not just financial or political tools but also research method and research size. Differing values lead to non-complementary financial and political tools, along with interfering research and scientific methods among research teams. Nonetheless, it is also important to note that a perfectly unified set of values for research and development may create a blindfolded research program that lacks any external feedback. An overly unified system of values makes scientific discourse about the research almost impossible. Therefore, a golden mean is needed to govern research and development (on automatisation). This mean system of values will have more or less the same financial and political tools, along with complementary research and scientific methods, and will also preserve critical discourse over the subject matter: automatisation. Having all this in mind, I am going to offer a rough but useful system of criteria for forming value statements, specifically for research and development on automatisation, that satisfies the mentioned golden mean.
According to value theory, there are two types of values: intrinsic and extrinsic. Intrinsic value is the value that something has in virtue of its intrinsic properties. Extrinsic value, in turn, is the value that something has in virtue of its extrinsic, relational properties. The following approaches attempt to capture further conceptual aspects of values. First, on the instrumental understanding, instrumental value is the value that something has in virtue of being a means to an end; instrumental value is often contrasted with intrinsic value or replaced by extrinsic value (Korsgaard 1983). The second pair of concepts is final and non-final value. The final value of something is the value that that thing has “for its own sake” or “in its own right”, but final values can sometimes supervene on extrinsic properties; non-final values are best understood as instrumental values. The third pair of concepts is derivative and non-derivative value. Accordingly, a thing has derivative value if it derives its goodness or badness from some further thing which has value. This leads to a derivative chain of values that must end with a thing which has final or intrinsic value (note that the two do not always overlap). “Things are good and bad only in a derivative sense, that their value is merely parasitic on or reflective of the value of something else” (Zimmerman 2019). Non-derivative values, hence, are either final or intrinsic values.
Accordingly, we then have (i) intrinsic vs. extrinsic, (ii) intrinsic vs. instrumental, (iii) final vs. non-final, and (iv) non-derivative vs. derivative values. These approaches to characterising values may sometimes overlap, but due to their differences they shed light on slight yet significant differences between value statements. In this way, the values of things can best be captured by a combination of these approaches. For instance, the value of one thing may best be determined as extrinsic and final (and non-derivative), and that of another as intrinsic and derivative (and non-final).
On the one hand, as we have seen, research and development on automatisation (and any developing field of engineering science) needs a mean system of values which encompasses values in a non-contradictory way and follows more or less the same end; this also guarantees financial and political tools along with complementary research and scientific methods. Such a system of values may also allow scientific discourse. On the other hand, having a developed system of criteria (i-iv) for differentiating several values allows us to establish an ordering of research programs. Suppose that research program A has the extrinsic final value of X determining A’s financial, political and methodological dimensions. Another program, B, also has the value of X, but intrinsically and finally. Research program C, however, follows its end by having the extrinsic final value of Y. Based on the approaches described above, which capture different aspects of values, in my paper I offer a system of criteria which determines which research programs are close enough to participate in a scientific discourse, and which excludes those programs that are rather impedimental to each other.
Autonomous decision-making: A potential ethical problem for immersive VR technologies
Anda-Maria Zahiu. (University of Bucharest, Romania)
The CEO of Jekko believed, in 2016, that Virtual Reality technology “is the story of humanity mastering our senses” (Shah, 2016). As the tech market becomes more aware of the widespread potential of virtual reality (VR) for furthering therapeutic methods and carrying out research with more accurate results than ever, the experience of immersion is raising novel conceptual challenges for philosophers of technology and ethicists alike. Immersive technologies can affect how we act, perceive reality and understand ourselves (Zuboff, 2019). This places a lot of responsibility on the shoulders of policy-makers and developers, while the concept of authenticity that philosophers often bring into the discussion is frequently neglected. Many new technologies have a disruptive effect on the mental states, behavioral patterns, and self-representation of consumers. In this context, parting ways with some unfit accounts can be of great value for policy-makers.
Many authors draw attention to the fact that immersive VR can be used for neurorehabilitation (Teo et al., 2016) because it allows the subject to build new context-sensitive experiences that can replace problematic behavioral patterns. Empirical studies in this area show that immersion has numerous side effects for consumers: cybersickness, depersonalization (Simeon & Abugel, 2009), the illusion of embodiment (Madary & Metzinger, 2016), and other effects resulting in a loss of the sense of agency for users (Gallagher, 2005). Another direction in VR research confronts the issue of users’ privacy: the information collected from repeated interaction with an immersive virtual environment is more telling than bits and pieces from the personal clickstream, and, when combined with personal data collected from social networks, it threatens users’ autonomy and privacy (O’Brolchain, 2014).
I intend to provide a conceptual framework for immersion-specific VR risks and to consider which account of autonomy is most suitable for this discussion. I will argue that an account that takes the mental state of the agent as an indicator of autonomous decision-making has more analytical value in this discussion than a relational account of autonomy.