Minoru Asada (School of Engineering, Osaka University, Japan)
How to design artificial moral agents towards symbiotic society
ABSTRACT: Morality is one of the big challenges for AI/robotics in a symbiotic society with advanced artificial systems. In this talk, I argue that the pain nervous system can induce empathy, morality, and ethics as a developmental process of consciousness based on the mirror neuron system (MNS), which promotes the emergence of the concept of self (and others), and I discuss possibilities for designing moral agents. First, the limitations of the current progress of AI, with its focus on deep learning, are pointed out from the viewpoint of the emergence of consciousness. Next, the ideological background on issues of mind in a broad sense is outlined. Then, cognitive developmental robotics (CDR) is introduced with two important concepts, physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing CDR studies are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral and legal agents is discussed.
BIO: Minoru Asada received the B.E., M.E., and Ph.D. degrees in control engineering from Osaka University, Suita, Japan, in 1977, 1979, and 1982, respectively. He became a Full Professor of Mechanical Engineering for Computer-Controlled Machinery with Osaka University in 1995, and was a Professor with the Department of Adaptive Machine Systems, Osaka University, during 1997-2018. Since 2019, he has been a specially-appointed professor serving as a strategic adviser for the Symbiotic Intelligent System Research Center, Open and Transdisciplinary Research Initiatives, Osaka University. He was the Research Director of the Japan Science and Technology Agency Exploratory Research for Advanced Technology ASADA Synergistic Intelligence Project from 2005 to 2012. In 2012, the Japan Society for the Promotion of Science named him to serve as the Research Leader for the Specially Promoted Research Project (Tokusui) on Constructive Developmental Science Based on Understanding the Process From Neuro-Dynamics to Social Interaction. Currently, he is a PI for the JST RISTEX HITE project entitled “Legal Beings: Electronic personhoods of artificial intelligence and robots in NAJIMI society, based on a reconsideration of the concept of autonomy.”
>>> Talk at Sydney Ideas on Tuesday 11 June 2019 – SSB Lecture Theatre 200 Social Sciences Building, the University of Sydney
Raya Jones (School of Social Sciences, Cardiff University, UK)
Anthropomorphism as a dialogue with ourselves
ABSTRACT: Advances in robotics are often associated with anticipations of humanlike machines. Varieties of anthropomorphism in this context range from unintentional to deliberate, and may combine visceral and projective aspects. The phenomenon invites ‘why’ questions, which are addressed differently depending on whether the inquiry is articulated in contexts of cognitive science, engineering, or the humanities and social sciences. My main interest concerns what representations of robots reveal about us as humans. Since the concept of anthropomorphism rests on the ‘as-if’ of apperceiving human attributes in nonhumans, the phenomenon also invites questions of why and how a line is drawn between intelligent artefacts and genuine persons. I underline the capacity to participate in dialogical action (to have a ‘voice’) as fundamental to the human form of life, and yet lacking in socially interactive artefacts.
BIO: Raya A. Jones, PhD, is a Reader at the School of Social Sciences, Cardiff University, UK, where she teaches psychology. Her latest research concerns social robotics in the context of social psychology. Earlier and ongoing work involves comparisons of Jungian, dialogical, narrative and social constructionist perspectives on the self. Her latest authored book is Personhood and Social Robotics (Routledge, 2016). Earlier books include Jung, Psychology, Postmodernity (Routledge, 2007), The Child–School Interface (Cassell, 1995), and several edited and co-edited volumes.
>>> Talk at Sydney Ideas on Tuesday 11 June 2019 – SSB Lecture Theatre 200 Social Sciences Building, the University of Sydney
Laurence Devillers (Computer Sciences and Artificial Intelligence, Sorbonne University/CNRS, France)
Bad nudge Bad robot: ethical issues
ABSTRACT: In the near future, socially assistive systems will aim to address critical areas and gaps in care and education by automating the supervision, coaching, motivation, and companionship aspects of one-to-one interactions with individuals from various large and growing populations, including the elderly and children. Talk during social interactions naturally involves the exchange of propositional content, but also, and perhaps more importantly, the expression of interpersonal relationships, as well as displays of emotion, affect, interest, etc. In order to give a companion machine the skills to create and maintain a long-term social relationship through verbal and nonverbal interaction, the robot must be able to represent and understand some complex human social behavior. Conversational agents and social robots using affective computing and adaptive training bring a new dimension to interaction and could become a new means of “nudging” individuals. Emotional manipulation can be defined as an exercise of influence with the intention of seizing control and power at the person’s expense. Such systems are currently neither regulated nor evaluated, and their workings are very opaque. The aim of this talk is to present our project BAD NUDGE BAD ROBOT, carried out by a pluridisciplinary team. Should affective systems interact using the norms for verbal and nonverbal communication consistent with the societal norms of the place where they are located? Economics studies rationality, and to that end many studies document cognitive biases, with a focus on how they affect decision-making. Field experiments are a very effective approach here, and can be applied to topics in child development and education at a young age. As vocal assistants have become ubiquitous, this project studies their impact when such objects are used as an interface: are nudges effective when implemented by a vocal assistant? Are they more effective than a human interviewer? Can vocal assistants elicit issues better than a human interviewer? We set up a field experiment to address these questions.
BIO: Laurence Devillers is a full Professor of Computer Science and Artificial Intelligence at Sorbonne University/CNRS (LIMSI lab., Orsay), working on affective robotics, spoken dialogue, machine learning, and ethics. She leads the research team “Affective and social dimensions in spoken interaction”. Laurence Devillers received her HDR (habilitation dissertation) in Computer Science, “Emotion in interaction: Perception, detection and generation”, in 2006. She is the author of more than 150 scientific publications (h-index: 35). In 2017, she wrote the book “Des Robots et des Hommes : mythes, fantasmes et réalité” (Plon, 2017), explaining the urgency of building social and affective robotic systems with ethics by design. Since 2014, she has been a member of the French Commission on the Ethics of Research in Digital Sciences and Technologies (CERNA) of Allistene and has contributed to several reports on research ethics in robotics (2014) and in machine learning (2018). Since 2016, she has been involved in “The IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems” and leads the IEEE P7008 norm and standard working group on nudging. She is also involved in the DataIA institute (Orsay) and the French HUBIA.
>>> Talk on Wednesday 12 June 2019 – Lecture Theatre 1130 Abercrombie Business School Building, the University of Sydney