Robot intelligence: beware the breach of confidence

By Christian Veyre.

AI, human-machine interaction, affective and interactive robotics… In the future, could we develop peer-to-peer relationships with robots? On 8 December, the “Ministry of Ethics of Artificial Intelligence and Robotics” and one of its most prominent magistrates, Laurence Devillers, will take the floor at the Digital Tech Conference in Rennes.

Laurence Devillers is a scientist in the strictest sense of the word. Even in the hard-science sense. But one who shows a great deal of empathy and adapts very easily to her audience. Which just goes to show that it is possible to be a computer scientist and have keen situational intelligence!

“Speaking about the super intelligence of robots is a bluff!”

But that was before we mentioned Sophia (the robot by Hanson Robotics) or Elon Musk. Here, her tone is adamant. She speaks more quickly. And more loudly. And the researcher does not beat about the bush: “Portraying robots as intelligent is a breach of confidence! Speaking about the super-intelligence of robots is a bluff! With Sophia, you are given the impression that the robot experiences emotions, feelings. That it is capable of adapting, of understanding… The only thing this ‘social’ robot can do is simulate certain human capacities without understanding them: recognising faces, imitating human expressions, speaking and answering some questions. It has no consciousness in the human sense, no desire, no pleasure. It is pure fantasy! And what’s more, this machine has no moral limits!”

So that’s that, it’s settled, it’s clear! However, Laurence can’t be stopped. “What you need to understand is that dialogue with robots doesn’t really exist. They don’t understand the meaning of the words spoken to them and, moreover, they don’t understand the meaning of the words they themselves generate either.”

“Sophia is Siri in a dress!”

What a disappointment! And, at the risk of tipping us into renunciation and post-traumatic depression, Laurence has not finished driving the point home: “We are in the era of chatbots, but for the time being they are very rudimentary. Some are more evolved than others, but they can only simulate conversation in natural language within specific domains, without managing the dialogue history. A bit like Siri, Apple’s personal assistant, Google Home or Amazon Alexa, which do not have very evolved dialogue management.”

And on top of all that, they have no sense of humour! Bah!
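To make the point about dialogue history concrete, here is a minimal sketch in Python of the difference between a bot that answers each utterance in isolation and one that remembers previous turns. The rules and replies are purely illustrative, not any real assistant’s behaviour or API:

```python
# Minimal sketch contrasting a stateless chatbot with one that keeps a
# dialogue history. All rules and replies here are illustrative.

def stateless_reply(utterance: str) -> str:
    """Answers each utterance in isolation, like a rudimentary chatbot."""
    if "weather" in utterance.lower():
        return "It is sunny today."
    return "I don't understand your question!"

class HistoryAwareBot:
    """Keeps past turns so follow-up questions can be resolved."""
    def __init__(self):
        self.history: list[str] = []

    def reply(self, utterance: str) -> str:
        # "And tomorrow?" only makes sense if we remember the last topic.
        if "tomorrow" in utterance.lower() and any(
            "weather" in turn.lower() for turn in self.history
        ):
            answer = "Tomorrow should be rainy."
        else:
            answer = stateless_reply(utterance)
        self.history.append(utterance)
        return answer

bot = HistoryAwareBot()
print(bot.reply("What's the weather like?"))  # -> It is sunny today.
print(bot.reply("And tomorrow?"))             # -> Tomorrow should be rainy.
print(stateless_reply("And tomorrow?"))       # -> I don't understand your question!
```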

Laurence and her teams therefore work on interactions between humans and robots. Her favourite subject? So-called “emotional” robots; in other words, those that use powerful algorithms to decode the emotion expressed by a human being: “The robot analyses the prosody of our speech (intonation, energy, rhythm, accent, etc.) and our facial expressions (laughter, grimaces, gritted teeth, etc.). It then interprets these stimuli and reacts. The machine interprets but does not understand. These companion robots are particularly interesting for supporting elderly people and even autistic children. They can aid cognitive stimulation in patients with degenerative diseases,” stresses Laurence Devillers.
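For readers curious about what “analysing the prosody” can look like in practice, here is a minimal sketch assuming the open-source librosa audio library. The file name and the final decision rule are hypothetical placeholders; real systems such as those studied at LIMSI rely on trained models, not hand-written thresholds:

```python
# A minimal sketch of the first step such "emotional" robots perform:
# extracting prosodic features (pitch, energy, rhythm) from a speech
# recording. Assumes the librosa library; the decision rule at the end
# is a deliberately naive placeholder, not a real classifier.
import librosa
import numpy as np

y, sr = librosa.load("patient_utterance.wav", sr=16000)  # hypothetical file

# Intonation: fundamental frequency (F0) contour over voiced frames.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
pitch_mean = np.nanmean(f0)
pitch_range = np.nanmax(f0) - np.nanmin(f0)

# Energy: root-mean-square amplitude per frame.
energy = librosa.feature.rms(y=y)[0]

# Rhythm: a rough speaking tempo from onset strength.
tempo = librosa.beat.tempo(y=y, sr=sr)[0]

features = {
    "pitch_mean_hz": float(pitch_mean),
    "pitch_range_hz": float(pitch_range),
    "energy_mean": float(energy.mean()),
    "tempo_bpm": float(tempo),
}

# Naive placeholder rule: wide pitch range + high energy read as "aroused".
label = "aroused" if pitch_range > 120 and energy.mean() > 0.05 else "calm"
print(features, "->", label)
```

This is exactly the sense in which “the machine interprets but does not understand”: it maps acoustic measurements to a label without any grasp of what the speaker meant.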

In hospital environments (such as Broca hospital) and nursing homes, LIMSI-CNRS is carrying out experiments with gerontologists, in particular in the LUSAGE living lab: “Robots take on a role similar to that of a personal coach. They help stimulate memory and language, and prevent isolation.”

“I don’t understand your question!”

All while making it easier to collect data: “One of the objectives of today’s systems is to collect data effectively, via connected objects such as wristband watches that capture physiological data, for example, and via audio and video during the interaction with the robot, in order to analyse and visualise the progress made by patients in this type of stimulation programme, in conjunction with care staff.”
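As a purely illustrative sketch of how such per-session measurements might be pooled for care staff (every field name below is hypothetical, not taken from the LUSAGE experiments):

```python
# Hypothetical sketch of pooling per-session measurements (wearable +
# interaction logs) so care staff can follow a patient's progress.
# Every field name here is illustrative, not from an actual system.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    day: int
    mean_heart_rate: float      # from a wristband sensor
    words_per_minute: float     # from the audio of the interaction
    recall_score: float         # memory-exercise result, 0..1

def progress(sessions: list[Session]) -> dict:
    """Compare the first and second half of the recorded sessions."""
    half = len(sessions) // 2
    early, late = sessions[:half], sessions[half:]
    return {
        "recall_change": mean(s.recall_score for s in late)
                         - mean(s.recall_score for s in early),
        "speech_rate_change": mean(s.words_per_minute for s in late)
                              - mean(s.words_per_minute for s in early),
    }

log = [Session(1, 72, 95, 0.40), Session(8, 70, 101, 0.45),
       Session(15, 69, 104, 0.55), Session(22, 68, 108, 0.60)]
print(progress(log))  # e.g. {'recall_change': 0.15, 'speech_rate_change': 8.0}
```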

The monitoring of learning and the data collected are key issues that raise ethical questions. “Today, our community of researchers is passionate about these subjects. The arrival of robots in our everyday lives is forcing us to think about an ethical framework. Alexa, Amazon’s conversational agent, is present in 20% of American households. Humans will co-evolve with robots; the real question is how humanity will adapt to them. The good news is that we have a bit of time to prepare the ground: robot intelligence is still low. Machines struggle with semantics and are very slow learners.”

Researchers must use this time to educate the general public, political decision-makers, journalists, AI researchers and designers about chatbots, affective machines and artificial intelligence in general… “It’s a priority. To program a robot, you need to anticipate the side effects, good and bad, so that users can have confidence in the machine.”

Lastly, for definitive reassurance on the subject, and to sound out the intentions of chatbot designers, we asked Siri whether it was intelligent and whether we should be afraid. The machine’s responses: “Enough not to answer this question!” and “I think that goes beyond my skill set for the time being.”

You have been warned!

And where do ethics figure in all this?

There are clear ethical implications to digital technologies. That is why, in 2009, the leading French research organisations in these fields (CDEFI, CEA, CNRS, CPU, INRIA and Institut Mines-Télécom) came together to create Allistene, the alliance of digital sciences and technologies. The latter soon set up a reflection and ethics commission, CERNA, whose members include Laurence Devillers. So what is the aim of this learned assembly? To tackle the ethical questions raised by research in machine learning, a branch of artificial intelligence.

> Click here to see the June 2017 report.

> Click here to see the report on the Science and Future website and discover Angel, Laurence Devillers’ life assistant.

> Click here to see the Tonight Showbotics by Jimmy Fallon, with Snake Bot, Sophia and eMotionButterfly.

Originally published in Ouest-France as “Intelligence des robots : attention à l’abus de confiance”.