The robot that “imitates” human unpredictability comes from the Italian Institute of Technology

What makes a human being human? And is it possible to replicate this “humanity”, whatever it is, and transfer it to a robot? A team of researchers from the Social Cognition in Human-Robot Interaction laboratory at the Italian Institute of Technology (IIT) in Genoa, led by Agnieszka Wykowska, has just tackled the problem, setting up an experiment to clarify how and when humans perceive robots as “intentional agents”, entities very close to their fellow humans. To do this, they implemented a non-verbal Turing test in an interaction between a robot and a human, involving the now famous iCub: and in this way, as they report in the journal Science Robotics, they discovered that it is indeed possible to “transfer” some typically human characteristics to robots, especially response times, in such a way that a human cannot tell whether they are interacting with a fellow human or with a machine.

The Turing Test

Let’s take a step back. One of the first scientists to question the “humanity” of machines was Alan Turing, who over sixty years ago suggested considering the question: are machines capable of thinking? He imagined describing “a new form of the problem in terms of a game we call the ‘imitation game’. It is played by three people, a man (A), a woman (B) and an interrogator (C) […] The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to decide which of A and B is the man and which is the woman. He knows them only as X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. In order that tone of voice or handwriting may not help the interrogator, A and B’s answers are written.” “Now,” Turing continues, “let us ask the following questions: what would happen if a machine took A’s place? Would the interrogator decide wrongly as often as when the game is played between a man and a woman? These questions replace the original one: can machines think?”.

Over the years, hundreds of experiments have been conducted to answer this question, and recently there have been some positive results. This is, for example, the case of the dialogue between Eugene Goostman, a computer program designed to hold conversations, and human volunteers who had to figure out who they were talking to. On that occasion, Goostman managed to convince a third of the judges that it was a flesh-and-blood 13-year-old boy.

The IIT Experiment

The one just described is a “classic” Turing test. The IIT researchers instead proposed a “non-verbal” version, one that does not involve the exchange of messages. “The most interesting result of our study,” Wykowska tells Wired, “lies in the fact that the human brain is very sensitive to the subtle nuances of behaviour that reveal ‘humanity’. In the non-verbal Turing test, human participants had to judge whether they were interacting with a machine or with a person by considering only the reaction time of a button press.” To prepare the experiment, Wykowska’s team first precisely measured the response times and accuracy of an average human profile. They then recruited volunteers and divided them into human-robot pairs: each person was essentially paired with a robot that had to press a button whenever it saw a certain signal on a screen. The robot was controlled either by a person or by an algorithm programmed to act in a similar, but not quite identical, way to a human. And of course its “partner” did not know who was controlling it.
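To give a concrete, purely illustrative idea of what “similar, but not quite identical, to a human” could mean in practice, the sketch below simulates a robot controller whose button-press reaction times are drawn from a distribution built on a measured human profile, with the mean and accuracy nudged by a small offset. All numbers and function names here are hypothetical assumptions for illustration; the actual parameters used by the IIT team are described in the Science Robotics paper.

```python
import random

# Hypothetical "average human profile" (milliseconds and hit rate);
# in the study these values were measured from people beforehand.
HUMAN_MEAN_RT_MS = 450.0
HUMAN_SD_RT_MS = 80.0
HUMAN_ACCURACY = 0.95

def button_press(shift_ms: float = 0.0, accuracy_penalty: float = 0.0):
    """Simulate one response to the on-screen signal.

    Returns (reaction_time_ms, pressed_correctly). With both arguments at 0
    the responses follow the measured human profile; small non-zero values
    give the "slightly different from human" behaviour of the algorithm.
    """
    rt = random.gauss(HUMAN_MEAN_RT_MS + shift_ms, HUMAN_SD_RT_MS)
    correct = random.random() < (HUMAN_ACCURACY - accuracy_penalty)
    return max(rt, 0.0), correct

# Human-controlled robot: responses match the average human profile.
human_trial = button_press()

# Algorithm-controlled robot: same profile, slightly perturbed parameters.
algorithm_trial = button_press(shift_ms=30.0, accuracy_penalty=0.02)
```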

“In our experiment,” adds Francesca Ciardo, first author of the study, “we pre-programmed the robot by slightly changing the reaction time and accuracy parameters of the average human profile. In this way, the possible behaviours of the robot were of two kinds: the first completely human, when the robot was actually controlled by a human, and the second slightly different from that of a human, when the robot was controlled by the pre-programmed algorithm.” The result: the robot appears to have passed this particular type of non-verbal Turing test. In other words, the volunteers who interacted with the robot were unable to tell that it was controlled by an algorithm in the trials where it actually was. “The next step of the experiment,” concludes Wykowska, “will involve implementing more complex behaviours, so as to obtain a more extensive interaction with people and understand which other parameters of this interaction are perceived as human or mechanical.”
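“Passing” this kind of test ultimately comes down to a statistical claim: the judges’ accuracy at spotting the algorithm does not rise above chance. A minimal, self-contained sketch of that check, using an exact two-sided binomial test against the 50% chance level, might look like the following; the counts are made up for illustration and are not the study’s data.

```python
from math import comb

def binomial_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability of an outcome at least
    as unlikely as observing k successes out of n trials under chance p."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probabilities of all outcomes no more likely than the observed one.
    return sum(prob for prob in pmf if prob <= observed + 1e-12)

# Hypothetical counts: out of 60 algorithm-controlled trials, judges
# labelled the robot "machine" 33 times (the rest were judged "human").
correct_calls, total_trials = 33, 60
p_value = binomial_two_sided_p(correct_calls, total_trials)

# A large p-value means accuracy is indistinguishable from the 50% chance
# level, i.e. observers could not reliably detect the algorithm.
print(f"accuracy = {correct_calls / total_trials:.2f}, p = {p_value:.3f}")
```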
