Study / Anzalone et al. 2014 / N° participants / 16 ASD, 14 TD
Method / A JA task was prepared: the robot attempted to induce the child to look toward pictures placed on the sides of the experimental room while the perception system recorded his/her posture and gaze. Children had a 3-min interaction with the robot or with a therapist. Nao induced JA toward the pictures by gazing; by gazing and pointing; and by gazing, pointing, and vocalizing.
Findings /
  • The system, which pairs a Nao robot interacting with children and a perception system (Kinect), is able to elicit JA during interaction with a child.
  • JA performance of children with ASD was similar to that of TD children when interacting with a human partner, but interaction with Nao was more difficult: children with ASD showed a significant decrease in their JA score with Nao.
  • Multimodal JA induction (gazing, pointing, and vocalizing) was more efficient in both groups. Yaw variance, vocalizing + pointing: β = 1.66, p = 0.002; yaw variance, gazing only: β = 1.55, p < 0.001; yaw variance, pointing only: β = 1.52, p < 0.001. Pitch variance between TD and ASD: β = −0.84, p = 0.019; pitch variance between females and males: β = −0.89, p = 0.03.
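The yaw-variance scores above are simply variances of the head-yaw angle recorded by the perception system over a session, a proxy for how widely the child's head swept between the lateral targets. A minimal sketch of the computation, with invented angle samples rather than the study's data:

```python
from statistics import pvariance

# Hypothetical head-yaw samples (degrees) from the perception system;
# a wider spread of yaw angles yields a larger variance score.
yaw_trace = [-15.0, -5.0, 5.0, 15.0]

yaw_variance = pvariance(yaw_trace)
print(yaw_variance)
```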

Conclusions of our interest /
  • Children use prior experience with robot interactions as evidence that the robot is a psychological agent and therefore want to engage in JA with the robot when it shifts its gaze toward an object. The current results suggest, however, that this response might not be as fluent as with a human partner.
  • The system could be part of therapeutic activities that monitor the child's reactions in real time and inform the therapist of JA events.

Study / Bekele et al. 2013 / N° participants / 6 ASD, 6 TD
Method / Each child sits, one at a time, on a Rifton chair between two monitors (to the child's left and right) on which images appear during the experiment (randomly on the right or the left). The experiment comprises four sessions of the JA task, two run by a man and two by a robot. Each session consists of four trials presented in random order. Each trial begins with the man/robot instructing the child to look at one of the two monitors. There are six prompt levels. Each trial's prompt lasted 8 s: approximately 5 s of prompting plus a 3-s monitoring interval. If the participant did not respond to the robot/human prompt, an audio clip (approximately 5 s) and then a video (approximately 5 s), neither of which directly addressed the participant, were used as additional attention-capturing mechanisms. If the participant responded at any level of the prompt hierarchy, reinforcement was given via verbal feedback from the robot followed by a 10-s video.
Findings /
  • Participants with ASD spent 52.8% (sd = 21.4%) of the time watching the administrator during the robotic session, TD participants 54.3% (sd = 17.7%). Participants with ASD spent 25.1% (sd = 18.7%) of the time watching the administrator during the human session, TD participants 33.6% (sd = 16%). The robotic-minus-human difference in time spent watching the administrator is 27.7% in ASD (p < 0.05) and 20.7% in TD (p < 0.05). Both groups therefore spent more time watching the administrator during the robotic than the human session; the effect is more pronounced in participants with ASD, but the between-group difference is not statistically significant.
  • TD participants spent more time than ASD participants watching the administrator during the human session (difference 8.53%, p > 0.1).
  • ASD participants needed 26.4% (sd = 11.7%) of prompts during the human session and 40.9% (sd = 20%, p < 0.05) during the robotic session. TD participants needed 20.1% (sd = 4.8%) of prompts during the human session and 29.5% (sd = 15%, p < 0.05) during the robotic session. Both groups therefore needed more prompts during the robotic than the human session.
  • In the robotic session, 77.08% of participants with ASD and 93.75% of TD participants turned their head toward the target before its activation; in the human condition, 93.75% of participants with ASD and all TD participants did so. In both cases, the target was reached mostly after the first prompt.
  • In the human condition, both ASD and TD participants reached the target 100% of the time. In the robot condition, TD participants reached the target in 97.9% of cases, ASD participants in 95.8%.
  • There were no statistically significant differences in hit frequency between ASD (2.06, sd = 0.71) and TD (2.02, sd = 0.71) children, or between the two conditions for participants with ASD. In TD children, hit frequency was greater during the human than the robotic session.

Conclusions of our interest /
  • No specific data suggested that children with ASD exhibited preferences or performance advantages within the system when compared with their TD counterparts. The current system provides only a preliminary structure for examining ideal instruction and prompting patterns.
  • Both children with ASD and TD children required higher levels of prompting with the robot administrator than with the human administrator in this study. It is also entirely plausible that this difference highlights the fact that humanoid-robotic technologies, in many of their current forms, are not as capable as their human counterparts of performing sophisticated actions, eliciting responses from individuals, and adapting their behaviour within social environments.

Study / Bird et al. 2007 / N° participants / 16 ASD, 15 TD
Method / All stimuli were presented as photographs. Participants were first shown a stimulus in a neutral position for each type of hand (human or robotic); they were then shown an open or a closed hand. The transition from one photo to the next generated the illusion of movement. On some trials participants had to imitate the movement shown in the photo (compatible trials); on others they had to do the opposite (incompatible trials). This difference between tasks was used to test the effect of automatic imitation.
Findings /
  • The compatibility effect was greater when responding to human compared with robot stimuli.
  • The difference between the human and robot compatibility effects was larger in the ASD group than in the TD group.
  • The ASD group exhibited a greater compatibility effect in response to observed human action.
  • Difference between compatible and incompatible trials: F(1,30) = 79, p < 0.001, η2p = 0.73.
  • Animacy bias: F(1,30) = 29.6, p < 0.001, η2p = 0.50.
  • Difference between human compatibility effect and robot compatibility effect: F(1,30) = 4.6, p < 0.04, η2p = 0.13.
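As an illustration of the measures involved (with invented reaction times, not the study's data): the compatibility effect is the mean reaction-time cost of incompatible versus compatible trials, and the animacy bias is the difference between the human and robot compatibility effects. The study itself analysed these effects with repeated-measures ANOVA.

```python
from statistics import mean

# Hypothetical per-trial reaction times (ms) for one participant.
rt = {
    ("human", "compatible"): [420.0, 430.0, 410.0],
    ("human", "incompatible"): [480.0, 490.0, 470.0],
    ("robot", "compatible"): [430.0, 440.0, 420.0],
    ("robot", "incompatible"): [450.0, 460.0, 440.0],
}

def compatibility_effect(agent):
    """RT cost of responding against the observed movement for one agent type."""
    return mean(rt[(agent, "incompatible")]) - mean(rt[(agent, "compatible")])

human_effect = compatibility_effect("human")
robot_effect = compatibility_effect("robot")
animacy_bias = human_effect - robot_effect  # larger for the ASD group in the study
print(human_effect, robot_effect, animacy_bias)
```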

Conclusions of our interest /
  • In comparison with matched, typically developing controls, the ASD group showed an equivalent automatic imitation effect, and signs of an increased animacy bias, namely, a greater difference in automatic imitation of actions of humans and of robots.
  • The authors suggest that the ASD group showed a greater compatibility effect because they had problems inhibiting imitation of human actions; with robot stimuli this effect is attenuated.

Study / Chaminade et al. 2012 / N° participants / 12 ASD, 18 TD
Method / Participants played a game against three different opponents: an experimenter ("Intentional agent", Int), a small humanoid robot implemented specifically for this game ("Artificial agent", Art), and a random response generator ("Random agent", Rnd). Participants were told that the first two agents would try to win using strategies, in contrast to Rnd, which has none.
Findings /
  • Both groups showed an increased response in brain regions devoted to attentional and executive functions, and increased involvement in the interaction, when participants were confronted with an active agent rather than the random agent. The results also appear graded: the increase is greater when the opponent appears more intentional.
  • The posterior superior temporal gyrus, a cortical region involved in social cognition, in particular when visual cues are involved, is more active when controls, but not ASD patients, believe they are interacting with an intentional rather than a non-intentional agent. This suggests that ASD patients represent interacting robots differently than TD participants do. Temporal areas show the same level of activity whether ASD patients interact with a robot or with a human, implying that they fail to represent intentional and artificial interacting partners differently. In contrast, the response profiles for the two agents in lateral and medial frontal lobe clusters imply that, when interacting with artificial agents, ASD patients use the resources that TD controls use with intentional agents.

Conclusions of our interest / Present fMRI results support the proposal that ASD patients may consider artificial agents as social interacting partners in the way controls consider fellow humans.

Study / Conn et al. 2008 / N° participants / 6 ASD
Method / The experiment consists of two phases: in the first, the measurable characteristics of the participants' emotional states are studied with a system based on ECG, EDA, and EMG measurements plus feedback from a parent and from a therapist who knows the participant; the second exploits this knowledge to implement a robotic session that takes the child's tastes and preferences into account. The subjects participated in two robotic sessions seven days apart.
Findings /
  • For five out of six children, the differences in reported liking of the experience were statistically significant (p < 0.05, ANOVA).
  • The predictive accuracy for their emotional states was high: 85% for liking, 79.5% for anxiety, 84.3% for engagement.
  • The predictive accuracy of EMG alone is 69.7%; of ECG, 73.5%; of EDA, 73%; the maximum prediction accuracy is obtained by combining all signals (82.9%).
  • The average predictive accuracy across all participants was approximately 81.1%.
  • Liking reported by the therapist is higher in RBB2 than in RBB1 (RBB1: mean across all participants = 0.50, sd = 0.059; RBB2: 0.69, sd = 0.1).
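The finding that combining all signals beats any single one is characteristic of multimodal fusion. As an illustration only (the study does not describe its fusion scheme, and the data below are invented), a simple majority vote over per-signal binary predictions can correct errors that any single signal makes:

```python
def majority_vote(predictions):
    """Fuse per-signal binary predictions (e.g., from EMG, ECG, EDA models)."""
    return 1 if 2 * sum(predictions) > len(predictions) else 0

# Toy example: three imperfect per-signal predictions of "liking" (1) vs not (0).
true_labels = [1, 1, 0, 0, 1]
emg = [1, 0, 0, 0, 1]  # one miss
ecg = [1, 1, 0, 1, 1]  # one false alarm
eda = [0, 1, 0, 0, 1]  # one miss

# Each single signal gets 4/5 right; the fused vote recovers all five here.
fused = [majority_vote(p) for p in zip(emg, ecg, eda)]
print(fused)
```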

Conclusions of our interest / The affect-sensitive robot behaviour adaptation led to an increase in reported liking level of the children with ASD.

Study / Cook et al. 2014 / N° participants / 10 ASD, 12 TD
Method / Participants were asked to imitate the movements of three agents, two virtual and one real. The two virtual agents were a Caucasian man of about thirty and a humanoid robot closely resembling him; the real agent was a man. Each of the three agents made a movement that was either congruent or incongruent with the movement the participants had been asked to perform. The two virtual agents performed their movement in two ways: (i) following the rules of biological motion and (ii) at a constant velocity. The human agent performed his movement only following the rules of biological motion.
Findings /
  • A mixed-model 2 × 2 × 2 × 2 ANOVA with factors ‘group’ (ASC, control), ‘actor form’ (virtual human agent, virtual robot agent), ‘actor motion’ (BM, CV) and movement ‘congruency’ (congruent, incongruent) showed a significant interaction between group × actor form × congruency (F1,20 = 5.05, p = 0.04, η2p = 0.20). This interaction was also significant if age and (full-scale) IQ were included as covariates (F1,18 = 4.83, p = 0.04, η2p = 0.20).
  • Whereas TD produced significantly more error in planning movements when observing incongruent [adjusted mean (S.E.M.) = 423.14 (98.42)] compared with congruent [342.21 (77.50); F1,18 = 5.12, p = 0.04)] movements conducted by the virtual human agent, individuals with ASD did not [incongruent adjusted mean (S.E.M.) = 335.03 (107.84), congruent = 370.25 (84.90); F1,18 = 0.80, p = 0.38].
  • Neither group showed a significant difference between incongruent and congruent movement observation in RC (all F1,18 < 1, p > 0.3).
  • The 2×2×2×2 ANOVA also showed a significant actor motion×group interaction (F1,20=6.82, p=0.02, η2p = 0.25), which was also significant if age and IQ were included as covariates (F1,18=6.78, p=0.02, η2p=0.21).
  • Numerically, TD produced more error in planning movements when observing biological motion (BM) [adjusted mean (S.E.M.)=408.47 (95.90)] compared with constant velocity (CV) motion [377.15 (91.20)]. This trend did not reach significance (F1,18=4.07, p=0.06) but approached it.
  • For participants with ASD there wasn’t significant difference (F1,18 = 2.85, p = 0.11) between error in planning movements when observing BM [339.56 (S.E.M. 105.07)] compared with CV [368.27 (S.E.M. 99.92)]
  • Simple-effects analyses with age and IQ as covariates revealed that the group × actor form × congruency interaction was driven by a non-significant trend towards a difference between incongruent [adjusted mean (S.E.M.) = 491.04 (128.51)] and congruent [416.25 (122.41)] movement observation in the real HC for the control group [F1,18 = 2.56, p = 0.07 (one-tailed), η2p = 0.12] but not for the ASC group [incongruent: 292.25 (140.80); congruent: 355.01 (134.11); F1,18 = 1.48, p = 0.24].
  • Neither group showed a significant difference between incongruent and congruent movement observation in the RC [control: incongruent: 411.73 (105.84); congruent: 456.90 (123.53) (F1,18 = 0.69, p = 0.42); ASC: incongruent: 339.13 (115.95); congruent: 329.62 (135.34) (F1,18 = 0.025, p = 0.87)].
  • Children with ASD did not exhibit this modulatory effect of human form

Conclusions of our interest / For TD participants, virtual human agent but not virtual robot agent movements produced a significant interference effect, whereas neither virtual human nor virtual robot agent movements produced a significant interference effect for the ASD group.

Study / Damm et al. 2013 / N° participants / 9 ASD; 15 TD
Method / Participants were asked to decide which of two cards placed in front of them was favoured by the interaction partner according to his eye-gaze behaviour. Before the participants' decision, the favoured card was briefly fixated by the opposing robot or human actor. Participants evaluated 10 pairs of cards in each condition (condition 1: human-human interaction (HHI); condition 2: human-robot interaction (HRI)).
Findings /
  • Percentage of time spent watching the robotic or human partner during the two sessions:

    Group / ROBOT / HUMAN / Wilcoxon test
    ASD / 60.89% / 34.45% / p = .043
    TD / 59.07% / 56.36% / p = .767
  • The two groups showed significant differences in fixations for the human-human condition (Mann-Whitney U test, p = .021), but not for the human-robot condition (Mann-Whitney U test, p = .719).
  • Patients with ASD focused on the face of the robot less frequently at the end of the interaction than at the beginning.
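The between-group comparison above uses the Mann-Whitney U test. As an illustration only (the fixation percentages below are invented, not the study's data), the U statistic simply counts, across all cross-group pairs, how often a value from one group exceeds a value from the other:

```python
def mann_whitney_u(xs, ys):
    """U statistic for group xs: pairs where x beats y, ties counting half."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical per-child fixation percentages in the human-human condition.
asd_fix = [30.0, 32.0, 35.0, 38.0]
td_fix = [50.0, 55.0, 56.0, 60.0]

u_asd = mann_whitney_u(asd_fix, td_fix)  # 0.0 here: the groups fully separate
print(u_asd)
```

The two complementary statistics always sum to the number of cross-group pairs (n1 x n2), which is a quick sanity check on any implementation.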

Conclusions of our interest /
  • Diminished eye contact in participants with ASD during direct social interactions, especially during interactions with humans. Remarkably, abnormal eye contact is not evident when ASD patients interact with robots, suggesting that they might prefer social robots in experimental settings.
  • Robots can serve as social mediators for children with ASD.

Study / Duquette et al. 2008 / N° participants / 4 ASD
Method / Participants are divided into two groups: one interacts with a human agent, the other with a robotic agent. Each group includes one pre-verbal child and one non-verbal child. The mediator performs games involving imitation of facial expressions, body movements, familiar actions with objects, and unfamiliar actions without objects. Each child participated in 22 sessions, three times a week for seven weeks.
Findings /
  • Children paired with the robot mediator showed more shared attention, more visual contact and proximity (β = 0.40) with their mediator, fewer gestures not directed toward the mediator (β = −0.28), and more imitation of facial expressions than children paired with the human mediator.
  • They also showed reduced repetitive play with inanimate objects of interest (their favourite toy) and no repetitive or stereotyped behaviour toward the robot.
  • Pairing an autistic child with the robot mediator had a negative influence on the imitation of words (β = −0.44).

Conclusions of our interest / The results are very encouraging and support the continuation of work in this research field.

Study / François et al. 2009 / N° participants / 6 ASD
Method / Semi-structured interactions with the robot and the experimenter, and sometimes with another researcher with whom the children were familiar. Each child participated in up to ten sessions, though not every child could take part in all ten. Session duration was variable: the child was free to play for as long as he/she wanted, up to a limit of 40 minutes. The child was the main leader of the play.
Findings /
  • Children progressed differently and their profiles are unique.
  • Three groups can be highlighted:
  • the first group (2 children) who mostly played solitarily and possibly encountered rudimentary situations of imitation;
  • the second group (1 child) who communicated mainly non-verbally and showed pre-social or basic social play during the last sessions;
  • the third group (3 children) who proactively engaged in social play.
  • All children tended to express interest in the robot.

Conclusions of our interest /
  • The use of a robot allows us to simplify the interaction and to initially create a relatively predictable environment for play.
  • The robotic pet can be considered a good medium for developing and/or expressing reasoning about mental states and social rapport, and for learning about basic causal reactions.

Study / Giannopulu 2013 / N° participants / 4 ASD
Method / Participants are involved in free dyadic interactions with a robot and triadic interactions with a robot and a man. Five parameters are measured over time: eye contact, touching, manipulation, posture, and positive emotions.
Findings /
  • In dyadic interactions children spent most of their time playing with the robot: 238.7 seconds (s.d. = 58.57; r. = 133), i.e., 80% of their time.
  • The duration of eye contact is similar for all the children
  • Touching, manipulating, and posture reflect inter-individual differences, possibly related to different forms of autism.
  • Interaction with the robot changes over time, suggesting that a mobile toy robot could reduce repetitive and stereotypical behaviours.
  • In triadic interactions (about 30 s) children spent half of their time playing with the robot and the other half playing with the robot and the adult.
  • Eye contact and touching are similar in both kinds of interaction; manipulating, posture, and positive emotions differ between the two situations.
  • In both kinds of interaction children express positive emotions, but more in triadic than in dyadic interaction.

Conclusions of our interest /
  • What is important is the “passage” from dyadic interaction to triadic interaction. Indeed, when “A” interacts with both the robot and the adult, he changes his behaviour. Experimenters think that the robot as a mediator could bring about neurocognitive improvements to the autistic child.
  • Free game play, i.e., an ecological situation, encourages an autistic child to interact with the robot in a spontaneous manner and could reduce repetitive and stereotypical behaviour.

Study / Kim et al. 2013 / N° participants / 24 ASD
Method / Subjects were involved in three types of triadic interaction in which, in addition to the therapist, they had to interact with another human agent, with a touchscreen computer game, or with Pleo. The number of utterances expressed during each experimental condition was measured.
Findings /
  • Participants produced more utterances in the robotic condition than in the human condition, and more in the human condition than in the computer condition. Utterances produced during RC: M = 43, s.d. = 19.4. Utterances produced during HC: M = 36.8, s.d. = 19.2, t(23) = 1.97, p < 0.05. More utterances in RC (t(23) = 4.47, p < 0.001) or HC (t(23) = 3.61, p < 0.001) than in the touchscreen computer game condition (M = 25.2, s.d. = 13.4).
  • Children spoke with the therapist more in the robotic and human conditions than in the computer condition. Utterances to the agent in RC: M = 29.5, s.d. = 16.6. Utterances to the agent in HC: M = 25.5, s.d. = 15.5. Utterances to the agent in the computer game condition: M = 0.05, s.d. = 0.8.
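The condition comparisons above are paired t-tests over per-child utterance counts. A minimal sketch of the statistic itself, with invented counts rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired-samples t statistic for two equal-length lists of scores."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical utterance counts per child: robot condition vs computer condition.
robot_cond = [12.0, 14.0, 16.0]
computer_cond = [10.0, 10.0, 10.0]

t_stat = paired_t(robot_cond, computer_cond)
print(t_stat)
```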