Research in Human-Robot Interaction (HRI) unfortunately cannot assume that all the functionality needed for autonomous interaction is in place. Often, parts of the processing chain in HRI need to be simplified or replaced by human teleoperation or Wizard of Oz approaches. A recent survey of work presented at the HRI conference series shows that only 40% of papers use an autonomous interactive robot. Fortunately, social signal processing and AI have improved in leaps and bounds, and the prospect of fully autonomous, albeit limited, HRI now seems real. This talk will go over a number of social HRI systems used in our research, highlighting how we deal with shortcomings in autonomy, and how simple design decisions and current developments hold promise for moving towards fully autonomous systems that can be deployed in the wild.
Bio-inspired robot swarms are being designed and studied for many problems, including search, pollution monitoring and control, and security. These swarms have some important advantages compared to many distributed multi-agent AI approaches, including: resilience to robot attrition, robustness to communication failures, the ability to explore multiple solutions to a single problem, and the ability to appropriately (re)distribute resources when problems arise. These advantages come from how the decentralized computation and sensing of the robots lead to robust emergent collective behaviors. A fundamental challenge is figuring out how to allow humans to influence and manage swarms without imposing the human as a single point of failure, which would defeat the advantage of decentralized/emergent behaviors. In this talk, I will discuss some of the AI and human-interaction research necessary to enable a human to manage and influence swarms.
I will present our recent advances in characterizing natural social interactions amongst children, and how this informs the design of complex social behaviours for robots. I will specifically introduce a novel and large open dataset of complex social interactions designed with machine learning in mind, and I will sketch a roadmap for social & cognitive robotics building upon deep learning techniques.
Natural language is the most versatile means of communicating knowledge and thus a prime vehicle for teaching robots. Unlike other techniques such as learning from demonstration and reinforcement learning, learning from natural language instructions can be very efficient and can potentially allow the robot to use the newly acquired knowledge right away during task execution. In this presentation, I will give an overview of our efforts to develop an integrated cognitive robotic architecture, the DIARC architecture, that allows for fast natural-language-guided learning ("zero-shot" and "one-shot" learning) of new objects, actions, rules, concepts, and norms, and will demonstrate the effectiveness of the approach with various examples from human-robot teaching interactions.
In the last decade, there has been a slowly growing collaboration between robotics researchers and clinicians exploring the viability of using robots as a tool for enhancing therapeutic and diagnostic options for individuals with autism spectrum disorder.
While much of the early work in using robots for autism therapy lacked clinical rigor, new research is beginning to demonstrate that robots improve engagement and elicit novel social behaviors from people (particularly children and teenagers) with autism. However, why robots in particular show this capability, when similar interactions with other technology or with adults or peers fail to show this response, remains unknown. This talk will present some of the most recent evidence showing robots eliciting social behavior from individuals with autism and discuss some of the mechanisms by which these effects may be generated.
As a diagnostic tool, robots offer a social press that is repeatable and controllable to allow for standardization of interactive stimuli across individuals and across time. Because robots can provide consistent, reliable actions, clinicians can ensure that identical stimuli are presented at each diagnostic session. Furthermore, the component systems in socially aware robots may offer non-interactive methods for tracking human-human social behaviors. The perceptual systems of these robots are designed to measure and quantify social behavior—that is, exactly the skills that must be identified during diagnosis.
AI-HRI 2017 home page