Introduction
The goal of the Interactive Learning for Artificial Intelligence (AI) for Human-Robot Interaction (HRI) symposium is to bring together the large community of researchers working on interactive learning scenarios for interactive robotics. While current HRI research investigates ways for robots to interact effectively with people, HRI's overarching goal is to develop robots that are autonomous while intelligently modeling and learning from humans. These goals overlap substantially with central goals of AI and interactive machine learning, making HRI an extremely challenging problem domain for interactive learning and a source of fresh problem areas for robotics research.
Present-day AI research still rarely considers situations involving direct interaction with humans and operation within human-populated environments, which present inherent uncertainty in dynamics, structure, and interaction. We believe that the HRI community already offers a rich set of principles and observations that can be used to structure new models of interaction. The human-aware AI initiative has primarily been approached through human-in-the-loop methods that use people's data and feedback to refine algorithms and learned functions, improve performance, and support personalization. We thus believe that HRI is an important component in furthering AI and robotics research.
Our symposium will focus on one common area of interest at the intersection of HRI and AI: interactive machine learning (for interactive robotics). We believe that through interactive learning, HRI can enrich AI, and AI can find useful HRI applications and problems. This fusion of HRI and AI may provide new insights and discussions that benefit both fields. The symposium will include research talks and discussions to share work in this intersectional area, to offer guidance on how best to combine these fields, and to build community through discussion and tutorials.
Format
In addition to oral and poster presentations of accepted papers, the symposium will include panel discussions, position talks, keynote presentations, and a hack session with ample time for networking.
SPEAKERS: Keynote talks will give different perspectives on AI-HRI and showcase recent advances towards humans interacting with robots on everyday tasks. Moderated discussions and debates will allow participants to engage in collaborative public discussion on controversial topics and issues of interest to the AI-HRI community.
NETWORKING: A large part of this effort is to bring together a community of researchers, strengthen old connections, and build new ones. Ample time will be provided for networking and informal discussions.
Presentation and publication
All accepted full and short papers will be presented orally and published in the proceedings. Authors will be notified as to whether they have been assigned a “full-length” or “lightning” presentation slot. Authors assigned to full-length talks will be invited to participate in a panel discussion. Authors assigned to lightning talks will be invited to participate in a poster session.
Important dates
The symposium will be held October 18-20, 2018 at the Westin Arlington Gateway in Arlington, Virginia, USA.
Submission Instructions
Authors may submit under one of three paper categories:
- Full papers (6-8 pages) highlighting state-of-the-art HRI-oriented interactive learning research, HRI research focusing on the use of autonomous AI systems, or the implementation of AI systems in commercial HRI products.
- Short position papers (3-4 pages) outlining new or controversial views on AI-HRI research or describing ongoing AI-oriented HRI research.
- Tool papers (1-2 pages) describing novel software, hardware, or datasets of interest to the AI-HRI community.
In addition, philosophy and social science researchers are encouraged to submit short papers suggesting AI advances that would facilitate the design, implementation, or analysis of HRI studies.
Industry professionals are encouraged to submit short papers suggesting AI advances that would facilitate the development, enhancement, or deployment of HRI technologies in the real world.
Please see the AAAI Author Kit for paper templates to ensure that your submission has proper formatting.
Contributions may be submitted here:
https://easychair.org/conferences/?conf=fss18
Invited Keynote Speakers
- Sonia Chernova: Toward Robust Autonomy for Interactive Robotic Systems
- Cynthia Matuszek: Learning Grounded Language For and From Interaction
- Dylan Glas: Designing vs. Learning - Challenges in creating autonomy for social robots
- Greg Trafton: From Soup to Nuts: Using a mix of AI, robotics, and empirical methods to improve HRI
Accepted Papers
Balancing Efficiency and Coverage in Human-Robot Dialogue Collection
Matthew Marge, Claire Bonial, Stephanie Lukin, Cory Hayes, Ashley Foots, Ron Artstein, Cassidy Henry, Kimberly Pollard, Carla Gordon, Felix Gervits, Anton Leuski, Susan Hill, Clare Voss and David Traum
Multimodal Interactive Learning of Primitive Actions
Tuan Do, Nikhil Krishnaswamy, Kyeongmin Rim and James Pustejovsky
Apprenticeship Bootstrapping via Deep Learning with a Safety Net for UAV-UGV Interaction
Hung Nguyen, Phi Vu Tran, Duy Tung Nguyen, Matthew Garratt, Kathryn Kasmarik, Michael Barlow, Sreenatha Anavatti and Hussein Abbass
Deep HMResNet Model for Human Activity-Aware Robotic Systems
Hazem Abdelkawy, Naouel Ayari, Abdelghani Chibani, Yacine Amirat and Ferhat Attal
Cycle-of-Learning for Autonomous Systems from Human Interaction
Nicholas Waytowich, Vinicius Goecks and Vernon Lawhern
Towards a Unified Planner For Socially-Aware Navigation
Santosh Balajee Banisetty and David Feil-Seifer
Playing Pairs with Pepper
Abdelrahman Yaseen and Katrin Lohan
Using Pupil Diameter to Measure Cognitive Load
Georgios Minadakis and Katrin Lohan
BubbleTouch: A Quasi-Static Tactile Skin Simulator
Brayden Hollis, Stacy Patterson, Jinda Cui and Jeff Trinkle
Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence
Justin Hart, Harel Yedidsion, Yuqian Jiang, Nick Walker, Rishi Shah, Jesse Thomason, Aishwarya Padmakumar, Rolando Fernandez, Jivko Sinapov, Raymond Mooney and Peter Stone
Adaptive Grasp Control through Multi-Modal Interactions for Assistive Prosthetic Devices
Michelle Esponda and Thomas Howard
Towards Online Learning from Corrective Demonstrations
Reymundo Gutierrez, Elaine Short, Scott Niekum and Andrea Thomaz
Program
Day 1: Thursday, October 18, 2018
09:00 – 09:15 | Introduction and Announcements |
09:15 – 10:15 | Invited Speaker: Sonia Chernova (Toward Robust Autonomy for Interactive Robotic Systems) |
10:15 – 10:30 | Breakout Session: Topics and Team-ups |
10:30 – 11:00 | Coffee Break |
11:00 – 12:20 | Paper Presentations |
- Balancing Efficiency and Coverage in Human-Robot Dialogue Collection (Matthew Marge, Claire Bonial, Stephanie Lukin, Cory Hayes, Ashley Foots, Ron Artstein, Cassidy Henry, Kimberly Pollard, Carla Gordon, Felix Gervits, Anton Leuski, Susan Hill, Clare Voss and David Traum)
- Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence (Justin Hart, Harel Yedidsion, Yuqian Jiang, Nick Walker, Rishi Shah, Jesse Thomason, Aishwarya Padmakumar, Rolando Fernandez, Jivko Sinapov, Raymond Mooney and Peter Stone)
- Towards a Unified Planner For Socially-Aware Navigation (Santosh Balajee Banisetty and David Feil-Seifer)
- Cycle-of-Learning for Autonomous Systems from Human Interaction (Nicholas Waytowich, Vinicius Goecks and Vernon Lawhern)
12:20 – 14:00 | Lunch |
14:00 – 15:00 | Invited Speaker: Cynthia Matuszek (Learning Grounded Language For and From Interaction) |
15:00 – 15:30 | Short Paper Presentations |
- Playing Pairs with Pepper (Abdelrahman Yaseen and Katrin Lohan)
- BubbleTouch: A Quasi-Static Tactile Skin Simulator (Brayden Hollis, Stacy Patterson, Jinda Cui and Jeff Trinkle)
15:30 – 16:00 | Coffee Break |
16:00 – 17:00 | Breakout Session |
17:00 – 17:30 | Breakout Session: Discussion |
18:00 – 19:00 | Reception |
Day 2: Friday, October 19, 2018
09:00 – 09:15 | Announcements |
09:15 – 10:15 | Invited Speaker: Dylan Glas (Designing vs. Learning - Challenges in creating autonomy for social robots) |
10:15 – 10:30 | Breakout Session: Topics and Team-ups |
10:30 – 11:00 | Coffee Break |
11:00 – 12:20 | Paper Presentations |
- Adaptive Grasp Control through Multi-Modal Interactions for Assistive Prosthetic Devices (Michelle Esponda and Thomas Howard)
- Deep HMResNet Model for Human Activity-Aware Robotic Systems (Hazem Abdelkawy, Naouel Ayari, Abdelghani Chibani, Yacine Amirat and Ferhat Attal)
- Apprenticeship Bootstrapping via Deep Learning with a Safety Net for UAV-UGV Interaction (Hung Nguyen, Phi Vu Tran, Duy Tung Nguyen, Matthew Garratt, Kathryn Kasmarik, Michael Barlow, Sreenatha Anavatti and Hussein Abbass)
- Multimodal Interactive Learning of Primitive Actions (Tuan Do, Nikhil Krishnaswamy, Kyeongmin Rim and James Pustejovsky)
12:20 – 14:00 | Lunch |
14:00 – 15:00 | Invited Speaker: Greg Trafton (From Soup to Nuts: Using a mix of AI, robotics, and empirical methods to improve HRI) |
15:00 – 15:20 | Paper Presentation |
- Towards Online Learning from Corrective Demonstrations (Reymundo Gutierrez, Elaine Short, Scott Niekum and Andrea Thomaz)
15:20 – 15:30 | Short Paper Presentation |
- Using Pupil Diameter to Measure Cognitive Load (Georgios Minadakis and Katrin Lohan)
15:30 – 16:00 | Coffee Break |
16:00 – 17:00 | Breakout Session |
17:00 – 17:30 | Breakout Session: Discussion |
18:00 – 19:00 | Plenary Session |
Saturday, October 20, 2018 (Half Day)
09:00 – 10:30 | Tech Talks |
- Turn-taking Simulator (Nick DePalma)
- Semio: Software for Social Robots (Ross Mead)
10:30 – 11:00 | Coffee Break |
11:00 – 12:00 | Future Directions and Discussion |
12:00 – 12:30 | Wrap-Up/Closing Remarks |
Organizing Committee
Kalesha Bullard (@Georgia Institute of Technology)
Nick DePalma (@Samsung Research of America)
Richard G. Freedman (@University of Massachusetts Amherst/@SIFT)
Bradley Hayes (@University of Colorado Boulder)
Luca Iocchi (@Sapienza University of Rome)
Katrin Lohan (@Heriot-Watt University)
Ross Mead (@Semio)
Emmanuel Senft (@Plymouth University)
Tom Williams (@Colorado School of Mines)