Towards Natural Dialogue with Robots: Bot Language


Matthew Marge1, Claire Bonial1, Kimberly A. Pollard1, Cassidy Henry2, Ron Artstein3, Brendan Byrne1, Susan G. Hill1, Clare Voss1, and David Traum3

1U.S. Army Research Laboratory; 2UCLA; 3USC Institute for Creative Technologies


ARL’s Intelligent Systems research vision for the future is effective human-robot teaming. To fully achieve this goal, we are investigating natural and intuitive bi-directional communication methods between Soldiers and intelligent, autonomous systems.

The objective of this research is to advance the state of the art in natural language dialogue processing for multimodal human-robot communication. We hypothesize that a multi-phase plan, initially developed for Virtual Human research at the USC Institute for Creative Technologies, can drive progress toward robots that autonomously engage in multimodal dialogue (text, speech, mapping, images, and video).

In the first phase, we conduct exploratory data collection in tasks where naïve humans give spoken instructions to a robot whose communication intelligence is, in fact, controlled by a human "wizard" experimenter (a Wizard-of-Oz design).

A second phase automates some of the wizard's labor: instead of composing free responses, the wizard works through an interface that generalizes command handling and response generation. In a third and final phase, the wizard will be "automated away" by a dialogue system trained on the wizard's decisions. We describe our progress upon completing the first phase of this research.
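
To make this progression concrete, the sketch below (illustrative only, not our actual software) shows one way logged wizard decisions could seed the phase-three dialogue policy: a simple text classifier is trained to map a human instruction to the response the wizard chose for similar instructions. The log excerpt and all identifiers are hypothetical.

    # Minimal sketch: learn a dialogue policy from logged wizard decisions.
    # The log pairs each human instruction with the wizard's chosen response;
    # the excerpt and all names below are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    wizard_log = [
        ("move forward three feet", "executing: move-forward"),
        ("turn left ninety degrees", "executing: turn-left"),
        ("go to the doorway", "clarify: which doorway do you mean?"),
        ("take a picture", "executing: send-image"),
        ("go over there", "clarify: please describe the destination"),
    ]
    instructions, responses = zip(*wizard_log)

    # A bag-of-words classifier stands in for the learned dialogue policy.
    policy = make_pipeline(TfidfVectorizer(), LogisticRegression())
    policy.fit(instructions, responses)

    print(policy.predict(["turn left"])[0])  # expected: "executing: turn-left"

One motivation for the phase-two interface is that constraining the wizard to a fixed response inventory makes the resulting decisions usable as training labels of this kind.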

The scenario used to elicit data is one in which a human-robot team is tasked with exploring an unknown environment: a human gives verbal instructions from a remote location and the robot follows them. Acting as the robot, and unbeknownst to the human giving instructions, the wizard experimenter uses dialogue to resolve misunderstandings and head off communication breakdowns.
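
The toy function below illustrates the kind of behavior the wizard supplies in this scenario: instructions that are underspecified from the robot's point of view trigger a clarifying question rather than an action. The landmark vocabulary and heuristics are invented for illustration.

    # Toy sketch of the clarification behavior in the exploration scenario:
    # act on interpretable instructions, ask for clarification otherwise.
    # The vocabulary and checks are hypothetical simplifications.
    KNOWN_LANDMARKS = {"doorway", "table", "crate"}

    def robot_reply(instruction: str) -> str:
        words = instruction.lower().split()
        if "there" in words or "it" in words:
            # Deictic reference without shared visual context: ask for detail.
            return "clarify: I am not sure where you mean; please describe it."
        if any(word in KNOWN_LANDMARKS for word in words):
            return "executing: moving to the named landmark"
        return "executing: " + instruction

    print(robot_reply("go over there"))        # -> clarification request
    print(robot_reply("move to the doorway"))  # -> executes the move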