2011
Towards multi-state visuo-spatial reasoning based proactive human-robot interaction
Authors:
Amit Kumar Pandey, Muhammad Intizar Ali, Matthieu Warnier, Rachid Alami
Abstract:
Robots are expected to cooperate with humans in day-to-day interaction. One aspect of such cooperation is behaving proactively. In this paper, our robot exploits the visuo-spatial perspective-taking of the human partner not only from his current state but also from a set of different states he might attain from his current state. Such rich information helps the robot better predict 'where' the human can perform a particular task and how the robot could support it. We have tested the system on two different robots for tasks in which the human partner gives an object to the robot or makes an object accessible to it. Equipped with such multi-state visuo-spatial perspective-taking capabilities, our robots show different proactive behaviors depending on the task and situation, such as proactively reaching out, and to an appropriate place, when the human has to give an object to the robot. Preliminary results of user studies show that such proactive behaviors reduce the human's 'confusion' and make the robot appear more 'aware' of the task and the human.
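To make the idea of multi-state perspective-taking concrete, here is a minimal illustrative sketch, not the authors' implementation: it assumes a small set of candidate human states (current posture plus hypothetical low-effort alternatives such as leaning or standing), models reachability as a simple radius test on a tabletop grid, and ranks points the robot itself can reach by the least human effort required to also reach them. All names, state sets, and geometry below are assumptions made for illustration.

```python
# Hedged sketch of multi-state perspective-taking for a proactive handover.
# States, reach radii, and effort values are illustrative assumptions only.
from dataclasses import dataclass
from itertools import product
from typing import List, Tuple

Cell = Tuple[float, float]  # (x, y) position on a shared tabletop grid, in metres


@dataclass
class HumanState:
    name: str        # e.g. "current", "lean_forward", "stand_up"
    position: Cell   # assumed torso position in that state
    reach: float     # assumed reach radius in metres
    effort: float    # relative effort to attain this state (0 = current state)


def reachable(cell: Cell, origin: Cell, radius: float) -> bool:
    """A cell counts as reachable if it lies within the given reach radius."""
    dx, dy = cell[0] - origin[0], cell[1] - origin[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius


def candidate_support_points(states: List[HumanState],
                             robot_origin: Cell,
                             robot_reach: float,
                             grid: List[Cell]) -> List[Tuple[float, Cell]]:
    """Rank cells where the robot could proactively reach out: a cell qualifies
    if the robot can reach it and at least one human state can also reach it;
    cells demanding less human effort are preferred."""
    ranked = []
    for cell in grid:
        if not reachable(cell, robot_origin, robot_reach):
            continue
        efforts = [s.effort for s in states
                   if reachable(cell, s.position, s.reach)]
        if efforts:
            ranked.append((min(efforts), cell))
    return sorted(ranked)


if __name__ == "__main__":
    # Tabletop grid of candidate exchange points, 10 cm spacing.
    grid = [(x / 10, y / 10) for x, y in product(range(11), range(11))]
    states = [
        HumanState("current", (0.0, 0.5), reach=0.55, effort=0.0),
        HumanState("lean_forward", (0.15, 0.5), reach=0.65, effort=0.3),
        HumanState("stand_up", (0.2, 0.5), reach=0.85, effort=0.7),
    ]
    best = candidate_support_points(states, robot_origin=(1.0, 0.5),
                                    robot_reach=0.6, grid=grid)
    print("Best exchange point (effort, cell):", best[0] if best else None)
```

In this toy setup, reasoning over the extra attainable states lets the robot pick an exchange point it can reach itself while asking as little effort as possible from the human, which is the intuition behind the proactive reach-out behavior described above.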