Laser Pointer Interface
We have developed a novel interface for human-robot interaction and assistive mobile manipulation. The interface enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to the robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location’s 3D position with respect to the robot’s frame of reference.
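For illustration, here is a minimal Python sketch of the spot-detection step, assuming an OpenCV-style pipeline: threshold the green channel of a camera frame and take the centroid of the largest bright blob. The helper name detect_laser_spot and the threshold values are illustrative assumptions, not the system's actual code.

    import cv2

    def detect_laser_spot(frame_bgr, min_brightness=200, min_area=3):
        """Return the (u, v) pixel centroid of the brightest green blob, or None."""
        green = frame_bgr[:, :, 1]  # green channel of the narrow-band filtered image
        _, mask = cv2.threshold(green, min_brightness, 255, cv2.THRESH_BINARY)
        # [-2] keeps this call compatible with both OpenCV 3 and OpenCV 4 return orders
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return None  # no laser spot visible in this frame
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) < min_area:
            return None  # too small: likely sensor noise rather than the spot
        m = cv2.moments(largest)
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

In the actual system, the detected pixel in the omnidirectional image indicates which direction to look; the stereo pan/tilt camera is then pointed in that direction to estimate the spot's 3D position in the robot's frame of reference.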
Unlike previous approaches, this gesture-based pointing interface requires no instrumentation of the environment, uses an unmodified, everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for real-world use.
A Clickable World
When a user selects a 3D location, the selection triggers an associated robotic behavior that depends on the surrounding context. For example, if the robot is holding an object and detects a face near the click, it will deliver the object to the person at the selected location. In essence, virtual buttons are mapped onto the world, each with an associated behavior. The user clicks these virtual buttons by pointing at them and illuminating them with the laser pointer.
In our object fetching application, virtual buttons initially surround objects in the environment. If the user illuminates an object (“clicks it”), the robot moves to the object, grasps it, and lifts it up. Once the robot has an object in its hand, a separate set of virtual buttons is mapped onto the world. At this point, clicking near a person tells the robot to deliver the object to that person, clicking on a tabletop tells the robot to place the object on the table, and clicking on the floor tells the robot to move to the selected location.
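The following Python sketch illustrates this two-mode mapping from clicks to behaviors. The context fields (on_object, near_face, on_tabletop) and the behavior names are hypothetical placeholders standing in for the robot's actual perception and behavior routines.

    from dataclasses import dataclass

    @dataclass
    class Click:
        """A 3D location selected with the laser pointer, plus nearby context."""
        xyz: tuple                 # position in the robot's frame of reference (meters)
        on_object: bool = False    # the click landed on a graspable object
        near_face: bool = False    # a person's face was detected near the click
        on_tabletop: bool = False  # the click landed on a table surface

    def select_behavior(click: Click, holding_object: bool) -> str:
        """Map a click to a behavior, depending on whether the robot holds an object."""
        if not holding_object:
            # First mode: virtual buttons surround objects in the environment.
            return "approach_grasp_and_lift" if click.on_object else "ignore"
        # Second mode: the robot already has an object in its hand.
        if click.near_face:
            return "deliver_object_to_person"
        if click.on_tabletop:
            return "place_object_on_table"
        return "drive_to_clicked_location"  # e.g., a click on the floor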
This project is funded by the Wallace H. Coulter Foundation as part of a Translational Research Partnership in Biomedical Engineering Award, “An Assistive Robot to Fetch Everyday Objects for People with Severe Motor Impairments”.
Videos
EL-E Retrieving from a Coffee Table
EL-E grasping from a coffee table. Video made by Advait Jain. — Nov 7, 2008.
EL-E Retrieving an Object
Initial prototype demonstration of EL-E retrieving objects that the user designated with the clickable world interface.
Publications
A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation, Hai Nguyen, Advait Jain, Cressel Anderson, and Charles C. Kemp, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008.
A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation, Charles C. Kemp, Cressel Anderson, Hai Nguyen, Alex Trevor, and Zhe Xu, 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2008.