Radio Frequency Identification (RFID) in Robotics

Passive Ultra-High Frequency (UHF) RFID tags are well matched to robots’ needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Because they don’t have onboard batteries to wear out, their lifetime is virtually unlimited. And unlike bar codes and other visual tags, RFID tags are readable when they’re visually occluded. For less than $0.25 per tag, users can apply self-adhesive UHF RFID tags throughout their home.

UHF RFID Hardware

Two ThingMagic Mercury 5e (M5e) UHF RFID modules (http://www.thingmagic.com/) form the core of the robot’s RFID sensors. One is connected to two body-mounted, long-range patch antennas that can read UHF RFID tags out to ~6 meters. The other is connected to custom, short-range, in-hand antennas embedded in the robot’s fingers that can read the same UHF tags within ~30 cm of the robot’s hand. The hardware is annotated in the figure below.

Figure: EL-E with its body-mounted and in-hand RFID antennas labeled.
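To make the sensing pipeline concrete, here is a minimal sketch of how tag reads from the two readers might be aggregated in software. It assumes a hypothetical driver that reports (EPC, antenna, RSSI) tuples; the function and antenna names are illustrative and are not the actual ThingMagic M5e API.

```python
# Illustrative sketch only: aggregate RFID tag reads into a per-tag record of the
# strongest RSSI seen from each antenna. The read tuples below stand in for whatever
# a hypothetical M5e driver would report; they are not from the real ThingMagic API.
from collections import defaultdict

def update_tag_map(tag_map, reads):
    """reads: iterable of (epc, antenna_name, rssi_dbm) tuples."""
    for epc, antenna, rssi in reads:
        best = tag_map[epc].get(antenna, float("-inf"))
        if rssi > best:
            tag_map[epc][antenna] = rssi  # keep the strongest reading per antenna
    return tag_map

tag_map = defaultdict(dict)
example_reads = [("EPC-MEDS-BOTTLE", "left_patch", -62.0),
                 ("EPC-MEDS-BOTTLE", "in_hand", -45.0),
                 ("EPC-COFFEE-MUG", "right_patch", -71.5)]
update_tag_map(tag_map, example_reads)
print(dict(tag_map))
```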

Capabilities

We have demonstrated a number of capabilities enabled by RFID sensing. The following is a brief list; refer to the publications for more detailed information:

 

Featured Videos

PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation: A moderate level of environmental augmentation facilitates robust robot behaviors.

Additional Videos:

Publications

 

Support

Our work is generously supported in part by the Health Systems Institute and by Travis’ NSF Graduate Research Fellowship (GRFP).

 


Additional Videos

RF Vision: RFID Received Signal Strength Indicator (RSSI) Images for Sensor Fusion and Mobile Manipulation: Long-range UHF RFID sensing and multi-sensor fusion for mobile manipulation.
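As a rough illustration of the RSSI-image idea (not the published RF Vision pipeline), the sketch below bins RSSI samples gathered while panning a directional antenna into a coarse bearing histogram that could later be fused with camera data. All values are invented.

```python
# Illustrative sketch only (not the RF Vision pipeline): bin RSSI samples collected
# while panning a directional antenna into a coarse 1-D "image" over bearing.
import numpy as np

def rssi_image(samples, n_bins=36):
    """samples: list of (pan_angle_rad in [-pi, pi), rssi_dbm) for a single tag."""
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for angle, rssi in samples:
        b = int(((angle + np.pi) / (2 * np.pi)) * n_bins) % n_bins
        sums[b] += rssi
        counts[b] += 1
    # Mean RSSI per bearing bin; NaN where the tag was never read.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

print(rssi_image([(0.10, -55.0), (0.12, -54.0), (1.50, -70.0)]))
```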


Dusty: A Low-cost Teleoperated Robot that Retrieves Objects from the Floor

People with motor impairments have consistently ranked the retrieval of dropped objects from the floor as a high-priority task for assistive robots. Motor impairments can both increase the chances that an individual will drop an object and make recovering it difficult or impossible. In a study we conducted, patients with motor impairments reported dropping objects an average of 5.5 times per day. When a caregiver was present, recovery took approximately 5 minutes; without a caregiver, recovery could be delayed by as long as two hours. To meet this need, since 2008 we have been developing an inexpensive teleoperated robot that can effectively retrieve dropped objects. We call this robot “Dusty”.

Dusty II

Figure: Dusty II.

Dusty is now in its second generation. For convenience, we continue to use the iRobot Create as its mobile platform. Its new end effector can pick up almost any household object under one pound. Dusty is controlled with a wheelchair joystick. The user can fetch an object from the floor by driving the robot to a position roughly in front of the object and pressing a button; Dusty then autonomously moves forward and grasps the object with its end effector. The user can then drive the robot back, press the lift button, and Dusty will raise the object to a comfortable height for the user.
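The sketch below illustrates the kind of fixed, button-triggered sequence described above. It is not Dusty's actual software; the MobileBase, Gripper, and Lift classes are hypothetical stand-ins for the iRobot Create driver and the end-effector hardware.

```python
# Illustrative sketch only, not Dusty's actual software: a fixed, button-triggered
# "fetch" sequence. MobileBase, Gripper, and Lift are hypothetical stand-ins.

class MobileBase:
    def drive_forward(self, dist_m):
        print(f"base: driving forward {dist_m:.2f} m")

class Gripper:
    def open(self):
        print("gripper: open")
    def close(self):
        print("gripper: close")

class Lift:
    def move_to(self, height_m):
        print(f"lift: moving to {height_m:.2f} m")

def fetch_button_pressed(base, gripper):
    """Autonomous portion triggered by the joystick button."""
    gripper.open()
    base.drive_forward(0.15)   # scoop the object into the end effector
    gripper.close()

def lift_button_pressed(lift):
    lift.move_to(0.7)          # raise the object to a comfortable height

if __name__ == "__main__":
    base, gripper, lift = MobileBase(), Gripper(), Lift()
    fetch_button_pressed(base, gripper)
    lift_button_pressed(lift)
```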

We are currently conducting a user study with ALS patients. The initial results are exciting, and we will release the data soon.

Dusty I

Dusty was initially inspired by the dustpan and the kitchen turner (spatula), which is why we named it “Dusty”. Although Dusty I was not a fully functioning robot prototype, we were able to test the robot’s novel end effector. This end effector, which was later modified and used in Dusty II, was already able to grasp 34 objects from the prioritized lists[*] on 4 types of flooring with a success rate of 94.7% (the new end effector of Dusty II has a success rate of over 97% over a much larger area).

Publications

[*]:

ROS Commander (ROSCo): Behavior Creation for Home Robots

Figure: The ROSCo interface for constructing behaviors (Level 1).

We introduce ROS Commander (ROSCo), an open source system that enables expert users to construct, share, and deploy robot behaviors for home robots. A user builds a behavior in the form of a Hierarchical Finite State Machine (HFSM) out of generic, parameterized building blocks, with a real robot in the develop and test loop. Once constructed, users save behaviors in an open format for direct use with robots, or for use as parts of new behaviors. When the system is deployed, a user can show the robot where to apply behaviors relative to fiducial markers (AR Tags), which allows the robot to quickly become operational in a new environment. We show evidence that the underlying state machine representation and current building blocks are capable of spanning a variety of desirable behaviors for home robots, such as opening a refrigerator door with two arms (video), flipping a light switch (video), opening a drawer (video), unlocking a door, and handing an object to someone (video). Our experiments show that sensor-driven behaviors constructed with ROSCo can be executed in realistic home environments with success rates between 80% and 100%.
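For readers unfamiliar with HFSMs, the following is a minimal plain-Python sketch of the idea: a state machine whose states can themselves be state machines, assembled from small parameterized blocks. It is only an illustration of the concept, not ROSCo's implementation, and the state names and parameters are invented.

```python
# Minimal sketch of a hierarchical finite state machine (HFSM) in plain Python,
# to illustrate the kind of structure ROSCo builds from parameterized blocks.
# This is not ROSCo's code; state names and parameters are invented.

class State:
    def execute(self):
        raise NotImplementedError

class MoveGripper(State):
    """A parameterized building block."""
    def __init__(self, opening_m):
        self.opening_m = opening_m
    def execute(self):
        print(f"set gripper opening to {self.opening_m} m")
        return "succeeded"

class StateMachine(State):
    """A state machine is itself a State, which makes the FSM hierarchical."""
    def __init__(self, transitions, start):
        # transitions: {state_name: (state, {outcome: next_state_name})}
        self.transitions = transitions
        self.start = start
    def execute(self):
        name = self.start
        while name not in ("succeeded", "aborted"):
            state, outcomes = self.transitions[name]
            name = outcomes[state.execute()]
        return name

open_then_close = StateMachine(
    {"OPEN":  (MoveGripper(0.08), {"succeeded": "CLOSE"}),
     "CLOSE": (MoveGripper(0.0),  {"succeeded": "succeeded"})},
    start="OPEN")

# Because StateMachine is a State, it can be nested inside a larger behavior:
behavior = StateMachine({"GRASP": (open_then_close, {"succeeded": "succeeded"})},
                        start="GRASP")
print(behavior.execute())
```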

Three Tiers of Interfaces Matched to Users’ Expertise

ROSCo, as a system for using, deploying, and creating behaviors, has interfaces at three different levels. The first level, shown in the first figure, is intended for expert users who are willing to spend time learning the interface (perhaps through online video tutorials and how-to guides) but are not necessarily expert roboticists. Behaviors, represented as HFSMs, are constructed at this level from parameterized building blocks, where each block is matched with an appropriate graphical interface. Once constructed, behaviors can be saved to disk, reused, and shared.

Level 2: Interface for deploying behaviors for computer literate users or agents at a call center (video).

The second-tier interface, shown above, is designed for users with an intermediate level of expertise and is conceptualized as a process of users giving the robot a tour of their home, telling it what to operate and where. This “touring” interface works by asking users to attach an ARToolKit marker, drag a 3D frame relative to that marker to a behavior-specific spot (e.g., the middle of a drawer handle), and then select the desired behavior.
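The key computation behind this marker-relative teaching can be sketched with plain homogeneous transforms: the target frame taught relative to the marker is composed with the marker pose observed at run time. The NumPy example below is illustrative only, with made-up poses.

```python
# Illustrative sketch only: re-express a behavior's target frame, taught relative to
# an ARToolKit marker, in the robot's base frame once the marker is detected.
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Taught once, during the "tour": drawer-handle pose relative to the marker.
T_marker_target = make_transform(np.eye(3), [0.0, -0.12, 0.02])

# Observed at run time: marker pose in the robot's base frame (from the AR detector).
T_base_marker = make_transform(np.eye(3), [0.80, 0.10, 0.75])

# Composing the two gives the target pose in the robot's base frame.
T_base_target = T_base_marker @ T_marker_target
print(T_base_target[:3, 3])   # where the behavior should act, in base coordinates
```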

Level 3: Interface for running behaviors (video).

Designed for users who just want to use the robot and its potentially numerous capabilities, the third interface allows users to start and control ROSCo behaviors. Users select the desired behavior from hierarchically organized menus, similar to playlists in an MP3 player, and then tell the behavior to start.

General Automation Enabled Opulence

Shown above are ROSCo behaviors for using the PR2 to give a back rub and to use a hand fan. Although uncommon in research robotics, we believe such uses might become commonplace once general-purpose robots are in homes and users are empowered by expert interfaces such as ROSCo to create new robot capabilities. Uniquely, general-purpose robots have the potential to displace many smart and motorized devices in homes, as well as to create new opportunities for automation.

In the BackRub demo (named after a famous search engine), users first activate the behavior through ROSCo’s web interface and then lean into the PR2’s grippers for a “back rub.” The behavior can also be stopped through the web interface (which runs on mobile phones).

With the fanning behavior, users first activate the behavior through ROSCo’s web interface and then place the fan in the robot’s gripper, after which the behavior plays a looping motion.

Shared Autonomy Teleoperation for the Motor Impaired

Home environments are complex and varied, presenting significant and unsolved challenges for robotics. Tools such as ROSCo can help address these difficulties by producing autonomous capabilities that can be used during shared teleoperation. In such scenarios, robot autonomy reduces the mental demand on users, and teleoperator control enables mobile manipulators to operate more robustly in home environments.

We tested our system with Henry Evans, a man with quadriplegia, in his home as part of a teleoperation interface in which repeatable actions such as drawer opening are performed using ROSCo behaviors and harder tasks are performed by a human teleoperator. Using such systems based on shared autonomy, we hope to restore the ability to live independently to similarly motor-impaired individuals.

More Information

For questions, contact either Hai Nguyen or Charles C. Kemp, the authors of this work. The code is available in the packages rcommander_core and rcommander_pr2 at ros.org. We also refer readers to our publications for more information:

Robotic Nurse Assistant

There is a well-documented shortage of nurses and direct-care workers in the U.S. and around the world, which is expected to become more problematic as the older adult population grows and prepares for retirement. In a study of the effects of high patient-to-nurse ratios, Aiken et al. showed that each additional patient per nurse was associated with a 7% increase in patient mortality and a 23% increase in nurse burnout. Consequently, studies have suggested that lowering the patient-to-nurse ratio would result in less missed patient care. We believe robotics can play a role in assisting nurses with their daily tasks in order to provide better healthcare.

 

Robotic Bed Bath

Robotic Nurse Assistant

 

A Direct Physical Interface

Demonstration of the Direct Physical Interface: Lab member Tiffany Chen leads the robot Cody by the hand

In the long run, robots may be sufficiently perceptive, agile, and intelligent to autonomously perform nursing tasks. However, healthcare facilities in general, and hospitals in particular, present daunting challenges for autonomous operation. Within these highly cluttered environments, errors can have deadly consequences. Thus, we have developed an intuitive, direct physical interface (DPI) that enables a nurse to directly control the movement of the human-scale mobile manipulator Cody. Using the DPI, a nurse can lead and position Cody by making direct contact with its body.

When the user grabs and moves either of the robot’s end effectors (the black rubber balls), the robot responds. Pulling forward or pushing backward makes the robot move forward or backward. Moving the end effector to the left or right causes the robot to rotate, while moving it up or down causes the robot’s torso to move up or down. The user can also grab the robot’s arm and abduct or adduct it at the shoulder, which causes the robot to move sideways.
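A minimal sketch of this kind of mapping is shown below. It is not Cody's actual controller; the gains and the command interface are invented, and it simply maps end-effector displacement and shoulder abduction to base and torso velocity commands as described above.

```python
# Illustrative sketch only, not Cody's controller: map the displacement of a grasped
# end effector (relative to its resting pose) to base and torso commands. Gains are
# made up for illustration.

def dpi_command(dx, dy, dz, shoulder_abduction, k_lin=1.0, k_ang=1.5, k_torso=1.0):
    """dx: forward(+)/backward(-) end-effector displacement in meters,
    dy: left(+)/right(-) displacement, dz: up(+)/down(-) displacement,
    shoulder_abduction: arm abduction(+)/adduction(-) angle in radians."""
    return {
        "base_forward_vel":  k_lin * dx,                  # pull forward / push backward
        "base_rotation_vel": k_ang * dy,                  # move left/right to rotate
        "torso_vel":         k_torso * dz,                # move up/down to raise/lower torso
        "base_sideways_vel": k_lin * shoulder_abduction,  # abduct/adduct to move sideways
    }

print(dpi_command(dx=0.05, dy=0.0, dz=-0.02, shoulder_abduction=0.1))
```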

We evaluated this interface in the context of assisting nurses with patient lifting, which we expect to be a high-impact application area. Our evaluation consisted of a controlled laboratory experiment with 18 nurses from the Atlanta area of Georgia, USA. We found that our DPI significantly outperformed a comparable wireless gamepad interface in both objective and subjective measures, including number of collisions, time to complete the tasks, workload (Raw Task Load Index), and overall preference. In contrast, we found no significant difference between the two interfaces with respect to the users’ perceptions of personal safety.

 

Publications

 

Support

This work is generously supported by Hstar Technologies, the NSF Graduate Research Fellowship Program, Willow Garage, and NSF grant IIS-0705130.

EL-E: An Assistive Robot

EL-E: An Assistive Robot (March 7, 2008)

 


videos: intro, grasping, preliminary grasping, laser scan 1, laser scan 2

Objects play an especially important role in people’s lives. Objects within human environments are usually found on flat surfaces that are orthogonal to gravity, such as floors, tables, and shelves. EL-E is an assistive robot that is explicitly designed to take advantage of this common structure in order to retrieve unmodeled, everyday objects for people with motor impairments. EL-E incorporates two key innovations.

First, EL-E is equipped with a laser pointer interface that detects when a user illuminates a location with an off-the-shelf green laser pointer and estimates its 3D position. This enables a user to unambiguously communicate a 3D location to the robot using a point-and-click style of interaction, which provides a direct way to tell the robot which object to manipulate or where to go.
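As a simplified illustration of this style of detection (not EL-E's published implementation), the sketch below finds the brightest green pixel in an RGB image and back-projects it to a 3D point using a registered depth image and pinhole camera intrinsics; the thresholds and intrinsics are made up.

```python
# Illustrative sketch only, not EL-E's published detector: find the brightest green
# spot in an RGB image and back-project it to 3D using a registered depth image.
import numpy as np

def detect_laser_spot(rgb, depth, fx, fy, cx, cy, green_margin=60):
    """rgb: HxWx3 uint8; depth: HxW depth in meters (0 where invalid)."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    greenness = g - np.maximum(r, b)          # strongly green pixels score high
    v, u = np.unravel_index(np.argmax(greenness), greenness.shape)
    if greenness[v, u] < green_margin or depth[v, u] <= 0:
        return None                           # no confident detection
    z = depth[v, u]
    x = (u - cx) * z / fx                     # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 2.0)
rgb[240, 320] = (30, 255, 30)                 # a bright green pixel
print(detect_laser_spot(rgb, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```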

Second, EL-E is able to translate its manipulator and associated sensors to different heights, which enables it to grasp objects on a variety of surfaces, such as floors and tables, using the same perception and manipulation strategies. The robot can approach an object selected with the laser pointer interface, detect whether the object is on an elevated surface, raise or lower its arm and sensors to that surface, and grasp the object using vision and touch. Once the object is acquired, the robot can place it on a laser-designated surface above the floor, follow the laser pointer along the floor, or deliver the object to a seated person selected with the laser pointer.
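A toy sketch of the height-selection step might look like the following; the threshold and clearance values are invented for illustration.

```python
# Illustrative sketch only: choose the height to which the manipulator and sensors
# should be moved, based on the estimated height of the supporting surface.

def select_arm_height(surface_height_m, floor_threshold_m=0.05, clearance_m=0.10):
    """surface_height_m: estimated height of the supporting surface (0 for the floor)."""
    if surface_height_m < floor_threshold_m:
        return clearance_m                    # object is on the floor
    return surface_height_m + clearance_m     # raise arm and sensors above the surface

print(select_arm_height(0.0))    # floor
print(select_arm_height(0.74))   # table
```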

 

EL-E Progress Sneak Preview (Feb 2, 2009)

Figures: EL-E grasping everyday objects, including a pill.

Funding

This project is directed by Prof. Charlie Kemp in collaboration with Dr. Jonathan Glass, director of the ALS Center at the Emory School of Medicine. Funding is provided by the Wallace H. Coulter Foundation as part of a Translational Research Partnership in Biomedical Engineering Award, “An Assistive Robot to Fetch Everyday Objects for People with Severe Motor Impairments”.

Publications