Tech Designed to Help Assistive Robots Work More Closely with Patients

Contact

Allie McFadden

Communications Officer

allie.mcfadden@cc.gatech.edu

Summaries

Summary Sentence:

Researchers at the Georgia Institute of Technology and Stanford University have developed an AI-enabled smart bed and synthetic data set to study people at rest.


Media
  • Researchers at the Georgia Institute of Technology and Stanford University have developed an AI-enabled smart bed and synthetic data set to study people at rest. The data set and smart bed are aimed at improving assistive robotics capabilities for people with disabilities, older adults, and hospital patients.
    (image/png)

Artificially intelligent (AI) systems are continuously improving their understanding of physical human activities such as running, jumping, and biking. Yet much of a person’s life is spent resting in bed.

Researchers at the Georgia Institute of Technology and Stanford University have developed an AI-enabled smart bed and synthetic data set to study people at rest. The data set and smart bed are aimed at improving assistive robotics capabilities for people with disabilities, older adults, and hospital patients who often require assistance in bed.

Sheets and blankets make it difficult for an AI’s cameras to discern where a body is in bed. To overcome this problem, a special fabric mat gives the bed a sense of touch, allowing the AI to estimate a person’s body shape. Outfitted with 1,728 sensors, the mat produces a pressure image. Based on a pressure image and a gender label, the AI system outputs a 3D model of the person.
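To make the input concrete, the idea can be sketched in a few lines of Python. This is a minimal illustration, not the team’s code: the 64×27 grid layout (64 × 27 = 1,728 taxels), the normalization, and the choice of encoding gender as a constant extra channel are all assumptions for the sake of the sketch.

```python
import numpy as np

TAXEL_ROWS, TAXEL_COLS = 64, 27  # assumed grid: 64 * 27 = 1,728 sensors

def build_network_input(pressure, is_female):
    """Stack the pressure image with a constant gender channel.

    `pressure` is a (64, 27) array of raw taxel readings; the result is a
    (2, 64, 27) array that a convolutional network could consume.
    """
    pressure = np.asarray(pressure, dtype=np.float32)
    assert pressure.shape == (TAXEL_ROWS, TAXEL_COLS)
    # Scale raw readings into [0, 1] (hypothetical normalization choice).
    p = pressure / max(pressure.max(), 1e-6)
    gender = np.full_like(p, 1.0 if is_female else 0.0)
    return np.stack([p, gender])

x = build_network_input(np.random.rand(64, 27), is_female=True)
print(x.shape)  # (2, 64, 27)
```

Feeding the label as an image-sized channel is one common way to mix a scalar attribute into a convolutional input; the real system may condition on gender differently.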

Teaching the bed to make a good guess of someone’s body shape requires a lot of examples. 

“Many existing works observe people in action by letting an algorithm collect samples from film or sports footage. But it's more challenging to observe how people rest because resting poses in photos and video are less common and they are often covered by bedding or loose clothing,” said Henry M. Clever, first author of the paper and a Robotics Ph.D. student at the Machine Learning Center at Georgia Tech (ML@GT) and the Healthcare Robotics Lab.

Instead of trying to bring thousands of people into their lab, the team used a computer simulation to generate synthetic examples. 

Their system uses a fast physics simulator designed for video games, along with methods similar to the ragdoll physics used to simulate a video game character falling down or being thrown from a car. The simulation randomly drops bodies onto a bed equipped with the pressure-sensing mat. The team used this simulation to generate more than 200,000 examples spanning an extremely diverse set of body shapes.
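The generation loop described above can be sketched as follows. Everything here is a hedged stand-in: the ten shape coefficients mirror the SMPL-style body models common in this area, the sampling distribution is hypothetical, and `drop_body_onto_bed` is a placeholder for the physics engine’s ragdoll settling step.

```python
import random

def sample_body_shape(rng):
    # SMPL-style body models use ~10 shape coefficients; this Gaussian
    # sampling range is a hypothetical choice, not the paper's.
    return [rng.gauss(0.0, 1.0) for _ in range(10)]

def drop_body_onto_bed(shape, rng):
    """Placeholder for the physics step: in the real pipeline, a ragdoll
    body falls onto the bed, settles on the pressure-sensing mat, and the
    resting pose plus pressure image are recorded. Here we return a dummy
    record with a 72-value pose vector (24 joints x 3 angles, as in SMPL).
    """
    return {"shape": shape, "pose": [rng.uniform(-1, 1) for _ in range(72)]}

def generate_dataset(n, seed=0):
    rng = random.Random(seed)
    return [drop_body_onto_bed(sample_body_shape(rng), rng) for _ in range(n)]

data = generate_dataset(5)
print(len(data), len(data[0]["shape"]), len(data[0]["pose"]))  # 5 10 72
```

Because every example is sampled from scratch, the loop scales to hundreds of thousands of bodies with no human participants, which is the core appeal of the synthetic approach.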

Understanding body shape is crucial information for a robot to perform an assistive task like a bed bath. The robot needs to understand whether a person is short, tall, heavy, or petite so that it can accommodate many different people.

“Instead of sampling from real data, we can create data from scratch in a simulated environment in the computer. This allows us to generate a large dataset of people with diverse body shapes resting in simulation, which is much easier than recruiting a lot of people to participate in a lab study,” said Clever.

To confirm that the synthetic examples transferred to the real world, the researchers brought real people into their lab to rest on a real bed, and the system performed well. Their tests also showed that the network performed best when it could check itself by imagining what the pressure image should look like for the guessed body shape.
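That self-check amounts to a reconstruction consistency test: render the pressure image the guessed body *should* produce, then compare it against the observed one. The sketch below illustrates the idea only; the Gaussian-bump “renderer” and its three parameters are invented for the example and bear no relation to the paper’s actual rendering step.

```python
import numpy as np

def render_expected_pressure(body_params):
    # Hypothetical stand-in for projecting an estimated 3D body onto the
    # mat: a smooth bump whose center and spread come from the params.
    rows, cols = np.mgrid[0:64, 0:27]
    cy, cx, spread = body_params
    return np.exp(-(((rows - cy) / spread) ** 2 + ((cols - cx) / spread) ** 2))

def self_check_error(observed, body_params):
    """Compare the observed pressure image against the one the guessed
    body would produce; a large residual signals a bad guess."""
    expected = render_expected_pressure(body_params)
    return float(np.mean((observed - expected) ** 2))

observed = render_expected_pressure((32.0, 13.0, 6.0))
good = self_check_error(observed, (32.0, 13.0, 6.0))
bad = self_check_error(observed, (10.0, 5.0, 3.0))
print(good < bad)  # True: the matching guess reconstructs the image better
```

During training, a residual like this can serve as an extra loss term that pushes the network toward body estimates consistent with the sensor data.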

“This is a promising example of how physics simulations can be used to teach robotic caregivers to provide more intelligent assistance,” said Charlie Kemp, senior author of the paper, director of the Healthcare Robotics Lab and an associate professor in ML@GT, and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

Clever hopes that this work will encourage other researchers in the computer vision field to pursue research involving assistive robotics.

“Robots have incredible potential to benefit people with motor impairments and older adults in terms of independence and overall quality of life. By creating data and developing methods that can be used by assistive robots, AI and computer vision researchers stand to raise the bar for how robots help people,” said Clever.

The team’s paper, “Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image Using Synthetic Data,” has been accepted to the Conference on Computer Vision and Pattern Recognition (CVPR), held June 14-19, 2020.

The team has released their synthetic data to the public with the name PressurePose: https://doi.org/10.7910/DVN/IAPI0X

They have also released their deep neural network to the public with the name PressureNet: https://github.com/Healthcare-Robotics/bodies-at-rest

This work was supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1148903, NSF award IIS-1514258, NSF award DGE1545287 and AWS Cloud Credits for Research. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

Kemp owns equity in and works for Hello Robot, a company commercializing robotic assistance technologies. Clever is entitled to royalties derived from Hello Robot’s sale of products.

Additional Information

Groups

College of Computing, ML@GT, School of Interactive Computing

Related Core Research Areas
Robotics
Status
  • Created By: ablinder6
  • Workflow Status: Draft
  • Created On: Jun 2, 2020 - 2:10pm
  • Last Updated: Jun 2, 2020 - 2:10pm