Ph.D. Proposal Oral Exam - Fu-Jen Chu

Title:  Improving vision-based robotic manipulation with affordance understanding

Committee: 

Dr. Vela, Advisor

Dr. Yezzi, Chair

Dr. AlRegib

Abstract:

The objective of the proposed research is to improve robotic manipulation via vision-based affordance understanding, which would advance industrial applications of robotics as well as the area of assistive robotics. Recent methods for grasp recognition emphasize data-driven machine learning techniques, which improve the generalizability of grasp representations to novel objects but fall short of real-time robotic application. Beyond the grasp affordance, real-world robotic manipulation involves exploiting more general affordances. Recent studies on detecting object affordances in images address the problem at the pixel level; although they achieve state-of-the-art performance, per-pixel segmentation approaches require labor-intensive manual annotation. In this proposal, we propose to (1) tackle the problem of identifying viable candidate robotic grasps of objects in practical scenarios with coupled networks; a multi-object grasp dataset is collected for evaluation, along with physical robotic grasping experiments. We also propose to (2) seek more general affordance-map prediction methods with reduced annotation cost; a synthetic affordance dataset is created, and, to retain the advantage of jointly optimizing detection and affordance prediction, the labelled synthetic data is jointly adapted to unlabelled real images for detection and affordance segmentation. Evaluation involves real-world robotic manipulation. The work is planned to be extended to identifying and ranking multiple affordances.
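As background for the grasp-detection portion of the proposal, benchmarks in this area (e.g. on the Cornell grasp dataset) commonly score a predicted oriented grasp rectangle with the "rectangle metric": a prediction counts as correct when its orientation lies within 30 degrees of a ground-truth grasp and the Jaccard index (IoU) of the two rectangles exceeds 0.25. The sketch below is an illustrative simplification, not the proposal's actual evaluation code: it represents a grasp as an assumed `(x, y, w, h, theta_deg)` tuple and approximates the IoU with axis-aligned boxes, whereas a full evaluation would intersect the rotated rectangles.

```python
def iou_axis_aligned(a, b):
    """IoU of two (x_center, y_center, w, h, ...) boxes, ignoring rotation.

    Simplifying assumption: real rectangle-metric evaluations intersect the
    rotated rectangles; axis-aligned IoU is used here only for illustration.
    """
    ax0, ay0 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax1, ay1 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx0, by0 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx1, by1 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))  # overlap width
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))  # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def grasp_correct(pred, truth, angle_tol_deg=30.0, iou_thresh=0.25):
    """Rectangle metric: pred/truth are (x, y, w, h, theta_deg) grasps."""
    d = abs(pred[4] - truth[4]) % 180.0
    angle_diff = min(d, 180.0 - d)  # a grasp is symmetric under 180-deg flips
    return angle_diff < angle_tol_deg and iou_axis_aligned(pred, truth) > iou_thresh

# A close prediction passes; a badly rotated one fails on the angle test.
print(grasp_correct((50, 50, 40, 20, 10), (52, 49, 38, 22, 5)))   # True
print(grasp_correct((50, 50, 40, 20, 80), (50, 50, 40, 20, 0)))   # False
```

The 180-degree wraparound in the angle difference reflects that a parallel-jaw gripper closing along a given axis executes the same grasp when rotated half a turn.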

Status

  • Created By: Daniela Staiculescu
  • Created: 05/09/2019
