Artificial Intelligence Agents Begin to Learn New Skills from Watching Videos

Contact

Allie McFadden

Communications Officer

allie.mcfadden@cc.gatech.edu

Summaries

Summary Sentence:

Using video and existing data, Georgia Tech researchers are teaching artificial agents how to do a variety of tasks more efficiently.


Media
  • Georgia Tech researchers are looking at how to more efficiently teach robots and artificial agents how to do tasks using video.

Data is a hot word in 2019, and according to Ashley Edwards, there is a lot of data out there that could be used more efficiently to teach robots and artificial agents a variety of tasks.

Edwards, a recent computer science Ph.D. graduate from Georgia Tech, details her research in a new paper, Imitating Latent Policies from Observation.

The new approach uses imitation learning from observation and video data. This new way of thinking could eventually teach agents how to do tasks like make a sandwich, play a video game, or even drive a car, all from watching videos. In most experiments, the algorithm developed by Edwards and her fellow researchers was able to complete a task in 200 to 300 steps, while previous methods have required thousands.

“This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,” said Charles Isbell, dean designate of the College of Computing and paper co-author.

To accomplish this, researchers have an agent watch a video and guess what actions are being taken. In the paper, this is referred to as a latent policy. Given that guess, the agent tries to predict movements and learn what to do. When the agent is then placed into an actual environment, it can take what it has learned from the videos and apply its knowledge to real-world actions.
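The idea can be sketched with a toy example. The code below is a hypothetical illustration, not the authors' implementation (the actual method uses neural networks): an agent watches action-free "videos" of an expert on a one-dimensional chain, infers a latent action for each transition from the observed state change, fits a latent policy, and then uses a few real environment steps to align latent actions with the environment's actual actions.

```python
from collections import Counter, defaultdict

# Toy 1-D chain environment: states 0..9, goal at 9.
# Real actions: 0 = move left, 1 = move right.
def step(state, action):
    return min(9, state + 1) if action == 1 else max(0, state - 1)

# 1) "Videos": expert state sequences only -- no action labels.
videos = [list(range(start, 10)) for start in range(9)]

# 2) Infer a latent action for each transition from the observed state
#    change, then fit a latent policy: the most common latent action
#    seen in each state.
latent_counts = defaultdict(Counter)
for traj in videos:
    for s, s_next in zip(traj, traj[1:]):
        latent_counts[s][s_next - s] += 1  # latent action = state delta
latent_policy = {s: c.most_common(1)[0][0] for s, c in latent_counts.items()}

# 3) Align latent actions to real actions with a handful of environment
#    interactions: try each real action once and match its observed effect
#    to a latent action.
latent_to_real = {}
probe_state = 5
for a in (0, 1):
    latent_to_real[step(probe_state, a) - probe_state] = a

# 4) Act in the real environment by mapping the latent policy through
#    the alignment.
state, steps = 0, 0
while state != 9:
    state = step(state, latent_to_real[latent_policy[state]])
    steps += 1
print(steps)  # reaches the goal in 9 steps
```

The key point the sketch illustrates is that the expensive part, learning what to do, happens entirely from passive observation; only the cheap final alignment step touches the real environment.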

In previous research using “imitation from observation,” humans either had to physically show agents how to perform an action or train a computer with a dynamics model to learn a new task, both of which are time-consuming, expensive, and potentially dangerous.

“There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,” said Edwards.

For example, there are countless hours of dashcam footage from autonomous cars driving on streets, but there isn’t much information about why self-driving cars make the decisions that they do. The videos rarely have detailed telemetry information about the vehicle, like what angle the steering wheel was pointed when the car moved a certain way. Edwards and her team hope that their algorithm will be able to analyze video footage and piece together not only how to do an action, but why.

During their research, Edwards and her co-authors performed four experiments to test their idea. Using a platform game called CoinRun, they trained an agent to jump over platforms and avoid traps to solve a task. They also used classic control environments, teaching a cart to balance a pole and a car to drive itself up a mountain.

Their approach outperformed the expert in two of the experiments and achieved state-of-the-art performance in all four.

Despite these achievements, the current model handles only discrete actions, such as moving left, right, forward, or backward one step at a time. Edwards and her team are continuing to push their work toward smoother, more continuous actions for their models.

This research is one of 18 accepted papers from the Machine Learning Center at Georgia Tech (ML@GT) and will be presented at the 36th Annual International Conference on Machine Learning (ICML), held June 9 through 15 in Long Beach, Calif.

Additional Information

Groups

College of Computing, ML@GT, School of Interactive Computing

Related Core Research Areas
People and Technology, Robotics