<node id="115111">
  <nid>115111</nid>
  <type>news</type>
  <uid>
    <user id="27174"><![CDATA[27174]]></user>
  </uid>
  <created>1331203863</created>
  <changed>1475896308</changed>
  <title><![CDATA[Teach Your Robot Well (Georgia Tech Shows How)]]></title>
  <body><![CDATA[<p><strong>ATLANTA – March 8, 2012 –</strong> Within a decade, personal robots 
could become as common in U.S. homes as any other major appliance, and 
many if not most of these machines will be able to perform innumerable 
tasks not explicitly imagined by their manufacturers. This opens up a 
wider world of personal robotics, in which machines are doing anything 
their owners can program them to do, without actually being programmers.</p><p>Laying some helpful groundwork for this world is a new study by researchers in Georgia Tech’s <a href="http://robotics.gatech.edu/" target="_self">Center for Robotics &amp; Intelligent Machines</a>
 (RIM), who have identified the types of questions a robot can ask 
during a learning interaction that are most likely to characterize a 
smooth and productive human-robot relationship. These questions are 
about certain features of tasks, more so than labels of task components 
or real-time demonstrations of the task itself, and the researchers 
identified them not by studying robots, but by studying the everyday 
(read: non-programmer) people who one day will be their masters. The 
findings were detailed in the paper, “Designing Robot Learners that Ask 
Good Questions,” presented this week in Boston at the <a href="http://hri2012.org/program/" target="_blank">7th ACM/IEEE Conference on Human-Robot Interaction</a> (HRI).</p><p>“People
 are not so good at teaching robots because they don’t understand the 
robots’ learning mechanism,” said lead author Maya Cakmak, a Ph.D. student
 in the School of Interactive Computing. “It’s like when you try to 
train a dog, and it’s difficult because dogs do not learn like humans 
do. We wanted to find out the best kinds of questions a robot could ask 
to make the human-robot relationship as ‘human’ as it can be.”</p><p>Cakmak’s
 study attempted to discover the role “active learning” concepts play in
 human-robot interaction. In a nutshell, active learning refers to 
giving machine learners more control over the information they receive. 
Simon, a humanoid robot created in the lab of Andrea Thomaz (assistant 
professor in Georgia Tech’s School of Interactive Computing and 
co-author), is well acquainted with active learning; Thomaz and Cakmak 
are programming him to learn new tasks by asking questions.</p><p>Cakmak designed two separate experiments (<a href="http://www.youtube.com/watch?v=6FKaEOSVczM" target="_blank">see video</a>):
 first, she asked human volunteers to assume the role of an inquisitive 
robot attempting to learn a simple task by asking questions of a human 
instructor. Having identified the three main question types (feature, 
label and demonstration), Cakmak tagged each of the participants’ 
questions as one of the three. The overwhelming majority (about 82 
percent) of questions were feature queries, showing a clear cognitive 
preference in human learning for this query type.</p><p><strong>Type of question</strong>&nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; <strong>Example</strong></p><p>Label query&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; “Can I pour salt like this?"</p><p>Demonstration query&nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; “Can you show me how to pour salt from here?”</p><p>Feature query&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; “Can I pour salt from any height?”</p><p>Next,
<p>Next, Cakmak recruited humans to teach Simon new tasks by answering the 
robot’s questions and then rating those questions on how “smart” they 
thought they were. Feature queries once again were the preferred question type, with 72 percent of participants calling them the smartest
 questions.</p><p>“These findings are important because they help give 
us the ability to teach robots the kinds of questions that humans would 
ask,” Cakmak said. “This in turn will help manufacturers produce the 
kinds of robots that are most likely to integrate quickly into a 
household or other environment and better serve the needs we’ll have for
 them.”</p><p>Georgia Tech is fielding five of the 38 papers accepted for <a href="http://hri2012.org/program/" target="_blank">HRI’s technical program</a>, making it the largest academic contributor to the conference. The five papers are:</p><ul><li>“<a href="http://www.cc.gatech.edu/social-machines/papers/cakmak12_hri_active.pdf" target="_self">Designing Robot Learners that Ask Good Questions</a>,” by Maya Cakmak and Andrea L. Thomaz</li><li>“Real World Haptic Exploration for Telepresence of the Visually Impaired,” by Chung Hyuk Park and Ayanna M. Howard</li><li>“The
 Domesticated Robot: Design Guidelines for Assisting Older Adults to Age
 in Place,” by Jenay Beer, Cory-Ann Smarr, Tiffany Chen, Akanksha 
Prakash, Tracy Mitzner, Charles Kemp and Wendy Rogers</li><li>“<a href="http://www.cc.gatech.edu/social-machines/papers/gielniak12_hri_exaggeration.pdf" target="_self">Enhancing Interaction Through Exaggerated Motion Synthesis</a>,” by Michael Gielniak and Andrea Thomaz</li><li>“Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective,” by Baris Akgun, Maya Cakmak, Jae Wook Yoo and Andrea L. Thomaz</li></ul><p>All
 five papers describe research geared toward the realization of in-home 
robots assisting humans with everyday activities. Ph.D. student Baris 
Akgun’s paper, for example, assumes the same real-life application 
scenario as Cakmak’s—a robot learning new tasks from a 
non-programmer—and examines whether robots learn more quickly from 
continuous, real-time demonstrations of a physical task, or from 
isolated key frames in the motion sequence. The paper is nominated for Best Paper at HRI 2012.</p>
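<p>To illustrate the contrast Akgun’s paper examines, the sketch below compares a densely sampled continuous demonstration with a handful of keyframes that the robot connects by interpolation. The data and the linear interpolation are illustrative assumptions, not the paper’s method.</p><pre><code>def interpolate(keyframes, steps_between):
    """Rebuild a dense (time, joint angle) path from sparse keyframes
    using simple linear interpolation between consecutive pairs."""
    path = []
    for (t0, q0), (t1, q1) in zip(keyframes, keyframes[1:]):
        for i in range(steps_between):
            a = i / steps_between
            path.append((t0 + a * (t1 - t0), q0 + a * (q1 - q0)))
    path.append(keyframes[-1])
    return path

# Continuous teaching: the teacher's entire motion is recorded, noise and all.
continuous_demo = [(0.1 * k, 0.5 * k + 0.02 * (-1) ** k) for k in range(50)]

# Keyframe teaching: the teacher marks only the poses that matter.
keyframes = [(0.0, 0.0), (2.0, 1.0), (4.9, 2.4)]
reconstructed = interpolate(keyframes, steps_between=10)

print(len(continuous_demo), "recorded samples vs.", len(keyframes), "keyframes")
</code></pre>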
<p>“Georgia Tech is certainly a leader in the field of human-robot interaction; we have more than 10 faculty 
across campus for whom HRI is a primary research area,” Thomaz said. 
“Additionally, the realization of ‘personal robots’ is a shared vision 
of the whole robotics faculty—and a mission of the RIM research center.”</p><p>###</p><p><strong>Contacts</strong></p><p><strong>Michael Terrazas</strong></p><p>Assistant Director of Communications</p><p>College of Computing at Georgia Tech</p><p><a href="mailto:mterraza@cc.gatech.edu">mterraza@cc.gatech.edu</a></p><p>404-245-0707</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[Study on human-robot interaction highlights strong Institute showing at conference]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2012-03-08T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p><strong>ATLANTA – March 8, 2012 – </strong>A new study by Maya Cakmak and Andrea Thomaz (<em>Interactive
Computing</em>) identifies the types of questions a robot can ask during a
learning interaction that are most likely to characterize a smooth and
productive human-robot relationship. <em>Source: Office of Communications</em></p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="115121">
            <nid>115121</nid>
            <type>image</type>
            <title><![CDATA[Simon Learning]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>194225</fid>
                  <filename><![CDATA[simon_learning.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/simon_learning_0.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/simon_learning_0.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Simon Learning]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[mterraza@cc.gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Michael Terrazas</p><p><a href="mailto:mterraza@cc.gatech.edu">mterraza@cc.gatech.edu</a></p><p>404-245-0707</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <field_categories>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>47223</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>11526</tid>
        <value><![CDATA[Andrea Thomaz]]></value>
      </item>
          <item>
        <tid>9167</tid>
        <value><![CDATA[machine learning]]></value>
      </item>
          <item>
        <tid>26421</tid>
        <value><![CDATA[maya cakmak]]></value>
      </item>
          <item>
        <tid>26431</tid>
        <value><![CDATA[personal robots]]></value>
      </item>
          <item>
        <tid>12920</tid>
        <value><![CDATA[RIM center]]></value>
      </item>
          <item>
        <tid>26441</tid>
        <value><![CDATA[robot learning]]></value>
      </item>
          <item>
        <tid>12919</tid>
        <value><![CDATA[robotics & intelligent machines]]></value>
      </item>
          <item>
        <tid>166848</tid>
        <value><![CDATA[School of Interactive Computing]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
