<node id="64819">
  <nid>64819</nid>
  <type>news</type>
  <uid>
    <user id="27310"><![CDATA[27310]]></user>
  </uid>
  <created>1299578937</created>
  <changed>1475896102</changed>
  <title><![CDATA[How Can Robots Get Our Attention?]]></title>
  <body><![CDATA[<p>Getting someone’s attention can be easy with a loud noise or
a shout, but what if the situation calls for a little more tact? How can a
robot use subtle cues to attract a human’s notice and tell when it has captured
it? In a preliminary study, researchers at the Georgia Institute of Technology have
found that they can program a robot to understand when it gains a human’s
attention and when it falls short. The research is being presented today at the
Human-Robot Interaction conference in Lausanne, Switzerland.</p>



<p>“The primary focus was trying to give Simon, our robot, the
ability to understand when a human being seems to be reacting appropriately, or
in some sense is interested now in a response with respect to Simon and to be
able to do it using a visual medium, a camera,” said Aaron Bobick, professor
and chair of the School of Interactive Computing in Georgia Tech’s College of
Computing.</p>



<p>Using the socially expressive robot Simon, from Assistant Professor
Andrea Thomaz’s Socially Intelligent Machines lab, researchers wanted to see if
they could tell when he had successfully attracted the attention of a human who
was busily engaged in a task and when he had not. </p>



<p>“Simon would make some form of a gesture, or some form of an
action when the user was present, and the computer vision task was to try to
determine whether or not you had captured the attention of the human being,”
said Bobick.</p>



<p>With close to 80 percent accuracy, Simon was able to tell,
using only his cameras as a guide, whether someone was paying attention to him
or ignoring him. </p>



<p>“We would like to bring robots into the human world. That
means they have to engage with human beings, and human beings have an
expectation of being engaged in a way similar to the way other human beings
would engage with them,” said Bobick.</p>



<p>“Other human beings understand turn-taking. They understand
that if I make some indication, they’ll turn and face someone when they want to
engage with them and they won’t when they don’t want to engage with them. In
order for these robots to work with us effectively, they have to obey these
same kinds of social conventions, which means they have to perceive the same
thing humans perceive in determining how to abide by those conventions,” he
added.</p>



<p>Researchers plan to go further with their investigations
into how Simon can read communication cues, studying whether he can tell from a
person’s gaze, elements of language or other actions whether that person is
paying attention. </p>



<p>“Previously people would have pre-defined notions of what
the user should do in a particular context and they would look for those,” said
Bobick. “That only works when the person behaves exactly as expected. Our
approach, which I think is the most novel element, is to use the user’s current
behavior as the baseline and observe what changes.”</p>



<p>The research team for this study consisted of Bobick, Thomaz,
doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser. </p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2011-03-08T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Researchers have found that they can program a robot to understand when it gains a human’s attention and when it falls short.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Researchers at the Georgia Institute of Technology have found
that they can program a robot to understand when it gains a human’s attention
and when it falls short.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="64817">
            <nid>64817</nid>
            <type>image</type>
            <title><![CDATA[How can robots get our attention? Simon photo]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>192105</fid>
                  <filename><![CDATA[11P1000-P43-005.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/11P1000-P43-005_0.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/11P1000-P43-005_0.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[How can robots get our attention? Simon photo]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[david.terraso@comm.gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>David Terraso</p><p>Communications and Marketing </p><p>404-385-2966</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>1183</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
          <item>
        <![CDATA[Robotics]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>152</tid>
        <value><![CDATA[Robotics]]></value>
      </item>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[Home]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>12286</tid>
        <value><![CDATA[Aaron Bobick]]></value>
      </item>
          <item>
        <tid>11526</tid>
        <value><![CDATA[Andrea Thomaz]]></value>
      </item>
          <item>
        <tid>654</tid>
        <value><![CDATA[College of Computing]]></value>
      </item>
          <item>
        <tid>109</tid>
        <value><![CDATA[Georgia Tech]]></value>
      </item>
          <item>
        <tid>4887</tid>
        <value><![CDATA[GVU Center]]></value>
      </item>
          <item>
        <tid>11892</tid>
        <value><![CDATA[RIM@GT]]></value>
      </item>
          <item>
        <tid>1356</tid>
        <value><![CDATA[robot]]></value>
      </item>
          <item>
        <tid>166848</tid>
        <value><![CDATA[School of Interactive Computing]]></value>
      </item>
          <item>
        <tid>168887</tid>
        <value><![CDATA[simon]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
