<node id="640132">
  <nid>640132</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1602530010</created>
  <changed>1602530010</changed>
  <title><![CDATA[PhD Proposal by Hyeokhyen Kwon]]></title>
  <body><![CDATA[<p><strong>Title</strong>: Opportunistic Use of Video Data For Wearable-based Human Activity Recognition</p>

<p>&nbsp;</p>

<p>Hyeokhyen Kwon</p>

<p>Ph.D. student in Computer Science</p>

<p>School of Interactive Computing</p>

<p>College of Computing</p>

<p>Georgia Institute of Technology</p>

<p>&nbsp;</p>

<p><strong>Date</strong>: Monday, October 19<sup>th</sup>, 2020</p>

<p><strong>Time</strong>: 10:00 AM to 12:00 PM (EST)</p>

<p><strong>Location</strong>: <a href="https://bluejeans.com/3537894193">https://bluejeans.com/3537894193</a></p>

<p>&nbsp;</p>

<p><strong>Committee</strong>:</p>

<p>Dr. Gregory D. Abowd (Advisor) &ndash; School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Thomas Ploetz (Co-Advisor) &ndash; School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Thad Starner &ndash; School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Irfan Essa &ndash; School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Nicholas D. Lane &ndash; Dept. of Computer Science &amp; Tech., University of Cambridge</p>

<p>&nbsp;</p>

<p><strong>Abstract</strong>:</p>

<p>Wearable Inertial Measurement Unit (IMU)-based human activity recognition is at the core of continuous monitoring for human well-being, which can detect precursors of health risks in everyday life. Conventionally, wearable sensor data is collected from recruited users, where user engagement is expensive and annotation is time-consuming. Due to the lack of large-scale labeled datasets, wearable-based human activity recognition models have yet to see significant improvements in recognition performance. To tackle the scale limitations of wearable sensor datasets, this dissertation proposes a novel approach that harvests existing video data from virtually unlimited repositories, such as YouTube. I introduce an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert human activity videos into virtual IMU data streams. I show how the virtually generated IMU data improves the performance of various models on established human activity recognition datasets. I also propose approaches to improve the quality of the generated virtual IMU data and to reduce the domain gap between virtual and real IMU data. To further improve recognition accuracy, I discuss a novel model training approach that handles human activity annotation noise in video datasets. This dissertation shows the promise of using video as a novel data source for human activity recognition with wearables, representing a paradigm shift toward deriving robust human activity recognition systems.</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Opportunistic Use of Video Data For Wearable-based Human Activity Recognition]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2020-10-19T11:00:00-04:00]]></value>
      <value2><![CDATA[2020-10-19T13:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
  </field_extras>
  <field_audience>
    <item>
      <value><![CDATA[Faculty/Staff]]></value>
    </item>
    <item>
      <value><![CDATA[Public]]></value>
    </item>
    <item>
      <value><![CDATA[Undergraduate students]]></value>
    </item>
  </field_audience>
  <field_media>
  </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[https://bluejeans.com/3537894193]]></url>
      <title><![CDATA[Bluejeans]]></title>
      <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
  </links_related>
  <files>
  </files>
  <og_groups>
    <item>221981</item>
  </og_groups>
  <og_groups_both>
    <item><![CDATA[Graduate Studies]]></item>
  </og_groups_both>
  <field_categories>
    <item>
      <tid>1788</tid>
      <value><![CDATA[Other/Miscellaneous]]></value>
    </item>
  </field_categories>
  <field_keywords>
    <item>
      <tid>102851</tid>
      <value><![CDATA[PhD proposal]]></value>
    </item>
  </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
