<node id="684638">
  <nid>684638</nid>
  <type>event</type>
  <uid>
    <user id="28475"><![CDATA[28475]]></user>
  </uid>
  <created>1757352358</created>
  <changed>1757352426</changed>
  <title><![CDATA[Ph.D. Dissertation Defense - Harish Haresamudram]]></title>
  <body><![CDATA[<p><strong>Title:</strong> <em>Learning Representations for Sensor Based Human Activity Recognition for Challenging Application Scenarios</em></p><p><strong>Committee:</strong></p><p>Dr. Thomas Ploetz, CoC, Chair, Advisor</p><p>Dr. Irfan Essa, CoC, Co-Advisor</p><p>Dr. Omer Inan, ECE</p><p>Dr. Thad Starner, CoC</p><p>Dr. Nic Lane, Cambridge</p><p>Dr. Diane Cook, Washington State</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Learning Representations for Sensor Based Human Activity Recognition for Challenging Application Scenarios]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>We are currently in a golden age for wearables. Rapid proliferation driven by affordability and widespread availability has enabled human sensing at scale, in naturalistic, day-to-day settings. An expanding array of onboard sensors unobtrusively captures different aspects of human life: from sleep patterns, daily activities and routines, and location, to bio-signals such as heart rate and oxygen levels. As a result, there is increasing reliance on these devices for deriving objective views of human health and well-being. In particular, sensing and automatically recognizing activities, i.e., Human Activity Recognition (HAR), is critical for wide-ranging applications. A variety of approaches have been developed to summarize recorded movements into representations that can discriminate between activities, ranging from early hand-crafted statistical descriptors of movement to, by the mid-2010s, task-relevant representations learned via end-to-end training. The focus of my dissertation is on this core component, representations, and my contributions are five-fold. First, I show that end-to-end training is not the best option for all HAR scenarios and that unsupervised representation learning can instead be more advantageous when considering factors relevant to wearable computing; this motivates the pursuit of more effective unsupervised methods. Second, I show how enhancing the task design of representation learning via self-supervised learning and cross-modal contrastive training yields powerful representations. Third, I develop an assessment framework for a well-rounded evaluation of self-supervised methods, discovering their strengths as well as their limitations. Fourth, I demonstrate that a combination of self-supervised learning and vector quantization results in the discovery of atomic, recurring sub-movements that can be used for improved analysis of activities. Finally, I expand the scope of learned representations beyond the HAR task to a real-world health scenario: discovering longitudinal changes in the movements of older adults and investigating how such changes relate to their cognitive health.</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2025-09-15T13:00:00-04:00]]></value>
      <value2><![CDATA[2025-09-15T15:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Online]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
          <item>
        <url>https://gatech.zoom.us/j/6343539858?pwd=dGk3TUZkY1Q5MkttSUpwdlNoQm5NUT09</url>
        <link_title><![CDATA[Zoom link]]></link_title>
      </item>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>434381</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ECE Ph.D. Dissertation Defenses]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[Phd Defense]]></value>
      </item>
          <item>
        <tid>1808</tid>
        <value><![CDATA[graduate students]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
