<node id="681406">
  <nid>681406</nid>
  <type>event</type>
  <uid>
    <user id="28475"><![CDATA[28475]]></user>
  </uid>
  <created>1743075655</created>
  <changed>1743075744</changed>
  <title><![CDATA[Ph.D. Proposal Oral Exam - Kuo-Wei Lai]]></title>
  <body><![CDATA[<p><strong>Title: </strong><em>Learning implicitly biased overparameterized models</em></p><p><strong>Committee:</strong></p><p>Dr. Muthukumar, Advisor</p><p>Dr. Davenport, Chair</p><p>Dr. Tao</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Learning implicitly biased overparameterized models]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>The objective of the proposed research is to investigate the nature of implicit biases in overparameterized machine learning models, with an emphasis on extending findings from linear to nonlinear settings. In modern machine learning, the classical bias-variance trade-off framework fails to explain why overparameterized models often exhibit excellent generalization performance. This surprising phenomenon, known as "double descent," describes how generalization error can decrease again in the overparameterized regime. Recent studies suggest that specific optimization processes may "implicitly bias" these models toward solutions with desirable properties, such as minimum norm, which can contribute to their effectiveness. Understanding this implicit bias is crucial to grasping the mechanisms underlying the success of overparameterized models. Moreover, while deep learning models have demonstrated remarkable performance across diverse applications, their nonlinearity presents challenges for theoretical analysis. This research aims not only to explore the more tractable linear settings but also to extend the investigation to nonlinear models, seeking to clarify the factors driving the success of overparameterized deep networks. This proposal will first review the background and prior work on implicit bias and generalization in overparameterized models. Following this, I will present my preliminary research on linear models, covering implicit bias characterization for general loss functions, error bounds for out-of-distribution generalization, and task shifts from classification to regression. Finally, I will outline the proposed work on characterizing implicit bias in ReLU-based models, focusing on extending insights from linear to nonlinear frameworks.</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2025-03-27T12:00:00-04:00]]></value>
      <value2><![CDATA[2025-03-27T14:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Room C1115, CODA]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>434371</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ECE Ph.D. Proposal Oral Exams]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>102851</tid>
        <value><![CDATA[Phd proposal]]></value>
      </item>
          <item>
        <tid>1808</tid>
        <value><![CDATA[graduate students]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
