<node id="634657">
  <nid>634657</nid>
  <type>event</type>
  <uid>
    <user id="27544"><![CDATA[27544]]></user>
  </uid>
  <created>1587584873</created>
  <changed>1587586240</changed>
  <title><![CDATA[ARC and Indo-US Virtual Center Seminar: Prasad Raghavendra (UC Berkeley)]]></title>
  <body><![CDATA[<p align = "center"><strong>Algorithms &amp; Randomness Center (ARC) and Indo-US Virtual Center Seminar</strong></p>

<p align = "center"><strong>Prasad Raghavendra (UC Berkeley)</strong></p>

<p align = "center"><strong>Monday, April 27, 2020</strong></p>

<p align = "center"><strong>Virtual via Bluejeans - 11:30 am</strong></p>

<p>&nbsp;</p>

<p><strong>Title:&nbsp; </strong>List-Decodable Learning via Sum of Squares</p>

<p><strong>Abstract:&nbsp; </strong>In the list-decodable learning setup, an overwhelming majority (say, a $(1-\beta)$-fraction) of the input data consists of outliers, and the goal of an algorithm is to output a small list $\mathcal{L}$ of hypotheses such that one of them agrees with the inliers. We devise list-decodable learning algorithms for the problems of linear regression and subspace recovery using the sum-of-squares SDP hierarchy.</p>

<p>1) In the list-decodable linear regression problem, we are given labelled examples $\{(X_i,y_i)\}_{i \in [N]}$ containing a subset $S$ of $\beta N$ <em>inliers</em> $\{X_i\}_{i \in S}$ that are drawn i.i.d. from the standard Gaussian distribution $N(0,I)$ in $\mathbb{R}^d$, and whose labels $y_i$ are well-approximated by a linear function $\hat{\ell}$.<br />
<br />
We devise an algorithm that outputs a list $\mathcal{L}$ of linear functions such that some $\ell \in \mathcal{L}$ is close to $\hat{\ell}$. This yields the first algorithm for linear regression in a list-decodable setting. Our results hold for any distribution of examples whose concentration and anti-concentration properties can be certified by low-degree sum-of-squares proofs.</p>
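As a concrete illustration of the input model above (a sketch only, not the authors' code), the following generates a list-decodable regression instance: a $\beta$-fraction of inliers with Gaussian covariates and labels given by a ground-truth linear function, padded with arbitrary outliers. The constants `N`, `d`, `beta`, and the outlier distribution are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, beta = 1000, 5, 0.1          # illustrative parameters

l_hat = rng.normal(size=d)         # ground-truth linear function
n_in = int(beta * N)               # number of inliers

# Inliers: X_i ~ N(0, I_d), labels y_i = <l_hat, X_i> (well-approximated).
X_in = rng.normal(size=(n_in, d))
y_in = X_in @ l_hat

# Remaining (1 - beta)-fraction: arbitrary outliers (here, uniform noise,
# purely for illustration -- the model allows them to be adversarial).
X_out = rng.uniform(-10, 10, size=(N - n_in, d))
y_out = rng.uniform(-10, 10, size=N - n_in)

X = np.vstack([X_in, X_out])
y = np.concatenate([y_in, y_out])
```

A list-decodable learner receiving `(X, y)` must output a small list of candidate linear functions, one of which is close to `l_hat`, even though only 10% of the rows are inliers.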

<p>2) In the subspace recovery problem, we are given a dataset in which an $\alpha$-fraction (less than half) of the data is distributed uniformly in an unknown $k$-dimensional subspace of $\mathbb{R}^d$, with no additional assumptions on the remaining data; the goal is to recover a short list of $O(1/\alpha)$ subspaces, one of which is close to the original subspace. We give the first polynomial-time algorithm for the &#39;list-decodable subspace recovery&#39; problem.</p>
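The subspace recovery input model can likewise be sketched as follows (illustrative only; the parameters `N`, `d`, `k`, `alpha` and the uniform distributions are assumptions for the example, not from the talk). An $\alpha$-fraction of points lie in an unknown $k$-dimensional subspace; the rest are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, k, alpha = 1000, 10, 3, 0.3   # illustrative parameters

# Unknown k-dimensional subspace: orthonormal basis via reduced QR.
basis, _ = np.linalg.qr(rng.normal(size=(d, k)))

n_in = int(alpha * N)

# Inliers: uniform coefficients mapped into the subspace.
inliers = rng.uniform(-1, 1, size=(n_in, k)) @ basis.T

# Outliers: no assumptions (arbitrary points in R^d).
outliers = rng.uniform(-1, 1, size=(N - n_in, d))

data = np.vstack([inliers, outliers])
```

An algorithm for this problem sees only `data` (with rows shuffled, in general) and must output $O(1/\alpha)$ candidate subspaces, one close to the column span of `basis`.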

<p>Joint work with Morris Yau.</p>

<p>----------------------------------</p>

<p><a href="https://people.eecs.berkeley.edu/~prasad/">Speaker&#39;s Webpage</a></p>

<p><em>Videos of recent talks are available at: </em><a href="https://smartech.gatech.edu/handle/1853/46836"><em>https://smartech.gatech.edu/handle/1853/46836</em></a></p>

<p><a href="https://mailman.cc.gatech.edu/mailman/listinfo/arc-colloq"><em>Click here to subscribe to the seminar email list: arc-colloq@Klauscc.gatech.edu </em></a></p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[List-Decodable Learning via Sum of Squares - Virtual via Bluejeans at 11:30am]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2020-04-27T12:30:00-04:00]]></value>
      <value2><![CDATA[2020-04-27T13:30:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Faculty/Staff]]></value>
      </item>
          <item>
        <value><![CDATA[Postdoc]]></value>
      </item>
          <item>
        <value><![CDATA[Graduate students]]></value>
      </item>
          <item>
        <value><![CDATA[Undergraduate students]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>70263</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ARC]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1795</tid>
        <value><![CDATA[Seminar/Lecture/Colloquium]]></value>
      </item>
      </field_categories>
  <field_keywords>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
