<node id="60554">
  <nid>60554</nid>
  <type>event</type>
  <uid>
    <user id="27174"><![CDATA[27174]]></user>
  </uid>
  <created>1282655446</created>
  <changed>1475891531</changed>
  <title><![CDATA[CSE Seminar: Yoram Singer]]></title>
  <body><![CDATA[<p>CSE Seminar</p>

<p>Speaker: Yoram Singer</p>

<p>Date: Friday, Sept. 3, 2010</p>

<p>Time: 2-3 p.m.</p>

<p>Location: KACB 1447</p>

<h5>Title</h5><p>Composite Objective Optimization and Learning for
Massive Datasets</p>

<h5>Abstract</h5><p>Composite objective optimization is concerned with
the problem of minimizing a two-term objective function consisting of an
empirical loss function and a regularization function. Applications with massive
datasets often employ a regularization term that is non-differentiable or
structured, such as L1 or mixed-norm regularization. Such regularizers promote
sparse solutions and special structure in the parameters of the problem, which
is a desirable goal for extremely high-dimensional datasets. In this talk, we
discuss several recently developed methods for performing composite objective
minimization in the online learning and stochastic optimization settings. We
start with a description of extensions of the well-known forward-backward
splitting method to stochastic objectives. We then generalize this paradigm to
the family of mirror-descent algorithms. Our work builds on recent work
connecting proximal minimization to online and stochastic optimization. In the
algorithmic part, we focus on a new approach, called AdaGrad, in which the
proximal function is adapted throughout the course of the algorithm in a
data-dependent manner. This temporal adaptation metaphorically allows us to
find needles in haystacks, as the algorithm is able to single out very
predictive yet rarely observed features. We conclude with several experiments
on large-scale datasets that demonstrate the merits of composite objective
optimization and underscore the superior performance of various instantiations of
AdaGrad.</p>
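<p>As an illustration of the adaptive composite step described above, here is a minimal sketch of a diagonal AdaGrad update combined with L1 soft-thresholding; the function name, learning rate, and regularization weight are illustrative assumptions, not details from the talk.</p>

```python
import numpy as np

def adagrad_step(w, grad, G, lr=0.1, l1=0.01, eps=1e-8):
    """One diagonal-AdaGrad step with L1 (soft-threshold) regularization.

    Illustrative sketch: lr and l1 are placeholder hyperparameters.
    """
    G = G + grad ** 2                 # accumulate squared gradients per coordinate
    H = np.sqrt(G) + eps              # data-dependent per-coordinate scaling
    z = w - lr * grad / H             # adaptive gradient step on the loss term
    # Composite (proximal) step: shrink toward zero against the L1 regularizer,
    # so rarely updated coordinates keep a small threshold and stay sparse.
    w_new = np.sign(z) * np.maximum(np.abs(z) - lr * l1 / H, 0.0)
    return w_new, G
```

<p>Because the scaling H grows only where gradients have been observed, a rarely seen but predictive feature retains a large effective step size, which is the "needles in haystacks" behavior mentioned above.</p>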

<h5>Bio</h5><p>Yoram Singer is a senior research scientist at
Google. From 1999 through 2007 he was an associate professor at the Hebrew
University of Jerusalem, Israel. He was a member of the technical staff at AT&amp;T
Research from 1995 through 1999. He served as an associate editor of the Machine
Learning Journal and is now on the editorial boards of the Journal of Machine
Learning Research and IEEE Signal Processing Magazine. He was co-chair of
COLT'04 and NIPS'07. He is an AAAI Fellow and has won several awards for his
research papers, most recently the ten-year retrospective award for the most
influential paper of ICML 2000.</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA["Composite Objective Optimization and Learning for Massive Datasets"]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2010-09-03T15:00:00-04:00]]></value>
      <value2><![CDATA[2010-09-03T16:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[<p>For more information, contact <a href="mailto:lebanon@cc.gatech.edu">Guy Lebanon</a>.</p>]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>47223</item>
          <item>50877</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[School of Computational Science and Engineering]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1795</tid>
        <value><![CDATA[Seminar/Lecture/Colloquium]]></value>
      </item>
      </field_categories>
  <field_keywords>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
