ML@GT Virtual Seminar: Csaba Szepesvari, University of Alberta

Event Details
  • Date/Time:
    • Wednesday March 10, 2021
      12:15 pm - 1:15 pm
  • Location: primetime.bluejeans.com/a2m/register/ddtatyph
  • Fee(s):
    N/A
Contact

Allie McFadden

allie.mcfadden@cc.gatech.edu

Summaries

Summary Sentence: ML@GT invites you to a virtual seminar featuring Csaba Szepesvari from the University of Alberta.


ML@GT invites you to a virtual seminar featuring Csaba Szepesvari from the University of Alberta. Please check back soon for additional information.

Registration is required.

Hardness of MDP planning with linear function approximation

Markov decision processes (MDPs) are a minimalist framework for capturing tasks that require long-term planning and feedback in the presence of noisy dynamics. Precisely because of this minimalism, MDPs lack structure, and planning and learning in MDPs with the typically enormous state and action spaces is strongly intractable: no algorithm can avoid Bellman's curse of dimensionality in the worst case. However, as Bellman and his co-workers recognized already at the advent of the field, for many problems of practical interest the optimal value function of an MDP is well approximated using just a few basis functions, such as those standardly used in numerical calculations. Since knowing the optimal value function is essentially equivalent to knowing how to act optimally, one hopes that this observation can be turned into efficient algorithms, as there are only a few coefficients to compute. If this is possible, we can think of the resulting algorithms as performing computations with a compressed form of the value functions. While many such algorithms have been proposed as early as the 1960s, until recently little was known about whether, and when, these compressed computations are possible. In this talk, I will discuss a few recent results (some positive, some negative) concerning these compressed computations and conclude with some open problems. As we shall see, even today there are more open questions than questions that have been satisfactorily answered.
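The compression idea in the abstract can be illustrated in a toy setting: compute the exact value function of a small chain MDP, then fit it with a handful of basis coefficients instead of one value per state. Everything below (the chain MDP, the polynomial features, all names) is my own illustrative sketch, not material from the talk.

```python
import numpy as np

# Toy chain MDP: 50 states, "move right" dynamics, reward 1 in the
# absorbing final state. (Illustrative assumptions, not from the talk.)
n_states = 50
gamma = 0.9

P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 1.0
P[-1, -1] = 1.0            # absorbing final state
r = np.zeros(n_states)
r[-1] = 1.0                # reward collected in the final state

# Exact value function from the Bellman equation: V = (I - gamma P)^{-1} r
V = np.linalg.solve(np.eye(n_states) - gamma * P, r)

# Compressed representation: d basis functions with d << n_states.
d = 4
x = np.linspace(0.0, 1.0, n_states)
Phi = np.stack([x**k for k in range(d)], axis=1)   # polynomial features

# Least-squares fit of the d coefficients.
theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
V_hat = Phi @ theta

max_err = np.max(np.abs(V - V_hat))
print(f"{d} coefficients instead of {n_states} values, max error {max_err:.3f}")
```

Here the fit is done against the exact value function, which of course defeats the purpose; the question the talk addresses is when the few coefficients can be computed efficiently without ever writing down the full value function.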

About Csaba

Csaba Szepesvari is a Canada CIFAR AI Chair, the team lead for the "Foundations" team at DeepMind, and a Professor of Computing Science at the University of Alberta. He earned his PhD in 1999 from Jozsef Attila University in Szeged, Hungary. He has authored three books and over 200 peer-reviewed journal and conference papers. He serves as an action editor of the Journal of Machine Learning Research and of Machine Learning, as well as on various program committees. Dr. Szepesvari's interest is artificial intelligence (AI) and, in particular, principled approaches to AI that use machine learning. He is a co-inventor of UCT, a widely successful Monte Carlo tree search algorithm. UCT ignited much follow-up work in AI, including DeepMind's AlphaGo, which defeated top Go professional Lee Sedol in a landmark match. The work on UCT won the 2016 test-of-time award at ECML/PKDD.

Additional Information

In Campus Calendar
Yes
Groups

College of Computing, Computational Science and Engineering, GVU Center, Machine Learning, ML@GT, OMS, School of Computational Science and Engineering, School of Computer Science, School of Interactive Computing

Invited Audience
Faculty/Staff, Postdoc, Public, Graduate students, Undergraduate students
Categories
No categories were selected.
Keywords
No keywords were submitted.
Status
  • Created By: ablinder6
  • Workflow Status: Published
  • Created On: Dec 14, 2020 - 10:08am
  • Last Updated: Mar 2, 2021 - 12:25pm