ARC Colloquium: Yashodhan Kanoria, Stanford University



In many contexts, agents 'learn' behavior from interaction with friends/neighbors on a network. We call this phenomenon 'social learning'. We will focus on models of repeated interaction, with agents 'voting' in a series of rounds on some issue of interest. Votes in the initial round are based on 'private signals', whereas votes in future rounds incorporate knowledge of previous votes cast by friends.

We consider two different models of iterative learning. A very simple model is 'majority dynamics', where each agent chooses its vote based on the majority of its neighbors' votes in the previous round. We analyze this model on regular trees. At the other extreme is iterative Bayesian learning, a fully rational model introduced by Gale and Kariv (2003). We introduce new algorithms for this model, challenging a widespread belief that it is computationally intractable. We develop a novel technique, the 'dynamic cavity method', which serves as a key tool for both models.
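The majority dynamics update rule described above can be sketched in a few lines. This is only an illustrative simulation under assumptions not stated in the abstract (here, an agent keeps its current vote on a tie), not the analysis from the talk:

```python
# Majority dynamics sketch: agents hold +/-1 votes; each round, every
# agent adopts the majority vote among its neighbors in the previous
# round. Tie-breaking rule (keep current vote) is an assumption here.

def majority_dynamics(neighbors, votes, rounds):
    """neighbors: dict node -> list of neighbor nodes;
    votes: dict node -> +1 or -1 (initial votes from private signals)."""
    votes = dict(votes)
    for _ in range(rounds):
        new = {}
        for v, nbrs in neighbors.items():
            s = sum(votes[u] for u in nbrs)
            new[v] = 1 if s > 0 else (-1 if s < 0 else votes[v])
        votes = new  # all agents update synchronously
    return votes

# Example on a path 0-1-2-3-4: a single dissenting vote is overturned.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
init = {0: 1, 1: 1, 2: -1, 3: 1, 4: 1}
print(majority_dynamics(nbrs, init, 3))  # all agents converge to +1
```

On trees, as studied in the talk, the absence of cycles is what makes techniques like the dynamic cavity method applicable; this toy path graph is used only because it is easy to trace by hand.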

Based on joint work with Andrea Montanari (Ann. App. Prob. 2011) and Omer Tamuz (submitted).

