PhD Defense by Bhuvesh Kumar


Title: Sequential Decision Making with Strategic Agents and Limited Feedback

Date: Monday, November 21, 2022

Time: 1:30 pm - 3:30 pm EST

Location (in-person): Coda C1315 Grant Park

Location (virtual): https://gatech.zoom.us/j/93331055942?pwd=VDZRNjViTzd3ZEhWcVBScEZhNDNSdz09


Bhuvesh Kumar

School of Computer Science

College of Computing

Georgia Institute of Technology



Dr. Jacob Abernethy (Co-advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Jamie Morgenstern (Co-advisor, Paul G. Allen School, University of Washington)

Dr. Vidya Muthukumar (School of Electrical and Computer Engineering, Georgia Institute of Technology)

Dr. Florian Schäfer (School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. Sahil Singla (School of Computer Science, Georgia Institute of Technology)



Sequential decision-making is a natural model for machine learning applications in which a learner must make decisions online, in real time, while simultaneously learning from the sequential data to make better decisions in the future. Classical work has focused on variants of the problem distinguished by whether the data distribution is stochastic or adversarial, and by whether the feedback on the learner's decisions is partial or complete. With the rapid rise of large online markets, sequential learning methods are increasingly deployed in complex multi-agent systems where agents may behave strategically to optimize their own objectives. This adds a new dimension to the sequential decision-making problem: the learner must account for the strategic behavior of the agents it learns from, who may try to steer its future decisions in their favor.


This thesis designs effective online decision-making algorithms from the perspectives of both system designers, who aim to learn in environments with strategic agents and limited feedback, and strategic agents, who seek to optimize their own objectives. In the first part, we focus on repeated auctions: we design mechanisms that let the auctioneer learn effectively in the presence of strategic bidders and, conversely, study how agents can bid in repeated auctions or mount data-poisoning attacks to maximize their own objectives.


Next, we consider an online learning setting where feedback on the learner's decisions is expensive to obtain. We introduce an active-learning-style online algorithm that fast-forwards a small fraction of the more informative examples ahead in the queue, allowing the learner to match the performance of the optimal online algorithm while querying feedback on only a small fraction of points. Finally, we consider a new learning objective for stochastic multi-armed bandits that promotes merit-based fairness in opportunity for individuals and groups.


