
PhD Defense by David Byrd


Title: Responsible Machine Learning: Supporting Privacy Preservation and Normative Alignment with Multi-agent Simulation

 

David Byrd

Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology

 

Date: Thursday, July 15, 2021

Time: 2:00 pm to 4:00 pm (EDT)

Location (remote via BlueJeans): https://bluejeans.com/5512415242

 

Committee:

Dr. Tucker Balch (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl, School of Interactive Computing, Georgia Institute of Technology

Dr. Thad Starner, School of Interactive Computing, Georgia Institute of Technology

Dr. Maria Hybinette, Department of Computer Science, University of Georgia

Dr. Jonathan Clarke, Scheller College of Business, Georgia Institute of Technology

 

Abstract:

In the last decade, machine learning (ML) algorithms have been applied to complex problems with a level of success that makes them attractive to government and industry practitioners. Post hoc analysis of some such systems has raised questions concerning the responsible use of ML, particularly with regard to unintended algorithmic bias against protected groups of interest. This has drawn attention to the need to consider the potentially disparate outcomes that arise from the use of our models and systems, but avoiding biased decision making should be the beginning, not the end, of our commitment to responsible ML. From the safety of robotic systems, to the privacy of user training data, to the ability of autonomous systems to obey the law, any kind of unintentionally harmful outcome is worthy of attention and mitigation.

This dissertation aims to advance responsible machine learning through multi-agent simulation (MAS). We introduce an open-source, multi-domain discrete event simulation framework and use it to (1) improve state-of-the-art privacy-preserving federated learning (PPFL) and (2) demonstrate a novel method for normatively aligned reinforcement learning (RL) from synthetic negative examples.
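To make the discrete event simulation pattern concrete, the following is a minimal sketch of how such a kernel can work. This is illustrative Python only, not the framework's actual API: the names Kernel, Agent, send, and receive are assumptions.

```python
import heapq

class Kernel:
    """Minimal discrete event kernel: agents exchange timestamped
    messages through a global priority queue ordered by delivery time.
    (Illustrative sketch; not the dissertation framework's API.)"""

    def __init__(self):
        self.queue = []  # heap of (delivery_time, seq, recipient, message)
        self.now = 0
        self._seq = 0    # tie-breaker for events scheduled at the same time

    def send(self, delay, recipient, message):
        heapq.heappush(self.queue,
                       (self.now + delay, self._seq, recipient, message))
        self._seq += 1

    def run(self, until):
        # Pop events in timestamp order and deliver them until the horizon.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, recipient, message = heapq.heappop(self.queue)
            recipient.receive(self.now, message)

class Agent:
    def __init__(self, kernel, name):
        self.kernel, self.name = kernel, name

    def receive(self, time, message):
        print(f"t={time}: {self.name} received {message!r}")

kernel = Kernel()
alice, bob = Agent(kernel, "alice"), Agent(kernel, "bob")
kernel.send(5, bob, "order")    # delivered at t=5
kernel.send(2, alice, "quote")  # delivered at t=2, so it arrives first
kernel.run(until=10)
```

Because every interaction is a timestamped message through one queue, a single-threaded run can faithfully reproduce the ordering of a distributed deployment, which is what makes such a kernel useful for the experiments described below.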

After introducing our simulation framework, we implement two PPFL protocols in single-threaded MAS: a recent state-of-the-art protocol that combines differential privacy with secure multiparty computation via homomorphic encryption, and a new protocol incorporating oblivious distributed differential privacy. The simulation permits us to inexpensively evaluate both protocols for model accuracy, deployed parallel running time, and resistance to collusion attacks by participating parties. We empirically demonstrate that the new protocol improves privacy with no loss of accuracy in the final shared model.
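As a rough illustration of the distributed differential privacy idea (a hedged sketch, not the dissertation's protocol: the Gaussian mechanism, the noise split, and all names are assumptions, and the encryption/secret-sharing layer that hides individual updates is omitted), each party can add only a share of the total noise locally, so that the full privacy noise appears only in the aggregate:

```python
import numpy as np

def local_update(weights, sigma_total, n_parties, rng):
    """Each party adds Gaussian noise with std sigma_total / sqrt(n).
    The n independent shares sum to Gaussian noise with std sigma_total,
    so the *aggregate* meets the privacy target even though no single
    party adds enough noise on its own."""
    noise = rng.normal(0.0, sigma_total / np.sqrt(n_parties),
                       size=weights.shape)
    return weights + noise

rng = np.random.default_rng(0)
n, dim, sigma = 10, 4, 1.0
true_updates = [rng.normal(size=dim) for _ in range(n)]
noisy = [local_update(u, sigma, n, rng) for u in true_updates]

# The server averages the noisy updates; in a real protocol these would
# also be encrypted or secret-shared so individual values stay hidden.
aggregate = np.mean(noisy, axis=0)
print(aggregate)
```

The appeal of splitting the noise this way is that no party ever releases an update protected by less than its own noise share, while the shared model pays the accuracy cost of sigma_total only once.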

Next, we address normatively aligned learning with application to financial markets. “Spoofing,” or illegal market manipulation through the placement of false orders, has been a topic of regulatory interest in recent years. We construct a realistic simulation of profit-driven but ethical agents trading through a stock exchange, introduce a spoofing agent, and learn to distinguish its action sequences from those of the other agents. We then use the spoofing detector to train an intelligent RL trading agent that generates profit while avoiding behaviors that resemble spoofing.
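One simple way to picture how a detector can steer the RL agent is as reward shaping: the trading reward is reduced in proportion to how strongly the detector flags the agent's recent action sequence. This is a toy sketch under that assumption; the function names, the penalty form, and the stand-in detector are all illustrative, not the dissertation's formulation.

```python
def shaped_reward(pnl, action_window, detector, penalty=10.0):
    """Raw trading profit minus a penalty proportional to how strongly
    the learned classifier flags the recent action window as spoofing.
    `detector` returns a spoofing probability in [0, 1]."""
    return pnl - penalty * detector(action_window)

# Dummy detector standing in for a trained classifier: flags windows
# where many orders were placed and then cancelled without executing.
def toy_detector(window):
    cancels = sum(1 for a in window if a == "cancel")
    return min(1.0, cancels / max(1, len(window)))

window = ["place", "place", "cancel", "cancel", "cancel"]
print(shaped_reward(pnl=2.5, action_window=window, detector=toy_detector))
```

Under this shaping, action sequences that resemble spoofing become unprofitable during training, so the agent learns to pursue profit through behaviors the detector does not flag.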

