PhD Proposal by David Byrd

 

Title: Ethically-constrained and privacy-preserving learning in agent-based simulation

 

David Byrd

Ph.D. Student

School of Interactive Computing

Georgia Institute of Technology

 

Date: Thursday, November 5th, 2020

Time: 3:00 pm to 5:00 pm (EST)

Location: *No Physical Location*

BlueJeans: https://bluejeans.com/5512415242

 

Committee:

Dr. Tucker Balch (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl, School of Interactive Computing, Georgia Institute of Technology

Dr. Thad Starner, School of Interactive Computing, Georgia Institute of Technology

Dr. Maria Hybinette, Department of Computer Science, University of Georgia

 

Abstract:

 

This dissertation aims to advance responsible machine learning in two important areas, ethically-constrained learning and privacy-preserving federated learning, through the application of agent-based simulation.

 

As machine learning (ML) models increase in complexity and capacity, there is a concomitant increase in the risk that ML-based agents may adopt unintended harmful behaviors during training.  For example, a trading algorithm with a flexible action space, optimizing for maximum profit, may inadvertently discover an unlawful approach that relies on market manipulation.  Complex models can also require vast user-collected training data sets and distributed learning techniques that increase the risk of exposing sensitive personal information.  Agent-based simulation (ABS) enables a safe and cost-effective approach to the investigation of these two important problems.

 

The first problem addressed is ethically-constrained learning, for which we propose a generic solution and demonstrate it by application to financial markets.  We construct a realistic simulation of profit-driven but ethical agents trading through a stock exchange, introduce an unethical agent, and learn to recognize the unethical behavior.  Then we use the recognizer to train an intelligent trading agent that generates profit while avoiding policies that approach the unethical behavior pattern.
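The constrained-training idea above can be sketched as a penalized objective: a recognizer scores how closely a policy's recent actions resemble the known unethical pattern, and that score is subtracted from profit during training. This is a minimal illustrative sketch, not the dissertation's method; the recognizer heuristic, function names, and penalty form are all assumptions.

```python
import numpy as np

def unethical_score(action_history):
    """Stand-in recognizer: returns a value in [0, 1] indicating how
    strongly recent actions match an unethical (manipulative) pattern.
    A real recognizer would be a trained classifier; this toy heuristic
    flags rapid buy (+1) / sell (-1) flipping."""
    flips = np.mean(np.abs(np.diff(action_history)) > 1)
    return float(flips)

def constrained_reward(profit, action_history, penalty_weight=10.0):
    """Profit objective with a penalty for approaching unethical behavior."""
    return profit - penalty_weight * unethical_score(action_history)

# Steady buying incurs no penalty; rapid flipping is penalized heavily
# even when its raw profit is higher.
honest = constrained_reward(profit=5.0, action_history=[1, 1, 1, 1])
flippy = constrained_reward(profit=8.0, action_history=[1, -1, 1, -1])
```

A learning agent trained against `constrained_reward` rather than raw profit is steered away from the recognized pattern without hard-coding the forbidden actions themselves.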

 

The second problem addressed is privacy-preserving federated learning (PPFL).  We implement two PPFL protocols in simulation: a recent state-of-the-art protocol that combines differential privacy with secure multiparty computation via homomorphic encryption, and a new protocol incorporating oblivious distributed differential privacy.  The simulation permits us to inexpensively evaluate both protocols for model accuracy, computational complexity, communication load, and resistance to collusion attacks by participating parties.  We demonstrate that the new protocol increases computation and communication costs, but substantially improves privacy with no loss of accuracy to the final shared model.
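The core differential-privacy mechanism underlying such protocols can be illustrated in a few lines: each party perturbs its local model update with zero-mean Laplace noise before sharing, so no individual update reveals its private data, while the noise largely cancels in the server-side average. This is a generic sketch of noisy federated averaging, not either protocol from the dissertation; the noise scale and function names are assumptions.

```python
import math
import random

def laplace_noise(scale, rng):
    """Zero-mean Laplace sample via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_update(local_update, scale, rng):
    """Party side: perturb each model coordinate before sharing it."""
    return [w + laplace_noise(scale, rng) for w in local_update]

def federated_average(updates):
    """Server side: average the (noisy) shared updates coordinate-wise."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

rng = random.Random(0)
true_update = [0.5, -0.2, 1.0]
# Many parties holding similar local updates; individual shares are noisy,
# but the aggregate stays close to the true value.
noisy = [noisy_update(true_update, scale=0.1, rng=rng) for _ in range(1000)]
avg = federated_average(noisy)
```

The protocols evaluated in the dissertation add cryptographic machinery (secure multiparty computation, homomorphic encryption) on top of this basic idea so that even the server never sees any single party's noisy update in the clear.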

 
