PhD Proposal by David Byrd

Event Details
  • Date/Time: Thursday, November 5, 2020, 3:00 pm - 5:00 pm (EST)
  • Location: Remote (BlueJeans)

Summary Sentence: Ethically-constrained and privacy-preserving learning in agent-based simulation



Title: Ethically-constrained and privacy-preserving learning in agent-based simulation


David Byrd

Ph.D. Student

School of Interactive Computing

Georgia Institute of Technology


Date: Thursday, November 5th, 2020

Time: 3:00 pm to 5:00 pm (EST)

Location: *No Physical Location*




Committee:

Dr. Tucker Balch (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl, School of Interactive Computing, Georgia Institute of Technology

Dr. Thad Starner, School of Interactive Computing, Georgia Institute of Technology

Dr. Maria Hybinette, Department of Computer Science, University of Georgia




This dissertation aims to advance responsible machine learning in two important areas, ethically-constrained learning and privacy-preserving federated learning, through the application of agent-based simulation.


As machine learning (ML) models increase in complexity and capacity, there is a concomitant increase in the risk that ML-based agents may adopt unintended harmful behaviors during training.  For example, a trading algorithm with a flexible action space, optimizing for maximum profit, may inadvertently discover an unlawful approach that relies on market manipulation.  Complex models can also require vast user-collected training data sets and distributed learning techniques that increase the risk of exposing sensitive personal information.  Agent-based simulation (ABS) enables a safe and cost-effective approach to the investigation of these two important problems.


The first problem addressed is ethically-constrained learning, for which we propose a generic solution and demonstrate it in financial markets.  We construct a realistic simulation of profit-driven but ethical agents trading through a stock exchange, introduce an unethical agent, and learn to recognize the unethical behavior.  Then we use the recognizer to train an intelligent trading agent that generates profit while avoiding policies that approach the unethical behavior pattern.
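The recognizer-then-constrain idea above can be sketched as reward shaping: subtract a penalty from the agent's profit whenever a learned recognizer flags its recent behavior as resembling the unethical pattern. This is a minimal illustrative sketch, not the proposal's actual implementation; the recognizer, the penalty weight `lam`, and the toy cancellation-rate heuristic are all assumptions.

```python
# Hypothetical sketch: reward shaping with a learned misconduct recognizer.
# `recognizer`, `lam`, and the toy heuristic below are illustrative
# assumptions, not the dissertation's actual method.

def shaped_reward(profit, action_history, recognizer, lam=10.0):
    """Profit-driven reward, penalized in proportion to how strongly the
    agent's recent action history resembles the unethical behavior pattern."""
    p_unethical = recognizer(action_history)  # score in [0, 1]
    return profit - lam * p_unethical

def toy_recognizer(history):
    """Crude stand-in recognizer: flags a high rate of order cancellations,
    a rough proxy for manipulative place-and-cancel trading patterns."""
    cancels = sum(1 for a in history if a == "cancel")
    return min(1.0, cancels / max(1, len(history)))

# Half the actions are cancellations, so half the profit is penalized away.
r = shaped_reward(profit=5.0,
                  action_history=["buy", "cancel", "cancel", "sell"],
                  recognizer=toy_recognizer)
```

In a reinforcement-learning loop, a policy trained against this shaped reward is steered away from regions of the action space the recognizer associates with the unethical behavior, while still optimizing for profit elsewhere.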


The second problem addressed is privacy-preserving federated learning (PPFL).  We implement two PPFL protocols in simulation: a recent state-of-the-art protocol that combines differential privacy with secure multiparty computation via homomorphic encryption, and a new protocol incorporating oblivious distributed differential privacy.  The simulation permits us to inexpensively evaluate both protocols for model accuracy, computational complexity, communication load, and resistance to collusion attacks by participating parties.  We demonstrate that the new protocol increases computation and communication costs, but substantially improves privacy with no loss of accuracy in the final shared model.
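The accuracy-preserving property described above can be illustrated with a toy distributed-noise sketch: each party perturbs its own model update with Gaussian noise before sharing, and the server only ever sees the noisy values, yet averaging over many parties largely cancels the noise. This is a simplified stand-in, assuming plain additive noise in place of the protocol's actual secure computation and oblivious distributed differential privacy machinery; all names here are hypothetical.

```python
# Toy sketch of noise-then-average federated learning. The real PPFL
# protocols additionally use secure multiparty computation / homomorphic
# encryption; this sketch only shows why averaging preserves accuracy.
import random

def local_update(weights, noise_std=0.05):
    """Each party adds Gaussian noise to its model update before sharing,
    so its individual contribution is never revealed exactly."""
    return [w + random.gauss(0.0, noise_std) for w in weights]

def federated_average(updates):
    """The aggregator averages the noisy updates; independent per-party
    noise largely cancels, so the shared model stays close to the true mean."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

random.seed(0)
true_update = [0.5, -0.2, 1.0]
parties = [list(true_update) for _ in range(20)]        # 20 identical parties
noisy = [local_update(p) for p in parties]              # each shares noisy copy
global_model = federated_average(noisy)                 # close to true_update
```

The standard deviation of the averaged noise shrinks with the square root of the number of parties, which is the intuition behind "substantially improves privacy with no loss of accuracy to the final shared model."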


Additional Information


Invited Audience: Faculty/Staff, Public, Graduate students, Undergraduate students