PhD Defense by Ravi Mangal

Event Details
  • Date/Time:
    • Friday, October 9, 2020
      11:00 am - 1:00 pm (EDT)
  • Location: Remote: BlueJeans
  • URL: BlueJeans Link

Summary Sentence: Reasoning about Programs in Statistically Modeled First-Order Environments


Title: Reasoning about Programs in Statistically Modeled First-Order Environments


Ravi Mangal

School of Computer Science

Georgia Institute of Technology


Date: Friday, October 9th, 2020

Time: 11:00 AM - 1:00 PM (EDT)


**Note: this defense is remote-only**



Committee:

Dr. Alessandro Orso (Advisor), School of Computer Science, Georgia Institute of Technology

Dr. Vivek Sarkar, School of Computer Science, Georgia Institute of Technology

Dr. Qirun Zhang, School of Computer Science, Georgia Institute of Technology

Dr. Bill Harris, Principal Scientist, Galois, Inc.

Dr. Aditya Nori, Partner Research Manager, Microsoft Research



The objects of study in this dissertation are programs and algorithms that reason about programs using their syntactic structure.  Such algorithms, referred to as program verification algorithms in the literature, are designed to find proofs of propositions about program behavior.


This dissertation adopts the perspective that programs operate in environments that can be modeled statistically. In other words, program inputs are samples drawn from a generative statistical model. This statistical perspective has two main advantages.  First, it allows us to reason about programs that are not expected to exhibit the desired behavior on all program inputs, such as neural networks that are learnt from data, by formulating and proving probabilistic propositions about program behavior. Second, it enables us to simplify the search for proofs of non-probabilistic propositions about program behavior by designing program verification algorithms that are capable of inferring "likely" hypotheses about the program environment.


The first contribution of this dissertation is a pair of program verification algorithms for finding proofs of probabilistic robustness of neural networks. A trained neural network f is probabilistically robust if, for a pair of inputs randomly generated according to the environment's statistical model, f is likely to demonstrate k-Lipschitzness, i.e., the distance between the outputs computed by f is upper-bounded by k times the distance between the pair of inputs. A proof of probabilistic robustness guarantees that the neural network is unlikely to exhibit divergent behaviors on similar inputs.
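The k-Lipschitzness condition above can be estimated empirically by sampling input pairs from an environment model and checking the inequality on each pair. The sketch below is illustrative only: the toy linear map, the Gaussian input/perturbation model, and all function names are assumptions, not the dissertation's actual algorithms.

```python
# Illustrative sketch (not the dissertation's method): estimating
# Pr[ dist(f(x1), f(x2)) <= k * dist(x1, x2) ] by sampling input pairs
# from a toy statistical environment model.
import math
import random

def f(x):
    # A toy "network": a fixed linear map with Lipschitz constant 2.
    return [2.0 * x[0], 1.0 * x[1]]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def estimate_probabilistic_robustness(k, n_samples=10000, seed=0):
    """Fraction of sampled input pairs on which f is k-Lipschitz."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x1 = [rng.gauss(0, 1), rng.gauss(0, 1)]
        # x2 is a small perturbation of x1, mimicking "similar inputs".
        x2 = [v + rng.gauss(0, 0.1) for v in x1]
        if dist(f(x1), f(x2)) <= k * dist(x1, x2) + 1e-12:
            hits += 1
    return hits / n_samples

# Since f's true Lipschitz constant is 2, the estimate is 1.0 for k >= 2.
print(estimate_probabilistic_robustness(k=2.0))
```

Sampling yields only a statistical estimate; the dissertation's algorithms instead construct proofs of such probabilistic propositions.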


The second contribution of this dissertation is a generic algorithmic framework, referred to as observational abstract interpreters, for designing algorithms that compute hypothetical semantic program invariants. Semantic invariants are logical predicates about program behavior and are used in program proofs as lemmas. The well-studied algorithmic framework of abstract interpretation provides a standard recipe for constructing algorithms that compute semantic program invariants. Observational abstract interpreters extend this framework to allow for computing hypothetical invariants that are valid only under specific hypotheses about program environments. These hypotheses are inferred from observations of program behavior and are embedded as dynamic/run-time checks in the program to ensure the validity of program proofs that use hypothetical invariants.
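As a rough illustration of the idea (the tiny interval domain, the single-statement "program", and all names below are hypothetical, not the framework's actual interface), an observational analysis might infer an interval hypothesis about an environment input from observed runs, compute a hypothetical invariant under that hypothesis, and guard the program with a matching run-time check:

```python
# Hypothetical sketch of an "observational" interval analysis.

def observe_inputs(runs):
    """Infer an environment hypothesis from observed inputs:
    here, the interval [min, max] of values actually seen."""
    return (min(runs), max(runs))

def interval_add(a, b):
    # Standard interval-domain addition.
    return (a[0] + b[0], a[1] + b[1])

def analyze(env_hypothesis):
    """Abstractly interpret the toy program `y = x + 10`, with x drawn
    from the environment, under the hypothesized interval for x."""
    return interval_add(env_hypothesis, (10, 10))

def guarded_program(x, env_hypothesis):
    """The concrete program, instrumented with a run-time check that the
    environment hypothesis holds; the hypothetical invariant on y is
    valid on every run where the check passes."""
    lo, hi = env_hypothesis
    assert lo <= x <= hi, "environment hypothesis violated"
    return x + 10

hyp = observe_inputs([3, 7, 5])   # hypothesis: x in [3, 7]
inv = analyze(hyp)                # hypothetical invariant: y in [13, 17]
print(hyp, inv)
```

The run-time check is what makes the hypothetical invariant usable as a lemma: a proof relying on it is valid on exactly the executions where the check succeeds.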

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Sep 28, 2020 - 11:34am
  • Last Updated: Sep 28, 2020 - 11:34am