PhD Proposal by Ravi Mangal

Event Details
  • Date/Time: Thursday, July 30, 2020, 11:00 AM - 1:00 PM (EDT)
  • Location: REMOTE: BLUE JEANS
  • URL: Blue Jeans

Summary Sentence: Reasoning about Programs in Statistically Modeled Environments


Title: Reasoning about Programs in Statistically Modeled Environments


Ravi Mangal

School of Computer Science

Georgia Institute of Technology


Date: Thursday, July 30th, 2020

Time: 11:00 AM - 1:00 PM (EDT)


**Note: this proposal is remote-only**



Committee:

Dr. Alessandro Orso (Advisor), School of Computer Science, Georgia Institute of Technology

Dr. Vivek Sarkar, School of Computer Science, Georgia Institute of Technology

Dr. Qirun Zhang, School of Computer Science, Georgia Institute of Technology

Dr. Bill Harris, Principal Scientist, Galois, Inc.

Dr. Aditya Nori, Sr. Principal Research Manager, Microsoft Research



The objects of study in this proposal are programs, together with algorithms that use the syntactic structure of programs to reason about them. Such algorithms, referred to in the literature as program verification algorithms, are designed to find proofs (in some background logic) of propositions about program behavior.


This proposal adopts the perspective that programs operate in a world/environment that can be modeled statistically; in other words, the inputs of a program are drawn from some distribution. This statistical perspective is relatively unexplored, though not new, in the program verification literature, and it enables us to formulate probabilistic propositions about program behavior. Probabilistic propositions are particularly useful for reasoning about programs learned from data (such as neural networks) that are not expected to exhibit the desired behavior on all inputs (or in all environments). In this proposal, I define a notion of probabilistic robustness/Lipschitzness for neural networks. A trained neural network f is probabilistically robust if, for a randomly generated pair of inputs, f is likely to demonstrate k-Lipschitzness, i.e., the distance between the outputs computed by f is bounded above by k times the distance between the pair of inputs. I also present verification algorithms for finding proofs of probabilistic robustness of neural networks.
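The probabilistic k-Lipschitzness condition above can be illustrated with a Monte Carlo estimate. The sketch below is illustrative only: `f` is a placeholder scalar function standing in for a trained network, and the standard normal input distribution is an assumption, not something specified in the proposal.

```python
import math
import random

def f(x):
    # Placeholder "network": a smooth scalar function of one input.
    # Its Lipschitz constant is 2 (the maximum slope of tanh(2x)).
    return math.tanh(2.0 * x)

def estimate_lipschitz_probability(f, k, n_samples=10000, seed=0):
    """Estimate Pr[ |f(x) - f(y)| <= k * |x - y| ] over random input pairs,
    with inputs drawn (here, by assumption) from a standard normal."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        y = rng.gauss(0.0, 1.0)
        if abs(f(x) - f(y)) <= k * abs(x - y):
            hits += 1
    return hits / n_samples
```

For k at or above the true Lipschitz constant the estimated probability is 1; for smaller k it drops below 1, which is exactly the regime where a probabilistic (rather than worst-case) robustness proposition is informative. The verification algorithms in the proposal aim at proofs of such propositions, not sampling-based estimates like this one.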


Verifying probabilistic robustness of neural networks is just one application of the statistical perspective on program environments. In order to enable generic program verification algorithms (agnostic to the proposition to be proven or the programming language) to benefit from this statistical perspective, this proposal presents observational abstract interpreters, a generalization of the standard abstract interpreters used for computing semantic program invariants. The invariants computed by observational abstract interpreters are permitted to be hypothetical (i.e., conditioned on hypotheses), with the hypotheses based on observed behaviors of the program. The validity of proofs of program correctness computed from hypothetical semantic invariants is ensured by embedding the hypotheses as run-time (dynamic) checks in the program. We state the metatheoretic guarantees, i.e., theorems about the behavior of observational abstract interpreters, that are expected to hold; proving these theorems and mechanizing them in a proof assistant is the pending work.
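The idea of embedding a hypothesis as a dynamic check can be sketched concretely. Everything below is a hypothetical illustration (the function, the table, and the observed-range hypothesis are invented for this example, not taken from the proposal): a hypothesis suggested by observed runs is compiled into a guard, so that any execution reaching the guarded code is one on which the hypothetical invariant, and hence the proof built on it, actually applies.

```python
# Hypothetical example: observed runs suggest the hypothesis 0 <= x <= 100,
# under which a static analyzer could prove the table access is in bounds.
TABLE = [0] * 101  # lookup table indexed by x

def lookup(x: int) -> int:
    # The hypothesis is embedded as a dynamic check. If it fails, the
    # hypothetical invariant does not cover this run, and we fail loudly
    # instead of executing code whose proof no longer applies.
    if not (0 <= x <= 100):
        raise ValueError("hypothesis violated: x outside observed range")
    # Under the hypothesis, this access is provably in bounds.
    return TABLE[x]
```

The design point is that soundness is never sacrificed: the proof is conditional, and the run-time check is precisely the condition.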

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
PhD proposal
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Jul 23, 2020 - 11:46am
  • Last Updated: Jul 23, 2020 - 11:46am