Program Analysis Talk: Aditya Nori

Event Details
  • Date/Time:
    • Monday April 23, 2012 - Tuesday April 24, 2012
      7:00 pm - 7:59 pm
  • Location: Klaus 1116E

Mayur Naik


Summary Sentence: "Invariant Generation: A Machine Learning Perspective"


Aditya Nori, Microsoft Research


Invariant Generation: A Machine Learning Perspective


Computing good invariants is key to effective and efficient program verification. In this talk I will describe our experiences in using machine learning techniques (Bayesian inference, SVMs) to compute invariants useful for program verification. The first project, ANEK, uses Bayesian inference to compute invariants useful for modular typestate verification of programs. These invariants consist of pre- and postconditions along with aliasing invariants known as access permissions. A novel feature of ANEK is that it can generate program invariants even when the code under analysis gives rise to conflicting constraints, a situation that typically occurs when there are bugs. The design of ANEK also makes it easy to add heuristic constraints that encode intuitions gleaned from several years of experience writing specifications, which allows it to infer invariants that are better in a subjective sense. The ANEK algorithm is based on a modular analysis that makes it fast and scalable while producing reliable and sound invariants. Our implementation of ANEK infers access permission invariants used by the PLURAL modular typestate checker for Java programs, and we present empirical results that demonstrate the effectiveness of ANEK.
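To give a flavor of the idea, here is a toy sketch (an assumption for illustration only, not ANEK's actual algorithm; the atoms, call-site constraints, and weights are all invented): treating specification constraints as soft, weighted evidence rather than hard requirements lets inference pick a most-likely specification even when the constraints conflict, as they do in buggy code.

```python
# Toy sketch of probabilistic constraint resolution for spec inference.
# NOT ANEK's algorithm: atoms, constraints, and weights are invented here.
import itertools

# Hypothetical candidate pre/postcondition atoms for one method.
atoms = ["requires unique(p)", "requires share(p)", "ensures unique(p)"]

# Soft constraints: (predicate over a candidate spec, weight/confidence).
# Conflicting evidence (e.g. from a buggy call site) merely lowers the
# score of a candidate instead of making inference unsatisfiable.
constraints = [
    (lambda spec: "requires unique(p)" in spec, 0.9),   # most call sites
    (lambda spec: "requires share(p)" in spec, 0.4),    # one buggy call site
    (lambda spec: not ("requires unique(p)" in spec and
                       "requires share(p)" in spec), 1.0),  # mutually exclusive
]

def score(spec):
    # Likelihood-style score: each satisfied constraint contributes its
    # weight, each violated constraint contributes (1 - weight).
    s = 1.0
    for pred, w in constraints:
        s *= w if pred(spec) else (1.0 - w)
    return s

# Pick the most likely candidate specification (subset of atoms).
candidates = [c for r in range(len(atoms) + 1)
              for c in itertools.combinations(atoms, r)]
best = max(candidates, key=score)
# The winner keeps the well-supported precondition and drops the
# conflicting one contributed by the buggy call site.
assert "requires unique(p)" in best
assert "requires share(p)" not in best
```

The point of the sketch is the design choice the abstract highlights: with hard constraints a single buggy call site would make the whole inference problem infeasible, whereas a probabilistic formulation degrades gracefully and still returns a best explanation.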

In the second project, INTERPOL, we show how interpolants can be viewed as classifiers in supervised machine learning. This view has several advantages. First, we are able to use off-the-shelf classification techniques, in particular support vector machines (SVMs), for interpolation. Second, we show that SVMs can find relevant predicates for a number of benchmarks; since classification algorithms are predictive, the interpolants computed via classification are likely to be relevant predicates or invariants. Finally, the machine learning view also enables us to handle superficial non-linearities: even if the underlying problem structure is linear, the symbolic constraints can give the impression that we are solving a non-linear problem. Since learning algorithms try to mine the underlying structure directly, we can discover the linear structure of such problems. We demonstrate the feasibility of INTERPOL via experiments over benchmarks from various papers on program verification.
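The classifier-as-interpolant view can be sketched as follows (a minimal illustration, not the INTERPOL implementation: the sample states and the learner are invented here, and a simple perceptron stands in for an SVM): label concrete states from good runs as positive and error states as negative, learn a linear separator, and read the resulting half-space off as a candidate predicate.

```python
# Toy sketch of "interpolant as linear classifier". NOT the INTERPOL tool:
# a perceptron stands in for an SVM, and the sample states are invented.

def perceptron(positive, negative, epochs=500):
    """Learn (w, b) with w.v + b > 0 on positive and < 0 on negative samples."""
    dim = len(positive[0])
    w, b = [0.0] * dim, 0.0
    data = [(v, 1) for v in positive] + [(v, -1) for v in negative]
    for _ in range(epochs):
        updated = False
        for v, label in data:
            score = sum(wi * vi for wi, vi in zip(w, v)) + b
            if label * score <= 0:  # misclassified: nudge the hyperplane
                w = [wi + label * vi for wi, vi in zip(w, v)]
                b += label
                updated = True
        if not updated:  # all samples strictly separated
            break
    return w, b

# Hypothetical samples over program variables (x, y): states seen on good
# runs happen to satisfy x <= y, while error states violate it.
good = [(0, 1), (1, 3), (2, 2), (0, 5)]
bad = [(3, 0), (5, 2), (4, 1)]
w, b = perceptron(good, bad)

# The learned half-space w[0]*x + w[1]*y + b >= 0 is a candidate linear
# predicate separating the two sample sets, usable as an interpolant guess.
assert all(w[0] * x + w[1] * y + b > 0 for (x, y) in good)
assert all(w[0] * x + w[1] * y + b < 0 for (x, y) in bad)
```

Because the learner works directly on concrete data points, it recovers the linear separator even if the symbolic constraints that generated those states looked non-linear, which is the "superficial non-linearity" point made above.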

Speaker Bio

Aditya Nori is a member of the programming languages and machine learning groups at Microsoft Research India. He is also an adjunct professor at IIT Hyderabad. His research interests include algorithms for the analysis of programs and machine learning with special focus on tools for improving software reliability and programmer productivity. He received his PhD in Computer Science from the Indian Institute of Science, Bangalore.

Additional Information

In Campus Calendar

College of Computing, School of Computer Science

  • Created By: Mike Terrazas
  • Created On: Apr 12, 2012 - 6:34am
  • Last Updated: Oct 7, 2016 - 9:58pm