
PhD Defense by Hanjun Dai


Title: Learning Neural Algorithms with Graph Structures

Date: Tuesday, Dec 3, 2019

Time: 12:00 p.m. to 1:30 p.m. (EST)

Location: Coda C1003 Adair

Hanjun Dai

School of Computational Science and Engineering, 

College of Computing,

Georgia Institute of Technology

https://hanjun-dai.github.io/

Committee:

Dr. Le Song (Advisor, School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. Richard Vuduc (School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. Duen Horng (Polo) Chau (School of Computational Science and Engineering, Georgia Institute of Technology)

Dr. John Schulman (OpenAI)

Dr. Pushmeet Kohli (DeepMind)

Abstract:

Graph structures, such as syntax trees, social networks, and programs, are ubiquitous in real-world applications including knowledge graph inference, chemistry, and social network analysis. Over the past several decades, many expert-designed graph algorithms with nice theoretical properties have been proposed. Recent advances in deep learning have shown strong empirical performance on images, text, and signals; however, the combinatorial and discrete nature of graphs makes it non-trivial to apply neural networks in this domain. This thesis discusses several ways to build a tight connection between neural networks and classical graph algorithms. Specifically:

1) First, existing algorithms can inspire the design of deep architectures, for both discriminative learning and generative modeling of graphs. For discriminative representation learning, we show how graphical model inference algorithms can inspire the design of graph neural networks, and how to scale them up with ideas borrowed from steady-state algorithms such as PageRank; for generative modeling, we leverage the idea of attribute grammars for syntax trees to help constrain the deep networks.
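As a rough illustration of the inference-inspired view (a minimal sketch, not the actual model from the thesis; all function names, weights, and dimensions here are made up), each node's embedding can be updated for a few rounds from its own features plus aggregated neighbor messages, much like iterating the local updates of a mean-field inference algorithm:

```python
import numpy as np

def gnn_embed(adj, features, T=3, dim=8, seed=0):
    """Illustrative message passing: each node embedding is refreshed from
    the node's features and the sum of its neighbors' embeddings, mirroring
    how mean-field inference iterates local fixed-point updates."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    W1 = rng.normal(scale=0.1, size=(d, dim))    # transforms node features
    W2 = rng.normal(scale=0.1, size=(dim, dim))  # transforms neighbor messages
    h = np.zeros((n, dim))
    for _ in range(T):                           # T rounds of message passing
        msg = adj @ h                            # aggregate neighbor embeddings
        h = np.tanh(features @ W1 + msg @ W2)    # local update per node
    return h

# Tiny triangle graph with one-hot node features
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
feats = np.eye(3)
emb = gnn_embed(adj, feats)
print(emb.shape)  # (3, 8): one embedding vector per node
```

In a trained model the weights would be learned end-to-end from a downstream task rather than drawn at random as here.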

2) Second, many algorithmic frameworks contain procedures that can be enhanced by learnable deep network components. We demonstrate this by learning the heuristic function of greedy algorithms with reinforcement learning, applied to combinatorial optimization problems over graphs such as minimum vertex cover and max cut.
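To illustrate where a learned component plugs in (a hypothetical sketch, not the thesis's actual reinforcement-learning setup), a greedy vertex-cover loop can accept any node-scoring function; a learned value function would simply replace the hand-designed degree heuristic used below:

```python
import numpy as np

def greedy_cover(adj, score_fn):
    """Greedy vertex cover: repeatedly add the highest-scoring node until
    every edge is covered. score_fn is the pluggable heuristic; a learned
    Q-function could be dropped in here in place of a hand-designed rule."""
    uncovered = adj.copy()        # adjacency of edges not yet covered
    cover = []
    while uncovered.any():
        scores = score_fn(uncovered, cover)
        scores[cover] = -np.inf   # never re-pick an already-chosen node
        v = int(np.argmax(scores))
        cover.append(v)
        uncovered[v, :] = 0       # all edges touching v are now covered
        uncovered[:, v] = 0
    return cover

# Classic hand-designed stand-in: score a node by its uncovered degree
degree_score = lambda u, cover: u.sum(axis=1)

# Path graph 0-1-2-3: greedy by degree picks the two middle nodes
adj = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
print(greedy_cover(adj, degree_score))  # [1, 2]
```

The greedy skeleton stays fixed; only the scoring rule changes, which is what makes the heuristic a natural target for learning.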

3) Finally, since algorithmic structure generally provides a good inductive bias for a problem, we take an initial step towards inductively reasoning about such structure: we attempt to infer loop invariants for program verification and reaction templates for retrosynthesis via structured prediction.

Beyond the topics covered above, we will also discuss the limitations of this thesis work and directions for future extensions.
