PhD Defense by Namjoon Suh

Event Details
  • Date/Time:
    • Friday November 18, 2022
      9:00 am - 11:00 am
  • Location: Atlanta, GA
  • URL: Zoom

Summary Sentence: Statistical Viewpoints on Network Modeling and Deep Learning


Title: Statistical Viewpoints on Network Modeling and Deep Learning


Date: Nov 18, 2022

Time: 8:00 - 9:00 AM EST

Meeting Link:


Namjoon Suh

Machine Learning PhD Student

School of Industrial & Systems Engineering
Georgia Institute of Technology



1. Dr. Huo, Xiaoming (Advisor, ISyE, Gatech)

2. Dr. Mei, Yajun (Co-advisor, ISyE, Gatech)

3. Dr. Kang, Sung Ha (Mathematics, Gatech)

4. Dr. Zhilova, Mayya (Mathematics, Gatech)

5. Dr. Zhou, Ding-Xuan (School of Mathematics and Statistics, The University of Sydney)



In this thesis presentation, I will present two works:

  1. A new statistical model for network data: We propose a combined model for network data that integrates a latent factor model with a sparse graphical model. We observe that neither a latent factor model nor a sparse graphical model alone may be sufficient to capture the structure of the data. The proposed model has a latent (i.e., factor analysis) component that represents the main trends (a.k.a. factors) and a sparse graphical component that captures the remaining ad hoc dependence. Model selection and parameter estimation are carried out simultaneously via a penalized likelihood approach. The convexity of the objective function allows us to develop an efficient algorithm, while the penalty terms push toward low-dimensional latent components and a sparse graphical structure. The effectiveness of our model is demonstrated via simulation studies, and the model is also applied to four real datasets: Zachary's karate club data, Krebs's U.S. political books dataset, the U.S. political blogs dataset, and a citation network of statisticians, showing meaningful performance in practical situations.
  2. New insights in approximation theory and the statistical learning rate of deep ReLU networks: This work provides a rigorous theoretical analysis of how the approximation rate and the learning rate (i.e., excess risk) behave when a deep, fully connected ReLU network is used as a function approximator (estimator), under the assumption that the ground-truth functions lie in Sobolev spaces defined over the unit sphere. With the help of the spherical harmonic basis, we track the explicit dependence of the rates on the data dimension d and prove that deep ReLU networks can avoid "the curse of dimensionality" when the function smoothness is of order d as d tends to infinity. This phenomenon is not observed in state-of-the-art results in deep learning theory; specifically, it does not appear when the function is defined on the d-dimensional cube, or when convolutional neural networks are used as the function approximator.
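To make the first item concrete, the sketch below evaluates a penalized Gaussian log-likelihood objective for a precision matrix decomposed into a sparse graphical part minus a low-rank latent part, in the spirit of latent-variable graphical models. This is a minimal illustrative objective, not necessarily the exact formulation in the thesis; the decomposition, penalty choices, and toy matrices are assumptions for demonstration.

```python
import numpy as np

def penalized_neg_loglik(sample_cov, sparse_part, low_rank_part, lam, mu):
    """Penalized Gaussian negative log-likelihood for a precision matrix
    Theta = S - L: a sparse "graphical" part S minus a low-rank "latent
    factor" part L (illustrative, in the spirit of the combined model).

    - lam weights an l1 penalty pushing S toward a sparse graph.
    - mu weights a nuclear-norm penalty pushing L toward low rank
      (i.e., few latent factors).
    Both penalties are convex, so the objective stays convex in (S, L).
    """
    theta = sparse_part - low_rank_part
    sign, logdet = np.linalg.slogdet(theta)
    if sign <= 0:
        return np.inf  # Theta must be positive definite to be a valid precision
    nll = -logdet + np.trace(sample_cov @ theta)
    l1_penalty = lam * np.abs(sparse_part).sum()
    nuclear_penalty = mu * np.linalg.svd(low_rank_part, compute_uv=False).sum()
    return nll + l1_penalty + nuclear_penalty

# Toy usage: 3 variables, empirical covariance = identity.
cov = np.eye(3)
S = 2.0 * np.eye(3)   # candidate sparse component
L = 0.5 * np.eye(3)   # candidate latent component (full-rank here, just for illustration)
val = penalized_neg_loglik(cov, S, L, lam=0.1, mu=0.1)
```

In an actual penalized-likelihood procedure, this convex objective would be minimized jointly over (S, L), so that model selection (which edges, how many factors) and parameter estimation happen simultaneously.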

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 11, 2022 - 12:01pm
  • Last Updated: Nov 11, 2022 - 12:02pm