Ph.D. Dissertation Defense - Tomer Harari Hamam

Title: Dynamic Optimization Problems on Graphs with Overlapping Variables: Analysis, Algorithms, and Applications

Committee:

Dr. Justin Romberg, ECE, Chair, Advisor

Dr. Mark Davenport, ECE

Dr. Yorai Wardi, ECE

Dr. Arkadi Nemirovski, ISyE

Dr. Adam Charles, Princeton

Abstract: This dissertation studies optimization problems on graphs with overlapping variables, that is, problems whose objective can be expressed as a sum of local loss functions that share variables, with each loss function depending on only a small subset of the optimization variables. Such problems arise in signal processing, machine learning, control, statistical inference, and many related fields, for example when the data defining the loss functions have significant spatial or temporal dependencies. To expose the local structure of these problems, we cast them on graphs: each node is associated with a loss function, and an edge connects two nodes whenever their loss functions share variables. The goal is to use the graph's topology to find better ways to solve these problems. First, we consider problems on graphs with fixed topology; here the emphasis is on deriving efficient decentralized algorithms for solving such large-scale problems. Our approach mimics how naturally coupled dynamical systems evolve: each agent solves a local problem and communicates its solution to the agents with which it shares variables. We show how algorithms such as the Jacobi method and ADMM solve these problems in a fully decentralized manner. We then move to graph problems with dynamic topologies that arise in time-varying optimization programs, where the graphs are chain-like. In particular, we are interested in how the solution for the t-th frame of variables changes over time: while adding a new loss function and a new set of variables does, in general, change the solution everywhere, we give conditions under which the local estimates converge to limit points at a linear rate. Consequently, we derive theoretical guarantees for algorithms with limited memory and develop a new memory-efficient Newton online algorithm (NOA). We also consider two natural extensions of the fundamental time-varying problem: convex optimization problems with bounded local constraints, and the case where there is no transparent growth model for the dynamics. Throughout this work, we present numerous examples of applications where these results apply, including signal reconstruction, image denoising, graph min-cut, Bayesian inference, simultaneous localization and mapping (SLAM), multi-task learning, and estimating neural rate functions from spike trains.
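
As a rough illustration of the decentralized Jacobi idea described in the abstract (this sketch is not taken from the dissertation, and every name and parameter value in it is illustrative): on a chain graph, quadratic local losses that couple each variable to its neighbors sum to a diagonally dominant linear system, and the Jacobi update lets each node refresh its own variable using only its neighbors' previous values.

    import numpy as np

    # Chain graph with one variable per node. Node i carries a local loss
    #   f_i(x) = 0.5*a*x[i]**2 - b[i]*x[i] + 0.5*w*(x[i] - x[i+1])**2
    # (the coupling term appears once per edge). Minimizing the sum of the
    # f_i is equivalent to solving a tridiagonal, diagonally dominant
    # system A x = b, which the Jacobi method solves using only
    # neighbor-to-neighbor messages.

    n = 50
    rng = np.random.default_rng(0)
    b = rng.standard_normal(n)
    a, w = 2.0, 1.0                # illustrative curvature / coupling weights

    deg = np.full(n, 2.0)
    deg[0] = deg[-1] = 1.0         # endpoint nodes have a single neighbor
    diag = a + w * deg             # diagonal of A; off-diagonals are -w

    x = np.zeros(n)
    for _ in range(200):
        msg = np.zeros(n)
        msg[:-1] += w * x[1:]      # message from the right neighbor
        msg[1:] += w * x[:-1]      # message from the left neighbor
        x = (b + msg) / diag       # each node's exact local minimization

    A = np.diag(diag) - w * (np.eye(n, k=1) + np.eye(n, k=-1))
    print("residual:", np.linalg.norm(A @ x - b))   # ~0: Jacobi converged

In the dissertation's more general setting the local losses overlap over subsets of variables and need not be quadratic, but the communication pattern is the same: each node minimizes its own loss with the shared variables frozen at the values its neighbors last broadcast.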
