Optimization Algorithms

Title: Dynamical Design of Hybrid Methods for Sparse Systems
Current Status: Manuscript In Progress
Faculty Advisor: Dr. Rachel Kuske – Chair and Professor, Georgia Tech
Collaborator: Ariba Khan
Poster:

Presentation:

Abstract: Sparse recovery optimization algorithms are used in machine learning, imaging, and parameter fitting, among many other fields. Compressive sensing, a prominent area of mathematics over the past decade, has motivated a revival of sparse recovery algorithms based on ℓ1-norm minimization. Although small underdetermined problems are well understood, large, inconsistent, nearly sparse systems have not been investigated in as much detail. In this dynamical study, two commonly used sparse recovery optimization algorithms, Linearized Bregman (LB) and the Iterative Shrinkage-Thresholding Algorithm (ISTA), are compared.
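For readers unfamiliar with the two methods, the standard textbook iterations can be sketched as follows. This is a minimal illustration of generic ISTA and Linearized Bregman, not the specific variants studied in the manuscript; the step size `tau` and threshold `lam` are assumed hyperparameters.

```python
import numpy as np

def shrink(x, lam):
    # Soft-thresholding (shrinkage) operator used by both methods
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, tau, iters=500):
    # ISTA: gradient step on ||Ax - b||^2, then shrinkage applied to x itself,
    # so information below the threshold is discarded at every iteration
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(x + tau * A.T @ (b - A @ x), lam * tau)
    return x

def linearized_bregman(A, b, lam, tau, delta=1.0, iters=500):
    # LB: gradient steps accumulate in an auxiliary variable v;
    # shrinkage is applied to v, so sub-threshold information is retained
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + tau * A.T @ (b - A @ x)
        x = delta * shrink(v, lam)
    return x
```

The structural difference visible here, shrinkage of the iterate itself (ISTA) versus shrinkage of an accumulated variable (LB), is what the abstract refers to when it says ISTA discards more information during shrinkage.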

LB and ISTA behave very differently under subsampling. Subsampling is widely used in practice because iterating over the entire dataset is time-consuming; by subsampling, we aim to solve problems in less time using only a few full passes over the data. We found that LB converges faster and is better suited to subsampling than ISTA, mainly because ISTA discards more information during each shrinkage step. The two methods also favor different kinds of subsampling: because ISTA loses information at every thresholding step, tall subsampling gives it the best convergence, while LB performs best with flat subsampling, where it converges faster and reaches a lower residual.
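The basic mechanics of a subsampled iteration can be sketched as below. Here "subsampling" is interpreted as using a random subset of the rows (equations) of the system at each step; the specific tall versus flat block shapes studied in the manuscript are not reproduced, and the subset size and step size are illustrative assumptions.

```python
import numpy as np

def subsampled_step(A, b, x, tau, rows):
    # One gradient step using only the selected rows (equations) of the system
    As, bs = A[rows], b[rows]
    return x + tau * As.T @ (bs - As @ x)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:3] = 5.0
b = A @ x_true

x = np.zeros(50)
tau = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(100):
    # each step touches 40 of the 200 equations, so 100 steps
    # correspond to only about 20 full passes over the data
    rows = rng.choice(200, size=40, replace=False)
    x = subsampled_step(A, b, x, tau, rows)
```

Each iteration is cheaper than a full gradient step by the subsampling ratio, which is why only a few full data passes are needed.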

The dependence of their dynamical behavior on the threshold hyperparameter and on the entry sizes in the solution suggests complementary advantages and disadvantages. We found that Linearized Bregman with a large threshold can predict large entries quickly, while ISTA with a much lower threshold is better suited to predicting small entries without chatter. These results prompted the creation of a hybrid method that combines favorable characteristics of both algorithms, such as reduced chatter and quick convergence. The hybrid method is proposed, analyzed, and shown to outperform both LB and ISTA, principally because of its versatility in handling diverse entry sizes with different parameters.
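One plausible reading of the idea, offered purely as an illustration and not as the authors' actual scheme, is a two-phase iteration: an LB phase with a large threshold to capture the large entries, followed by an ISTA phase with a small threshold to refine the small entries. All parameter values and phase lengths below are hypothetical.

```python
import numpy as np

def shrink(x, lam):
    # Soft-thresholding operator shared by both phases
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hybrid(A, b, lam_big, lam_small, tau, n1=200, n2=300):
    # Phase 1 (LB, large threshold): accumulated variable v lets the
    # large entries emerge quickly while suppressing everything else
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n1):
        v = v + tau * A.T @ (b - A @ x)
        x = shrink(v, lam_big)
    # Phase 2 (ISTA, small threshold): refine the small entries
    # from the warm start without the chatter of a large threshold
    for _ in range(n2):
        x = shrink(x + tau * A.T @ (b - A @ x), lam_small * tau)
    return x
```

The switch point and the two thresholds would themselves be tuning parameters, which matches the abstract's emphasis on versatility across entry sizes.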