CS reconstruction algorithms are usually the most astonishing part for people who encounter compressed sensing for the first time, because it is a genuine surprise that only a few samples (far fewer than the Shannon-Nyquist sampling rate requires) can perfectly reconstruct the whole signal. Rice's single-pixel camera can recover a complex image after sampling only a few random projections of its pixels:

Another successful example is CS reconstruction for MRI imaging (reconstruction, denoising, deblurring). Since 1) MR images are naturally compressible in certain transform domains, e.g., wavelet, DCT, DFT; and 2) MRI scanners naturally sample the image in the spatial frequency domain (k-space), CS can reconstruct MR images from only a few scanner samples. SparseMRI by Michael Lustig, David Donoho and John M. Pauly is a successful application of CS in MRI imaging.

Even more astonishing than the accurate reconstruction itself is that this magical reconstruction can be obtained by simple convex optimization (e.g., linear programming). Roughly speaking, in compressed sensing we have measurements (samples) $y \in \mathbb{R}^m$, a measurement matrix $A \in \mathbb{R}^{m \times n}$, and we know $y = Ax$, wherein $x \in \mathbb{R}^n$ is the sparse signal we need to recover. Since the number of measurements $m$ is much smaller than the dimension $n$ of the signal, directly solving the linear system is ill-posed. Thus we have to further restrict the solution to a small region where the uniqueness of $x$ can be guaranteed, just as we do in regularization. Since we know $x$ is sparse, we can restrict the number of nonzero entries in $x$, i.e., its $\ell_0$ norm:

$$\min_x \ \|y - Ax\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \leq k.$$

Sometimes it is written in the equivalent form

$$\min_x \ \|x\|_0 \quad \text{s.t.} \quad y = Ax.$$
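For concreteness, the sketches in the rest of this note assume a small synthetic instance of this problem (the sizes below are arbitrary and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                         # signal length, #measurements, sparsity
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.standard_normal(k)           # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = A @ x_true                                 # m << n compressive measurements
```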

**Greedy algorithms** can solve this problem by selecting, one at a time, the variable in $x$ that most decreases the least square error. The greedy search starts from $x = 0$. In each iteration, the most important unselected variable (different importance criteria can be designed) is added to the support set, and then the least square error is minimized over the support set composed of the selected variables. The iterations stop when the number of selected variables reaches $k$. Famous representatives are orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP) and the original lasso.
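As a minimal sketch of the greedy idea (not any particular paper's exact implementation), OMP can be written as:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit sketch: greedily pick the column of A most
    correlated with the residual, then re-fit least squares on the support."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        corr = np.abs(A.T @ residual)        # importance of each variable
        corr[support] = 0                    # ignore already selected variables
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef                    # least squares on the support set
        residual = y - A @ x
    return x

x_omp = omp(A, y, k)                         # using A, y, k from the setup above
```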

However, the $\ell_0$ norm of $x$ is not convex, so convergence to the global optimum is hard to guarantee for greedy algorithms. In compressed sensing, E. J. Candès has proved that when the restricted isometry property (RIP) of the measurement matrix satisfies a certain condition (e.g., $\delta_{2k} < \sqrt{2} - 1$), the $\ell_0$ norm minimization can be equivalently replaced by its convex relaxation, the $\ell_1$ norm (we will come back to this in later notes). Thus the problems are relaxed to:

$$\min_x \ \|y - Ax\|_2^2 \quad \text{s.t.} \quad \|x\|_1 \leq t.$$

$$\min_x \ \|x\|_1 \quad \text{s.t.} \quad y = Ax.$$

To see why the $\ell_1$ norm can encourage a sparse solution $x$, we show the convergence processes of $\ell_1$ constrained least square and $\ell_2$ constrained least square: the black spot is the starting point and the red one is the final solution. The above optimization problems are convex and have global optimums, so many **algorithms based on convex optimization methods** have been proposed. But the $\ell_1$ norm in them is not smooth, so the gradient w.r.t. $x$ near $x_i = 0$ cannot be obtained, and unfortunately most convex optimization methods are gradient based. Hence the $\ell_1$ based compressed sensing problem has to be transformed into more convenient forms. The most common method to eliminate the $\ell_1$ norm is to double the size of $x$. In particular, the $\ell_1$ norm can be equivalently written as $\mathbf{1}^T(u + v)$ with $x = u - v$, where $u \geq 0$ is the positive part of $x$ and $v \geq 0$ is the negative part. This is exactly the technique that has been applied in Basis Pursuit (BP) and LP decoding:

$$\min_{u,\, v} \ \mathbf{1}^T u + \mathbf{1}^T v \quad \text{s.t.} \quad A(u - v) = y, \ u \geq 0, \ v \geq 0.$$
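A minimal sketch of this reformulation, handed to SciPy's generic LP solver (specialized BP solvers are far faster in practice), could look like:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Basis pursuit as an LP over the split variables z = [u; v] >= 0, x = u - v:
    minimize 1'u + 1'v  subject to  A u - A v = y."""
    m, n = A.shape
    c = np.ones(2 * n)                         # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                  # equality constraint A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

x_bp = basis_pursuit(A, y)                     # using A, y from the setup above
```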

Standard linear programming can then be applied to solve this problem. However, the time cost of this LP is not satisfactory, and an easier optimization formulation using the $\ell_1$ penalty is often adopted instead:

$$\min_x \ \frac{1}{2}\|y - Ax\|_2^2 + \lambda \|x\|_1.$$

This is an unconstrained formulation of compressed sensing. Its equivalence to the constrained problem has been mentioned in many papers, but how to calculate $\lambda$ from the constraint parameter (and vice versa) to obtain an equivalent pair is still an open problem. In existing algorithms, $\lambda$ is determined by the so-called "continuation", "warm start" or "homotopy" method, in which a series of the above optimization problems with different $\lambda$, from large to small, are solved; the solution of the previous problem is adopted as the initial point of the next one, a solution path is then constructed, and the preferred solution is selected from the path. Least angle regression (LARS) and its variants are the representatives.
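A rough sketch of such a continuation/warm-start loop, using scikit-learn's Lasso solver purely as a stand-in for any $\ell_1$ penalized solver (note that sklearn scales the data-fit term by $1/(2m)$, so its alpha differs from $\lambda$ by a constant factor):

```python
import numpy as np
from sklearn.linear_model import Lasso

def continuation_path(A, y, lambdas):
    """Solve the l1-penalized problem for a decreasing sequence of penalties,
    warm-starting each solve with the previous solution."""
    clf = Lasso(alpha=lambdas[0], warm_start=True, fit_intercept=False)
    path = []
    for lam in lambdas:              # from strongly penalized (very sparse) to weak
        clf.alpha = lam
        clf.fit(A, y)                # warm_start=True reuses the previous coef_
        path.append(clf.coef_.copy())
    return path

path = continuation_path(A, y, np.logspace(-1, -4, 20))
```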

Gradient Projection for Sparse Reconstruction (GPSR) doubles the variable size of the $\ell_1$ penalized compressed sensing problem:

$$\min_{u \geq 0,\, v \geq 0} \ \frac{1}{2}\|y - A(u - v)\|_2^2 + \lambda\, \mathbf{1}^T(u + v), \qquad x = u - v.$$
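A bare-bones sketch of one such projected gradient iteration on this bound constrained problem (with a fixed step size for simplicity, whereas GPSR itself uses line search / Barzilai-Borwein steps):

```python
import numpy as np

def gpsr_sketch(A, y, lam, step=1e-2, n_iter=500):
    """Gradient projection on the doubled variable z = [u; v] >= 0, x = u - v."""
    m, n = A.shape
    B = np.hstack([A, -A])                     # so that B z = A(u - v)
    z = np.zeros(2 * n)
    for _ in range(n_iter):
        grad = B.T @ (B @ z - y) + lam         # gradient of 0.5||Bz - y||^2 + lam*1'z
        z = np.maximum(z - step * grad, 0.0)   # step along -grad, project onto z >= 0
    return z[:n] - z[n:]                       # recover x = u - v

x_gp = gpsr_sketch(A, y, lam=0.01)
```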

The gradient projection method is applied to optimize the above problem: in each iteration, GPSR decreases the objective function along the negative gradient direction and then projects the solution back onto the nonnegative orthant, as in the sketch above. Another algorithm that doubles the variable size to solve the $\ell_1$ penalized compressed sensing problem is an interior point method for large-scale $\ell_1$ regularized regression. It applies a truncated Newton interior point method to solve the following problem:

$$\min_{x,\, s} \ \|y - Ax\|_2^2 + \lambda \sum_{i=1}^{n} s_i \quad \text{s.t.} \quad -s_i \leq x_i \leq s_i, \ i = 1, \dots, n.$$

A logarithmic barrier is conveniently built from the bound constraints in the above problem to keep the central path away from the boundary in the primal interior point method. A preconditioned conjugate gradient method is then utilized to approximately compute the Newton direction without forming the Hessian matrix explicitly, which accelerates the computation.

Another method to eliminate the $\ell_1$ norm is to replace it with a smooth approximation that is differentiable everywhere. An efficient smoothing method is proposed in Nesterov's paper "Smooth minimization of non-smooth functions". It first writes the nonsmooth function in a minimax way:

$$\|x\|_1 = \max_{\|u\|_\infty \leq 1} \ u^T x.$$

The $\ell_1$ norm is then smoothed by subtracting a prox-function of $u$, weighted by $\mu$, inside the maximization:

$$f_\mu(x) = \max_{\|u\|_\infty \leq 1} \left\{ u^T x - \mu\, d(u) \right\}, \qquad \text{e.g., } d(u) = \tfrac{1}{2}\|u\|_2^2.$$

The $\ell_1$ norm and the smoothed $\ell_1$ norm are compared in the following figure. $\mu$ is a parameter that adjusts the smoothing accuracy.
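With the common squared-Euclidean prox-function $d(u) = \tfrac{1}{2}\|u\|_2^2$ (an assumption here; the figure may use the same choice), the smoothed absolute value works out to a Huber-like function that is easy to evaluate entrywise:

```python
import numpy as np

def smoothed_l1(x, mu):
    """Nesterov-smoothed l1 norm with prox-function d(u) = ||u||^2 / 2:
    each entry contributes t^2/(2*mu) if |t| <= mu, and |t| - mu/2 otherwise.
    Smaller mu gives a closer, but less smooth, approximation of ||x||_1."""
    t = np.abs(x)
    return np.where(t <= mu, t**2 / (2 * mu), t - mu / 2).sum()

print(smoothed_l1(x_true, mu=0.1), np.abs(x_true).sum())   # close for small mu
```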

When the $\ell_1$ norm is replaced by its smooth approximation, most existing convex optimization methods become applicable to the compressed sensing problem. Nesterov's method is a fast gradient method with the optimal $\mathcal{O}(1/k^2)$ convergence rate. It combines the current gradient and the historical gradient information to determine the optimization direction of each iteration. In compressed sensing, NESTA and FISTA are based on Nesterov's method; the former adopts the smooth approximation of the $\ell_1$ norm.

**Iterative thresholding** is another kind of compressed sensing algorithm, with fast speed, for solving the penalized least square problem. The algorithm is composed of two stages: optimization of the least square term and reduction of the $\ell_0$ or $\ell_1$ norm. The former stage is accomplished by solving the optimization problem without the penalty, while the latter stage is finished by thresholding the magnitudes of the entries in $x$. Two kinds of thresholding have been used. One is hard thresholding, which directly sets the entries with the smallest magnitudes to zero. The other is soft thresholding, which subtracts a threshold $\lambda$ from all the entries' magnitudes and sets the entries whose magnitudes are less than $\lambda$ to zero.
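A minimal sketch of the two thresholding operators and of an iterative soft-thresholding (ISTA-style) loop for the $\ell_1$ penalized least square problem (the step size is assumed to be at most $1/\|A\|_2^2$):

```python
import numpy as np

def soft_threshold(x, t):
    # shrink every magnitude by t; entries with magnitude below t become zero
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, k):
    # keep only the k largest-magnitude entries, set the rest to zero
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def ista(A, y, lam, step, n_iter=300):
    """Iterative soft-thresholding: a gradient step on the least square term
    followed by soft thresholding (the proximal step for the l1 penalty)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

step = 1.0 / np.linalg.norm(A, 2) ** 2
x_ista = ista(A, y, lam=0.01, step=step)
```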

Representative algorithms are message passing, iterative splitting and thresholding (IST) and iterated hard shrinkage. Iterative thresholding methods are competitive in speed, but their convergence to the global optimum is not easy to establish in many situations.

**Bregman iterative algorithm and fixed point continuation (FPC)** were developed by CAAM at Rice University around 2007. For the $\ell_1$ penalized least square problem with $J(x) = \mu\|x\|_1$, the Bregman distance is defined by:

$$D_J^{p}(x, x^k) = J(x) - J(x^k) - \langle p, x - x^k \rangle, \qquad p \in \partial J(x^k).$$

The subgradient set is defined as $\partial J(x^k) = \{\, p : J(x) \geq J(x^k) + \langle p, x - x^k \rangle \ \text{for all } x \,\}$.

In each iteration, a subproblem with Bregman distance regularization is solved:

$$x^{k+1} = \arg\min_x \ D_J^{p^k}(x, x^k) + \frac{1}{2}\|Ax - y\|_2^2.$$

This gives a soft-thresholding of $x$. The subgradient is then updated:

$$p^{k+1} = p^k - A^T(Ax^{k+1} - y).$$
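For $J(x) = \mu\|x\|_1$, this iteration is equivalent to the well-known "add back the residual" form, so a rough sketch only needs an inner $\ell_1$ penalized solver (for illustration, the ISTA sketch above can serve as `inner_solver`):

```python
import numpy as np

def bregman_iteration(A, y, mu, inner_solver, n_outer=20):
    """Bregman iterative sketch in the 'add back the residual' form:
    repeatedly solve  min_x mu*||x||_1 + 0.5*||Ax - b||^2,
    where b accumulates the current residual y - A x."""
    b = np.zeros_like(y)
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        b = b + (y - A @ x)           # add the residual back to the data
        x = inner_solver(A, b, mu)    # any l1-penalized least square solver
    return x

x_breg = bregman_iteration(A, y, mu=0.01,
                           inner_solver=lambda A, b, mu: ista(A, b, mu, step))
```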

The speed of the Bregman iterative algorithm and FPC (FPC_AS is even faster) is very appealing, and this kind of method has very good theoretical properties in convergence and optimality. Moreover, it has connections to the widely used soft-thresholding method and the augmented Lagrangian method. FPC and FPC_AS have successful applications in total variation minimization for image denoising.

In this note, the existing compressed sensing algorithms are classified into four sorts: those based on greedy search, those based on convex optimization, those based on iterative thresholding and those based on Bregman iteration. I believe there are a great number of compressed sensing algorithms that are not included in this note. To find the other algorithms, their papers and code, you can refer to Compressed Sensing: A Big Picture.

There are also some model selection algorithms proposed in the statistics literature which can be applied to compressed sensing problems, e.g., least angle regression, the elastic net and so on. These algorithms will be discussed in another note about the relationship between compressed sensing and statistics.

Great review! But I wonder if it's okay to omit reconstruction with noise here, which leads to a quadratically constrained program.

LaTeX eqns not properly displayed in several places, e.g. $\ell_1$ and $\ell_2$ above the 2nd figure, the $\mathcal O(1/k^2)$ convergence rate of Nesterov's method. Should the subgradient be defined explicitly?

Thx a lot for the suggestions! The LaTeX eqns are already revised and the definition of subgradient is given in the post.

I am not very sure about the "reconstruction with noise" you mentioned, do you mean $\min_x \|x\|_1 \ \text{s.t.} \ \|y - Ax\|_2 \leq \epsilon$?

right

I didn’t know much about this problem. Would you please hand me some papers about this? Thanks a lot!

Good overview and explanation for people new to compressive reconstruction. Thanks!

Looking forward to your new note "relationship between compressed sensing and statistics"

Could you please assess the algorithm “pathwise coordinate descent” proposed by Friedman

Your note is the most comprehensive introduction to compressed sensing I have ever seen.