Monday, June 30, 2008

CS: Regularized Dictionary Learning, Non-local Regularization of Inverse Problems, CS and Matrix Multiply, and new Rice entries.


From the conferences featured last week, here are two additional papers:

Regularized Dictionary Learning for Sparse Approximation by Mehrdad Yaghoobi-Vaighan, Thomas Blumensath and Mike Davies. The abstract reads:

Sparse signal models approximate signals using a small number of elements from a large set of vectors, called a dictionary. The success of such methods relies on the dictionary fitting the signal structure. Therefore, the dictionary has to be designed to fit the signal class of interest. This paper uses a general formulation that allows the dictionary to be learned from the data with some a priori information about the dictionary. In this formulation a universal cost function is proposed and practical algorithms are presented to minimize this cost under different constraints on the dictionary. The proposed methods are compared with previous approaches using synthetic and real data. Simulations highlight the advantages of the proposed methods over other currently available dictionary learning strategies.
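For readers who want a feel for what dictionary learning involves, here is a minimal alternating-minimization sketch: a greedy sparse coding step followed by a least-squares dictionary update. This is a generic scheme in the spirit of the problem, not the regularized formulation of the paper; the OMP-based coding step, initialization and iteration counts are my own illustrative choices.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedy k-sparse coding of x in dictionary D."""
    support, r = [], x.copy()
    a = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coef
    a[support] = coef
    return a

def learn_dictionary(X, n_atoms, k, n_iter=20, seed=0):
    """Alternating minimization for min ||X - D A||_F^2 with k-sparse columns
    of A and unit-norm atoms (a generic dictionary learning loop; the paper's
    regularized cost and constraints differ)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        A = np.column_stack([omp(D, x, k) for x in X.T])   # sparse coding step
        D = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T     # dictionary update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 1e-12, norms, 1.0)           # renormalize atoms
    A = np.column_stack([omp(D, x, k) for x in X.T])       # final coefficients
    return D, A
```

On synthetic data drawn from a ground-truth dictionary, a few iterations of this loop already drive the residual well below the signal energy.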
and Non-local Regularization of Inverse Problems by Gabriel Peyré, Sébastien Bougleux and Laurent Cohen. The abstract reads:
This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This non-local graph makes it possible to adapt the penalization to the geometry of the underlying function to be recovered. A fast algorithm iteratively computes both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.
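To make the "non-local graph" concrete, here is a small sketch of the usual patch-similarity construction of non-local weights between pixels. The Gaussian weighting, window sizes, and row normalization below are common illustrative choices, not the authors' exact graph construction.

```python
import numpy as np

def nonlocal_weights(img, patch=1, search=2, h=0.1):
    """Patch-based non-local weights: w(x, y) = exp(-||P(x) - P(y)||^2 / h^2)
    for y in a (2*search+1)^2 window around x. Returns a dict mapping each
    pixel's flat index to (neighbor indices, normalized weights)."""
    H, W = img.shape
    pad = np.pad(img, patch, mode="reflect")
    def patch_at(i, j):
        # (2*patch+1)^2 patch centered at pixel (i, j) of the original image
        return pad[i:i + 2 * patch + 1, j:j + 2 * patch + 1].ravel()
    graph = {}
    for i in range(H):
        for j in range(W):
            p = patch_at(i, j)
            nbrs, wts = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W and (di, dj) != (0, 0):
                        d2 = np.sum((p - patch_at(ii, jj)) ** 2)
                        nbrs.append(ii * W + jj)
                        wts.append(np.exp(-d2 / h ** 2))
            wts = np.asarray(wts)
            graph[i * W + j] = (np.asarray(nbrs), wts / max(wts.sum(), 1e-12))
    return graph
```

In the paper's scheme, a graph like this would be rebuilt around the current estimate at each outer iteration, so the penalization follows the geometry of the recovered image.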
In a different setting, we have a fascinating article, A note on compressed sensing and the complexity of matrix multiplication by Mark Iwen and Craig Spencer. The abstract reads:
We consider the conjectured O(N^{2+\epsilon}) time complexity of multiplying any two N × N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A · B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result we increase the class of matrices A, for any given N × N matrix B, which allows the exact computation of A·B to be carried out using the conjectured O(N^{2+\epsilon}) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [16] which is resilient to noise.

From the Rice Compressed Sensing repository, we also have new papers:

Discriminative learned dictionaries for local image analysis by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. The abstract reads:

Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.


Sparse representations for image classification: Learning discriminative and reconstructive non-parametric dictionaries by Fernando Rodriguez and Guillermo Sapiro. The abstract reads:

A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust class classification is introduced in this paper. This problem for dictionary learning is solved by a class-dependent supervised simultaneous orthogonal matching pursuit, which learns the intra-class structure while increasing the inter-class discrimination, interleaved with an efficient dictionary update obtained via singular value decomposition. This framework addresses for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy. The work contributes to the understanding of the importance of learned sparse representations for signal classification, showing the relevance of learning discriminative and at the same time reconstructive dictionaries in order to achieve accurate and robust classification. The presentation of the underlying theory is complemented with examples with the standard MNIST and Caltech datasets, and results on the use of the sparse representation obtained from the learned dictionaries as local patch descriptors, replacing commonly used experimental ones.

There is also:


some convergence results for reweighted Lp algorithms by Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk, entitled: Iteratively re-weighted least squares minimization for sparse recovery. The abstract reads:

Under certain conditions (known as the Restricted Isometry Property, or RIP) on the m×N matrix \Phi (where m less than N), vectors x element of R^N that are sparse (i.e. have most of their entries equal to zero) can be recovered exactly from y := \Phi x even though \Phi^{-1}(y) is typically an (N − m)-dimensional hyperplane; in addition x is then equal to the element in \Phi^{-1}(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an Iteratively Re-weighted Least Squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in \Phi^{-1}(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w^(n)_i := [|x^(n)_i|^2 + \epsilon_n^2]^{-1/2}, i = 1, . . . , N, for a decreasing sequence of adaptively defined \epsilon_n; this updated weight is then used to obtain x^(n+1) and the process is repeated. We prove that when \Phi satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether \Phi^{-1}(y) contains a sparse vector. If there is a sparse vector in \Phi^{-1}(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w^(n)_i := [|x^(n)_i|^2 + \epsilon_n^2]^{-1+τ/2}, i = 1, . . . , N, where τ is between 0 and 1, can recover sparse solutions as well; more importantly, we show its local convergence is superlinear and approaches a quadratic rate as τ approaches zero.
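A bare-bones version of the IRLS iteration in the abstract is easy to write down: each step solves a weighted least-squares problem over the affine space \Phi x = y with weights w_i = (|x_i|^2 + \epsilon_n^2)^{-1/2}. The geometric \epsilon schedule below is a simple heuristic, not the paper's adaptive rule.

```python
import numpy as np

def irls(Phi, y, n_iter=50):
    """Iteratively Re-weighted Least Squares for sparse recovery, following
    the scheme in the abstract: x^(n+1) minimizes the weighted l2 norm over
    {x : Phi x = y} with weights w_i = (|x_i|^2 + eps_n^2)^(-1/2).
    Minimal sketch; the epsilon update here is a heuristic."""
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]   # minimal-l2 starting point
    eps = 1.0
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(x ** 2 + eps ** 2)
        # Closed form for the weighted min over Phi x = y, W = diag(w):
        #   x = W^{-1} Phi^T (Phi W^{-1} Phi^T)^{-1} y
        Winv = 1.0 / w
        G = Phi * Winv                           # Phi @ diag(Winv)
        x = Winv * (Phi.T @ np.linalg.solve(G @ Phi.T, y))
        eps = max(eps * 0.7, 1e-9)               # decreasing eps (heuristic)
    return x
```

On a random Gaussian \Phi with a genuinely sparse x, this loop converges to the sparse solution, matching the behavior the theorem describes.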


Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M. Phoenix Sol 33.

Saturday, June 28, 2008

CS: Some recovery conditions for basis learning by L1-minimization

In the category "sparse dictionaries: how to make them and find them", here is an interesting paper by Remi Gribonval and Karin Schnass entitled Some recovery conditions for basis learning by L1-minimization. The abstract reads:

Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex l1 minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where l1-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the l1-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results.

The following paper is only available through its abstract: Sparse representations of audio: from source separation to wavefield compressed sensing by Remi Gribonval. The abstract reads:

Sparse signal representations, which are at the heart of today's coding standards (JPEG, MPEG, MP3), are known to have had a substantial impact in signal compression. Their principle is to represent high-dimensional data by a combination of a few elementary building blocks, called atoms, chosen from a large collection called a dictionary. Over the last five years, theoretical advances in sparse representations have highlighted their potential to impact all fundamental areas of signal processing. We will discuss some current and emerging applications of sparse models in musical sound processing including: signal acquisition (Compressed Sensing - sampling wave fields at a dramatically reduced rate) and signal manipulation (e.g., source separation and enhancement for digital remastering). We will conclude by discussing the new algorithmic and modeling challenges raised by these approaches.




Image Credit: NASA/JPL/Space Science Institute, W00046997.jpg was taken on June 24, 2008 and received on Earth June 26, 2008. The camera was pointing toward SATURN-RINGS at approximately 725,870 kilometers away, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2009.

Friday, June 27, 2008

CS: Sparsity at EUSIPCO and two CS papers at ECCV

The EUSIPCO meeting is on August 25-29 and will also feature talks/presentations on sparsity, which is one of the building blocks of Compressed Sensing / Compressive Sensing. I have mentioned some of these before, but there are others I missed because they are generally not available online; here is a list of them. Thanks to Laurent and Jort for the info. The ones we have seen here before are:

The ones that I did not cover before are listed below. Only the first entry links to the corresponding paper.
  • Separation of stereo speech signals based on a sparse dictionary algorithm, Maria Jafari (Queen Mary, University of London, United Kingdom); Mark Plumbley (Queen Mary, University of London, United Kingdom)
  • The ReMBo Algorithm: Accelerated Recovery of Jointly Sparse Vectors, Moshe Mishali (Technion, Israel); Yonina Eldar (Technion---Israel Institute of Technology, Israel)
  • Regularized Dictionary Learning for Sparse Approximation, Mehrdad Yaghoobi (University of Edinburgh, United Kingdom); Thomas Blumensath (University of Edinburgh, United Kingdom); Mike Davies (University of Edinburgh, United Kingdom)
  • Sparse stimuli for cochlear implant, Guoping Li (University of Southampton, United Kingdom); Mark Lutman (Institute of Sound and Vibration Research, United Kingdom)
  • Learning Sparse Generative Models of Audiovisual Signals, Gianluca Monaci (University of California, Berkeley, USA); Friedrich Sommer (University of California Berkeley, USA); Pierre Vandergheynst (EPFL, Switzerland)
  • A Complementary Matching Pursuit Algorithm for Sparse Approximation, Gagan Rath (IRISA-INRIA, France); Christine Guillemot (IRISA-INRIA, France)
  • Sparse representations: recovery conditions and fast algorithm for a new criterion, Jean-Jacques Fuchs (irisa/université de Rennes, France)
  • A Sparseness Controlled Proportionate Algorithm for Acoustic Echo Cancellation, Pradeep Loganathan (Imperial College, London., United Kingdom); Andy Khong (Nanyang Technological University, Singapore); Patrick Naylor (Imperial College London, United Kingdom)
  • A sparse periodic decomposition and its application to speech representation, Makoto Nakashizuka (Osaka University, Graduate School of Engineering Science, Japan)
  • Deterministic Dictionaries for Sparsity: A Group Representation Approach, Shamgar Gurevich (University of California, Berkeley, USA); Ronny Hadani (University of Chicago, USA); Nir Sochen (Tel-Aviv University, Israel)
  • Iterative Enhancement of Event Related Potentials Through Sparsity Constraints, Nasser Mourad (McMaster University, Canada); James P. Reilly (McMaster University, Canada); Laurel Trainor (McMaster University, Canada); Bernhard Ross (Rotman Research Institute for Neuroscience, Canada)


The 10th European Conference on Computer Vision on October 12-18, 2008 in Marseille, France will feature two papers related to Compressive Sensing:


  • Compressive Structured Light for Recovering Inhomogeneous Participating Media, Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, Ravi Ramamoorthi
  • Background Subtraction for Compressed Sensing Camera, Volkan Cevher, Dikpal Reddy, Marco Duarte, Aswin Sankaranarayanan, Rama Chellappa, Richard Baraniuk.

Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M. Sol 30 image from Phoenix SSI Imager.

Thursday, June 26, 2008

CS: Compressed sensing and transference talk.

The Colloque d'Analyse Harmonique d'Orsay will feature a talk by Yves Meyer from 2:10 to 3:00 pm on July 9th. The title of the talk is "Compressed sensing and transference".

What's transference, you say? Google says this. I personally don't know, but Terry Tao defines a transference principle:

This theorem does not directly apply to the prime numbers {\mathcal P}, as they have density zero, but it turns out that there is a trick (which Ben Green and I call the transference principle) which (very roughly speaking) lets one locate a dense set of integers A which “models” the primes, in the sense that there is a relationship between additive patterns in A and additive patterns in {\mathcal P}. (The relationship here is somewhat analogous to Monte Carlo integration, which uses the average value of a function f on a sparse set to approximate the average value of f on a much larger domain.) As a consequence of this principle, Ben and I were able to use Szemerédi’s theorem to establish that the primes contained arbitrarily long arithmetic progressions.


Prime number subsets, mmmhhh, it looks like an extension of his previous result. We'll see.

Wednesday, June 25, 2008

Jim Gray's tribute and the Amateur Search.

As some of you may recall, I tried to be of help in the search for Jim Gray last year. His friends and family hosted a tribute to him at Berkeley this past month. The video of the main event can be seen here. It looks like he was a very nice guy.

Mike Olson presented the amateur search in that video. However, to get a better understanding of what was found, you really want to see the following presentation.

All the entries that were specific to Jim Gray's search can be found here. All the entries that are more related to search and rescue in general can be found here. As part of the outer circle, this is more information than I had at the time.


CS: CS and Cognitive Radio, CS of Analog Signals, Tomographic inversion using L1-norm regularization of wavelet coefficients, and two presentations

When I talked to Sebastian Hoyos at the Texas A&M L1 meeting, he promised me he would put his latest intriguing paper on compressed sensing and cognitive radio online. Here it is (thanks Sebastian): Mixed-Signal Parallel Compressed Sensing and Reception For Cognitive Radio by Zhuizhuan Yu, Sebastian Hoyos, Brian M. Sadler. The abstract reads:

A parallel structure to do spectrum sensing in Cognitive Radio (CR) at sub-Nyquist rate is proposed. The structure is based on Compressed Sensing (CS) that exploits the sparsity of frequency utilization. Specifically, the received analog signal is segmented or time-windowed and CS is applied to each segment independently using an analog implementation of the inner product, then all the samples are processed together to reconstruct the signal. Applying the CS framework to the analog signal directly relaxes the requirements in wideband RF receiver front-ends. Moreover, the parallel structure provides a design flexibility and scalability on the sensing rate and system complexity. This paper also provides a joint reconstruction algorithm that optimally detects the information symbols from the sub-Nyquist analog projection coefficients. Simulations showing the efficiency of the proposed approach are also presented.


Here the sparsity of spectrum utilization in cognitive radio allows compressed sensing to be used in wideband spectrum sensing. One should note the similarity with the Georgia Tech transform imager and the Random Demodulator.
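A discrete-time caricature of the segmented front end described in the abstract: window the signal into segments and take a handful of random inner products per segment (simulating the analog inner products), then stack the results. The matrix sizes and the random choice of measurement vectors are illustrative assumptions, not the paper's mixed-signal design.

```python
import numpy as np

def segmented_cs_measure(x, seg_len, m, seed=0):
    """Segment-and-measure front end: split x into time windows of length
    seg_len and take m random inner products per window. Returns the stacked
    measurements (one row per segment) and the measurement matrix Phi."""
    rng = np.random.default_rng(seed)
    segs = x.reshape(-1, seg_len)                  # one row per time window
    Phi = rng.standard_normal((m, seg_len)) / np.sqrt(m)
    Y = np.stack([Phi @ s for s in segs])          # n_segments x m
    return Y, Phi
```

The appeal of the parallel structure is visible even in this toy: each window is measured independently (so the branches can run concurrently at sub-Nyquist rate), and the joint reconstruction then processes all rows of Y together.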


Also, found on the internets:
Compressed Sensing of Analog Signals by Yonina C. Eldar

A traditional assumption underlying most data converters is that the signal should be sampled at a rate which exceeds twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals possess a sparse structure so that a large part of the bandwidth is not exploited. In this paper, we consider a framework for utilizing this sparsity in order to sample such analog signals at a low rate. More specifically, we consider continuous-time signals that lie in a shift-invariant (SI) space generated by m kernels, so that any signal in the space can be expressed as an infinite linear combination of the shifted kernels. If the period of the underlying SI space is equal to T, then such signals can be perfectly reconstructed from samples at a rate of m/T. Here we treat the case in which only k out of the m generators are active, meaning that the signal actually lies in a lower dimensional space spanned by k generators. However, we do not know which k are chosen. By relying on results developed in the context of compressed sensing (CS) of finite-length vectors, we develop a general framework for sampling such signals at a rate much lower than m/T. The distinguishing feature of our results is that in contrast to the problems treated in the context of CS, here we consider sampling of analog signals for which no underlying finite-dimensional model exists. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite CS and then rely on efficient and stable algorithms developed in that context.

Tomographic inversion using L1-norm regularization of wavelet coefficients by Ignace Loris, Guust Nolet, Ingrid Daubechies and Tony Dahlen.

Like most geophysical inverse problems, the linearized problem Am = d in seismic tomography is underdetermined, or at best offers a mix of overdetermined and underdetermined parameters. It has therefore long been recognized that it is important to suppress artifacts that could be falsely interpreted as ‘structure’ in the earth’s interior. Not surprisingly, strategies that yield the smoothest solution m — in the sense of minimizing a gradient (∇m) or second derivative (∇2m) norm — have been dominant in applications, either by using a low-degree spherical harmonic expansion (Dziewonski et al. 1975; Dziewonski & Woodhouse 1987; Masters et al. 1996) or by regularizing a dense local parametrization (Nolet 1987; Constable et al. 1987; Spakman & Nolet 1988; VanDecar & Snieder 1994; Trampert & Snieder 1996). Smooth solutions, however, while not introducing small-scale artifacts, produce a distorted image of the earth through the strong averaging over large areas, thereby making small-scale detail difficult to see, or even hiding it. Sharp discontinuities are blurred into gradual transitions. For example, the inability of global, spherical-harmonic, tomographic models to yield as clear an image of upper-mantle subduction zones as produced by more localized studies has long been held against them. Deal et al. (1999) and Deal & Nolet (1999) optimize images of upper-mantle slabs to fit physical models of heat diffusion, in an effort to suppress small-scale imaging artifacts while retaining sharp boundaries. Portniaguine and Zhdanov (1999) use a conjugate-gradient method to produce the smallest possible anomalous domain by minimizing a norm based on the gradient support ∇m/(∇m · ∇m + γ^2)^(1/2) , where γ is a small constant. Like all methods that deviate from a least-squares type of solution, both these methods are nonlinear and pose their own problems of practical implementation. 
The notion that we seek the ‘simplest’ model m that fits a measured set of data d to within the assigned errors is intuitively equivalent to the notion that the model should be describable with a small number of parameters. But, clearly, restricting the model to a few low-degree spherical-harmonic or Fourier coefficients, or a few large-scale blocks or tetrahedra, does not necessarily lead to a geophysically plausible solution. In this paper we investigate whether a multiscale representation based upon wavelets has enough flexibility to represent the class of models we seek. We propose an ℓ1-norm regularization method which yields a model m that has a strong tendency to be sparse in a wavelet basis (Daubechies 1992), meaning that it can be faithfully represented by a relatively small number of nonzero wavelet coefficients. This allows for models that vary smoothly in regions of limited coverage without sacrificing any sharp or small-scale features in well-covered regions that are required to fit the data. Our approach is different from an approach briefly suggested by de Hoop and van der Hilst (2005), in which the mapping between data and model is decomposed in curvelets: here we are concerned with applying the principle of parsimony to the solution of the inverse problem, without any special preference for singling out linear features, for which curvelets are probably better adapted than wavelets.
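The kind of ℓ1-regularized wavelet inversion described above can be prototyped with iterative soft thresholding in an orthonormal Haar basis. This is a standard solver sketch under simplifying assumptions (an orthonormal wavelet transform and a small dense forward operator A), not the authors' implementation or parametrization.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix of size n (n a power of two),
    built by the standard recursion H_{2n} = [H_n kron (1,1); I_n kron (1,-1)] / sqrt(2)."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

def ista_wavelet(A, d, lam, n_iter=200):
    """Iterative soft thresholding for min 1/2 ||A m - d||^2 + lam ||W m||_1
    with an orthonormal wavelet transform W: work in the coefficients c = W m,
    alternate a gradient step with soft thresholding."""
    W = haar_matrix(A.shape[1])
    B = A @ W.T                        # forward operator in wavelet coordinates
    step = 1.0 / np.linalg.norm(B, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = c - step * (B.T @ (B @ c - d))
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return W.T @ c                     # back to model space
```

The thresholding is what produces the behavior the authors want: coefficients the data do not demand are driven exactly to zero, so the model stays smooth in poorly covered regions while sharp, well-constrained features survive.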

and two presentations:

  • Statistical approach for dictionary learning by Tieyong Zeng
  • Compressed Sensing by Holger Rauhut (it actually looks like a video seminar, but I could not find the video). If somebody finds it, I'll add it to the Compressed Sensing Videos page.


  • Credit: Discover Magazine.

    Tuesday, June 24, 2008

    CS: Rice entries, recovery of sparse signals, inverse scattering using S-OMP, CS by inverse scale space and curvelet thresholding



    Here are some new papers from the Compressive Sensing Resources that I have not covered before:

  • An introductory lecture note by Richard Baraniuk entitled Compressive sensing and an introduction to compressive sampling by Emmanuel Candès and Michael Wakin that was featured in the IEEE Signal Processing Magazine.

  • The paper formerly known as "The Johnson-Lindenstrauss lemma meets compressed sensing" by Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin now has the title: A simple proof of the restricted isometry property for random matrices.

  • Emmanuel Candès, The restricted isometry property and its implications for compressed sensing. The abstract reads:

    It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as "compressed sensing" or "compressive sampling" relies on properties of the sensing matrix such as the restricted isometry property. In this note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant.


  • T. Tony Cai, Guangwu Xu, and Jun Zhang, On recovery of sparse signals via ell-1 minimization.
    This article considers constrained ℓ1 minimization methods for the recovery of high dimensional sparse signals in three settings: noiseless, bounded error and Gaussian noise. A unified and elementary treatment is given in these noise settings for two ℓ1 minimization methods: the Dantzig selector and ℓ1 minimization with an ℓ2 constraint. The results of this paper improve the existing results in the literature by weakening the conditions and tightening the error bounds. The improvement on the conditions shows that signals with larger support can be recovered accurately. This paper also establishes connections between restricted isometry property and the mutual incoherence property. Some results of Candes, Romberg and Tao (2006) and Donoho, Elad, and Temlyakov (2006) are extended.


  • Here is an important paper for solving the Helmholtz equation by Jong Chul Ye and Su Yeon Lee entitled: Non-iterative exact inverse scattering using simultaneous orthogonal matching pursuit (S-OMP). The abstract reads:
    Even though the recently proposed time-reversal MUSIC approach for the inverse scattering problem is non-iterative and exact, the approach breaks down when there are more targets than sensors. The main contribution of this paper is a novel non-iterative exact inverse scattering algorithm that still guarantees the exact recovery of the extended targets under a very relaxed constraint on the number of sources and receivers, where the conventional time-reversal MUSIC fails. Such a breakthrough was possible from the observation that the induced currents on the unknown targets assume the same sparse support, which can be recovered accurately using the simultaneous orthogonal matching pursuit developed for multiple measurement vector problems. Simulation results demonstrate that perfect reconstruction can be quickly obtained from a very limited number of samples.
    The conclusion states:

    This paper derived a novel non-iterative exact inverse scattering algorithm using simultaneous OMP based on the observation that the induced current on the unknown targets assumes the same sparse support. Using theoretical and numerical analysis, we showed that the maximum number of recoverable targets using the new method is upper bounded by the average of the number of transmitters and detectors. This is a significant improvement over the conventional time-reversal MUSIC, especially when the number of sources or detectors is significantly smaller than its counterpart. We believe that our results fill the missing gap toward a complete theory for non-iterative exact inverse scattering.


  • Jianwei Ma, Compressed sensing by inverse scale space and curvelet thresholding.
    Compressed sensing provides a new sampling theory for data acquisition, which says that compressible signals can be exactly reconstructed from highly incomplete sets of linear measurements. It is significant to many applications, e.g., medical imaging and remote sensing, especially for measurements limited by physical and physiological constraints, or extremely expensive. In this paper we proposed a combinatorial recovery algorithm from a view of reaction-diffusion equations, by applying curvelet thresholding in inverse scale space flows. Numerical experiments in medical CT and aerospace remote sensing show its good performances for recovery of detailed features from incomplete and inaccurate measurements, in comparison with some existing methods.
    It may sound like a detail, but I noted the comprehensive list of reconstruction techniques that gives some context for the newer method proposed in the paper:

    On the other hand, a few recovery algorithms have been proposed in the last couple of years. Typically, e.g., LP [7], reweighted LP [11], GRSP [23], Greed/OMP [42], StOMP [20], and iterative thresholding (IT) [4, 17, 32–34, 37]. Here we approach the nonlinear recovery algorithm in a new way: the partial differential equation (PDE) method.

    I am looking forward to seeing the code that implements this algorithm.

  • The papers I had already featured are listed below:
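Since the key ingredient in the Ye and Lee paper above is simultaneous OMP for multiple measurement vectors, here is a compact sketch of S-OMP itself. This is the generic textbook version of the greedy step (pick the row whose aggregate correlation with the residuals is largest); the inverse-scattering specifics are not modeled.

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP for the multiple-measurement-vector problem Y = Phi X,
    where the columns of X share a common k-sparse support -- the observation
    exploited in the paper. Returns the estimate and the recovered support."""
    support = []
    R = Y.copy()
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    for _ in range(k):
        # Aggregate correlations of each dictionary element across all
        # measurement vectors, then pick the strongest row.
        corr = np.linalg.norm(Phi.T @ R, axis=1)
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        R = Y - Phi[:, support] @ coef
    X[support, :] = coef
    return X, sorted(support)
```

Pooling the correlations over many measurement vectors is what makes the joint-support recovery so much more robust than running OMP on each vector separately.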


    Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, Phoenix view on Sol 28.

    Monday, June 23, 2008

    CS: LASSO algos, Comparison between SPGL1 and Bregman.

    Jort Gemmeke pointed me to the page of Mark Schmidt, who has made available about 15 different algorithms solving the LASSO problem. I asked Michael Friedlander how these algorithms map onto the ones currently implemented in SPGL1. Here is what he responded:

    I'm glad to see that Mark still has his solvers up on the web. He wrote these as part of his project for my graduate course in numerical optimization. We were mainly interested in getting a feel for the various approaches, and were---or at least I was!---surprised to see so many variants.

    Looking through Mark's m-files, I don't think there's much overlap. Both SPGL1 and Mark's "LassoConstrained" and "LassoActiveSet" m-files can solve the LASSO problem i.e. minimize ||Ax-b||_2 subj to ||x||_1 <= tau, but the algorithms are very different. I think that LassoConstrained implements Tibshirani's (1994) original suggestion which explicitly forms a series of linear constraints for the one-norm ball; "LassoActiveSet" implements some version of Osborne, Purnell, and Turlach's homotopy approach.
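For readers wanting to experiment, the constrained LASSO form mentioned above, minimize ||Ax-b||_2 subject to ||x||_1 <= tau, can be attacked with a plain projected-gradient loop plus an l1-ball projection. SPGL1 itself uses spectral (Barzilai-Borwein) step lengths and a root-finding outer loop over tau, so the fixed-step loop below is only a didactic stand-in.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection onto the l1 ball {x : ||x||_1 <= tau}
    (standard sort-based projection)."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_pg(A, b, tau, n_iter=300):
    """Projected gradient for the LASSO form SPGL1 targets:
    minimize ||Ax - b||_2 subject to ||x||_1 <= tau."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # fixed step from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1(x - step * A.T @ (A @ x - b), tau)
    return x
```

Every iterate is feasible by construction (the projection enforces the l1 constraint exactly), which is one reason this family of methods is pleasant to benchmark against penalized formulations.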

    On a different note, within SPGL1, I noted the appearance of a great new page of examples which was produced using Matlab's publishing feature. As Michael noted, these features are "a great way to get documentation up on the web". Michael also mentions the following:

    Yesterday I noticed that you mentioned the latest SPGL1 ..., and also L1-Bregman. About a month ago, ... I ran some benchmarks of the L1-Bregman code [for comparison with SPGL1]...The main difficulty I had was that the Bregman algo needs a certain parameter as input. But I gave up when I couldn't figure out how to systematically choose parameters so that the method would work on the various Sparco problems. Probably I don't understand their method well enough.

    Anyway, in case you're interested, here's a set of scripts

    http://www.cs.ubc.ca/~mpf/public/BregCompare.zip (73.7K)

    that I just dusted off that compares L1-Bregman and SPGL1 on some of the test problems from the Bregman paper. The README in that directory gives a few lines of Matlab code that will run the comparison. (The script requires Sparco.)

    This is one of the many reasons this blog exists, i.e. accelerating our collective understanding of the intricacies of specific approaches. Now if we could have this type of comparison with the proximal algorithms, it would be great. While I see that they provide some theoretical results on convergence, do we know for a fact that an algorithm based on proximity operators is fast? (Reference: Applying the Proximal Point Algorithm to a Non Negative Basis Pursuit Denoising model by Francois Malgouyres and Tieyong Zeng. The attendant software is here.)

    Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M. Phoenix on Sol 25.

    Saturday, June 21, 2008

    CS Community: Translating and Explaining Compressed Sensing to Strangers.

    If you recall this entry (Traduction de Compressed Sensing en Francais), a discussion with Emmanuel Candès led to a French translation for CS: Acquisition Comprimée. I asked Andriyan Suksmono how to say it in Indonesian, and he responded:
    So, I translate CS to "penginderaan kompresif", where "penginderaan" means "sensing" and, of course, "kompresif" means "compressive" in English. Additionally, "imaging" translates to "pencitraan".
    When I asked Jort Gemmeke, this is what he responded:
    Hmm the short answer...don't translate :-s
    Usually with (technical) terms like that, we dutch people/scientists don't even bother translating it :)

    However, if I would have to, I'd say something like "compressed/compressive
    sensing" = "gecomprimeerd waarnemen", maybe even "samengevat waarnemen".
    And "Compressed imaging" = "gecomprimeerd/samengevat afbeelden".

    I have seen the same thing in French, i.e. the desire to keep the English term as is. This can be seen as a need for non-English-speaking scientific communities to have an impact and be well referenced in English publications: using the same keyword ensures that they are cited appropriately. However, the point of this exercise has more to do with using the right wording when describing the subject to specialized journalists and to people who are either not in the field or not technical. Since the term ends up being translated automatically in tech publications anyway, the translation might as well get some guidance:

    En: Compressed Sensing/Compressive Sensing *** Compressive Imaging

    Fr: Acquisition Comprimée *** Imagerie Comprimée (?)

    Id: Penginderaan Kompresif *** Pencitraan Kompresif

    Dutch: gecomprimeerd waarnemen / samengevat waarnemen *** gecomprimeerd/samengevat afbeelden

    I look forward to translations in other languages and will set up a page gathering these contributions. I can already see the subject being discussed in other languages: Japanese (or here), Vietnamese,...

    On a related note, how do you describe/explain Compressed Sensing to people in your family (grandparents,...), knowing that most people don't even know much about transform coding? How about: CS is about sensing and compressing information at the same time? However, this doesn't capture the concept of sparsity.

    CS: k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI, Radial single-shot STEAM MRI

    It now seems that we understand what we see with MRI or fMRI when imaging the brain: astrocyte activity (more here). I am personally surprised it took that long. Talking about fMRI, here is a new and, I think, important paper providing some insight into how compressed sensing can handle the time dimension. H. Jung, Kyunghyun Sung, Krishna Nayak, Eung Yeop Kim, and Jong Chul Ye just released k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI. The abstract reads:

    A model-based dynamic MRI called k-t BLAST/SENSE has drawn significant attention from the MR imaging community due to its improved spatio-temporal resolution. Recently, we showed that the k-t BLAST/SENSE corresponds to the special case of a new dynamic MRI algorithm called k-t FOCUSS that is optimal from a compressed sensing perspective. The main contribution of this paper is an extension of k-t FOCUSS to a more general framework with prediction and residual encoding, where the prediction provides an initial estimate and the residual encoding takes care of the remaining residual signals. Two prediction methods, RIGR and motion estimation/compensation scheme, are proposed, which significantly sparsify the residual signals. Then, using a more sophisticated random sampling pattern and optimized temporal transform, the residual signal can be effectively estimated from a very small number of k-t samples. Experimental results show that excellent reconstruction can be achieved even from severely limited k-t samples without aliasing artifacts.
    MRI is so advanced in the area of undersampling that several techniques already look at the time dimension. With video we have barely touched the subject in such a thorough manner (the only ones I could find are the following: Vladimir Stankovic, Lina Stankovic, and Samuel Cheng on Compressive video sampling, Compressive Coded Aperture Video Reconstruction by Roummel Marcia and Rebecca Willett).
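    For readers unfamiliar with the FOCUSS part of the name: it is an iteratively reweighted minimum-norm algorithm (Gorodnitsky & Rao), which k-t FOCUSS extends to the k-t domain. A minimal sketch of the basic static iteration, my own Python/NumPy illustration with the standard half-power weighting (not the k-t extension in the paper):

```python
import numpy as np

def focuss(A, b, n_iter=50):
    # Basic FOCUSS: reweighted minimum-norm solutions. The weights
    # W = diag(|x|^(1/2)), taken from the previous iterate, progressively
    # concentrate the solution on a sparse support.
    x = np.linalg.pinv(A) @ b                # start from the minimum-norm solution
    for _ in range(n_iter):
        W = np.diag(np.sqrt(np.abs(x)))      # half-power reweighting
        x = W @ (np.linalg.pinv(A @ W) @ b)  # weighted minimum-norm update
    return x
```

    The k-t FOCUSS paper wraps such an iteration around a prediction/residual-encoding scheme, which is what makes the residual signal sparse enough to recover from few k-t samples.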


    Also of interest are the following papers: Radial single-shot STEAM MRI by Tobias Block, Jens Frahm. The abstract reads:

    Rapid MR imaging using the stimulated echo acquisition mode (STEAM) technique yields single-shot images without any sensitivity to resonance offset effects. However, the absence of susceptibility-induced signal voids or geometric distortions is at the expense of a somewhat lower signal-to-noise ratio than EPI. As a consequence, the achievable spatial resolution is limited when using conventional Fourier encoding. To overcome the problem, this study combined single-shot STEAM MRI with radial encoding. This approach exploits the efficient undersampling properties of radial trajectories with use of a previously developed iterative image reconstruction method that compensates for the incomplete data by incorporating a priori knowledge. Experimental results for a phantom and human brain in vivo demonstrate that radial single-shot STEAM MRI may exceed the resolution obtainable by a comparable Cartesian acquisition by a factor of four.

    While the paper is not available, a poster with the same title is here. Many other interesting papers can be viewed on Tobias Block's publication page, such as this one on jointly estimating coil sensitivities and image content: Martin Uecker, Thorsten Hohage, Kai Tobias Block, and Jens Frahm, Image Reconstruction by Regularized Nonlinear Inversion - Joint Estimation of Coil Sensitivities and Image Content. The abstract reads:

    The use of parallel imaging for scan time reduction in MRI faces problems with image degradation when using GRAPPA or SENSE for high acceleration factors. While an inherent loss of SNR in parallel MRI is inevitable due to the reduced measurement time, the sensitivity to image artifacts that result from severe undersampling can be ameliorated by alternative reconstruction methods. While the introduction of GRAPPA and SENSE extended MRI reconstructions from a simple orthogonal transformation (Fourier transform) to the inversion of an ill-conditioned linear system, the next logical step is the use of a nonlinear inversion. Here, a respective algorithm based on a Newton-type method with appropriate regularization terms is demonstrated to improve the performance of autocalibrating parallel MRI, mainly due to a better estimation of the coil sensitivity profiles. The approach yields images with considerably reduced artifacts for high acceleration factors and/or a low number of reference lines.


    Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M. Sol 24, Dodo trench; the white material is disappearing before our eyes and they think it is ice.

    Thursday, June 19, 2008

    CS: SIAM IS, EUSIPCO, Papers found on the Internets, New Rice entries and a Video.

    In a previous entry, I forgot to mention the following abstract at the SIAM IS08 meeting of relevance to Compressed Sensing:

    MS9
    Semismooth Newton and Active Set Methods for Sparse Reconstruction

    The problem of sparse reconstruction by Basis Pursuit is equivalent to regularization of inverse problems with sparsity constraints. We will shortly describe regularizing properties and then employ a semismooth version of Newton’s method to solve the related non-smooth minimization
    problem. This algorithm can be formulated as an active set method and we prove its local superlinear convergence. Besides this local result, the algorithm is—in some cases—robust to the initial value. Moreover, we discuss globalization strategies of the algorithms.

    Dirk Lorenz
    Center for Industrial Mathematics
    University of Bremen

    Roland Griesse
    Technical University Chemnitz

    The EUSIPCO meeting is on August 25-29th. Thanks to Laurent Duval, I found out the program is out, and here are some of the presentations that seem relevant to Compressed Sensing. Some of these papers have already been featured here:

    The LNLA meeting adjacent to the EUSIPCO meeting also has a program out but I could not see anything related to CS.

    Out of the ones I have not covered yet but that are available from the authors' sites, we have:

    Dictionary identifiability from few training samples by Remi Gribonval and Karin Schnass. The abstract reads:

    This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via l_1 minimisation. The problem is to identify a dictionary from a set of training samples Y knowing that Y = AX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow to recover any orthonormal basis (ONB) as a local minimum of an l_1 minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the ONB with high probability, for a number of training samples which essentially grows linearly with the signal dimension.
    Under-determined source separation via mixed-norm regularized minimization by M. Kowalski, E. Vincent and Remi Gribonval. The abstract reads:

    We consider the problem of extracting the source signals from an under-determined convolutive mixture assuming that the mixing filters are known. We wish to exploit the sparsity and approximate disjointness of the time-frequency representations of the sources. However, classical time-frequency masking techniques cannot be directly applied due to the convolutive nature of the mixture. To address this problem, we first formulate it as the minimization of a functional combining a classical l_2 discrepancy term between the observed mixture and the mixture reconstructed from the estimated sources and a sparse regularization term defined in terms of mixed l_2/l_1 norms of source coefficients in a time-frequency domain. The minimum of the functional is then obtained by a thresholded Landweber iteration algorithm. Preliminary results are discussed for two synthetic audio mixtures.
    As you may recall from this recent entry, the paper by Adam Zelinski, Lawrence L. Wald, Kawin Setsompop, Vivek Goyal and Elfar Adalsteinsson entitled Sparsity-Enforced Slice-Selective MRI RF Excitation Pulse Design had a similar mixed l_1/l_2 norm in the regularization.

    There is also the potentially important Shift-invariant dictionary learning for sparse representations: extending K-SVD by Boris Mailhé, Sylvain Lesage, Remi Gribonval and Frederic Bimbot, Pierre Vandergheynst. The abstract reads:
    Shift-invariant dictionaries are generated by taking all the possible shifts of a few short patterns. They are helpful to represent long signals where the same pattern can appear several times at different positions. We present an algorithm that learns shift invariant dictionaries from long training signals. This algorithm is an extension of K-SVD. It alternates a sparse decomposition step and a dictionary update step. The update is more difficult in the shift-invariant case because of occurrences of the same pattern that overlap. We propose and evaluate an unbiased extension of the method used in K-SVD, i.e. a method able to exactly retrieve the original dictionary in a noiseless case.
    Remi Gribonval told me offline that this new technique will be made available at some point in the future.

    Finally, two papers from Crete:

    Compressed Sensing of Audio Signals Using Multiple Sensors by Anthony Griffin and Panagiotis Tsakalides. The abstract reads:
    Compressed sensing is an attractive compression scheme due to its universality and lack of complexity on the sensor side. In this paper we present a study on compressed sensing of real, non-sparse, audio signals. We investigate the performance of different bases and reconstruction algorithms. We then explore the performance of multi-sensor compressed sensing of audio signals and present a novel scheme to provide improved performance over standard reconstruction algorithms. We then present simulations and measured results of a new algorithm to perform efficient detection and estimation in a sensor network that is used to track the location of a subject wearing a tracking device, which periodically transmits a very sparse audio signal. We show that our algorithm can dramatically reduce the number of transmissions in such a sensor network.
    Poster Abstract: Compressed Sensing of Audio Signals in a Wireless Sensor Network by Anthony Griffin and Panagiotis Tsakalides. The abstract reads:
    Compressed sensing is an attractive compression scheme due to its universality and lack of complexity on the sensor side. In this work we demonstrate how it could be used in a wireless sensor network. We consider a sensor network that tracks the location of a subject wearing a device that periodically transmits an audio signal. Through simulations and measurements of a simple system, we illustrate that dramatic compression can be achieved.
    Found also on the Internets:
    Weighted Superimposed Codes and Constrained Integer Compressed Sensing by Wei Dai and Olgica Milenkovic. The abstract reads:
    We introduce a new family of codes, termed weighted superimposed codes (WSCs). This family generalizes the class of Euclidean superimposed codes (ESCs), used in multiuser identification systems. WSCs allow for discriminating all bounded, integer-valued linear combinations of real-valued codewords that satisfy prescribed norm and non-negativity constraints. By design, WSCs are inherently noise tolerant. Therefore, these codes can be seen as special instances of robust compressed sensing schemes. The main results of the paper are lower and upper bounds on the largest achievable code rates of several classes of WSCs. These bounds suggest that with the codeword and weighting vector constraints at hand, one can improve the code rates achievable by standard compressive sensing.
    and Compressive Sampling of Binary Images by Vladimir Stankovic, Lina Stankovic, and Samuel Cheng. The abstract reads:
    Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to reduce the sampling rate of binary images. A system is proposed whereby the image is split into non-overlapping blocks of equal size and compressive sampling is performed on selected blocks only using the orthogonal matching pursuit technique. The remaining blocks are sampled fully. This way, the complexity and the required sampling time is reduced since the orthogonal matching pursuit operates on a smaller number of samples, and at the same time local sparsity within an image is exploited. Our simulation results show more than 20% saving in acquisition for several binary images.
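    Since the paper's reconstruction engine is orthogonal matching pursuit, a minimal sketch of that greedy algorithm may help (my own Python/NumPy illustration, not the authors' code):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then re-fit by least squares on the chosen support.
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

    Running OMP on small blocks, as the paper proposes, keeps the least-squares subproblems tiny, which is where the complexity saving comes from.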
    Finally, there are some new papers featured at the Rice Repository Page:

    Shamgar Gurevich, Ronny Hadani, and Nir Sochen, On some deterministic dictionaries supporting sparsity. The abstract reads:
    We describe a new construction of an incoherent dictionary, referred to as the oscillator dictionary, which is based on considerations in the representation theory of finite groups. The oscillator dictionary consists of approximately p^5 unit vectors in a Hilbert space of dimension p, whose pairwise inner products have magnitude of at most 4/sqrt(p). An explicit algorithm to construct a large portion of the oscillator dictionary is presented.
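    The figure of merit here is the mutual coherence of the dictionary: the oscillator dictionary keeps it at 4/sqrt(p), within a constant of the Welch lower bound. Checking how any candidate dictionary fares is straightforward (my own sketch, not the authors' construction):

```python
import numpy as np

def mutual_coherence(D):
    # Largest absolute inner product between distinct unit-normalized columns.
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(n, m):
    # Lower bound on the coherence of any m-atom dictionary in dimension n.
    return np.sqrt((m - n) / (n * (m - 1)))
```

    Incoherence is what makes such deterministic dictionaries usable for sparse approximation guarantees.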
    Patrick Combettes and Valérie Wajs, Signal recovery by proximal forward-backward splitting. The abstract reads:
    We show that various inverse problems in signal recovery can be formulated as the
    generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems. Recent results on monotone operator splitting methods are applied to establish the convergence of a forward-backward algorithm to solve the generic problem. In turn, we recover, extend, and provide a simplified analysis for a variety of existing iterative methods. Applications to geometry/texture image decomposition schemes are also discussed. A novelty of our framework is to use extensively the notion of a proximity operator, which was introduced by Moreau in the 1960s.
    A variational formulation for frame-based inverse problems by Caroline Chaux, Patrick Combettes, Jean-Christophe Pesquet, and Valérie Wajs. The abstract reads:
    A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.

    and finally, Proximal thresholding algorithm for minimization over orthonormal bases by Patrick Combettes and Jean-Christophe Pesquet. The abstract reads:
    The notion of soft thresholding plays a central role in problems from various areas
    of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.

    There is also a video from Integration of Sensing and Processing (December 05-09, 2005): Robert Nowak on Active learning vs. compressed sensing.

    All these links are also featured on The CS List. Because of its date, the video is at the end of the CS video page.

    Credit photo: NASA/JPL-Caltech/University of Arizona/Texas A&M. Sol 22 images from Phoenix: Snow White trench at Wonderland.

    Wednesday, June 18, 2008

    L'Appel du 18 juin 1940 in PowerPoint.

    [This entry has nothing to do with CS; all Compressed Sensing entries can be found here. This post tries to fit a famous French text into a PowerPoint presentation. In English, a very famous example is the one given by Peter Norvig in the Gettysburg PowerPoint presentation. There was no equivalent in French, so a friend of mine decided to write one for the 68th anniversary of the text.]

    There are many incarnations of Evil in the world, but one of the most everyday ones is the outrageous use of PowerPoint, particularly in engineering. As engineers, it is easy for us to see examples of it every day, but it is hard to convey the reality of the thing to those around us. Jean-Louis Repetto, a friend, decided to open our eyes with the text of the Appeal of 18 June 1940, recast as a PowerPoint presentation. Jean-Louis also tells me that:
    The full text of the original message has been kept in the comments area, and only the order of the paragraphs was changed (for a more dynamic presentation, of course). As a note: the presentation was made using the presentation-creation wizard of Microsoft Office Standard Edition 2003. Nothing was done to tone down the ugliness of the proposed shapes and colors.
    The presentation can be downloaded here and is distributed under a non-restrictive license (Paternité 2.0 France). The address is: http://igorcarron.googlepages.com/Appel_du_18_Juin_1940.ppt. I have also put it in Googledocs format, but some elements are lost (such as the original text in the comments).



    Tuesday, June 17, 2008

    CS: Bregman Iterative Algorithms, CVXOPT and SPGL1

    In this webpage entitled Bregman Iterative Algorithms for Constrained Compressed Sensing and Related Problems, Wotao Yin, Stanley Osher, Donald Goldfarb, Jerome Darbon are now making their code available. You need to request it from the site (more specifically here). To recall, the code description is:

    This is a simple and extremely efficient iterative method for solving the Basis Pursuit problem

    min ||x||_1, subject to Ax = b,

    which is used in compressed sensing. This method is based on Bregman iterative regularization and it gives a very accurate solution after solving only a very small number of instances of the unconstrained problem

    min μ||x||_1 + (1/2)||Ax - f^k||_2^2,

    for a given matrix A and vector f^k. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^T can be computed by fast transforms.

    The current expectation is that this algorithm is much faster than linear programming. At some point, it needs to be benchmarked against other codes.
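    The add-back structure of the Bregman iteration is simple enough to sketch (my own Python/NumPy illustration, not the authors' released code; the inner solver here is plain ISTA, whereas the authors use much faster subproblem solvers):

```python
import numpy as np

def _shrink(v, t):
    # Soft thresholding: proximity operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def _solve_subproblem(A, f, mu, n_iter=300):
    # Inner solver for min mu*||x||_1 + 0.5*||Ax - f||^2 (plain ISTA,
    # for illustration only).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = _shrink(x - A.T @ (A @ x - f) / L, mu / L)
    return x

def bregman_bp(A, b, mu=1.0, n_outer=15):
    # Bregman iteration for min ||x||_1 s.t. Ax = b: repeatedly solve the
    # unconstrained problem, adding the residual back into the data term.
    f = np.zeros_like(b)
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        f = f + (b - A @ x)              # add back the current residual
        x = _solve_subproblem(A, f, mu)
    return x
```

    The striking property reported by the authors is that only a handful of such outer iterations are needed to reach an accurate basis pursuit solution.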

    In another area, nuclear norm minimization, featured recently here and there, there is a presentation entitled Interior-point methods for nuclear norm minimization by Zhang Liu and Lieven Vandenberghe, presented at HPOPT 2008.

    They feature CVXOPT described as :

    CVXOPT is a free software package for convex optimization based on the Python programming language. It can be used with the interactive Python interpreter, on the command line by executing Python scripts, or integrated in other software via Python extension modules. Its main purpose is to make the development of software for convex optimization applications straightforward by building on Python's extensive standard library and on the strengths of Python as a high-level programming language.
    SPGL1 is now at version 1.5. From the webpage:

    SPGL1 is a Matlab solver for large-scale one-norm regularized least squares. It is designed to solve any of the following three problems:

    Basis pursuit denoise

    (BPσ)   minimize ||x||_1   subject to   ||Ax - b||_2 ≤ σ

    Basis pursuit

    (BP)   minimize ||x||_1   subject to   Ax = b

    Lasso

    (LSτ)   minimize ||Ax - b||_2   subject to   ||x||_1 ≤ τ

    SPGL1 relies only on matrix-vector operations Ax and A^T y and accepts both explicit matrices and functions that evaluate these products. SPGL1 is suitable for problems that are in the complex domain.

    The theory underlying SPGL1 is described in the paper Probing the Pareto Frontier for basis pursuit solutions (pdf).
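    The Pareto-frontier approach solves a sequence of Lasso subproblems by spectral projected gradient, so the workhorse primitive is Euclidean projection onto the one-norm ball. Here is a minimal sketch of that projection (my own Python/NumPy illustration of the standard sort-based algorithm, not the SPGL1 code):

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection of v onto {x : ||x||_1 <= tau}.
    # Sort-based algorithm, O(n log n).
    if np.abs(v).sum() <= tau:
        return v.copy()                      # already feasible
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    # Largest index k with u_k above the running threshold.
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

    Each projected-gradient step for the Lasso subproblem then costs one multiply by A, one by A^T, and one such projection, which is why SPGL1 scales to large problems.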

    Sunday, June 15, 2008

    CS: CS for Multimedia Communications in Wireless Sensor Networks, Sherpa/Romeo, SIAM, FUSION and ISMRM CS centered abstracts.


    Here is a presentation for an EE class at University of Texas (Austin) by Wael Barakat and Rabih Saliba entitled Compressive Sensing for Multimedia Communications in Wireless Sensor Networks.


    If some of you are wondering about your ability to put your paper in preprint form on your webpage, you might want to check out this SHERPA/RoMEO tool. Given a journal, it gives you that journal's policy with regard to making your own material available on your site.


    With the months of July and August coming up, I expect less traffic, mostly because of the expected decrease in new preprints/publications. I also expect to see more conferences, and I will try to list the ones that have papers on Compressive Sensing. Today, there are the SIAM and FUSION meetings in July. I also wanted to list the papers presented at the International Society for Magnetic Resonance in Medicine annual meeting last May.

    The International Conferences on Information Fusion 2008 will feature one presentation with CS:

    Signal Extraction Using Compressed Sensing for Passive Radar with OFDM Signals by Christian Berger, Shengli Zhou and Peter Willett.


    There is the SIAM conference on Imaging Science on July 7-9th. The IS08 abstracts can be found here; I have listed the ones related to CS below. It should be nice to be there:

    MS6
    Image Processing with Manifold Models
    In this talk I will study the manifold structure of sets of patches extracted from natural images. The local manifold constraint is chained into a global one since the whole image traces a smooth surface on the features manifold. We detail this manifold structure for various ensembles suitable for natural images and textures modeling. These manifolds offer a low-dimensional parameterization of geometric patterns that can be found in such datasets. One can use these manifold models in order to regularize inverse problems in signal and image processing. In this framework, one searches for a smooth surface traced on the feature manifold that matches the forward measurements. In the discrete setting, the manifold can be either known analytically or estimated from exemplars using a graph structure. Numerical simulations on inpainting and compressive sampling inversion show how such manifolds models bring an improvement for datasets with geometrical features.
    Gabriel Peyre
    CEREMADE


    MS9
    Nonconvex Compressive Sensing
    We will examine theoretical and numerical results demonstrating that replacing the convex optimization problems of compressive sensing with nonconvex ones allows sparse signals to be reconstructed from many fewer measurements. Surprisingly, very simple algorithms are able to reconstruct signals successfully, despite the huge numbers of local minima. We will see striking examples of the recovery performance of these algorithms under a variety of circumstances,
    and discuss the state of the underlying theory.

    Rick Chartrand
    Los Alamos National Laboratory


    MS17
    CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Measurements
    Compressive Sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthobasis. The major algorithmic challenge is to approximate a compressible signal from noisy samples. Until recently, all provably correct reconstruction techniques have relied on large-scale optimization, which tends to be computationally burdensome. This talk describes a new iterative, greedy recovery algorithm, called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds
    on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For some cases of interest, the running time is just O(N log^2(N)), where N is the length of the signal.

    Joel Tropp
    Applied and Computational Mathematics
    Caltech

    Deanna Needell
    Mathematics, UC Davis
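    The identification/pruning structure of CoSaMP described in the abstract above can be sketched compactly (my own Python/NumPy illustration under the usual assumptions, not the authors' reference implementation):

```python
import numpy as np

def cosamp(A, y, s, n_iter=20):
    # CoSaMP: form a signal proxy from the residual, take the 2s largest
    # entries, merge with the current support, solve least squares on the
    # merged support, then prune back to the s largest coefficients.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = y - A @ x
        proxy = A.T @ residual
        omega = np.argsort(-np.abs(proxy))[:2 * s]            # identification
        support = np.union1d(omega, np.nonzero(x)[0]).astype(int)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        keep = np.argsort(-np.abs(coef))[:s]                  # pruning
        x[support[keep]] = coef[keep]
    return x
```

    Per iteration the only expensive operations are the two matrix-vector multiplies and a small least-squares solve, which is the basis of the running-time claim in the talk.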


    MS19
    Distributed Compressive Sensing with the Dirichlet Process
    In many applications, one is interested in simultaneously performing inversion of multiple CS measurements. We propose a novel multi-task compressive sensing framework based on a Bayesian formalism, where a sparseness prior is adopted. The key challenge is that not all of the measured data are necessarily appropriate for sharing when performing inversion, and one must therefore infer what sharing of data across the multiple CS measurements is appropriate.
    Toward this end, a Dirichlet process (DP) prior is employed.

    Larry Carin
    Electrical & Computer Engineering
    Duke University


    MS19
    Compressive Imaging for Increased Field-of-view Exploitation
    We consider the application of compressive imaging to the problem of wide-area persistent surveillance. There are cases under study where optical architectures have been developed which require the incorporation of compressive imaging to perform the indicated exploitation. We utilize one such architecture to show a dramatic increase in performance for wide-area persistent surveillance. This architecture is described as a field-of-view multiplexing imager
    and its relation to compressive imaging is discussed and exploited.
    Robert Muise
    Lockheed Martin


    MS19
    Difference Imaging from Linear Spatial-Domain Projections
    Using compressive optical measurements, we consider direct reconstruction of the difference between two images taken at different time instants. We first show that the linear MMSE reconstruction operator depends on the crosscorrelation between the two images. We then consider reconstruction performance when the measurements consist of a finite number of linear spatial projections of the two images. We also quantify performance when a time series of difference images is reconstructed from a series of linear projections. Various projection operators are compared.

    Nathan Goodman, Mark Neifeld, Shikhar Shikhar
    Department of Electrical and Computer Engineering
    University of Arizona




    MS19
    Manifold-based Image Understanding from Compressive Measurements
    The emerging theory of Compressive Sensing (CS) states that an image obeying a sparse model can be reconstructed from a small number of random linear measurements. In this talk we will explore manifold-based models as a generalization of sparse representations, and we will discuss
    the possible applications of these models not only in single and multi-image recovery from compressive measurements, but also in scalable inference settings, where a detection, estimation, or classification can be made using far fewer compressive measurements than would be required for recovery.

    Michael Wakin
    Department of Electrical Engineering and Computer Science
    University of Michigan


    MS23
    Seismic Imaging Meets Compressive Sampling
    Compressive sensing has led to fundamental new insights in the recovery of compressible signals from sub-Nyquist samplings. It is shown how jittered subsampling can be used to create favorable recovery conditions. Applications include mitigation of incomplete acquisitions and wavefield computations. While the former is a direct adaptation of compressive sampling, the latter application represents a new way of compressing wavefield extrapolation operators. Operators are not diagonalized but are compressively sampled reducing the computational costs.

    Felix J. Herrmann
    The University of British Columbia
    Department of Earth and Ocean Sciences
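    The jittered subsampling mentioned in the abstract picks one sample per cell of a coarse grid at a random offset: the acquisition is randomized, but the largest gap between samples stays bounded. A minimal sketch (my own illustration; the function name and interface are hypothetical):

```python
import numpy as np

def jittered_indices(n, factor, rng=None):
    # One sample per cell of size `factor`, at a random offset within the
    # cell. Unlike uniform random subsampling, this bounds the largest gap
    # between consecutive samples by 2*factor - 1.
    rng = np.random.default_rng() if rng is None else rng
    starts = np.arange(0, n, factor)
    offsets = rng.integers(0, factor, size=len(starts))
    return np.minimum(starts + offsets, n - 1)
```

    Bounding the maximum gap is what creates the "favorable recovery conditions" for interpolating seismic wavefields from incomplete acquisitions.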


    MS32
    Fast BV + Texture Decomposition
    In this talk based on a joint work with Triet Le and Luminita Vese we will review former work on the problem of decomposing a digital image into an oscillatory part and a BV part. This problem was posed by Yves Meyer as a variational problem, and is also (partially) addressed by compressive sensing techniques. We shall discuss several implementations and variants, including a fast one, which gives a nonlinear version of the classical low-pass, high-pass
    decomposition.

    Antoni Buades
    MAP5, Universite Paris Descartes

    Jean-Michel Morel
    ENS Cachan


    MS40
    Adaptive Non-local Transforms for Image/video Denoising, Restoration, and Enhancement, and for Compressive Sensing Image Reconstruction
    The talk is devoted to a powerful and effective extension of the non-local filtering paradigm. We replace the conventional non-parametric image modeling based on the sample mean or weighted mean by a transform-domain representation in which multiple patches from different spatial locations are collectively represented in higher-dimensional data structures. This corresponds to a non-local image transform which is adaptive with respect to the image content. We illustrate this approach and give an overview of its successful application to several image processing problems. The common point is enforcing sparsity within this overcomplete non-local representation. In most of the presented examples, the BM3D (Block-Matching and 3D filtering) denoising algorithm [Dabov et al., IEEE TIP 16(9) 2007] is used as a pragmatic way to generate the adaptive representation and then to enforce sparsity via shrinkage. Besides the basic image and video denoising problem, we will address image restoration (deblurring), image enhancement (sharpening), and compressive sensing image reconstruction (inverse tomography, image upsampling/zooming, and image interpolation). The presented methods represent the state of the art in terms of both objective and subjective visual quality. This quality is achieved at a competitive computational cost. We conclude the talk by highlighting a few open problems. The material of the talk is joint work with Kostadin Dabov, Aram Danielyan, Vladimir Katkovnik, and Karen Egiazarian.

    Alessandro Foi
    Tampere University of Technology
    Tampere, Finland


    MS42
    Compressive Sensing for Multi-Static Radar Imaging: Theory and Numerical Experiments
    The compressive-sensing framework assumes the scene is relatively sparse in some (unknown) basis and the measurements are made in another basis with the appropriate properties. We address (i) the discrete sampling approximation of the image space, (ii) the satisfaction of the "restricted isometry property" as a function of waveform design and collection geometry, and (iii) robustness to clutter/noise. Finally, we present some illustrative numerical experiments related to this theory.
    Mark Stuff, Brian Thelen, Nikola Subotic, Kyle Cooper,
    William Buller
    MTRI


    MS46
    New Results in Sparse Representation
    Sparse image representation benefits tremendously from L1-based methodology. Recently, a new modification that utilizes reweighting was proposed in both the statistics and compressive sensing communities. We report theoretical and experimental results in this line of research.
    Xiaoming Huo
    Georgia Institute of Technology


    MS48
    Image Reconstruction from Incomplete Data in X-ray Projection-based Tomographic Imaging
    In the field of compressive sensing there has been much recent progress in sparse image recovery from sparse Fourier transform data, using L1-norm or total variation (TV)
    minimization of the estimated image. These theoretical advances have implications for tomographic image reconstruction such as Computed Tomography (CT). We have
    recently been investigating the possibility of image reconstruction from sparse or incomplete x-ray projection data. We will report on recent results for image reconstruction
    in CT and tomosynthesis.
    Emil Sidky
    Department of Radiology
    The University of Chicago

    Xiaochuan Pan
    The University of Chicago
    Department of Radiology


    PP0
    Estimating Tumor Bounds in Bioluminescence Tomography
    Bioluminescence tomography is an emerging small animal imaging modality that has many important applications in biomedical research, from drug development to studies of tumor metastasis. The prototypical model for this inverse source problem is based on the time-independent diffusion equation. While a general inversion algorithm is not feasible, the task of estimating a tumor volume can be approached using principles of compressive sensing and
    Ikehata's enclosure method.
    Matthew A. Lewis
    UT Southwestern Medical Center at Dallas

    PP0
    Orthant-Wise Gradient Projection Method for Compressive Sampling
    The compressive sampling problem can be formulated as an l1-regularized least-squares problem. The objective function can be reformulated orthant-wise as a quadratic approximation. We efficiently apply an orthant-wise gradient projection method to solve this optimization problem, and prove its convergence to the global optimal solution. Computational experiments demonstrate that the proposed method outperforms other methods in the previous literature.
    Qiu Wu
    Electrical and Computer Engineering
    The University of Texas at Austin
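    As a rough illustration of the orthant-wise idea, the l1-regularized least-squares objective can be attacked by splitting u into its positive and negative parts and running projected gradient descent on the nonnegative orthant. The sketch below follows this generic split (in the spirit of GPSR-type solvers); it is not the authors' algorithm, and the step-size rule is one conservative choice:

```python
import numpy as np

def l1_ls_grad_proj(A, f, mu, tau=None, iters=500):
    """Projected gradient for min_u 0.5||Au - f||^2 + mu*||u||_1,
    via the orthant split u = p - q with p, q >= 0 (a generic sketch,
    not the authors' orthant-wise method)."""
    m, n = A.shape
    if tau is None:
        # Lipschitz constant of the split gradient is 2*sigma_max(A)^2
        tau = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    p = np.zeros(n)
    q = np.zeros(n)
    for _ in range(iters):
        r = A @ (p - q) - f                        # residual
        g = A.T @ r                                # gradient of smooth term
        p = np.maximum(0.0, p - tau * (g + mu))    # project onto p >= 0
        q = np.maximum(0.0, q - tau * (-g + mu))   # project onto q >= 0
    return p - q
```

With the step size at most the reciprocal of the Lipschitz constant of the split gradient, each iteration cannot increase the objective.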



    IC6
    Foundations of Compressed Sensing
    We will give an introduction to the emerging field of Compressed Sensing. The main objective of this new paradigm in signal processing is to capture a signal with the smallest number of measurements. We shall emphasize ways to decide what are the best sensing systems and describe how to construct near optimal systems.

    Ronald DeVore
    Department of Mathematics
    University of South Carolina


    MS1
    Sparse Approximations in Image Processing
    The search for the "best basis" (or even a good enough one) in which to approximate, to denoise, or, more generally, to analyze images has led to a flurry of activity in the construction of orthonormal, bi-orthogonal, and, finally, redundant or overcomplete dictionaries. With these constructions came heuristic algorithms for how best to use them in the nonlinear approximation and analysis of images. We will survey the combined efforts of the engineering, statistics, mathematics, and computer science communities to put these heuristics on solid theoretical footing, as well as their performance in practice. The final part of the tutorial will show how a number of these techniques are used in compressed sensing.

    Anna Gilbert
    Department of Mathematics
    University of Michigan


    MS6
    Image Super-Resolution via Sparse Representation
    In this talk, we address the problem of generating a super-resolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as a downsampled version of a high-resolution image, whose patches are assumed to have a sparse representation with respect to an over-complete dictionary of prototype signal atoms. The principle of compressed sensing ensures that under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. We will demonstrate the effectiveness of sparsity as a prior for regularizing the otherwise ill-posed super-resolution problem. We further show that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serves as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.

    Jianchao Yang, John Wright, Yi Ma
    Department of Electrical and Computer Engineering
    University of Illinois at Urbana-Champaign



    MS9
    Separable Approximation for Sparse Reconstruction
    We discuss practical algorithms for large scale unconstrained optimization problems in which the objective consists of a smooth data-fitting term added to a regularization term, which is often nonsmooth and separable. We discuss several applications of the method, highlighting why it is
    particularly effective on compressed sensing problems.

    Stephen J. Wright
    University of Wisconsin
    Dept. of Computer Sciences

    Mario T. Figueiredo
    Instituto Superior Tecnico
    Instituto de Telecomunicacoes


    MS11
    Selecting Good Fourier Measurements for Compressed Sensing
    We consider the problem of constructing good Fourier compressed sensing matrices by selecting rows of the DFT matrix. While random selection produces good matrices with high probability, particular selections can result in failure to recover certain sparse signals. Using efficiently computable bounds on the admissible signal sparsity for a given sensing matrix, we propose a method to find universally good deterministic Fourier compressed sensing matrices, without compromising average performance.
    Kiryung Lee
    University of Illinois at Urbana-Champaign
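    The underlying object is easy to write down: a Fourier sensing matrix is simply a subset of rows of the unitary DFT matrix. A minimal sketch, using random row selection as the baseline the abstract mentions (the deterministic selection criterion itself is not reproduced here):

```python
import numpy as np

def partial_dft(n, rows):
    """Return the partial DFT sensing matrix formed by the given rows
    of the n x n unitary DFT matrix."""
    k = np.asarray(rows)[:, None]          # selected row indices (column vector)
    j = np.arange(n)[None, :]              # column indices (row vector)
    return np.exp(-2j * np.pi * k * j / n) / np.sqrt(n)

# Random row selection: good with high probability, per the abstract.
rng = np.random.default_rng(1)
rows = rng.choice(64, size=16, replace=False)
Phi = partial_dft(64, rows)
```

Because the full DFT matrix is unitary, any set of distinct selected rows remains orthonormal; the question addressed in the talk is which selections also behave well on all sparse signals.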


    MS11
    Analog and Digital Sparse Approximation with Applications to Compressed Sensing
    In the compressed sensing (CS) framework, the sampling phase is resource constrained, taking a small number of linear samples. However, the price paid for this simplicity is a computationally expensive reconstruction algorithm that forms a bottleneck in using the sensed data. We will present a sparse approximation framework that both unifies many of the recently proposed digital algorithms and introduces novel analog architectures that solve the same problems. We will demonstrate how these analog systems solve CS recovery problems orders of magnitude faster than current digital systems, at speeds limited only by the underlying hardware components.
    Chris Rozell
    Georgia Institute of Technology



    MS11
    Advances in Compressed Sensing for MRI
    Recent developments in Compressive Sensing (CS) theory offer the potential for significant advances in diagnostic medical imaging, and especially magnetic resonance imaging (MRI). Here, we review some of the key works on the application of CS theory and its extensions to both single
    and multi-coil or parallel MR imaging and discuss practical benefits such as improved patient comfort, reduced susceptibility to physiological motion, and increased clinical throughput.
    Joshua D. Trzasko
    Mayo Clinic



    MS17
    Fourier Sketching: Low Complexity Computation of Sparse DFT
    I will present a greedy, randomized algorithm that computes the DFT of a length-N signal when only S coefficients are nonzero but have unknown locations, in the spirit of recent work by Gilbert, Strauss, and co-workers. The complexity is empirically sublinear and scales like S(log N)^2. While some elements of the algorithm are taken directly from Gilbert et al., some steps, such as the coefficient estimation, are novel. Applications include fast decoding for compressed sensing, without even formulating an l1 problem or explicitly forming the measurement matrix.
    Laurent Demanet
    Mathematics, Stanford


    MS17
    One Sketch for All: Fast Algorithms for Compressed Sensing
    Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and
    process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements: (1) The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction. (2) The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length. (3) The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length. (4) The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound. Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata. This talk will present and update work that originally appeared in ACM STOC 2007. It is joint work with Anna C. Gilbert, Joel A. Tropp, and Roman Vershynin.

    Martin Strauss
    Mathematics and EECS
    University of Michigan

    MS28
    Bregman Iterative Algorithms for l1-Minimization with Applications to Compressed Sensing
    We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{ ||u||_1 : Au = f, u ∈ R^n }, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u ∈ R^n} μ||u||_1 + (1/2)||Au − f^k||_2^2 for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps. We were able to solve huge instances of compressed sensing problems quickly on a standard PC. We demonstrate the effectiveness of the algorithm by experiments in compressed MR imaging.
    Wotao Yin
    Rice University

    Stanley J. Osher
    University of California
    Department of Mathematics

    Donald Goldfarb
    Columbia University

    Jerome Darbon
    Mathematics
    UCLA
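    The flavor of these methods can be conveyed by the linearized variant of Bregman iteration, the "two lines of MATLAB" scheme attributed elsewhere in this program to Osher, Dong, Mao and Yin. The sketch below is a generic rendering under one conservative step-size choice, not the authors' code; `shrink` is the usual soft-thresholding operator:

```python
import numpy as np

def shrink(x, mu):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu, iters=3000):
    """Linearized Bregman iteration for basis pursuit,
    min ||u||_1 subject to Au = f (a sketch, not the authors' exact code)."""
    delta = 1.0 / np.linalg.norm(A @ A.T, 2)   # keeps delta*||A A^T|| <= 1
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (f - A @ u)    # accumulate the residual (Bregman update)
        u = delta * shrink(v, mu) # shrinkage step
    return u
```

In the limit this converges to a regularized basis pursuit solution with Au = f; the iteration body really is just the two update lines.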


    MS36
    Image Analysis Through Unknown Random Compressed Sensing Using Diffusion Geometry
    We indicate that diffusion geometric analysis can be used to intrinsically process images which have been acquired as randomly encoded patches. In particular, we show that for hyperspectral imagery the results are intrinsic and independent of the choice of randomized spectra. Diffusion geometry considers the database of all measurements and proceeds to organize them, enabling compressed sensing through local randomizers.

    Ronald Coifman
    Yale University
    Department of Computer Science


    MS38
    Monotone Operator Splitting and Fast Solutions to Inverse Problems with Sparse Representations
    This work focuses on several constrained optimization problems involved in sparse solutions of linear inverse problems. We formalize all these problems within the same framework of convex optimization theory, and invoke tools from convex analysis (proximity operators) and maximal monotone operator splitting. We characterize all these optimization problems, and to solve them, we propose fast iterative splitting algorithms, namely forward-backward and Peaceman/Douglas-Rachford splitting iterations. This framework includes some previously proposed algorithms as a special case. With non-differentiable sparsity-promoting penalties, the proposed algorithms are essentially based on iterative shrinkage. Furthermore, the computational burden of all our algorithms is one application of fast analysis and synthesis operators at each iteration. This makes them very competitive for large-scale problems. Applications to several inverse problems such as denoising, super-resolution and compressed sensing are also reported to demonstrate the wide range of applicability of our approach.

    Jalal Fadili
    CNRS, Univ. of Caen, France
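    Specialized to an l1 penalty, whose proximity operator is soft-thresholding, the forward-backward iteration described above reduces to iterative shrinkage. A minimal generic sketch with one standard step-size choice (not the speaker's implementation):

```python
import numpy as np

def forward_backward_l1(A, f, lam, iters=500):
    """Forward-backward splitting for min_u 0.5||Au - f||^2 + lam*||u||_1.
    Forward step: explicit gradient descent on the smooth data-fitting term.
    Backward step: proximity operator of lam*||.||_1 (soft-thresholding)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1 / Lipschitz(gradient)
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        z = u - t * (A.T @ (A @ u - f))                        # forward step
        u = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # backward (prox)
    return u
```

Note the computational profile the abstract emphasizes: each iteration costs one application of A and one of its adjoint, plus a cheap componentwise shrinkage.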



    MS38
    Bregman Iterative Algorithms for Compressed Sensing
    Bregman iterative regularization (1967) was introduced by Osher, Burger, Goldfarb, Xu and Yin as a device for improving TV-based image restoration (2004) and was used by Xu and Osher in (2006) to analyze and improve wavelet shrinkage. In recent work by Yin, Osher, Goldfarb and Darbon, we devised simple and extremely efficient methods for solving the basis pursuit problem, which is used in compressed sensing. A linearized version of Bregman iteration was also developed by Osher, Dong, Mao and Yin. This requires two lines of MATLAB code and is remarkably efficient. This means we rapidly and easily solve the problem: minimize the L1 norm of u so that Au = f, for a given k by n matrix A with k << n.


    A Taste of Compressed Sensing
    Compressed sensing seeks to capture a signal/image with as few measurements as possible. We shall give a brief discussion of this fascinating field with the emphasis on the accuracy of representation in the compressed sensing setting.

    Ronald DeVore
    Department of Mathematics
    University of South Carolina.


    Here are the papers that were presented at the International Society for Magnetic Resonance in Medicine annual meeting last May:

    MRA: Contrast Without the Agent
    726. Improving Non-Contrast Enhanced SSFP Angiography with Compressed Sensing

    Tolga Çukur1, Michael Lustig1, Dwight Georger Nishimura1

    1Stanford University, Stanford, California , USA

    Flow-independent angiography offers the ability to produce vessel images without contrast agents. 3D magnetization-prepared balanced SSFP can be used to acquire these angiograms, where the phase encodes are interleaved and preparation is repeated prior to the start of each interleave. However, frequent repetition of the preparation significantly reduces the scan efficiency. The sparsity of the angiograms allows for the use of compressed sensing to undersample the phase encodes and save scan time. These savings can be used to prepare the magnetization more often or, alternatively, to improve resolution.

    Imaging in the Post-Nyquist Era

    333. Dynamic Functional Volumetric Magnetic Resonance K-Space Inverse Imaging of Human Visual System

    Fa-Hsuan Lin1, 2, Thomas Witzel1, 3, Graham Wiggins1, Lawrence Wald1, John Belliveau1

    1Massachusetts General Hospital, Charlestown, Massachusetts, USA; 2National Taiwan University, Taipei, Taiwan; 3Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, USA

    We propose a K-space magnetic resonance Inverse Imaging (K-InI) approach to use a highly parallel radio-frequency coil array to achieve high temporal resolution MRI. K-InI solves an under-determined linear system using regularization in parallel MRI reconstruction. K-InI uses auto-calibration technique to estimate the reconstruction coefficients and it can provide coil-by-coil reconstruction to allow for more flexible combination of different channels in the coil array. We demonstrate K-InI using a 3D visual fMRI experiment to achieve 100 ms temporal resolution.

    334. High Spatial High Temporal Resolution MR-Encephalography Using Constraint Reconstruction Based on Regularization with Arbitrary Projections (COBRA)

    Thimo Grotz1, Benjamin Zahneisen1, Arsène Ella1, Jürgen Hennig1

    1University Hospital Freiburg, Freiburg, Germany

    MREG, also called inverse imaging, was introduced as a new approach to measure activation-related MR-signal changes in the brain with very high temporal resolution. We present a constraint reconstruction based on regularization with arbitrary projections to localize the activation. The results demonstrate that COBRA with a very low number of projections can be used to acquire activation maps with reasonable spatial resolution at very high temporal resolution. Signal time courses show excellent contrast-to-noise for the observed BOLD response.

    335. Quantitative 23-Sodium and 17-Oxygen MR Imaging in Human Brain at 9.4 Tesla Enhanced by Constrained K-Space Reconstruction

    Ian C. Atkinson1, Keith R. Thulborn1, Aiming Lu1, Justin Haldar2, X J. Zhou1, Ted Claiborne1, Zhi-Pei Liang2

    1University of Illinois-Chicago, Chicago, Illinois, USA; 2University of Illinois-Urbana-Champaign, Urbana, Illinois, USA

    The sensitivity of ultra-high field MRI enables quantitative imaging of non-proton species such as 23-sodium and 17-oxygen. Constrained k-space reconstruction techniques can be used to improve the spatial resolution of the acquired data without compromising the ability to quantify the final image. This approach of enhanced image reconstruction combined with the improved sensitivity of high field broadens the human applications of metabolic MR imaging by minimizing otherwise long acquisition times to achieve adequate spatial resolution for the anatomy and SNR performance for quantification.

    336. Highly Undersampled 3D Golden Ratio Radial Imaging with Iterative Reconstruction

    Mariya Doneva1, Holger Eggers2, Jürgen Rahmer2, Peter Börnert2, Alfred Mertins1

    1University of Luebeck, Luebeck, Germany; 2Philips Research Europe, Hamburg, Germany

    We illustrate the feasibility of Compressed Sensing for 3D dynamic imaging using highly undersampled 3D radial acquisition with golden ratio profile ordering. Image reconstruction from a low number of measurements could be very useful for dynamic 3D imaging, to reduce the often long acquisition times and thus improve temporal resolution in 3D MRI. Using CS, the aliasing artifacts were significantly reduced and a high frame rate was achieved, allowing dynamic imaging with good temporal resolution. The described approach could be particularly useful for dynamic studies of joint motion.

    337. Three-Dimensional Compressed Sensing for Dynamic MRI

    Ali Bilgin1, 2, Ted P. Trouard1, Maria I. Altbach1, Natarajan Raghunand1

    1University of Arizona, Tucson, Arizona , USA

    Compressed Sensing (CS) theory shows that a small number of linear measurements can be sufficient to reconstruct sparse or compressible signals. We introduce a CS-theory-based method for reconstruction of time-varying radial k-space data by exploiting the spatio-temporal sparsity of Dynamic Contrast Enhanced (DCE) MRI images. The proposed method significantly reduces undersampling artifacts and can provide high temporal and spatial resolution.

    338. Constrained Compressed Sensing for Fast 3D Visualization of Active Catheters

    Carsten Oliver Schirra1, 2, Sascha Krueger3, Steffen Weiss3, Reza Razavi1, Tobias Schaeffter1, Sebastian Kozerke2

    1King's College London, London, UK; 2University and ETH Zurich, Zurich, Switzerland; 3Philips Medical Systems, Hamburg, Germany

    With standard dynamic 3D imaging methods, sufficient spatial resolution is difficult to achieve at the required temporal rates when visualizing interventional devices. Active catheters lend themselves well to undersampling methods given their confined sensitivity volume. Compressed Sensing allows exploiting the image sparseness inherent to images acquired with active catheter antennae; however, the associated iterative reconstruction algorithms are computationally expensive. In this work, the feasibility of using Compressed Sensing for accelerating 3D imaging of active catheters is investigated. Dedicated constraints are introduced, taking into account the known catheter length and position, in order to minimize the number of iterations in reconstruction.

    339. HYPR-Constrained Compressed Sensing Reconstruction for Accelerated Time Resolved Imaging

    Huimin Wu1, Walter F. Block1, Alexey A. Samsonov1

    1University of Wisconsin-Madison, Madison, USA

    Constrained reconstruction methods have been shown to produce significant accelerations to date, but suffer some temporal inaccuracy when vessels with different temporal behaviors are nearby or as the sparsity of the image volume decreases. We present simulated comparisons of a single-pass reconstruction method (Highly constrained Projection Local Reconstruction, or HYPR LR) and an iterative constrained reconstruction method termed HYPR Reconstruction by Iterative Estimation (HYPRIT). We demonstrate increased temporal accuracy for HYPRIT relative to HYPR LR, but also demonstrate how HYPRIT’s performance improves when using the HYPR LR image as a constraining image. Finally, rapid CE-MRA capabilities are demonstrated.

    340. A Comparison of L1 and L2 Norms as Temporal Constraints for Reconstruction of Undersampled Dynamic Contrast Enhanced Cardiac Scans with Respiratory Motion

    Ganesh Adluru1, 2, Edward VR DiBella1

    1University of Utah, Salt Lake City, USA

    Constrained reconstruction methods can be used to accelerate the acquisition of cardiac dynamic contrast-enhanced MRI data. The temporal constraint term is important for determining the quality of reconstructions from undersampled data. Here we compare and evaluate reconstructions obtained by using an L2-norm and an L1-norm as temporal constraints. The reconstructions were compared using data with simulated undersampling and using actual undersampled radial data acquired from the scanner. Using an L1-norm in the temporal constraint helps in obtaining better reconstructions as compared to using an L2-norm in the temporal constraint especially when there is respiratory motion in the data.

    341. Accelerated Dynamic Imaging by Reconstructing Sparse Differences Using Compressed Sensing

    André Fischer1, 2, Felix Breuer2, Martin Blaimer2, Nicole Seiberlich1, Peter Michael Jakob1, 2

    1University of Wuerzburg, Wuerzburg, Germany; 2Research Center for Magnetic Resonance Bavaria e.V., Wuerzburg, Germany

    The concept of Compressed Sensing offers a new perspective for accelerated magnetic resonance imaging. We demonstrate the use of CS in connection with dynamic imaging. The proposed method reconstructs the differences between a given timeframe and the composite image of a dynamic dataset. By choosing a radial trajectory, the artifacts in the undersampled image are incoherent and therefore beneficial for the CS algorithm. We achieved good reconstructions with as few as 14 projections (192 x 192 matrix size). Hence, this technique is promising for future real-time dynamic applications.

    342. MRI Compressed Sensing Via Sparsifying Images

    Alexey Samsonov1, Youngkyoo Jung1, Andrew L. Alexander1, Walter F. Block1, Aaron S. Field1

    1University of Wisconsin, Madison, Wisconsin, USA

    Recently, there has been an emerging interest to accelerate MRI through iterative reconstruction of undersampled data based on compressed sensing theory. We extend the compressed sensing framework via sparsifying images. The new method utilizes the recent idea in HYPR methods to use sliding window composite images to constrain reconstruction. At the same time, such enhancement is done within the compressed sensing framework. We demonstrate that the new method, HighlY constrained back PRojection by Iterative esTimation (HYPRIT), may be a powerful tool for image reconstruction from highly undersampled data. We demonstrate its potential for accelerated radial diffusion tensor imaging.


    Image Credit: NASA/JPL/Space Science Institute, W00046494.jpg was taken on June 15, 2008 and received on Earth June 17, 2008. The camera was pointing toward SATURN-RINGS at approximately 800,600 kilometers away, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2009.
