Thursday, May 28, 2009

CS: Topics in Compressed Sensing, GraDes and other ICML '09 papers

Laurent Jacques let me know that Deanna Needell just released her thesis entitled: "Topics in Compressed Sensing". Congratulations Deanna.

Thanks to Lianlin Li's blog on compressed sensing (in Chinese) I noted that I had forgotten the following interesting and probably important paper. Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property by Rahul Garg and Rohit Khandekar. The abstract reads:

In this paper, we present an algorithm for finding an $s$-sparse vector $x$ that minimizes the {\em square-error} $\| y - \Phi x \|^2$ where $\Phi$ satisfies the {\em restricted isometry property} (RIP). Our algorithm, called {\em {\GraDeS}} (Gradient Descent with Sparsification) starts from an arbitrary $s$-sparse $x$ and iteratively updates it as: $$x \leftarrow H_s\left(x + \frac{1}{\gamma} \cdot \Phi^T(y - \Phi x)\right)$$ where $\gamma > 1$ is a constant and $H_s$ sets all but the largest $s$ coordinates in absolute value to zero.

We show that {\GraDeS}, in constant number of iterations, computes the correct $s$-sparse solution to the system $y=\Phi x$ where $\Phi$ satisfies the condition that the {\em isometric constant} $\delta_{2s} < 1/3$. Our Matlab implementation of {\GraDeS} out-performs previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered several cases where L1-regularized regression (Lasso) fails but {\GraDeS} finds the correct solution.
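For readers who want to play with the idea, here is a minimal sketch of the GraDeS iteration in Python/NumPy. It is my own illustration, not the authors' Matlab code; the problem sizes, the step constant gamma and the number of iterations below are arbitrary choices.

import numpy as np

def hard_threshold(x, s):
    # H_s: keep the s largest entries in absolute value, zero out the rest
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def grades(y, Phi, s, gamma=1.5, n_iter=100):
    # GraDeS update: x <- H_s( x + (1/gamma) * Phi^T (y - Phi x) )
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + (1.0 / gamma) * (Phi.T @ (y - Phi @ x)), s)
    return x

# toy usage: random Gaussian Phi and an s-sparse ground truth
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(grades(Phi @ x0, Phi, s) - x0))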
The potential importance of the results shown in this paper alone justifies the need for the benchmark tests mentioned earlier this week. The ICML conference also gives you the ability to discuss the paper here in the discussion section.

Which brings me to note that the ICML 2009 abstracts and full papers are out. The conference will take place June 14-18 in Montreal. Other papers from that conference that got my interest include:

Online Dictionary Learning for Sparse Coding by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro. The abstract reads:
Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
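To give a flavor of the online scheme described above, here is a rough sketch in Python/NumPy, assuming unit-norm dictionary columns and using a plain ISTA loop for the sparse-coding step (the authors use a lasso/LARS solver and add refinements such as mini-batches); the regularization parameter lam and the random initialization are illustrative choices of mine, not theirs.

import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam, n_iter=100):
    # plain ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a - (D.T @ (D @ a - x)) / L, lam / L)
    return a

def online_dictionary_learning(X, k, lam=0.1, eps=1e-8):
    # X: d x N matrix of training samples, processed one column at a time
    d, N = X.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((d, k))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((k, k))   # accumulates alpha alpha^T
    B = np.zeros((d, k))   # accumulates x alpha^T
    for t in range(N):
        x = X[:, t]
        alpha = sparse_code(D, x, lam)
        A += np.outer(alpha, alpha)
        B += np.outer(x, alpha)
        # block-coordinate update of the dictionary, one column at a time
        for j in range(k):
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / (A[j, j] + eps)
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D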

In a related matter, the matrix completion crowd might be interested in this: An Accelerated Gradient Method for Trace Norm Minimization by Shuiwang Ji and Jieping Ye. The abstract reads:
We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulation finds applications in many machine learning tasks including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smoothness nature of the trace norm, the optimal first-order black-box method for solving such class of problems converges as O(1/sqrt(k)), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k^2) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms.
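As a concrete illustration of the accelerated proximal idea (not the authors' code), here is a small Python/NumPy sketch on a matrix-completion-style quadratic loss, for which the Lipschitz constant is 1 and the proximal step reduces to singular value soft-thresholding; lam and the iteration count are arbitrary.

import numpy as np

def svt(W, tau):
    # proximal operator of tau*||.||_* : soft-threshold the singular values
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def accelerated_trace_norm(M, mask, lam=1.0, n_iter=200):
    # min_W 0.5*||mask*(W - M)||_F^2 + lam*||W||_*   (mask: 0/1 observed entries)
    W = np.zeros_like(M)
    Z = W.copy()
    t = 1.0
    for _ in range(n_iter):
        W_new = svt(Z - mask * (Z - M), lam)            # gradient step + prox (L = 1)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_new + ((t - 1.0) / t_new) * (W_new - W)   # Nesterov momentum
        W, t = W_new, t_new
    return W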
In a different direction, a manifold-based technique for producing a distance on a manifold appears in Geometry-aware Metric Learning by Zhengdong Lu, Prateek Jain and Inderjit Dhillon. The abstract reads:
In this paper, we introduce a generic framework for semi-supervised kernel learning. Given pairwise (dis-)similarity constraints, we learn a kernel matrix over the data that respects the provided side-information as well as the local geometry of the data. Our framework is based on metric learning methods, where we jointly model the metric/kernel over the data along with the underlying manifold. Furthermore, we show that for some important parameterized forms of the underlying manifold model, we can estimate the model parameters and the kernel matrix efficiently. Our resulting algorithm is able to incorporate local geometry into metric learning task, at the same time it can handle a wide class of constraints and can be applied to various applications. Finally, our algorithm is fast and scalable -- unlike most of the existing methods, it is able to exploit the low dimensional manifold structure and does not require semi-definite programming. We demonstrate wide applicability and effectiveness of our framework by applying to various machine learning tasks such as semi-supervised classification, colored dimensionality reduction, manifold alignment etc. On each of the tasks our method performs competitively or better than the respective state-of-the-art method.
and finally, An Efficient Projection for L1,Infinity Regularization by Ariadna Quattoni, Xavier Carreras, Michael Collins and Trevor Darrell. The abstract reads:
In recent years the L1,Infinity norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the L1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of L1,Infinity regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the L1,Infinity ball. We present an algorithm that works in O(n log n) time and O(n) memory where n is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that L1,Infinity leads to better performance than both L2 and L1 regularization and that it is effective in discovering jointly sparse solutions.

Wednesday, May 27, 2009

CS: YALL1, RecPF, KSVD, Low Rate Sampling for Time Delay, Fast local algos for NMF, CI of Subwavelength Structures, GAMM meeting, GPUmat freeware


Yin Zhang just let me know that he and his team have released "..substantial enhancements..to RecPF v2.0 and YALL1 v.b4..." in particular:

RecPF version 2.0 now has the following new features:
  • Switched to complex computation, fully compatible with complex data;
  • Replaced penalty parameter continuation by R. Glowinski and P.L. Tallec's alternating direction method;
  • New parameter/data normalization was added to make parameters rather independent of the image size, pixel intensity range, and number of CS measurements. To find the parameters aTV and aL1 for version 2.0 corresponding to those used in versions 1.1 or 1.0, see Lines 60, 66, and 67 of RecPF.m.

while YALL1 version beta-4 no longer requires A*A' = I. I am happy to see that the counter on YALL1 shows more than 150 visits to the site.

I'll add this information shortly to the reconstruction section of the big picture.


I just found a blog in Chinese on compressive sensing. It looks like the author is indeed reading the papers he is mentioning, which is great. In a different direction, some of the Chinese readers of this blog have difficulties getting to any blog on blogger.com, including this one. One way around this is to subscribe to Nuit Blanche by e-mail (see the right side of this page).

Michael Elad just provided some context to his newly released paper that I mentioned earlier:
One of the most appealing results by Candes and Tao in their analysis of the Dantzig Selector (DS) is the proof that it is a log (and constant) factor away from an oracle estimator that knows the original support. Can a similar claim be made with respect to Basis Pursuit (BP)? Together with Zvika Ben-Haim and Yonina Eldar we show that such a result is indeed attainable - see here. Furthermore, the constants obtained in our bounds are even better than those found in the DS analysis, suggesting (not claiming!!) that BP may be more accurate than DS.
The reference is: Zvika Ben-Haim, Yonina Eldar and Michael Elad, Near-Oracle Performance of Basis Pursuit Under Random Noise.

In a different direction we also have: Dictionaries for Sparse Representation Modeling by Ron Rubinstein, Alfred Bruckstein, Michael Elad. The abstract reads:
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data; or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as Wavelets, Wavelet Packets, Contourlets, and Curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, as it is attached to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD and the K-SVD, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
Ron Rubinstein has just released an update to OMP-Box and KSVD-Box, from his page:
OMP-Box v9 Implementation of the Batch-OMP and OMP-Cholesky algorithms for quick sparse-coding of large sets of signals.

KSVD-Box v10 Implementation of the K-SVD and Approximate K-SVD dictionary training algorithms, and the K-SVD Denoising algorithm. Requires OMP-Box v9.

Latest update (May 21 2009): OMP-Box v9 and KSVD-Box v10 have been released! This is a major update which includes the complete source code for compiling the MEX files on any Matlab platform (packages were tested on Matlab 2007 and 2008 versions). Also included in this update are new demos, readme and faq files, and more!
I'll add this information in the dictionary section of the big picture.

Also found on the interwebs:

Low Rate Sampling Schemes for Time Delay Estimation by Kfir Gedalyahu, Yonina Eldar. The abstract reads:
Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Various approaches have been proposed in the literature to identify time delays introduced by multipath environments. However, these methods either operate on the analog received signal, or require high sampling rates in order to achieve reasonable time resolution. In this paper, our goal is to develop a unified approach to time delay estimation from low rate samples of the output of a multipath channel. Our methods result in perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses. This rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. In addition, our development allows for a variety of different sampling methods. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation methods, we develop necessary and sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.

Fast local algorithms for large scale Nonnegative Matrix and Tensor Factorizations by A. Cichocki and A-H. Phan. The abstract reads:
Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind sources separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multisensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the Alpha and Beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the over-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are sufficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially, with usage of multi-layer hierarchical NMF approach [3].
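The hierarchical alternating least squares idea boils down to cycling over the rank-one factors; here is a minimal Python/NumPy sketch of my own for the plain NMF case (squared Euclidean cost only, none of the Alpha/Beta divergence or tensor extensions of the paper).

import numpy as np

def hals_nmf(X, r, n_iter=200, eps=1e-12):
    # X ~ W @ H with W >= 0 (m x r) and H >= 0 (r x n)
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((m, r)))
    H = np.abs(rng.standard_normal((r, n)))
    for _ in range(n_iter):
        for j in range(r):
            # residual with the j-th rank-one term added back in
            Rj = X - W @ H + np.outer(W[:, j], H[j, :])
            W[:, j] = np.maximum(Rj @ H[j, :] / (H[j, :] @ H[j, :] + eps), 0.0)
            H[j, :] = np.maximum(W[:, j] @ Rj / (W[:, j] @ W[:, j] + eps), 0.0)
    return W, H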

This article is interesting on many levels: Compressive Imaging of Subwavelength Structures by Albert Fannjiang. The abstract reads:
The problem of imaging periodic structure (source or scatterer) is formulated in the framework of compressed sensing with special attention on subwavelength resolution. It is shown that in this formulation the subwavelength resolution in the presence of noise can not be achieved by compressed sensing techniques alone. Additional techniques such as near-field measurement or illumination are required and the resolution limit is derived, which says that the smallest stably resolvable scale is about either half the wavelength or the distance of the sensors to the target, whichever is smaller. Numerical simulations are consistent with the theoretical prediction.

Some reconstruction hardware uses graphics cards to accelerate the reconstruction process; here is some good news from GPGPU.org:


GPUmat, developed by the GP-You Group, allows Matlab code to benefit from the compute power of modern GPUs. It is built on top of NVIDIA CUDA. The acceleration is transparent to the user, only the declaration of variables needs to be changed using new GPU-specific keywords. Algorithms need not be changed. A wide range of standard Matlab functions have been implemented. GPUmat is available as freeware for Windows and Linux from the GP-You download page.







Finally, there is a meeting that I'll add later to the CS calendar:

GAMM-Seminar on Tensor Approximations, Leipzig, Germany, Feb 2010

The 26th GAMM Seminar in Leipzig will take place from
Monday, February 22nd until Wednesday February 24th 2010.

The topic of the workshop is "Tensor Approximations"
and related topics, e.g.,
* Low Rank Tensor Approximations
* Sparse Grids
* Compressed Sensing and Data-Sparse Recovery
* PDEs with Stochastic Parameters
* Numerical Methods for Quantum-Chemistry
* Applications in High Dimensions

Invited speakers are
* Pierre Comon (Universite Nice)
* Lieven De Lathauwer (KU Leuven)
* Lars Elden (Linkoeping University)
* Markus Hegland (Australian National University)
* Christian Lubich (Universitaet Tuebingen)
* Reinhold Schneider (TU Berlin)

See also the webpage
http://www.mis.mpg.de/scicomp/gamm26/index.html
Registration is open.
Credit: NASA/JPL/Space Science Institute, Saturn's rings as seen from Cassini two days ago.

Monday, May 25, 2009

CS: Promoting a Larger Set of Compressive Sensing Benchmarks

Following up on Friday's entry on Defining the Current State of the Art, I have already sent an e-mail out to some of you requesting that you share some of your examples or actual datasets directly with the rest of the community. The intent to gather different sets stems from the following:
  • The field is getting larger by the day and we can't afford to rely only on Lena's image, as has been the case in signal processing for the past few decades.
  • We can't afford to have too many reconstruction solvers with unchecked claims. By unchecked, I mean that the solvers cannot be judged on one dataset chosen by the solvers' authors; we need to know how badly the solvers are doing for certain datasets or for certain parameters. We need to know in which part of the phase space certain solvers are better than others, in either the quality of the reconstruction or the time it takes to get there.
  • We have new hardware architectures that can be shown to perform well only thanks to new reconstruction solvers. Unlike the old days when the two tasks were disjoint, the quality of a solver can now have a direct consequence on the hardware. We need less artificial data and more real data to evaluate these new architectures.

Currently, two research teams have responded (I have not sent that many requests out):
Michael Friedlander told me that he would eventually add those to the current set of benchmark problems in Sparco. But I am sure that if you want to help Michael by coding your problem directly as a Sparco problem, he won't mind integrating it into the current list. I have mentioned it before: I have no stake in Sparco, but I want to remind everybody of its nice features with regard to the implementation of masks, which lets users focus on other aspects of the CS problem, such as the solvers, or even enables a parametric exploration of said encodings.

At some point, we also need a nice set of dictionaries accessible from different operating systems; when running on my Windows box, it seems that I cannot use the curvelets used in the seismic examples.

Initially, the goal of this work is to help newcomers evaluate solvers more rapidly for specific applications, but I also think that it should give us all a sense of where future work should go, especially in those cases where reconstruction is still not possible today because the datasets are too large (movies, voice, etc.).


References:
[1] SPARCO: A toolbox for testing sparse reconstruction algorithms

Credit Photo: Rice University, CS Camera.

Friday, May 22, 2009

CS: Let us define the Current State of the Art, shall we ?


The day before yesterday's entry on Other People's Ideas , I wrote the following:
....From talking to some of you, there is also an element of weariness in traditional CS about the claims made on the solvers and other encoding approaches. For instance, with regard to solvers, there is a sense that much parameter guessing goes into reproducing the results of the underlying publication. While one way to avoid this situation is to make one's code available for people to use, another way is probably to make sure that the same benchmark exercise a la Lena is available to authors. Yet this is rarely performed even though Sparco is there for that purpose....
That seems to have struck a chord, as some of you wondered if I was talking about their solvers. Furthermore, Arian Maleki reminded me that he tried exactly this in his presentation at SPARS 09. Except he didn't present it, as he did not get a visa on time. I am not a political person but I do not suffer lightly the incompetence of bureaucrats (those of you who know me know the colorful language I use in this type of situation), especially in this case where a good presentation would essentially have enabled a much needed discussion on a subject of increasing urgency. But let me go back to what Arian wrote to me in response to the Other People's Ideas entry:

....Actually the problem that you mentioned really exists in compressed sensing and in most of the algorithms that have been proposed so far there is a need for either some oracle information (like the sparsity) or a very precise tuning of parameters which might be either very difficult or even impossible in some cases. The theoretical guarantees of most of the algorithms are also either weak or are derived under the assumptions that the parameters are carefully tuned to satisfy certain properties and will not help the user in finding good estimates of the parameters. But, David Donoho and I proposed a method to avoid this problem and our work was published in SPARS 09.
We also applied our framework to several famous algorithms (like iterative soft/hard thresholding, CoSaMP, Subspace Pursuit and some variants, ...) and did some fair comparison among those. I believe whoever wants to use one of these algorithms can easily realize from our paper which algorithm to use according to the application. If everyone who wants to publish an algorithm uses this framework we can easily see the state of the art and I am sure it will help the research community to improve their algorithms faster. If you are interested in reading our paper it is:
Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing

In the paper (written with Dave Donoho), one can read the following:
....We provide three versions of iterative algorithms based on our optimal tuning exercise: Recommended-IST, Recommended-IHT and Recommended-TST. They are implemented in Matlab and published at URL :
In our recommended versions, there are no free parameters. The user specifies only the matrix A and the left-hand side y. In particular the user does not specify the expected sparsity level, which in most applications cannot be considered known. These recommended algorithms are not the same as previously published algorithms. For example, Recommended-TST has parameters α = 1 and β = 1, so it initially seems identical to Subspace Pursuit [8]. However, Subspace Pursuit demands an oracle to inform the user of the true underlying sparsity of the vector. Recommended-TST has embedded in it a maximum value for the assumed sparsity level (see Table III). If the actual sparsity in x0 is better than the maximin value, the algorithm still works, but if it is worse, the algorithm won't work anyway. The user does not need to know this number – it is baked into the code. In effect, we have de-oracle-ized the Subspace Pursuit method....
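To make the recipe in the excerpt a little more concrete, here is a rough Python/NumPy sketch of a two-stage thresholding (TST) iteration with alpha = beta = 1; in the recommended version the maximum assumed sparsity comes from the authors' tuning tables, so in this sketch k is simply an input the user must supply.

import numpy as np

def tst(y, A, k, alpha=1.0, beta=1.0, n_iter=50):
    # two-stage thresholding: threshold a gradient step, least-squares fit
    # on the selected support, then threshold the fit again
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        v = x + A.T @ (y - A @ x)
        J = np.argsort(np.abs(v))[-int(alpha * k):]          # first threshold
        w = np.zeros(n)
        w[J] = np.linalg.lstsq(A[:, J], y, rcond=None)[0]    # LS on support J
        keep = np.argsort(np.abs(w))[-int(beta * k):]        # second threshold
        x = np.zeros(n)
        x[keep] = w[keep]
    return x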

As an average engineer looking to use reconstruction solvers as a commodity, because most of my time is spent on fitting the physics to the encoding problem, I am also really looking forward to the same type of treatment for the Reweighted, Bregman and Nesterov algorithms or any other solvers from the Zoo (see the remark by Dirk Lorenz in A projection proximal-point algorithm for l^1-minimization). As you may recall, I talked about this specific issue before when I wrote about two examples of issues found by others:

...In a different area, this is an issue of general interest: what parameters are needed to make sure your reconstruction solver will converge to the right solution? It looks like there is no obvious choice for the Bregman solvers, as pointed out by Michael Friedlander when he noted an issue with the parameter choice in this entry. More recently, an unknown blogger seems to have had similar issues with a Bregman algorithm as well. And so I am not overly surprised to see two papers at SPARS'09 (the lists of papers are here and here) trying to deal with a similar issue for the IHT algorithm:

I think one of the other issues with finding the right parameters for specific reconstruction algorithms is also making sure that we are dealing with the right domain-specific dataset, be it a vector, a natural image, or something else. A case in point is the recent paper by Jianwei Ma entitled Improved Iterative Curvelet Thresholding for Compressed Sensing, where a new dictionary is investigated on top of the usual canonical basis. As it stands, this really means that we are contemplating a combinatorial explosion of possibilities and that Sparco needs to be part of the solution. As it turns out, Michael Friedlander, one of the authors of Sparco, also sent me an e-mail:
...Just a quick note to say that I read your blog's May 19th entry, and your exhortation for greater reproducibility in algorithmic work. I've always been grateful that you've pushed Sparco, but alas, I'll admit that its benchmark problems are rarely used in the literature.

Interestingly, the underlying linear-operator library is used much more often. In a few months we plan on releasing a separate package that contains only the operator library, but an expanded version with much more functionality. Of course we'll send word when it's released....
I had noted that the linear operator library was indeed very nifty, as it took away some of the difficulties underlying the implementation of masks and so forth. This is great news, but let us come back to this issue of benchmarks. Ever since the revolution of reproducible research and the birth of compressive sensing, this is probably the first time we have a way to make sure that a field does not collapse under its own complexity. Can we rise to that challenge ? While this complexity may have some attraction for the experts, the average user is rapidly turned off and this may eventually backfire on the whole field. I am not sure what the next step should be, but here are some of the questions:

  • Should we set up something like the Standard Performance Evaluation Corporation ?
  • How would the choice of solvers' parameter be addressed or resolved ?
  • Should we have a script written that runs through every benchmark once a new solver is created and made available?
  • Should we have a procedure such as the one mentioned by Arian implemented so that a parameter set is chosen for all the benchmark cases ?
  • Should we have a hidden benchmark ( a la Netflix prize) so that parameter optimization cannot be performed ?
  • Where should the scripts and figures of merit be hosted ?

One more thing: I know I don't have much traction, but if you publish a paper that references Sparco, it will get special treatment on this blog. I might even contact some of you on the hardware side and ask you to provide a set of data you have obtained from your technology. I'll forward it directly to the Sparco team. Michael tells me he is going to continue adding new cases from the current literature.


Disclosure, I don't have a stake in Sparco.
Credit photo: Wikipedia, Can you fit the Mona Lisa in 140 Characters ?

Thursday, May 21, 2009

CS: A long post.




Today, we have a series of papers found through Google and at the Rice repository. I'll come back tomorrow to yesterday's entry. In the meantime, we have:

Convergence Analysis of Iterative Thresholding Algorithms by Arian Maleki. The abstract reads:

There is a recent surge of interest in developing algorithms for finding sparse solutions of underdetermined systems of linear equations y = \phi x. In many applications, extremely large problem sizes are envisioned, with at least tens of thousands of equations and hundreds of thousands of unknowns. For such problem sizes, low computational complexity is paramount. The best studied ℓ1 minimization algorithm is not fast enough to fulfill this need. Iterative thresholding algorithms have been proposed to address this problem. In this paper we want to analyze two of these algorithms theoretically, and give sufficient conditions under which they recover the sparsest solution.
and the attendant Tech Report.

When reading the next paper, I could not stop remembering Stephane Mallat's presentation at ECCV'08 (Sparse Super-Resolution with Space Matching Pursuits by Guoshen Yu and Stéphane Mallat). The paper is Sparse Linear Representation by Halyun Jeong and Young-Han Kim. The abstract reads:
This paper studies the question of how well a signal can be represented by a sparse linear combination of reference signals from an overcomplete dictionary. When the dictionary size is exponential in the dimension of signal, then the exact characterization of the optimal distortion is given as a function of the dictionary size exponent and the number of reference signals for the linear representation. Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. Furthermore, an iterative method similar to matching pursuit that successively finds the best reference signal at each stage gives asymptotically optimal representations. This method is essentially equivalent to successive refinement for multiple descriptions and provides a simple alternative proof of the successive refinability of white Gaussian sources.
The Residual Method for Regularizing Ill-Posed Problems by Markus Grasmair, Markus Haltmeier, Otmar Scherzer. The abstract reads:
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. In particular, the questions of stability and convergence rates have hardly been treated in the literature. This sharply contrasts the progress of the theory of Tikhonov regularization, where for instance the notion of the Bregman distance has recently led to a series of new results for regularization in Banach spaces. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances. Exemplarily, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions. It is for instance shown that in the non-convex setting the linear convergence of the regularized solutions already follows from the sparsity of the true solution and the injectivity of the (linear) operator in the equation to be solved.

Analyzing Least Squares and Kalman Filtered Compressed Sensing by Namrata Vaswani. The abstract reads:
In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal’s estimated nonzero coefficients’ set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.
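Here is a crude Python/NumPy caricature of one LS-CS step, with a simple correlation threshold standing in for the CS-on-the-residual step of the paper (the actual algorithm also handles deletions and is considerably more careful); the threshold tau and the support handling are my own simplifications.

import numpy as np

def ls_on_support(A, y, T):
    # least-squares estimate restricted to the support estimate T
    x = np.zeros(A.shape[1])
    T = sorted(T)
    if T:
        x[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
    return x

def ls_cs_step(y, A, T, tau):
    x = ls_on_support(A, y, T)
    # look for new support entries in the filtering error
    new = set(np.flatnonzero(np.abs(A.T @ (y - A @ x)) > tau))
    T = set(T) | new
    return ls_on_support(A, y, T), T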

A Convergent Overlapping Domain Decomposition Method for Total Variation Minimization by Massimo Fornasier, Andreas Langer, Carola-Bibiane Schönlieb. The abstract reads:
This paper is concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles.
Found in the Rice Compressive Sensing repository:

Image Fusion by Compressive Sensing by Atul Divekar, Okan Ersoy. The abstract reads:
We propose a new method of image fusion that utilizes the recently developed theory of compressive sensing. Compressive sensing indicates that a signal that is sparse in an appropriate set of basis vectors may be recovered almost exactly from a few samples via l1-minimization if the system matrix satisfies some conditions. These conditions are satisfied with high probability for Gaussian-like vectors. Since zero-mean image patches satisfy Gaussian statistics, they are suitable for compressive sensing. We create a dictionary that relates high resolution image patches from a panchromatic image to the corresponding filtered low resolution versions. We first propose two algorithms that directly use this dictionary and its low resolution version to construct the fused image. To reduce the computational cost of l1-minimization, we use Principal Component Analysis to identify the orthogonal “modes” of co-occurrence of the low and high resolution patches. Any pair of co-occurring high and low resolution patches with similar statistical properties to the patches in the dictionary is sparse with respect to the principal component bases. Given a patch from a low resolution multispectral band image, we use l1-minimization to find the sparse representation of the low resolution patch with respect to the sample-domain principal components. Compressive sensing suggests that this is the same sparse representation that a high resolution image would have with respect to the principal components. Hence the sparse representation is used to combine the high resolution principal components to produce the high resolution fused image. This method adds high-resolution detail to a low-resolution multispectral band image keeping the same relationship that exists between the high and low resolution versions of the panchromatic image. This reduces the spectral distortion of the fused images and produces results superior to standard fusion methods such as the Brovey transform and principal component analysis.

This paper is interesting in that it gives some insight into the ability of certain reconstruction solvers when the basis is not canonical: Improved Iterative Curvelet Thresholding for Compressed Sensing by Jianwei Ma. The abstract reads:
A new theory named compressed sensing for simultaneous sampling and compression of signals has been becoming popular in the communities of signal processing, imaging and applied mathematics. In this paper, we present improved/accelerated iterative curvelet thresholding methods for compressed sensing reconstruction in the fields of remote sensing. Some recent strategies including Bioucas-Dias and Figueiredo's two-step iteration, Beck and Teboulle's fast method, and Osher et al's linearized Bregman iteration are applied to iterative curvelet thresholding in order to accelerate convergence. Advantages and disadvantages of the proposed methods are studied using the so-called pseudo-Pareto curve in the numerical experiments on single-pixel remote sensing and Fourier-domain random imaging.

And finally here is a presentation at the CS department at Caltech by Steve Flammia at the Perimeter Institute. The abstract of the talk is:
Everybody hates tomography. And with good reason! Experimentalists hate it because it is inefficient and difficult. Theorists hate it because it isn't very "quantum", and hence boring. But in part because of our current lack of meso-scale quantum computers capable of convincingly performing non-classical calculations, and the need to certify quantum state preparation, tomography seems like a necessary evil. In this talk, I will attempt to banish quantum state tomography to the Hell of Lost Paradigms where it belongs. I hope to achieve this by introducing several heuristics for learning quantum states more efficiently, in some cases exponentially so. One such heuristic runs in polynomial time and outputs a polynomial-sized classical approximation of the state (in matrix product state form.) Another takes advantage of the fact that most interesting states are close to pure states to get an (essentially) quadratic speedup using ideas from compressed sensing. Both algorithms come with rigorous error bounds.


Credit: NASA, Atlantis releases Hubble.

Tuesday, May 19, 2009

CS: OPIs and Playing Dumb, Number of Measurements in Sparse Signal Recovery, Wyner-Ziv Image Coding, CfP


I don't know if I would qualify as a specialist, but I sense that the throughput in compressive sensing work is decreasing somehow. Let me explain: in the past two years, we have gotten better ways of describing properties of encoding matrices (null space property, RIP, ...), some new concepts of encoding matrices seem to be a better fit to the physics of interest in some engineering fields, and we have more domain-specific dictionaries, while the reconstruction solvers are advertised as increasingly better. In the past few months, the number of papers directly related to these issues has decreased. At the same time, we have also seen extensions of the field in different directions: there is some maturing of hardware concepts and new ones are appearing based on the properties mentioned above. We also see concepts and techniques developed in CS being applied to other domains such as matrix completion, as well as generalizations of CS in different directions (nonlinear, ...). The recent call by Muthu to define CS more precisely is a sign that the subject is indeed morphing into something wider. There is no reason to expect the contrary, as all engineering fields are drowning under linear underdetermined systems. In fact, I don't see a trend away from looking for a few variables of interest in large and complex models.

Looking to the future

At a different level, if one removes the issue of reconstruction (which could be construed as a de facto proof that you guessed the right dimensionality of the system), the random projection technique gives way to a whole new world of manifold signal processing and the rise of domain-specific sensors. Is the world, besides the military, ready for this type of market ? Only time will tell; I am personally looking forward to seeing how the issue of calibration will be investigated. Eventually, this and other areas of interest will be born out of compressive sensing, but they will hardly be recognizable as such, and This Is A Good Thing (TM).

The direction of this blog

Writing this blog takes a personal toll in the form of having to read Other People's Ideas (OPI), but there is hope that it removes situations where false dichotomies spread too far in the system and eventually yield unimportant but self-sustaining concepts. The other toll is that the average reader may feel overwhelmed. The recent poll I ran last week (see graph above) is hardly statistically relevant but suggests that the blog is probably giving too much information about the new branches of CS without providing much context. From talking to some of you, there is also an element of weariness in traditional CS about the claims made on the solvers and other encoding approaches. For instance, with regard to solvers, there is a sense that much parameter guessing goes into reproducing the results of the underlying publication. While one way to avoid this situation is to make one's code available for people to use, another way is probably to make sure that the same benchmark exercise a la Lena is available to authors. Yet this is rarely performed even though Sparco is there for that purpose. Closer to home, another way to answer the sense of lost direction expressed by the poll is to provide more context by going back to those interview pieces where I ask dumb questions of real experts in the field. While I recognize the limits of the exercise, any other insights on how to do this effectively are welcome.

And now back to our regularly scheduled broadcast: we have two papers and a CfP for a publication:

Number of Measurements in Sparse Signal Recovery by Paul Tune, Sibi Raj Bhaskaran, Stephen Hanly. The abstract reads:
We analyze the asymptotic performance of sparse signal recovery from noisy measurements. In particular, we generalize some of the existing results for the Gaussian case to subgaussian and other ensembles. An achievable result is presented for the linear sparsity regime. A converse on the number of required measurements in the sub-linear regime is also presented, which covers many of the widely used measurement ensembles. Our converse idea makes use of a correspondence between compressed sensing ideas and compound channels in information theory.
From the Rice Repository we have an older paper I had not covered before:

In this paper, we present a Wyner-Ziv coding based on random projections for image compression with side information at the decoder. The proposed coder consists of random projections (RPs), nested scalar quantization (NSQ), and Slepian-Wolf coding (SWC). Most natural images are compressible or sparse in the sense that they are well-approximated by a linear combination of a few coefficients taken from a known basis, e.g., FFT or Wavelet basis. Recent results show that it is surprisingly possible to reconstruct compressible signal to within very high accuracy from limited random projections by solving a simple convex optimization program. Nested quantization provides a practical scheme for lossy source coding with side information at the decoder to achieve further compression. SWC is lossless source coding with side information at the decoder. In this paper, ideal SWC is assumed, thus rates are conditional entropies of NSQ quantization indices. Recent theoretical analysis shows that for the quadratic Gaussian case and at high rate, NSQ with ideal SWC performs the same as conventional entropy-coded quantization with side information available at both the encoder and decoder. We note that the measurements of random projections for a natural large-size image can behave like Gaussian random variables because most random measurement matrices behave like Gaussian ones if their sizes are large. Hence, by combining random projections with NSQ and SWC, the tradeoff between compression rate and distortion will be improved. Simulation results support the proposed joint codec design and demonstrate considerable performance of the proposed compression systems.
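Of the three building blocks in this abstract, nested scalar quantization is the easiest to picture: transmit only the coset index of a fine quantizer and let the decoder disambiguate with its side information. A toy Python sketch of my own (arbitrary step size and number of cosets, no random projections or Slepian-Wolf coding):

import numpy as np

def nsq_encode(x, delta, n_cosets):
    # fine uniform quantization, then keep only the coset (modulo) index
    return int(np.round(x / delta)) % n_cosets

def nsq_decode(coset, side_info, delta, n_cosets):
    # among the reproduction levels in the transmitted coset, pick the one
    # closest to the decoder's side information
    k0 = int(np.round((side_info / delta - coset) / n_cosets))
    candidates = [(coset + k * n_cosets) * delta for k in (k0 - 1, k0, k0 + 1)]
    return min(candidates, key=lambda v: abs(v - side_info))

# correct up to the fine quantization step as long as the side information
# is within n_cosets*delta/2 of the source value
c = nsq_encode(3.37, delta=0.1, n_cosets=8)
print(c, nsq_decode(c, side_info=3.1, delta=0.1, n_cosets=8))   # -> 2 and about 3.4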

and

IEEE Transactions on Image Processing
Special Issue on
Distributed Camera Networks: Sensing, Processing, Communication and Computing

Description of Topic:

Distributed camera networks have applications in diverse areas, including urban surveillance, environmental monitoring, healthcare, and battlefield visualization. Although distributed camera networks have been increasing in numbers, effective methods for dissemination, processing and understanding of video data collected by distributed camera networks are lacking. There is a strong need for developing distributed algorithms for compression and dissemination of video data, detection and tracking of moving objects, motion capture and higher level tasks such as visualization, recognition of objects and the activities they are involved in. Most of the existing algorithms for these tasks operate in a centralized manner---imagery or other information such as location and identity are transmitted to a server that performs necessary processing. Such server-oriented architectures do not scale for the problems mentioned above. Another feature of distributed camera networks is that the use of distributed algorithms adds network bandwidth and power to the mix of constraints; those constraints are particularly tight for wireless networks. Algorithms may need to be redesigned to meet these requirements---simple mapping onto embedded platforms is often not sufficient.
Manuscripts are solicited to address a wide range of topics in distributed camera networks, including but not limited to the following
List of Specific Topics Covered:
  • Sensing
  • Collaborative (in-network) processing
  • Compressive sensing and sparse representation
  • Adaptive sensing (e.g., using different spatial/time resolutions, bit depths, etc.)
  • Processing
  • Distributed camera calibration
  • Distributed video compression
  • Efficient video transmission
  • Detection and tracking of objects
  • Recognition of identity and events
  • Visualization
  • Communication
  • Network architectures
  • Efficient protocols
  • Secure transmission
  • Camera scheduling
  • Cross-layer protocols
  • Computing
  • Embedded systems
  • Low-power computing
  • Software protocols
  • Privacy protection
Timeline for Submission, Review, and Publication:
  • Manuscript submission: 30, August 2009
  • Preliminary results: 15, November 2009
  • Revised version: 15, January 2010
  • Notification: 15, February 2010
  • Final manuscripts due: 15, March 2010
  • Anticipated publication: 01, June 2010
List of Guest Editors:
Prof. Rama Chellappa, University of Maryland
Prof. Wendi Heinzelman, University of Rochester
Prof. Janusz Konrad, Boston University
Prof. Wayne Wolf, Georgia Institute of Technology

Monday, May 18, 2009

CS: CS-BP new code, Compressive-Projection Principal Component Analysis, Matrix Completion, SDP Rank Reduction

Over the past three days since it was mentioned in this blog, the statistics counter on the YALL1 page of Yin Zhang has increased by more than 90 visits. It may look small, but it generally takes a long time to do this type of direct advertising to connoisseurs like yourselves. If there is one figure of merit that I am proud of from running this blog, this is it: I am really happy that the blog provides rapid access to new ideas or implementations that will over time have an impact. If you are releasing code and do not see it either on this blog or on the big picture page, then please by all means let me know about it. I'll be glad to send the Nuit Blanche Effect your way.

While we are talking about reconstruction code, I noted that the CS-BP (Compressive Sensing - Belief Propagation) code/page has a new implementation by Danny Bickson (check the Non-parametric Belief Propagation (NBP) implementation).

Simon Foucart has an interesting syllabus out for his Graduate Seminar on Compressed Sensing.

I just found the following piece entitled Rethinking Signal Processing, published in the Communications of the ACM, and Laurent Duval just tried Wolfram|Alpha and blogged about his experience: CS or underdetermined systems do not yield much of a response from that system.

James Fowler, who commented on the fact that "all y'all" is the plural of "y'all", is also behind a concept called Compressive-Projection Principal Component Analysis. The connection to Compressive Sensing is explained in this excerpt:
We note that, in its reliance on encoder-side random projections, CPPCA bears some similarity to the emerging mathematical paradigm of compressed sensing (CS) (e.g., [7–11]). Although both CS and CPPCA consist of lightweight projection-based encoding, their decoder-side reconstructions differ significantly—CS assumes a fixed basis ensuring sparsity while CPPCA determines a data-specific PCA basis directly from the random projections. Experimental results presented below reveal that CPPCA achieves reconstruction performance substantially superior to that of a multiple-vector CS variant when applied to hyperspectral data.
The three papers on this concept are: Compressive-Projection Principal Component Analysis and the First Eigenvector by James E. Fowler. The abstract reads:
An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions. Specifically, a bound on the angle between the first Ritz vector and the orthonormal projection of the first eigenvector is developed for the case of a random projection onto a lower-dimensional subspace. It is shown that this bound is expected to be small if the eigenvalues are widely separated, i.e., if the data distribution is highly eccentric. This analysis verifies the validity of a fundamental approximation behind compressive projection principal component analysis, a technique proposed previously to recover from random projections not only the coefficients associated with principal component analysis but also an approximation to the principal-component transform basis itself.
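A quick numerical way to see the statement of this abstract (a toy Python/NumPy example of mine, unrelated to the author's code): for data with widely separated eigenvalues, the first Ritz vector of the covariance restricted to a random subspace stays close to the orthogonal projection of the first eigenvector onto that subspace.

import numpy as np

rng = np.random.default_rng(0)
d, k, n = 50, 10, 2000
eigvals = np.array([100.0, 5.0] + [1.0] * (d - 2))   # highly eccentric spectrum
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
X = (rng.standard_normal((n, d)) * np.sqrt(eigvals)) @ Q.T

C = np.cov(X, rowvar=False)
u1 = np.linalg.eigh(C)[1][:, -1]                  # first eigenvector

P, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random k-dimensional subspace
ritz = np.linalg.eigh(P.T @ C @ P)[1][:, -1]      # first Ritz vector (subspace coords)
v = P @ ritz
proj_u1 = P @ (P.T @ u1)
proj_u1 /= np.linalg.norm(proj_u1)

print(np.degrees(np.arccos(abs(v @ proj_u1))))    # small angle for eccentric data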

Compressive-Projection Principal Component Analysis by James Fowler. The abstract reads:
Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in an light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.

Compressive-Projection Principal Component Analysis for the Compression of Hyperspectral Signatures by James E. Fowler. The abstract reads:
A method is proposed for the compression of hyperspectral signature vectors on severely resource-constrained encoding platforms. The proposed technique, compressive-projection principal component analysis, recovers from random projections not only transform coefficients but also an approximation to the principal-component basis, effectively shifting the computational burden of principal component analysis from the encoder to the decoder. In its use of random projections, the proposed method resembles compressed sensing but differs in that simple linear reconstruction suffices for coefficient recovery. Existing results from perturbation theory are invoked to argue for the robustness under quantization of the eigenvector-recovery process central to the proposed technique, and experimental results demonstrate a significant rate-distortion performance advantage over compressed sensing using a variety of popular bases.

and Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit by Zhisu Zhu, Anthony Man-Cho So, Yinyu Ye. The abstract reads:
Motivated by the philosophy and phenomenal success of compressed sensing, the problem of reconstructing a matrix from a sampling of its entries has attracted much attention recently. Such a problem can be viewed as an information-theoretic variant of the well-studied matrix completion problem, and the main objective is to design an efficient algorithm that can reconstruct a matrix by inspecting only a small number of its entries. Although this is an impossible task in general, Candes and co-authors have recently shown that under a so-called incoherence assumption, a rank-r n×n matrix can be reconstructed using semidefinite programming (SDP) after one inspects O(nr log^6 n) of its entries. In this paper we propose an alternative approach that is much more efficient and can reconstruct a larger class of matrices by inspecting a significantly smaller number of the entries. Specifically, we first introduce a class of so-called stable matrices and show that it includes all those that satisfy the incoherence assumption. Then, we propose a randomized basis pursuit (RBP) algorithm and show that it can reconstruct a stable rank-r n×n matrix after inspecting O(nr log n) of its entries. Our sampling bound is only a logarithmic factor away from the information-theoretic limit and is essentially optimal. Moreover, the runtime of the RBP algorithm is bounded by O(nr^2 log n + n^2 r), which compares very favorably with the Ω(n^4 r^2 log^12 n) runtime of the SDP-based algorithm. Perhaps more importantly, our algorithm will provide an exact reconstruction of the input matrix in polynomial time. By contrast, the SDP-based algorithm can only provide an approximate one in polynomial time.

In the following paper by some of the same authors, there is a mention of the Johnson–Lindenstrauss lemma: A Unified Theorem on SDP Rank Reduction by Anthony Man-Cho So, Yinyu Ye, Jiawei Zhang. The abstract reads:
We consider the problem of finding a low–rank approximate solution to a system of linear equations in symmetric, positive semidefinite matrices, where the approximation quality of a solution is measured by its maximum relative deviation, both above and below, from the prescribed quantities. We show that a simple randomized polynomial–time procedure produces a low–rank solution that has provably good approximation qualities. Our result provides a unified treatment of and generalizes several well–known results in the literature. In particular, it contains as special cases the Johnson–Lindenstrauss lemma on dimensionality reduction, results on low–distortion embeddings into low–dimensional Euclidean space, and approximation results on certain quadratic optimization problems.
In [1], the connection between the Johnson-Lindenstrauss Lemma (an older result) and Compressive Sensing is made clear:
....Also, it is known that any matrix satisfying the Johnson-Lindenstrauss lemma also satisfies the near isometry property required for compressed learning [19]. However, the reverse is not true: there are some matrices, such as random Fourier matrices, that satisfy the near isometry property of compressed sensing, known as the restricted isometry property, but they do not satisfy the Johnson-Lindenstrauss lemma; ....
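For readers who have not met the Johnson-Lindenstrauss lemma before, it is a two-minute experiment to check numerically with a plain Gaussian projection (the dimensions and the number of sampled pairs below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 2000, 300                       # points, ambient and projected dimensions
X = rng.standard_normal((n, d))
A = rng.standard_normal((k, d)) / np.sqrt(k)   # Gaussian JL map, E||Ax||^2 = ||x||^2
Y = X @ A.T

# compare pairwise distances before and after projection on random pairs
i, j = rng.integers(0, n, 500), rng.integers(0, n, 500)
ok = i != j
orig = np.linalg.norm(X[i] - X[j], axis=1)[ok]
proj = np.linalg.norm(Y[i] - Y[j], axis=1)[ok]
print(np.max(np.abs(proj / orig - 1.0)))       # distortion stays small for k ~ log(n)/eps^2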

Reference:

Credit photo: NASA/JPL/Space Science Institute, Image taken on May 13, 2009.

Friday, May 15, 2009

CS: RTIT, Dictionary Learning, Variable Splitting and Constrained Optimization solver.






Mark Neifeld, Jun Ke and Pawan Baheti at the University of Arizona, and Peter Ilinykh and Pavel Shekhtmeyster at UCSD, are looking into reducing the size of the current one-pixel camera. Here is the presentation on the subject and the lab presentation on Compressive Imaging.



Here is something interesting: a reconstruction solver that can handle not-so-sparse signals. Turbo-like Iterative Thresholding for SAR Image Recovery from Compressed Measurements by Lei Yu, Yi Yang and Hong Sun. The abstract reads:

Compressive sensing (CS) has attracted many researchers since it offers a novel paradigm that one can acquire signals at a sub-Nyquist rate, and provides an implicit application of imaging and compressing simultaneously for SAR. This paper focuses on the recovery algorithms for compressed measurements of SAR data. Considering the characteristics of the SAR images, leading their not-very-sparsity in common representation space, a Turbo-like Iterative Thresholding algorithm based on Residual shrinkage operator (RTIT) is proposed. This algorithm aims at recovering signals which can not be very sparsely represented through some representation spaces. Some numerical results are presented to illustrate the performance of the proposed RTIT algorithm.

In a different direction, the identification of a dictionary from training samples does not seem to need many of them, which is intriguing: Dictionary Identification - Sparse Matrix-Factorisation via ℓ1-Minimisation by Remi Gribonval and Karin Schnass. The abstract reads:

This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimisation. The problem can also be seen as factorising a d×N matrix Y = (y_1 . . . y_N), y_n ∈ R^d, of training signals into a d×K dictionary matrix Φ and a K×N coefficient matrix X = (x_1 . . . x_N), x_n ∈ R^K, which is sparse. The exact question studied here is when a dictionary/coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) ℓ1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows (up to a logarithmic factor) only linearly with the signal dimension, i.e. N = CK log K, in contrast to previous approaches requiring combinatorially many samples.
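To make the factorisation concrete, here is a heavily simplified Python toy that alternates a crude sparse coding step with a dictionary update for Y ≈ ΦX. It is only my illustration of the problem setup, not the ℓ1-criterion or the identifiability analysis of the paper:

import numpy as np

# Toy alternating-minimisation sketch of dictionary learning Y ~ Phi @ X with X sparse.
rng = np.random.default_rng(2)
d, K, N, s = 20, 30, 600, 3

Phi_true = rng.standard_normal((d, K))
Phi_true /= np.linalg.norm(Phi_true, axis=0)            # unit-norm atoms
X_true = np.zeros((K, N))
for col in range(N):                                     # Bernoulli-Gaussian-style coefficients
    X_true[rng.choice(K, s, replace=False), col] = rng.standard_normal(s)
Y = Phi_true @ X_true

Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)
for it in range(50):
    # Sparse coding step: crude thresholded least squares per column
    # (a stand-in for a proper l1 solver), keeping the s largest coefficients.
    X = np.linalg.lstsq(Phi, Y, rcond=None)[0]
    idx = np.argsort(np.abs(X), axis=0)[:-s, :]
    np.put_along_axis(X, idx, 0.0, axis=0)
    # Dictionary update step: least squares fit, then renormalise the atoms.
    Phi = Y @ np.linalg.pinv(X)
    Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-12)

print("residual ||Y - Phi X||_F =", np.linalg.norm(Y - Phi @ X))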

Finally, Mario Figueiredo just let me know that he and his co-authors have developed algorithms close to the split Bregman method that will eventually be used to solve CS problems. Now the question is: will they be faster than the Nesterov schemes? In the meantime, here they are:


Although much research has been devoted to the problem of restoring Poissonian images, namely in the fields of medical and astronomical imaging, applying the state of the art regularizers (such as those based on wavelets or total variation) to this class of images is still an open research front. This paper proposes a new image deconvolution approach for images with Poisson statistical models, with the following building blocks: (a) a standard regularization/MAP criterion, combining the Poisson log-likelihood with a regularizer (log-prior) is adopted; (b) the resulting optimization problem (which is difficult, since it involves a non-quadratic and non-separable term plus a non-smooth term) is transformed into an equivalent constrained problem, via a variable splitting procedure; (c) this constrained problem is addressed using an augmented Lagrangian framework. The effectiveness of the resulting algorithm is illustrated in comparison with current state-of-the-art methods.

Fast Frame-Based Image Deconvolution Using Variable Splitting and Constrained Optimization by Mario Figueiredo, Jose M. Bioucas-Dias, Manya V. Afonso. The abstract reads:

We propose a new fast algorithm for solving one of the standard formulations of frame-based image deconvolution: an unconstrained optimization problem, involving an $\ell_2$ data-fidelity term and a non-smooth regularizer. Our approach is based on using variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The resulting algorithm efficiently uses a regularized version of the Hessian of the data fidelity term, thus exploits second order information. Experiments on a set of image deblurring benchmark problems show that our algorithm is clearly faster than previous state-of-the-art methods.
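For readers who have not seen the variable splitting trick used in both of these papers, the generic manipulation (in schematic notation of mine, not the papers' exact formulation) is to turn an unconstrained problem into a constrained one and attack it with an augmented Lagrangian:

$$\min_{x}\; f(x) + \phi(Wx) \quad\Longleftrightarrow\quad \min_{x,z}\; f(x) + \phi(z) \;\;\text{s.t.}\;\; z = Wx,$$

$$\mathcal{L}_\mu(x,z,\lambda) = f(x) + \phi(z) + \lambda^{T}(Wx - z) + \frac{\mu}{2}\|Wx - z\|_2^2,$$

which is then minimized alternately over $x$ and $z$, with the multiplier $\lambda$ updated after each sweep.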
Credit: Arianespace, Herschel launch; the end shows the fairing removal.

Thursday, May 14, 2009

CS: Herschel has separated. Godspeed Herschel


This is several seconds after take off...
This is the booster separation.

This is the separation of the fairing (the cone on top of the rocket).


This is the ejection of the first stage.


This is a cartoon of the separation of Herschel from Planck. Here is the reason why the Compressive Sensing community should care about this launch.



Good job Arianespace!

CS: Herschel Launch, QUAPO, YALL1, a poll, ADMiRA, Wiring Diagnostics, DSFT,Joint-sparse recovery, PhD.



Damaris tells us that the tiles underneath STS-125 are OK. This gives NASA the signal that the Shuttle can go do some good work on the Hubble Space Telescope. Today, another telescope, Herschel, launches with a camera that will implement a Compressive Sensing encoding. Godspeed Herschel!


In other news, we have a presentation with some potential significance: QUAPO : Quantitative Analysis of Pooling in High-Throughput Drug Screening, a presentation by Raghu Kainkaryam (a joint work with Anna Gilbert, Paul Shearer and Peter Woolf). For information, Raghu also has a review paper, Pooling in High-Throughput Drug Screening, with Peter Woolf that you can obtain by kindly requesting a copy from him.


Finally, I spoke too fast yesterday, but this is now ready to go: Yin Zhang just released a reconstruction solver called YALL1, or Your ALgorithms for L1.
This Matlab code can currently be applied to the following six (6) L1-minimization problems:
   (BP)       min ||x||_1                            s.t. Ax = b
   (L1/L1)    min ||x||_1 + (1/ν)||Ax - b||_1
   (L1/L2)    min ||x||_1 + (1/2ρ)||Ax - b||_2^2
   (BP+)      min ||x||_1                            s.t. Ax = b, x ≥ 0
   (L1/L1+)   min ||x||_1 + (1/ν)||Ax - b||_1        s.t. x ≥ 0
   (L1/L2+)   min ||x||_1 + (1/2ρ)||Ax - b||_2^2     s.t. x ≥ 0


where A is m by n with m less than n, and the solution x is supposed to be (approximately) sparse. The data and signal, (A, b, x), can be real or complex. Further restrictions on A may apply.

As stated in the user's guide,

YALL1 version beta-3 utilizes a single and simple algorithm to solve all the six models (1)-(6), hence a one-for-six algorithm. It is derived from the classic approach of Alternating Direction Method, or ADM, that minimizes augmented Lagrangian functions through an alternating minimization scheme and updates the multiplier after each sweep. The convergence of such algorithms has been well analyzed in the literature (see [2], for example, and the references thereof). Such algorithms can also be characterized as first-order primal-dual algorithms because they explicitly update both primal and dual variables at every iteration.
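To give a rough, unofficial idea of what such an alternating direction scheme looks like for the (BP) model above, here is a textbook-style ADMM sketch in Python. This is not the YALL1 code, and the updates are the generic basis pursuit ones rather than Yin Zhang's exact derivation:

import numpy as np

def bp_admm(A, b, rho=1.0, n_iter=300):
    """Textbook ADMM sketch for basis pursuit: min ||x||_1 s.t. Ax = b.
    Generic alternating-direction illustration, not the YALL1 solver."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)        # small m assumed; cache the projector
    def project(v):                         # projection onto the affine set {x : Ax = b}
        return v - A.T @ (AAt_inv @ (A @ v - b))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        x = project(z - u)                                                # primal update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / rho, 0.0)   # shrinkage step
        u = u + x - z                                                     # multiplier update
    return z

# Toy check on a sparse system
rng = np.random.default_rng(3)
n, m, s = 120, 50, 6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true
x_hat = bp_admm(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))

The three lines inside the loop follow exactly the pattern described in the quote: a primal update, a shrinkage step, and a multiplier update after each sweep.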

For those of you whose first language is not English, nor Texan for that matter, "y'all" is a typical Southern contraction of "you all". For instance, instead of saying "let's get in the truck, we're going fishing", some people might say "y'all get in the truck, we're going fishing", which can be made even more ambiguous as "all y'all get in the truck, we're going fishing". That last one takes a while to get used to because, as any reasonable person would argue, there is absolutely no good reason to put an "all" in front of "y'all" since there is already an "all" in "y'all".

Giuseppe seems to have reached the "I-can't-keep-up" stage when it comes to following news in Compressive Sensing. What about you? Here is the poll:



For those of you interested in answering but who cannot see the poll, you may want to check the site directly.


Oopsie: in the presentations at the Sparsity in Machine Learning meeting, I forgot to mention the online lecture on Latent Variable Sparse Bayesian Models by David Wipf.

ADMiRA: Atomic Decomposition for Minimum Rank Approximation by Kiryung Lee and Yoram Bresler. The abstract reads:
Recht, Fazel, and Parrilo provided an analogy between rank minimization and ℓ0-norm minimization. Subject to the rank restricted isometry property, nuclear norm minimization is a guaranteed algorithm for rank minimization. The resulting semidefinite formulation is a convex problem but in practice the algorithms for it do not scale well to large instances. Instead, we explore missing terms in the analogy and propose a new algorithm which is computationally efficient and also has a performance guarantee. The algorithm is based on the atomic decomposition of the matrix variable and extends the idea in the CoSaMP algorithm for ℓ0-norm minimization. Combined with the recent fast low rank approximation of matrices based on randomization, the proposed algorithm can efficiently handle large scale rank minimization problems.
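ADMiRA itself selects and merges rank-one atoms greedily, but the basic building block in the matrix setting, the analogue of keeping the s largest coordinates in the vector case, is the projection onto rank-r matrices by truncating the SVD. Here is a hedged Python illustration of just that block (not the full ADMiRA algorithm):

import numpy as np

def best_rank_r(M, r):
    """Project a matrix onto the set of rank-r matrices by truncating its SVD.
    This is only the 'keep the r leading atoms' step used by greedy
    rank-minimization methods, not the full ADMiRA algorithm."""
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r, :]

# Example: denoise a noisy low-rank matrix by rank truncation
rng = np.random.default_rng(4)
n, r = 60, 3
L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
noisy = L + 0.1 * rng.standard_normal((n, n))
print("error before:", np.linalg.norm(noisy - L))
print("error after :", np.linalg.norm(best_rank_r(noisy, r) - L))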

Wiring Diagnostics via ℓ1-Regularized Least Squares by Stefan Schuet. The abstract reads:
A new method for detecting and locating wiring damage using time domain reflectometry is presented. This method employs existing ℓ1 regularization techniques from convex optimization and compressed sensing to exploit sparsity in the distribution of faults along the length of a wire, while further generalizing commonly used fault detection techniques based on correlation and peak detection. The method’s effectiveness is demonstrated using a simulated example, and it is shown how Monte Carlo techniques are used to tune it to achieve specific detection goals, like a certain false positive error rate. In addition, this method is easily implemented by adapting readily available optimization algorithms to quickly solve large, high-resolution versions of this estimation problem. Finally, the technique is applied to a real data set, which reveals its impressive ability to identify a subtle type of chafing damage on real wire.
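As a toy picture of the approach, consider a time-domain-reflectometry-flavoured model in which the measured trace is a known probe pulse convolved with a sparse sequence of fault reflections; ℓ1-regularized least squares then estimates the fault profile. The model, pulse shape, and parameters below are my own illustrative assumptions, not the paper's, and the solver is scikit-learn's generic Lasso rather than the authors' implementation:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n = 300                                                     # discretised positions along the wire
pulse = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)      # assumed probe pulse shape

faults = np.zeros(n)
faults[[60, 61, 200]] = [0.8, -0.3, 0.5]                    # sparse fault reflections

# Linear measurement operator: convolution with the pulse, written as a matrix
A = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    A[:, j] = np.convolve(e, pulse, mode="same")            # column j = pulse centred at j
trace = A @ faults + 0.01 * rng.standard_normal(n)          # noisy measured trace

# l1-regularized least squares estimate of the sparse fault profile
est = Lasso(alpha=0.005, max_iter=20000).fit(A, trace).coef_
print("positions with large estimated coefficients:", np.flatnonzero(np.abs(est) > 0.05))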


Empirical Evaluation of Two Deterministic Sparse Fourier Transforms by Mark Iwen. The abstract reads:
This paper empirically evaluates a recently proposed Deterministic Sparse Fourier Transform algorithm (hereafter called DSFT) for the first time. Our experiments indicate that DSFT is capable of guaranteed general frequency-sparse signal recovery using sub-Nyquist sampling for realistic bandwidth values. Furthermore, we show that both variants of DSFT have fast reconstruction runtimes. In fact, the sublinear-time DSFT variant is shown to be faster than a traditional Fast Fourier Transform (FFT) for highly-sparse wideband signals.
I believe that the sublinear-time DSFT is the one featured in the Ann Arbor FFT.


Joint-sparse recovery from multiple measurements by Ewout van den Berg and Michael Friedlander. The abstract reads:
The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using ℓ1 minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with ℓ1 minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than the number of nonzero rows does not improve the potential theoretical recovery rate.
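To fix ideas on what sum-of-norm minimization means in the multiple-measurement-vector setting, here is a generic proximal-gradient sketch in Python for the ℓ2,1-regularized problem, which favours row-sparse solutions. It is a stand-in of mine, not the exact formulation or solvers analysed in the paper:

import numpy as np

def mmv_group_ista(A, B, lam=0.05, n_iter=800):
    """Proximal-gradient sketch for min 0.5*||A X - B||_F^2 + lam * sum_i ||row_i(X)||_2,
    i.e. sum-of-norms (l2,1) regularisation encouraging row-sparse X.
    Generic MMV illustration, not the paper's analysis."""
    m, n = A.shape
    k = B.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    X = np.zeros((n, k))
    for _ in range(n_iter):
        Z = X - step * (A.T @ (A @ X - B))                  # gradient step on the data term
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - step * lam / np.maximum(row_norms, 1e-12), 0.0)
        X = shrink * Z                                      # row-wise group shrinkage
    return X

# Toy MMV problem: 5 measurement vectors sharing the same 8-row support
rng = np.random.default_rng(6)
n, m, s, k = 150, 60, 8, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, k))
X_true[rng.choice(n, s, replace=False), :] = rng.standard_normal((s, k))
B = A @ X_true
X_hat = mmv_group_ista(A, B)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))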

Matt Herman and Thomas Strohmer produced an important revision to Theorem 2 in General Deviants: An Analysis of Perturbations in Compressed Sensing.

Stefano Marchesini let me know about this meeting on the island of Porquerolles (the program is here). What's up with organizing workshops on islands when in France? Are they scared the folks will run away once they get to the meeting? :-)

There is a PhD position at Aalborg University in Denmark that requires some knowledge of compressive sensing. The title of the research is "Sparse Sampling Techniques for Multi-Mode Receivers".

Finally, Yi Ma will give a talk at Tsinghua today and the same one in Paris on the 20th (thanks, Laurent).

Credit: NASA, the tank of the space shuttle Atlantis on STS-125 is slowly reentering the atmosphere.
