Tuesday, November 08, 2011

Compressive Sensing Literature This Week

Because we had such a large influx of codes and implementations last week, I did not cover papers, talks, posters, jobs and some presentations. I am correcting this today. Enjoy:

This paper presents a novel approach to capturing light fields in camera arrays based on the compressive sensing framework. Light fields are captured by a linear array of cameras with overlapping fields of view. In this work, we design a redundant dictionary to exploit cross-camera correlated structures to sparsely represent camera images. Our main contributions are threefold. First, we exploit the correlations between the set of views by making use of a specially designed redundant dictionary. We show experimentally that the projection of complex scenes onto this dictionary yields very sparse coefficients. Second, we propose an efficient compressive encoding scheme based on the random convolution framework [1]. Finally, we develop a joint sparse recovery algorithm for decoding the compressed measurements and show a marked improvement over independent decoding of CS measurements.

In this paper, we study the degrees of freedom (df) of penalized l1 minimization (also known as the Lasso) for linear regression models. We give a closed-form expression of the degrees of freedom of the Lasso response. Namely, we show that for any given Lasso regularization parameter \lambda and any observed data y belonging to a set of full measure, the cardinality of the support of a particular solution of the Lasso problem is an unbiased estimator of the degrees of freedom of the Lasso response. This is achieved without any assumption on the uniqueness of the Lasso solution. Thus, our result remains true for both the underdetermined and the overdetermined case studied originally in Zou et al. We also prove that a key result in Zou et al. is not true by providing a simple counterexample. An effective estimator of the number of degrees of freedom may have several applications, including an objectively guided choice of the regularization parameter in the Lasso through the SURE framework.
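To make the estimator concrete, here is a minimal numerical sketch (not the authors' code): solve the Lasso with a simple proximal-gradient (ISTA) loop and count the nonzeros of the solution, which is the paper's unbiased estimate of the degrees of freedom. The problem sizes, the regularization value, and the choice of ISTA as solver are all illustrative assumptions.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Solve min_b 0.5*||y - X b||_2^2 + lam*||b||_1 with ISTA (proximal gradient)."""
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = b + X.T @ (y - X @ b) / L          # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold step
    return b

rng = np.random.default_rng(0)
n, p, k = 50, 100, 5                           # underdetermined case: p > n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0
y = X @ beta + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=1.0)
df_hat = np.count_nonzero(b_hat)               # support size = the paper's df estimate
print(df_hat)
```

Because soft thresholding produces exact zeros, the support size is well defined without any tolerance.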

Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula precisely delineating the allowable degree of undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar shrinkers (soft thresholding, positive soft thresholding and capping). This paper gives several examples including scalar shrinkers not derivable from convex optimization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding (both block soft and block James-Stein), monotone regression, and total variation minimization. Let the variables \epsilon = k/N and \delta = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x_0 according to y=Ax_0. Here A is an n\times N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve \delta = \delta(\epsilon) separating successful from unsuccessful reconstruction of x_0 by AMP is given by: \delta = M(\epsilon| Denoiser), where M(\epsilon| Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally-tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem.
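As a toy illustration of the M(\epsilon | Denoiser) quantity, one can estimate the per-coordinate MSE of the soft-thresholding denoiser by Monte Carlo in the directly observed problem y = x + z. The two-point prior below is a crude stand-in for the least-favorable \epsilon-sparse prior, and the threshold grid is an assumption, so this only sketches the object being optimized, not the paper's exact computation.

```python
import numpy as np

def soft_threshold(x, t):
    """Scalar soft-thresholding denoiser eta(x; t) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mse_soft(eps, t, n=200_000, seed=0):
    """Monte Carlo per-coordinate MSE of soft thresholding at unit noise level,
    under a crude two-point stand-in for an eps-sparse prior."""
    rng = np.random.default_rng(seed)
    x = np.where(rng.random(n) < eps, 10.0, 0.0)   # nonzero w.p. eps, zero otherwise
    y = x + rng.standard_normal(n)                 # directly observed problem y = x + z
    return np.mean((soft_threshold(y, t) - x) ** 2)

# Tuning the threshold t approximates the "optimally tuned" denoiser;
# the minimum over t is a rough proxy for M(eps | soft thresholding).
eps = 0.1
mses = [mse_soft(eps, t) for t in np.linspace(0.5, 3.0, 11)]
print(min(mses))
```

The phase-transition claim is then that this minimax MSE equals the critical undersampling fraction \delta(\epsilon) for AMP with the same denoiser.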


Generalized sampling and infinite-dimensional compressed sensing by Ben Adcock and Anders C. Hansen. The abstract reads:
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for both reconstructions in arbitrary bases and significant subsampling of sparse or compressible signals.



We extend Tropp's analysis of Orthogonal Matching Pursuit (OMP) using the Exact Recovery Condition (ERC) to a first exact recovery analysis of Orthogonal Least Squares (OLS). We show that when ERC is met, OLS is guaranteed to exactly recover the unknown support. Moreover, we provide a closer look at the analysis of both OMP and OLS when ERC is not fulfilled. We show that there exist dictionaries for which some subsets are never recovered with OMP. This phenomenon, which also appears with $\ell_1$ minimization, does not occur for OLS. Finally, numerical experiments based on our theoretical analysis show that none of the considered algorithms is uniformly better than the other.
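For readers who want to see the greedy selection step the analysis refers to, here is a minimal OMP implementation in NumPy (OLS differs only in the selection rule: it picks the atom that most reduces the least-squares residual after re-fitting, rather than the atom most correlated with the residual). The dictionary size and sparsity level below are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: at each step, pick the dictionary atom most
    correlated with the residual, then least-squares re-fit on the chosen support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)               # unit-norm dictionary atoms
x0 = np.zeros(100)
x0[[3, 17, 60]] = [2.0, -1.5, 1.0]           # 3-sparse ground truth
x_hat, supp = omp(A, A @ x0, k=3)
print(supp)
```

In this well-conditioned Gaussian setting OMP recovers the support; the pathological dictionaries discussed in the paper are precisely the cases where this greedy rule fails while OLS does not.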

We discuss a novel sampling theorem on the sphere developed by McEwen & Wiaux recently through an association between the sphere and the torus. To represent a band-limited signal exactly, this new sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere, such as the canonical Driscoll & Healy sampling theorem. A reduction in the number of samples required to represent a band-limited signal on the sphere has important implications for compressive sensing, both in terms of the dimensionality and sparsity of signals. We illustrate the impact of this property with an inpainting problem on the sphere, where we show superior reconstruction performance when adopting the new sampling theorem.

Implications for compressed sensing of a new sampling theorem on the sphere by Jason McEwen, Gilles Puy, Jean-Philippe Thiran, Pierre Vandergheynst, Dimitri Van De Ville and Yves Wiaux. The abstract reads:
A sampling theorem on the sphere has been developed recently, requiring half as many samples as alternative equiangular sampling theorems on the sphere. A reduction by a factor of two in the number of samples required to represent a band-limited signal on the sphere exactly has important implications for compressed sensing, both in terms of the dimensionality and sparsity of signals. We illustrate the impact of this property with an inpainting problem on the sphere, where we show the superior reconstruction performance when adopting the new sampling theorem compared to the alternative.


Radio interferometry is a powerful technique for astronomical imaging. The theory of Compressed Sensing (CS) has been applied recently to the ill-posed inverse problem of recovering images from the measurements taken by radio interferometric telescopes. We review novel CS radio interferometric imaging techniques, both at the level of acquisition and reconstruction, and discuss their superior performance relative to traditional approaches. In order to remain as close to the theory of CS as possible, these techniques necessarily consider idealised interferometric configurations. To realise the enhancement in quality provided by these novel techniques on real radio interferometric observations, their extension to realistic interferometric configurations is now of considerable importance. We also chart the future direction of research required to achieve this goal.

We advocate a compressed sensing strategy that consists of multiplying the signal of interest by a wide bandwidth modulation before projection onto randomly selected vectors of an orthonormal basis. Firstly, in a digital setting with random modulation, considering a whole class of sensing bases including the Fourier basis, we prove that the technique is universal in the sense that the required number of measurements for accurate recovery is optimal and independent of the sparsity basis. This universality stems from a drastic decrease of coherence between the sparsity and the sensing bases, which for a Fourier sensing basis relates to a spread of the original signal spectrum by the modulation (hence the name "spread spectrum"). The approach is also efficient as sensing matrices with fast matrix multiplication algorithms can be used, in particular in the case of Fourier measurements. Secondly, these results are confirmed by a numerical analysis of the phase transition of the l1-minimization problem. Finally, we show that the spread spectrum technique remains effective in an analog setting with chirp modulation for application to realistic Fourier imaging. We illustrate these findings in the context of radio interferometry and magnetic resonance imaging.
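The measurement operator described above can be sketched in a few lines, under the assumption of a random ±1 (chip) modulation and a randomly subsampled orthonormal Fourier sensing basis; the signal length and measurement count are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 256, 64                         # signal length, number of measurements

# Spread spectrum sensing: pre-modulate the signal by a random +/-1 "chip"
# sequence (spreading its spectrum), then keep m randomly chosen coefficients
# in an orthonormal Fourier basis.
chips = rng.choice([-1.0, 1.0], size=N)
rows = rng.choice(N, size=m, replace=False)

def measure(x):
    return np.fft.fft(chips * x, norm="ortho")[rows]

x = rng.standard_normal(N)
y = measure(x)
print(y.shape)
```

Note the efficiency point from the abstract: the operator costs one elementwise product plus an FFT, so no dense sensing matrix is ever formed.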

In the traditional framework of compressive sensing (CS), only a sparsity prior on the signal in the time or frequency domain is adopted to guarantee exact inverse recovery. Beyond the sparsity prior, structures on the sparsity pattern of the signal have also been used as an additional prior, called model-based compressive sensing, such as clustered structure and tree structure on wavelet coefficients. In this paper, cluster-structured sparse signals are investigated. Under the framework of Bayesian compressive sensing, a hierarchical Bayesian model is employed to model both the sparsity prior and the cluster prior, and Markov Chain Monte Carlo (MCMC) sampling is implemented for the inference. Unlike the state-of-the-art algorithms that also take the cluster prior into account, the proposed algorithm solves the inverse problem automatically: prior information on the number of clusters and the size of each cluster is not required. The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms.

On the Relation between Block Diagonal Matrices and Compressive Toeplitz Matrices by Han Lun Yap and Christopher J. Rozell. The abstract reads:
In a typical communications problem, Toeplitz matrices Φ arise when modeling the task of determining an unknown impulse response a from a given probe signal φ. When a is sparse, then whenever the matrix Φ formed from the probe signal φ satisfies the Restricted Isometry Property (RIP), a can be robustly recovered from its measurements via ℓ1-minimization. In this paper, we derive the RIP for compressive Toeplitz matrices whose number of rows J is much smaller than the number of columns N. We show that J should scale like J ∼ S^2 log(N), where S is the sparsity of the impulse response. While this is marginally worse than the state-of-the-art scaling currently achieved in the literature, the novelty of this work comes from relating the Toeplitz matrix of interest to a block diagonal matrix. The proof of the RIP then follows from recent results on concentration of measure inequalities for block diagonal matrices, together with a standard covering-and-counting argument.
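A quick way to see the objects involved: build a partial Toeplitz matrix from a random probe signal, with the number of rows following the J ∼ S^2 log(N) scaling from the abstract. The probe distribution and the problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, S = 128, 4                            # impulse-response length and sparsity
J = int(np.ceil(S**2 * np.log(N)))       # rows per the J ~ S^2 log(N) scaling

# Partial Toeplitz matrix generated by a random probe signal phi:
# each row is a shifted window of phi, so y = Phi @ a is a segment of the
# convolution of the probe with the unknown sparse impulse response a.
phi = rng.standard_normal(N + J - 1)
Phi = np.array([phi[i:i + N][::-1] for i in range(J)])

a = np.zeros(N)
a[rng.choice(N, size=S, replace=False)] = 1.0
y = Phi @ a                              # J compressive measurements, J << N
print(Phi.shape)
```

Each entry of Phi depends only on the row-minus-column index, which is exactly the Toeplitz structure the paper relates to block diagonal matrices.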


We study the problem of obtaining efficient, deterministic, black-box polynomial identity testing algorithms for depth-3 set-multilinear circuits (over arbitrary fields). This class of circuits has an efficient, deterministic, white-box polynomial identity testing algorithm (due to Raz and Shpilka [RS05]), but has no known such black-box algorithm. We recast this problem as a question of finding a low-dimensional subspace H, spanned by rank 1 tensors, such that any nonzero tensor in the dual space ker(H) has high rank. We obtain explicit constructions of essentially optimal-size hitting sets for tensors of degree 2 (matrices), and obtain quasi-polynomial sized hitting sets for arbitrary tensors (but this second hitting set is less explicit). We also show connections to the task of performing low-rank recovery of matrices, which is studied in the field of compressed sensing. Low-rank recovery asks (say, over R) to recover a matrix M from few measurements, under the promise that M is rank r. In this work, we restrict our attention to recovering matrices that are exactly rank r using deterministic, non-adaptive, linear measurements that are free from noise. Over R, we provide a set (of size 4nr) of such measurements, from which M can be recovered in O(rn^2 + r^3 n) field operations, and the number of measurements is essentially optimal. Further, the measurements can be taken to be all rank-1 matrices, or all sparse matrices. To the best of our knowledge no explicit constructions with those properties were known prior to this work. We also give a more formal connection between low-rank recovery and the task of sparse (vector) recovery: any sparse-recovery algorithm that exactly recovers vectors of length n and sparsity 2r, using m non-adaptive measurements, yields a low-rank recovery scheme for exactly recovering n × n matrices of rank r, making 2nm non-adaptive measurements.
Furthermore, if the sparse-recovery algorithm runs in time T, then the low-rank recovery algorithm runs in time O(rn^2 + nT). We obtain this reduction using linear-algebraic techniques, and not using convex optimization, which is more commonly seen in compressed sensing algorithms. Finally, we also make a connection to rank-metric codes, as studied in coding theory. These are codes with codewords consisting of matrices (or tensors), where the distance of matrices A and B is rank(A - B), as opposed to the usual Hamming metric. We obtain essentially optimal-rate codes over matrices, and provide an efficient decoding algorithm. We obtain codes over tensors as well, with poorer rate, but still with efficient decoding.

COMBINED COMPRESSED SENSING AND PARALLEL MRI COMPARED FOR UNIFORM AND RANDOM CARTESIAN UNDERSAMPLING OF K-SPACE by Daniel S. Weller, Jonathan R. Polimeni, Leo Grady, Lawrence L. Wald, Elfar Adalsteinsson, Vivek K. Goyal. The abstract reads:
Both compressed sensing (CS) and parallel imaging effectively reconstruct magnetic resonance images from undersampled data. Combining both methods enables imaging with greater undersampling than accomplished previously. This paper investigates the choice of a suitable sampling pattern to accommodate both CS and parallel imaging. A combined method named SpRING is described and extended to handle random undersampling, and both GRAPPA and SpRING are evaluated for uniform and random undersampling using both simulated and real data. For the simulated data, when the undersampling factor is large, SpRING performs better with random undersampling. However, random undersampling is not as beneficial to SpRING for real data with approximate sparsity.

Compressive Sensing for Gauss-Gauss Detection by J. Derek Tucker and Nick Klausner. The abstract reads:
The recently introduced theory of compressed sensing (CS) enables the reconstruction of sparse signals from a small set of linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist rate samples. However, despite the intense focus on the reconstruction of signals, many signal processing problems do not require a full reconstruction of the signal and little attention has been paid to doing inference in the CS domain. In this paper we show the performance of CS for the problem of signal detection using Gauss-Gauss detection. We investigate how the J-divergence and Fisher Discriminant are affected when used in the CS domain. In particular, we demonstrate how to perform detection given the measurements without ever reconstructing the signals themselves and provide theoretical bounds on the performance. A numerical example is provided to demonstrate the effectiveness of CS under Gauss-Gauss detection.
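The point that detection can proceed directly on compressed measurements can be sketched as follows: under either hypothesis the data remain Gaussian after a linear projection A, so the J-divergence between the two hypotheses is computable in the CS domain without reconstruction, and by the data processing inequality it can only decrease relative to the uncompressed problem. The means, covariances, and projection below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def j_divergence(m0, R0, m1, R1):
    """J-divergence (symmetrized KL) between Gaussians N(m0, R0) and N(m1, R1);
    the log-determinant terms of the two KL directions cancel."""
    d = m1 - m0
    R0i, R1i = np.linalg.inv(R0), np.linalg.inv(R1)
    return 0.5 * (np.trace(R1i @ R0 + R0i @ R1) - 2 * len(m0)
                  + d @ (R0i + R1i) @ d)

rng = np.random.default_rng(4)
n, m = 20, 8
mu0, mu1 = np.zeros(n), np.ones(n)
R0, R1 = np.eye(n), 2.0 * np.eye(n)

# Compress with a random Gaussian matrix; both hypotheses stay Gaussian,
# so the Gauss-Gauss detector works on y = Ax directly.
A = rng.standard_normal((m, n)) / np.sqrt(m)
J_full = j_divergence(mu0, R0, mu1, R1)
J_cs = j_divergence(A @ mu0, A @ R0 @ A.T, A @ mu1, A @ R1 @ A.T)
print(J_cs, J_full)
```

Comparing J_cs with J_full quantifies how much detectability is traded for the reduced measurement count, which is the kind of bound the paper studies.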

A large portion of work on compressive sampling and sensing has focused on reconstructions from a given measurement set. When the individual samples are expensive and optional, as is the case with autonomous agents operating in a physical domain and under specific energy limits, the CS problem takes on a new aspect because the projection is column-sparse, and the number of samples is not necessarily large. As a result, random sampling may no longer be the best tactic. The underlying incoherence properties in l0 reconstruction, however, can still motivate the purposeful design of samples in planning for CS with one or more agents; we develop here a greedy and computationally tractable sampling rule that improves errors relative to random points. Several example cases illustrate that the approach is effective and robust.


Medical imaging is an important element in the medical diagnostic process. However, storing medical images in a certain format is quite challenging. The size of a digital medical image is in the range of 3–32 MB per examination, depending on the imaging modality. Image compression will be a suitable solution when it comes to minimizing the size of stored medical images. The quest to find a compression method that has very good quality image reconstruction and a high compression ratio is limited by the Shannon/Nyquist sampling theorem and image entropy. However, in recent years, compressive sampling (CS) has arisen as an alternative sampling method. It guarantees the reconstruction of a high-quality signal from a small number of samples, and can be carried out if the signal is sparse enough. A set of trained bases grouped as a dictionary will guarantee the sparsest representation of the signal. This paper proposes a low bit-rate medical image compression scheme based on CS and an overcomplete bases set to represent the medical images as sparsely as possible.

We address in this paper the feasibility of the recent compressive sensing (CS) technique for raw RF signal reconstruction in medical ultrasound. Successful CS reconstruction implies selecting a basis where the signal to be recovered has a sparse expansion. RF signals represent a specific challenge, because their oscillatory nature does not easily lend itself to a sparse representation. In that perspective, we propose to use the recently introduced directional wave atoms [1], which show good properties for sparsely representing warped oscillatory patterns. Experiments were performed on simulated RF data from a cyst numerical phantom. CS reconstruction was applied to subsampled RF data, by removing 50% to 90% of the original samples. Reconstruction using wave atoms was compared to reconstructions obtained from Fourier and Daubechies wavelet bases. The obtained accuracies were in the ranges [0.4-4.0]×10^-3, [0.2-2.8]×10^-3 and [0.1-1.7]×10^-3 using wavelets, Fourier and wave atoms respectively, showing the superiority of the wave atoms representation and the feasibility of CS for the reconstruction of US RF data.


Three posters:



and a job at Politecnico di Torino.


Two papers behind a paywall:

Estimation of missing RTTs in computer networks: Matrix completion vs compressed sensing by Ziqian Dong, Santhanakrishnan Anand, Rajarathnam Chandramouli. The abstract reads:
We estimate the missing round trip time (RTT) measurements in computer networks using doubly non-negative (DN) matrix completion and compressed sensing. The major contributions of this paper are the following: (i) an iterative DN matrix completion that minimizes the mean square estimation error; (ii) mathematical conditions for the convergence of the algorithm; (iii) a systematic and detailed experimental comparison of DN matrix completion and compressed sensing for estimating missing RTTs in computer networks. To our knowledge, this is the first work that compares the pros and cons of compressed sensing and DN matrix completion for RTT estimation using actual Internet measurement data. Results indicate that compressed sensing provides better estimation in networks with sporadic missing values, while DN matrix completion is more suitable for estimation in networks that miss blocks of measurements. Our proposed DN matrix completion method is one of the first matrix completion approaches that minimizes the estimation error.
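As a generic illustration of the matrix completion side of this comparison (not the authors' doubly non-negative method), here is a naive alternating-projection ("hard impute") completion of a synthetic low-rank RTT-like matrix: alternately project onto rank-r matrices and re-impose the observed entries. Sizes, sampling rate, and iteration count are assumptions.

```python
import numpy as np

def complete(M_obs, mask, r, n_iter=500):
    """Alternating projections ("hard impute"): project onto rank-r matrices
    via a truncated SVD, then re-impose the observed entries, and repeat."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation
        X[mask] = M_obs[mask]                # keep the measured entries fixed
    return X

rng = np.random.default_rng(5)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank "RTT" matrix
mask = rng.random((n, n)) < 0.6                                # ~60% entries observed
X = complete(M, mask, r)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(err)
```

This illustrates why matrix completion suits block-missing RTT data: the rank constraint couples every entry, so unobserved blocks are filled in from the row and column structure of the observed ones.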

Compressive Sampling with Prior Information in Remotely Detected MRI of Microfluidic Devices by Thomas Z. Teisseyre, Jeffrey L. Paulsen, Vikram S. Bajaj, Nicholas W. Halpern-Manners and Alexander Pines. The abstract reads:
The design and operation of microfluidic analytical devices depends critically on tools to probe microscale chemistry and flow dynamics. Magnetic Resonance Imaging (MRI) seems ideally suited to this task, but its sensitivity is compromised because the fluid-containing channels in “lab on a chip” devices occupy only a small fraction of the enclosing detector’s volume; as a result, the few microfluidic applications of NMR have required custom-designed chips harboring many detectors at specific points of interest. To overcome this limitation, we have developed remotely detected microfluidic MRI, in which an MR image is stored in the phase and intensity of each analyte’s NMR signal and sensitively detected by a single, volume-matched detector at the device outflow, and combined it with compressed sensing for rapid image acquisition. Here, we build upon our previous work and introduce a method that incorporates our prior knowledge of the microfluidic device geometry to further decrease acquisition times. We demonstrate its use in multidimensional velocimetric imaging of a microfluidic mixer, acquiring microscopically detailed images 128 times faster than is possible with conventional sampling. This prior information also informs our choice of sampling schedule, resulting in a scheme that is optimized for a specific flow geometry. Finally, we test our approach in synthetic data and explore potential reconstruction errors as a function of optimization and reconstruction parameters.



Talks:

Polarization methods in coding and compressed sensing by Emmanuel Abbe
Friday 11 November 2011, 11:00-12:00

The polarization method has recently emerged in coding theory as a new powerful tool. The codes resulting from this method are the first codes to provably achieve the capacity of discrete memoryless channels with an explicit low-complexity construction. We review in this talk the basic idea behind polarization and its applications to channel and source coding. We then discuss a new polarization result for analogue sources, and show how this framework can be used to construct efficient sensing matrices for bio-medical signals.

Meanwhile, at UIUC:

Signal Processing Seminar
Title: Semi-Quantitative Group Testing with Applications in Genotyping
Speaker: Amin Emad, Ph.D. Student, UIUC
Date: Wednesday, November 2, 2011
Time: 4:00 - 5:00 pm
Location: 4269 Beckman Institute
Abstract: Genotyping is the process of identifying variations in the genetic makeup (genotype) of individuals of a large population. One of its most important applications is to determine the gene alleles that cause genetic diseases in humans. Until very recently, the DNA sequencers used to find these "defective" alleles only allowed serial processing of a specific region of the genome. Recently, new developments of the "next-generation sequencing technologies" allow for parallel processing of these regions; however, in order to make these technologies (cost) efficient, one needs to take advantage of ideas from compressed sensing and group testing.

Group testing is a method of finding "defectives" in large populations by testing subsets of the population. In this work, we address the issues in applying group testing to the new genotyping paradigm and introduce new results as the first stepping stone to address these issues. We describe a generalization of the group testing problem termed symmetric group testing. Unlike in classical binary group testing, the roles played by the input symbols zero and one are "symmetric" while the outputs are drawn from a ternary alphabet. Using an information-theoretic approach, we derive sufficient and necessary conditions for the number of tests required for noise-free and noisy reconstructions. Furthermore, we extend the notion of disjunct (zero-false-drop) and separable (uniquely decipherable) codes to the case of symmetric group testing. For the new family of codes, we derive bounds on their size based on probabilistic methods, and provide construction methods based on coding theoretic ideas.




Special Issue of Algorithmica on "Algorithms and Data Structures for Selection, Identification and Encoding"

CALL FOR PAPERS

Manuscripts are solicited for a special issue of the international journal Algorithmica on "Algorithms and Data Structures for Selection, Identification and Encoding - Group Testing, Compressed Sensing, Multi-Access Communication and More".
Theoretical and practical studies in group testing, compressed sensing and in a broader sense
combinatorial identification are attracting increasing interest in the areas of Theoretical Computer Science, Computational Biology, Combinatorial Optimization, Multi-access Communications and Distributed Computing.
With this Special Issue, we wish to foster research in group testing and compressed sensing, and their many different facets, by exposing new results and directions for further research, and to contribute to the development of further scientific links with this flourishing area of investigation. Papers presenting uses of group testing and selection primitives in pattern matching, data structures for static membership, communication protocols, cryptographic protocols, streaming computation, bioinformatics and computational biology, and compressed sensing are welcome, as well as papers focusing on theoretical aspects of combinatorial structures for identification and coding, such as randomness extractors, superimposed codes, list-decodable codes, and selectors.
Submissions must be received by December 31, 2011. To ensure that all manuscripts are correctly identified for inclusion in the special issue, it is important that authors select "SI: Group Testing & Compressed Sensing" when they reach that step in the submission process.
Papers will be refereed according to the standards of Algorithmica. We will do our best to finish the refereeing procedure in a timely manner.
Please contact the guest editors for additional information.
The Guest Editors
Ferdinando Cicalese and Ely Porat
Ferdinando Cicalese,
Department of Computer Science
University of Salerno, Italy
cicalese@dia.unisa.it

Ely Porat,
Department of Computer Science
Bar Ilan University, Israel
porately@gmail.com




 

Image Credit: NASA/JPL/Space Science Institute, W00070712.jpg was taken on November 06, 2011 and received on Earth November 06, 2011. The camera was pointing toward DIONE at approximately 135,145 kilometers away, and the image was taken using the CL1 and RED filters. This image has not been validated or calibrated.
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
