Tuesday, September 07, 2010

CS: The long entry of the week

[Dear Boing Boing readers: Nuit Blanche does not fulfill the Physics.org criteria, as those criteria ask for a "non-specialist audience". The audience of this blog is made up of nothing but specialists. The fact that compressed sensing is at the heart of a potential revolution in sensors is of interest to a larger audience, but those people are specialists nonetheless. Sorry! And yes, we occasionally have fluff stories here, here or here.]

Today we have an exciting and long entry made up of the different tidbits orbiting the compressed sensing world. Let us first check the blogs:

How does the Johnson-Lindenstrauss Lemma imply the Restricted Isometry Property? Well, you need to read the interesting New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property by Felix Krahmer and Rachel Ward. The abstract reads:
Consider a set E of O(e^k) points in R^N, and an m by N matrix Phi with the Restricted Isometry Property of order k and level delta, that is, (1 - delta) || x ||_2^2 <= || Phi x ||_2^2 <= (1 + delta) || x ||_2^2 for all k-sparse x in R^N. We show that with high probability, such a matrix with randomized column signs embeds E into R^m without distorting the distance between any two points by more than a factor of 1 +- epsilon, with epsilon = 4delta. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. Our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices, optimizing the dependence on the distortion epsilon. In particular, for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(epsilon^(-4) log(p) log^4(N)) appearing in Ailon and Liberty to m = O(epsilon^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also yield new Johnson-Lindenstrauss bounds for several deterministic matrices with randomized column signs.
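The construction in the abstract is easy to simulate. Here is a hedged numerical sketch (the sizes, the seed, and the choice of a Gaussian matrix for Phi are my own illustrative assumptions, not from the paper): take a matrix that satisfies the RIP with high probability, randomize its column signs, and check the pairwise distance distortion on a small point set.

```python
# Sketch of the Krahmer-Ward construction: an RIP matrix (here a dense
# Gaussian matrix, which has the RIP with high probability) with randomized
# column signs used as a Johnson-Lindenstrauss embedding.
import numpy as np

rng = np.random.default_rng(0)
N, m, n_points = 512, 128, 20

# RIP matrix (whp): i.i.d. Gaussian, scaled so E ||Phi x||^2 = ||x||^2.
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
# Randomized column signs -- the key ingredient of the theorem.
D = np.diag(rng.choice([-1.0, 1.0], size=N))
A = Phi @ D

X = rng.standard_normal((n_points, N))          # arbitrary point set E
worst = 0.0
for i in range(n_points):
    for j in range(i + 1, n_points):
        d_orig = np.linalg.norm(X[i] - X[j])
        d_emb = np.linalg.norm(A @ (X[i] - X[j]))
        worst = max(worst, abs(d_emb / d_orig - 1.0))
print(f"worst relative distance distortion: {worst:.3f}")
```

For these toy sizes the worst pairwise distortion comes out well below 1, consistent with the 1 +- epsilon guarantee in the abstract.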

Compressed sensing is a recently developing area which is interested in reconstruction of sparse signals acquired in reduced dimensions. Acquiring the data with a small number of samples makes the reconstruction problem under-determined. The required solution is the one with minimum $l_0$ norm due to sparsity, however it is not practical to solve the $l_0$ minimization problem. Some methods, such as Basis Pursuit (BP) propose casting the problem as an $l_1$ minimization. Greedy pursuit algorithms, such as Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP), perform a greedy search among the vectors in the basis with the goal of stagewise constrained minimization of the residual error. This manuscript proposes a new semi-greedy algorithm which employs a best-first search technique, the A* search. This approach searches for the solution on several paths of a search tree, where the paths are evaluated and extended according to some cost function, which should be carefully selected to compensate for paths with different lengths. Each path on the tree grows similar to the Orthogonal Matching Pursuit algorithm, hence this new approach is called A* Orthogonal Matching Pursuit (A*OMP). In this manuscript, A*OMP algorithm is introduced and three different cost functions are discussed. We show that novel dynamic auxiliary cost functions provide improved results as compared to a conventional choice. Finally, we provide reconstruction results on both synthetically generated data and real images which show that the proposed scheme outperforms well-known CS reconstruction methods, BP, SP and OMP.
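For readers who have not seen it spelled out, each path of the A*OMP tree grows like plain OMP. A minimal numpy sketch of that greedy baseline (the sizes and seed are my own illustrative assumptions, not from the manuscript) looks like this:

```python
# Minimal Orthogonal Matching Pursuit (OMP): greedily pick the column most
# correlated with the residual, then re-solve least squares on the support.
import numpy as np

def omp(A, y, k):
    """Select k columns of A greedily, orthogonalizing at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Orthogonal step: project y onto the span of the chosen columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, N, k = 80, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

A*OMP keeps several such partial supports alive at once and extends whichever path its cost function currently ranks best, instead of committing to a single greedy path.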

The group Lasso is an extension of the Lasso for feature selection on (predefined) non-overlapping groups of features. The non-overlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation, where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. We have performed empirical evaluations using the breast cancer gene expression data set, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results demonstrate the efficiency and effectiveness of the proposed algorithm.
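For context: when the groups do not overlap, the proximal operator in question has a simple closed form, block soft-thresholding. The hard case solved in the paper is the overlapping one, for which no closed form exists and the smooth convex dual must be solved instead. A sketch of the easy non-overlapping base case (my own illustrative code, not the authors'):

```python
# Proximal operator of the (non-overlapping) group Lasso penalty
# lam * sum_g ||x_g||_2: shrink each group's norm, zeroing small groups.
import numpy as np

def prox_group_lasso(v, groups, lam):
    """prox_{lam * sum_g ||x_g||_2}(v) for a partition `groups` of indices."""
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]   # shrink the whole block
        # else: the whole group is set to zero
    return x

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]                      # two disjoint groups
print(prox_group_lasso(v, groups, lam=1.0))    # second group is zeroed out
```

With overlaps, a coordinate belongs to several groups at once and the per-group shrinkage steps interact, which is exactly why the paper resorts to the dual problem.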

From the same group, here is a tutorial on Mining Sparse Representations: Formulations, Algorithms, and Applications. Jun Liu and Jieping Ye are also the originators of the SLEP package, which should be in the reconstruction section of the Big Picture. As you probably recall:
The SLEP (Sparse Learning with Efficient Projections) package provides a set of programs for sparse learning:
  •  ℓ1-regularized (constrained) sparse learning
  •  ℓ1/ℓq-regularized sparse learning (q>1)
  •  Fused Lasso
  •  Sparse inverse covariance estimation
  •  Trace norm regularized learning
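The "Efficient Projections" in SLEP's name refers to primitives such as the Euclidean projection onto the ℓ1 ball. Here is a standard O(n log n) sketch of that projection (not necessarily SLEP's exact implementation):

```python
# Euclidean projection onto the l1 ball via sorting: find the soft-threshold
# theta such that the shrunken vector has l1 norm exactly `radius`.
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()                        # already inside the ball
    u = np.sort(np.abs(v))[::-1]               # sorted magnitudes, descending
    css = np.cumsum(u)
    # Largest index where the running threshold stays below the magnitudes.
    rho = np.nonzero(u - (css - radius) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

p = project_l1_ball(np.array([0.8, -0.6, 0.1]), radius=1.0)
print(p, np.abs(p).sum())
```

Projections like this are the inner workhorse of projected-gradient methods for the constrained formulations in the list above.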

I also found: Compressive Coded Aperture Computed Tomography — Part I: Coded Aperture Snapshot Spectral Imager (CASSI) by Kerkil Choi, Ryoichi Horisaki, Ashwin Wagadarikar, and David Brady. The abstract reads:
Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most X-ray transform based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. Multiplexing of coded measurements allows for compressive sensing (CS). Using CS, an object sparse on some basis may be accurately described by fewer measurements than predicted by Nyquist sampling. This paper investigates the role of coded apertures in X-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. We construct a unified analysis using RST and CASSI measurement models. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical compressive measurements by “regularizing” the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.

Compressive Coded Aperture Computed Tomography — Part II: Compressive X-ray Tomography (CXT) by Kerkil Choi, Ryoichi Horisaki, Ashwin Wagadarikar, and David Brady. The abstract reads:
In the companion paper (Part I), we propose an analysis model for X-ray transform measurement systems. In particular, the model is applied to two compressive measurement systems that our group has recently developed: coded aperture snapshot spectral imagers and reference structure tomography. These modalities exploit the concepts of coding and multiplexing to create compressive sensing (CS) measurements. This second part of our paper extends the idea of coding and multiplexing to computed X-ray tomography to propose a novel compressive X-ray tomography. We also propose a method of creating synthetic CS measurements that fall into the category of incoherent measurements in CS theory. These synthetic measurements combined with the proposed analysis model provide a means for evaluating and investigating the performance of CXT as a compressive sensing system. Various numerical studies illustrate the potential performance of our proposed system and suggest useful criteria for designing physical systems.
and (no link to the paper, just the abstract): Sparse aperture coding for compressive sampling by David Brady, Alexander Mrozack, and Kerkil Choi. The abstract reads:
For many years, the basic goal of sparse aperture design has been to maximize the support of the modulation transfer function (MTF). Golay apertures and related nonredundant arrays are typically used to achieve this objective. Unfortunately, maximizing the support of the MTF has the necessary effect of decreasing the magnitude of the MTF at mid-band spatial frequencies. Fienup has shown that the decreased magnitude of the MTF for nonredundant arrays contributes as much as reduced throughput to the loss of SNR in sparse apertures relative to full aperture systems. This paper considers the use of periodic sparse arrays to improve the mid-band MTF at the cost of reduced spatial frequency coverage. We further consider methods to recover lost spatial frequencies using multispectral and multiframe sampling and decompressive inference.

The following poster proposes a different algorithm than the one based on compressed sensing/ℓ1 reconstruction, which it qualifies as the state of the art in this area. However, it uses an OMP greedy algorithm!
We propose a face recognition approach based on hashing. The approach yields comparable recognition rates with the random ℓ1 approach [5], which is considered the state of the art. But our method is much faster: it is up to 150 times faster than [5] on the YaleB dataset. We show that with hashing, the sparse representation can be recovered with a high probability because hashing preserves the restricted isometry property. Moreover, we present a theoretical analysis on the recognition rate of the proposed hashing approach. Experiments show a very competitive recognition rate and significant speedup compared with the state of the art.

In recent years, Compressive Sampling (CS), a new research topic in signal processing, has piqued interest of a wide range of researchers in different fields. In this paper, we present a sub-Nyquist Audio Fingerprinting (AF) system for song identification, which utilizes CS theory to generate a compact signature, and to achieve significant reduction of the dimensionality of the input signal. The presented experimental results demonstrate that by using the CS-based AF system, which downsamples to 30%, the average accuracy is 93.43% under various distorted environments, compared to Nyquist sampling methods. The advantages of the proposed process lie in the comparable performance under sub-Nyquist sampling rate and more compact audio fingerprint.
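The core idea of such a CS-based fingerprint, a compact signature obtained by random projection of a high-dimensional feature vector, can be sketched as follows (the feature dimension, signature length, distortion level and toy "database" are my own illustrative assumptions, not the paper's system):

```python
# CS-style audio fingerprint sketch: project high-dimensional features onto a
# few random directions, then identify a distorted query by nearest signature.
import numpy as np

rng = np.random.default_rng(3)
n_feat, n_sig = 1024, 64                       # 16x dimensionality reduction
Phi = rng.standard_normal((n_sig, n_feat)) / np.sqrt(n_sig)

def fingerprint(features):
    """Compact signature: a random projection of the feature vector."""
    return Phi @ features

database = rng.standard_normal((100, n_feat))  # 100 toy "songs"
prints = database @ Phi.T                      # precomputed signatures

# A distorted copy of song 42 (additive noise stands in for channel effects).
query = database[42] + 0.1 * rng.standard_normal(n_feat)
match = int(np.argmin(np.linalg.norm(prints - fingerprint(query), axis=1)))
print("matched song:", match)
```

Because random projections approximately preserve distances, the distorted query still lands closest to its own signature despite the much smaller fingerprint.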
At the upcoming NVIDIA GPU Technology Conference in San Diego, I found the following:
2300 - High-Performance Compressive Sensing using Jacket
This talk will present the ongoing work that I am doing in the L1-optimization group at Rice University. The purpose of the work is to merge compressive sensing, for image/signal reconstruction, and GPU computation, using NVIDIA's GPUs to enhance the technology of CS. This talk will cover basic concepts in compressive sensing and the easy adaptation of operating on the GPU, in particular working with Jacket (by AccelerEyes). We will then cover some of our numerical experiments that encompass the use of different flavors of algorithms.
Speaker:
Nabor Reyna
Topics:
Imaging, Tools & Libraries
Time:
Wednesday, September 22nd, 10:30 - 10:50

Humm, Jacket, GPU and compressed sensing: we noted these subjects in one entry a while back. This is good, somebody had the same idea and implemented it, woohoo. The folks making Jacket are talking about it on their blog.

I also caught this:

Over the past few years, compressive sensing, also known as compressive sampling, has made a tremendous impact on signal processing and statistical learning, and has facilitated numerous applications in areas ranging from medical imaging and computational biology to astronomy. Recently, there has been a growing interest in applying the principles of compressive sensing to an even wider range of topics, including those in communications and networking. The basic thesis of compressive sensing—and its powerful appeal for many applications—is that under certain conditions, it suffices to collect only a small number of observations (linear projections of the signal of interest such as image, field, map, etc.) and still be able to efficiently reconstruct the signal in its entirety when the signal admits a sparse representation. This statement has a profound impact on the way in which a communication system can be designed. From compression of data, as necessitated by wireless transmission over band-limited channels, to estimation of channels that are naturally sparse, to active or passive delay, angle, and Doppler estimation of specific targets, to opportunistic spectrum sensing and collection of data in large sensor networks, the principles of compressive sensing can be applied to provide system designs that are more efficient than traditional ones. This special issue is devoted to those areas of communication system and network design where compressive sensing brings new insights, visions and tools to yield effective solutions for the problems of interest. We invite original, high-quality submissions in all such areas, ranging from physical link to system-level design, that offer innovative methodologies, algorithms, and/or theoretical results by appealing to or extending the existing results in the field of compressive sensing.

Manuscript submission: December 15, 2010
Acceptance notification: April 30, 2011
Final manuscript due: June 30, 2011
Publication: 3rd quarter 2011
And finally two theses of relevance to our subject of interest:

At NIST there was an interesting presentation on September 1:

10:30 AM - SIGMA XI / OPTICAL TECHNOLOGY SEMINAR: Hyperspectral Imaging, Compressive Sensing and Entrepreneurship?
The founding CEO of InView Technology Corporation, Bob Bridge, is an experienced entrepreneur, having worked in venture-capital backed technology start-ups since 1985. InView has licensed Compressive Sensing technology from Rice University, and is building high-performance infrared cameras and hyperspectral imagers based upon that technology. In contrast to Bob's earlier ventures, which were venture capital funded, InView is receiving the bulk of its initial financing from various government sources. Bob has raised $51M in venture funding since 2001, including initial funding for three start-up companies. Before InView, Bob took Zilker Labs, a semiconductor start-up providing power management ICs to companies such as Cisco, from a marketing concept in 2002 to successful acquisition by public semiconductor company Intersil in 2008. Bob has also served as an Entrepreneur in Residence at Austin Ventures, vice president of marketing at Agere (a network processor IC startup), and vice president and general manager for communications ICs at Crystal Semiconductor/Cirrus Logic. Additionally, Bob has held engineering and management positions at Motorola, AT&T corporate headquarters and AT&T Bell Labs. Bob holds a Ph.D. in Electrical Engineering from The University of Texas at Austin and a bachelor's degree in Mathematical Sciences from Rice University. ***Dr. Bridges will describe his unique career path as well as the exciting new technology upon which his latest venture is based.*** Compressive Sensing technology allows InView to employ a low-cost sensor architecture to deliver unmatched camera/imager capabilities across the electromagnetic spectrum. For example, InView will offer a 2 mega-pixel Short Wavelength Infrared (SWIR) camera; the highest resolution SWIR available in the marketplace is 0.3 mega-pixels. High resolution images are reconstructed computationally from a set of measurements taken by a single element sensor.
Each of these single-element measurements occurs after light from the entire image has been modulated by a random pattern and then focused onto the sensor. Mathematical processing then constructs the image using both the multiple sensor readings and the knowledge of the unique modulation pattern used to create each sensor reading as inputs to the algorithm.
Dr. Bob Bridges, CEO, InView Technology.
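The single-element measurement scheme described above can be sketched numerically: each sensor reading is the inner product of the scene with a known random pattern, and the image is recovered from the readings and the patterns by sparse (ℓ1) reconstruction, here with plain ISTA on a toy 1-D "image" (all sizes, the zero-mean pattern normalization and the ISTA solver are my own illustrative assumptions, not InView's actual pipeline):

```python
# Single-sensor CS sketch: readings y_i = <pattern_i, image>, followed by
# l1 reconstruction via ISTA (iterative soft-thresholding).
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 100, 50, 4                                # pixels, readings, sparsity
patterns = rng.choice([0.0, 1.0], size=(m, n))      # random on/off masks
A = (patterns - 0.5) * 2.0 / np.sqrt(m)             # zero-mean, normalized

image = np.zeros(n)
image[rng.choice(n, k, replace=False)] = 1.0        # k bright pixels
y = A @ image                                       # single-sensor readings

# ISTA for min_x ||A x - y||^2 / 2 + lam * ||x||_1
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
lam = 0.01
for _ in range(3000):
    z = x - step * A.T @ (A @ x - y)                # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage

print("reconstruction error:", np.linalg.norm(x - image))
```

Note that only m = n/2 readings are taken, yet the sparse scene is recovered: this is the trade that lets a single detector element stand in for a high-resolution focal plane array.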
Photo: Map of the last 100 readers who came to the new page on CS courses yesterday.

2 comments:

Piotr Indyk said...

Hi Igor,

A quick correction: the paper

New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property by Felix Krahmer and Rachel Ward.

shows that *RIP implies JL* (after randomizing the matrices). The opposite direction is also true, but it has been known for a while now.

Igor said...

Thanks Piotr.
