Tuesday, September 30, 2008

CS: Manifold Based Signal Processing, Tampering CS/DCS, SRIP, Incoherent Systems, Learning sparse doubly-selective channels and some Seminars

Today, we have some new papers, from both the Nuit Blanche webcrawler and the Rice Resource Page, and some seminars, mostly on the West Coast.

First, Michael Wakin has just released a paper on the ever-fascinating subject of manifold signal processing: Manifold-Based Signal Recovery and Parameter Estimation from Compressive Measurements. The abstract reads:
A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in ℓ2 for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
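Recovery in this manifold setting is conceptually simple: search the manifold for the point whose compressive projection best matches the measurements. Here is a toy numpy sketch of that idea (my own illustration, not the paper's algorithm; the Gaussian-pulse family, the sizes and the grid search are all assumptions):

```python
import numpy as np

# Toy 1-D manifold: Gaussian pulses f(theta) in R^n parameterized by their
# center theta. We observe m << n random projections and estimate theta by
# a direct search over the manifold, minimizing ||y - Phi f(theta)||_2.
rng = np.random.default_rng(0)
n, m = 256, 20
t = np.linspace(0, 1, n)

def pulse(theta, width=0.03):
    """A point on the manifold: a Gaussian bump centered at theta."""
    return np.exp(-(t - theta) ** 2 / (2 * width ** 2))

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
theta_true = 0.37
y = Phi @ pulse(theta_true) + 0.01 * rng.standard_normal(m)  # noisy measurements

# Grid search over the parameter; instance-optimal bounds of the kind the
# paper proves say the minimizer stays close to theta_true even though m << n.
grid = np.linspace(0, 1, 2001)
theta_hat = grid[int(np.argmin([np.linalg.norm(y - Phi @ pulse(th)) for th in grid]))]
print(f"true theta = {theta_true}, estimate = {theta_hat:.4f}")
```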

Also found on the internets, a poster entitled Detection and Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques by G. Prandi, Giuseppe Valenzise, Marco Tagliasacchi, Augusto Sarti. The abstract reads:
In most practical applications, for the sake of information integrity not only it is useful to detect whether a multimedia content has been modified or not, but also to identify which kind of attack has been carried out. In the case of audio streams, for example, it may be useful to localize the tamper in the time and/or frequency domain. In this paper we devise a hash-based tampering detection and localization system exploiting compressive sensing principles. The multimedia content provider produces a small hash signature using a limited number of random projections of a time-frequency representation of the original audio stream. At the content user side, the hash signature is used to estimate the distortion between the original and the received stream and, provided that the tamper is sufficiently sparse or sparsifiable in some orthonormal basis expansion or redundant dictionary (e.g. DCT or wavelet), to identify the time-frequency portion of the stream that has been manipulated. In order to keep the hash length small, the algorithm exploits distributed source coding techniques.
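The mechanics of the hash are easy to emulate. Below is a back-of-the-envelope numpy sketch (mine, with made-up sizes; the real system hashes a time-frequency representation and compresses the projections further with distributed source coding, which I skip entirely):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4096, 64                        # t-f coefficients, hash length
S = rng.standard_normal(n)             # stand-in for the original time-frequency map
tamper = np.zeros(n)
tamper[rng.choice(n, 5, replace=False)] = 3.0
S_recv = S + tamper                    # received stream with a sparse tamper

Phi = rng.standard_normal((p, n)) / np.sqrt(p)
hash_orig = Phi @ S                    # tiny signature shipped with the content
hash_recv = Phi @ S_recv               # recomputed at the content-user side

# Random projections nearly preserve distances (Johnson-Lindenstrauss), so the
# hash difference estimates the true distortion...
print(np.linalg.norm(hash_orig - hash_recv), np.linalg.norm(S - S_recv))
# ...and since the tamper is sparse, it can in principle be localized by running
# any CS decoder on Phi @ (S_recv - S) = hash_recv - hash_orig.
```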




Shamgar Gurevich and Ronny Hadani introduce us to Incoherent dictionaries and the statistical restricted isometry property. The abstract reads:

In this paper we formulate and prove a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around λ = 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp like functions.
Then Jessica Nelson and Vladimir Temlyakov released On the size of incoherent systems. The abstract reads:

This paper concerns special redundant systems, namely, incoherent systems or systems with small coherence parameter. Simple greedy-type algorithms perform well on these systems. For example, known results show that the smaller the coherence parameter, the better the performance of the Orthogonal Greedy Algorithm. These systems with small coherence parameter are also useful in the construction of compressed sensing matrices. Therefore, it is very desirable to build dictionaries with small coherence parameter.
We discuss the following problem for both Rn and Cn: How large can a system with coherence parameter not exceeding a fixed number μ be? We obtain upper and lower bounds for the maximal cardinality of such systems. Although the results herein are either known or simple corollaries of known results, our objective is to demonstrate how fundamental results from different areas of mathematics–linear algebra, probability, and number theory–collaborate on this important approximation theoretic problem.
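For readers who want to experiment, the coherence parameter μ is a two-liner to compute; the sketch below (my own, with arbitrary sizes) also evaluates the classical Welch lower bound, which any n × N system must obey:

```python
import numpy as np

def coherence(D):
    """Largest absolute inner product between distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
n, N = 64, 256
print("random dictionary:", coherence(rng.standard_normal((n, N))))
print("Welch lower bound:", np.sqrt((N - n) / (n * (N - 1))))
```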

Waheed Bajwa, Akbar M. Sayeed, and Robert Nowak also released Learning sparse doubly-selective channels. The abstract reads:
Coherent data communication over doubly-selective channels requires that the channel response be known at the receiver. Training-based schemes, which involve probing of the channel with known signaling waveforms and processing of the corresponding channel output to estimate the channel parameters, are commonly employed to learn the channel response in practice. Conventional training-based methods, often comprising of linear least squares channel estimators, are known to be optimal under the assumption of rich multipath channels. Numerous measurement campaigns have shown, however, that physical multipath channels tend to exhibit a sparse structure at high signal space dimension (time-bandwidth product), and can be characterized with significantly fewer parameters compared to the maximum number dictated by the delay-Doppler spread of the channel. In this paper, it is established that traditional training-based channel learning techniques are ill-suited to fully exploiting the inherent low-dimensionality of sparse channels. In contrast, key ideas from the emerging theory of compressed sensing are leveraged to propose sparse channel learning methods for both single-carrier and multicarrier probing waveforms that employ reconstruction algorithms based on convex/linear programming. In particular, it is shown that the performance of the proposed schemes come within a logarithmic factor of that of an ideal channel estimator, leading to significant reductions in the training energy and the loss in spectral efficiency associated with conventional training-based methods.
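To get a feel for the sparse channel learning setting, here is a hedged toy sketch: a k-sparse (purely delay-domain, no Doppler) impulse response probed by a random binary sequence and recovered greedily. Orthogonal Matching Pursuit stands in for the convex/linear programming reconstructions of the paper, and all the sizes are made up:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)
L, m, k = 128, 40, 4                 # delay taps, observed output samples, sparsity
h = np.zeros(L)
h[rng.choice(L, k, replace=False)] = rng.standard_normal(k)

# Probe the channel with a random +/-1 sequence; circular convolution keeps the
# sketch simple. Keeping only m < L output samples gives a compressive training
# matrix A (a randomly subsampled circulant).
p = rng.choice([-1.0, 1.0], size=L)
A = circulant(p)[rng.choice(L, m, replace=False), :] / np.sqrt(m)
y = A @ h + 0.01 * rng.standard_normal(m)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the most correlated column,
    then re-fit by least squares on the current support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

print("tap estimation error:", np.linalg.norm(omp(A, y, k) - h))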

Matthias Seeger produced a talk on Large Scale Approximate Inference and Experimental Design for Sparse Linear Models. The video of the talk can be found here. The attendant technical report, by Matthias Seeger and Hannes Nickisch, is entitled Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models. The abstract reads:
We show that a commonly used variational relaxation of Bayesian inference for continuous variable models, based on Gaussian-form lower bounds, is a convex optimization problem, whenever the search for the posterior mode is convex (sparse linear model, logistic regression, etc.). We provide novel scalable algorithms for this relaxation, which run orders of magnitude faster than previous methods, and can be used to estimate posterior covariance matrices over full images. Our difference of convex algorithms involve a crucial decoupling step, and require the estimation of Gaussian marginal variances, which is done by the Lanczos algorithm. We show how to implement Bayesian experimental design for large scale generalized linear models with sparsity potentials, which can be used to optimize measurement architectures for whole images. Ours is the first method to tractably learn compressed sensing on entire natural images.

And then we have several seminars listed in the CS Calendar:

Applied Math Seminar at Stanford

Fall Quarter 2008
Friday October 3, 3:15p.m.
Sloan Mathematics Corner, Building 380, Room 380-C

Anna Gilbert
Department of Mathematics
University of Michigan

Combining geometry and combinatorics: a unified approach to sparse signal recovery

There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Phi and then uses linear programming to decode information about x from Phi x. The combinatorial approach constructs Phi and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the l_p norm for p=1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance. Joint work with Radu Berinde, Piotr Indyk, Howard Karloff, and Martin Strauss.
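To make the expander side concrete, here is a small numpy sketch (mine, with made-up sizes) of the kind of sparse binary measurement matrix the talk is about, namely the adjacency matrix of a random bipartite left-regular graph, which is an unbalanced expander with high probability:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d = 1000, 200, 8   # signal length, measurements, left degree

# Each of the n left vertices (signal coordinates) connects to d of the m
# right vertices (measurements). By the talk's result, such adjacency
# matrices essentially satisfy RIP-1.
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = Phi @ x              # each measurement is a sum of a few signal entries
# Recovery would then use l1 minimization or the combinatorial decoders
# tailored to these sparse matrices, as discussed in the abstract.
print("nonzero measurements:", np.count_nonzero(y))
```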

This one is past, but I like that the author makes a point of how engineers might profit from the study.
Department of Mathematics
University of California, Berkeley


939 Evans 11:10 AM - 12:10 PM
09/26/2008
Laurent Demanet, Stanford
Compressive wave computation


This talk presents a strategy for computational wave propagation that consists in decomposing the solution wavefield onto a largely incomplete set of eigenfunctions of the weighted Laplacian, with eigenvalues chosen randomly. The recovery method is the ℓ1 minimization of compressed sensing. For the mathematician, we establish three possibly new estimates for the wave equation that guarantee accuracy of the numerical method in one spatial dimension. For the engineer, the compressive strategy offers a unique combination of parallelism and memory savings that should be of particular relevance to applications in reflection seismology. Joint work with Gabriel Peyre.

Here is a talk that is bound to get the audience asking a lot of questions if they are into compressed sensing, polytopes, expander graphs and classification (we mentioned it earlier):

UCLA Math Department Seminar.

Image Processing Seminar
Thursday October 02, 2008
Organizers: Luminita Vese and Stefano Soatto
14:00-14:50 in MS6627
John Wright (University of Illinois at Urbana-Champaign)
Dense Error Correction via $L^1$-Minimization

Abstract. We consider the problem of recovering a non-negative sparse signal x ∈ ℝⁿ from highly corrupted linear measurements y = Ax + e ∈ ℝᵐ, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by the problem of face recognition in computer vision, we will prove that for highly correlated (and possibly overcomplete) dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an ℓ1-minimization problem min ‖x‖₁ + ‖e‖₁ subject to y = Ax + e. More precisely, if the fraction of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then as m goes to infinity, the above ℓ1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned together by the standard cross polytope and a set of iid Gaussian vectors with nonzero mean and small variance, which we call the "cross-and-bouquet" model. We discuss our result's implications for computer vision problems such as face recognition and motion segmentation, and more generally for robust source separation, and present experiments verifying its applicability to these domains. This is joint work with Yi Ma (UIUC).

Speaker Bio: John Wright is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research interests include sparse representation and compressed sensing, data segmentation, lossy minimum description length methods, and their applications in computer vision, including face recognition, image and motion segmentation, and image reconstruction. He has interned at Microsoft Research in Asia, and Microsoft Live Labs in Redmond, and is currently a Microsoft Live Labs fellow.
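The optimization at the heart of the talk is a plain linear program, so it is easy to experiment with. Below is a hedged scipy sketch; note that I draw A as a generic Gaussian matrix rather than the highly correlated "cross-and-bouquet" dictionaries the theorem is actually about, and all the sizes are made up:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 100, 400, 10        # overcomplete A, very sparse non-negative x
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5
e_true = np.zeros(m)
e_true[rng.choice(m, 30, replace=False)] = 10 * rng.standard_normal(30)
y = A @ x_true + e_true       # 30% of the measurements grossly corrupted

# LP in the variables [x, e+, e-], all >= 0 (x >= 0 matches the abstract):
#   minimize 1'x + 1'e+ + 1'e-   subject to   A x + e+ - e- = y
c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```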

German-Israel Workshop for Vision and Image Sciences 2008
Haifa, November 4-6, 2008

Learning Compressed Sensing

Yair Weiss
School of Computer Science and Engineering
The Hebrew University of Jerusalem


Abstract - Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given a training set typical of the signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We show that the optimal projections are in general not the principal components nor the independent components of the data, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the training set. We also show that the projections onto the learned uncertain components may far outperform random projections. This is particularly true in the case of natural images, where random projections have vanishingly small signal to noise ratio as the number of pixels becomes large.
joint work with Hyun-Sung Chang and Bill Freeman


All these seminars are in the CS Calendar.




Credit: SpaceX Corporation, Stage 1 Separation of the Commercial Falcon 1 Vehicle, Congratulations to SpaceX! and CCTV footage of Zhai Zhigang, the first Chinese to do an EVA.

Monday, September 29, 2008

CS: Discussion with Ashwin Wagadarikar on Duke's Coded Aperture Snapshot Spectral Imagers



In light of the small conversation with Ramesh Raskar, who does beautiful work in computational photography, I also wanted to shed some proper light on the hardware development at Duke that uses coded apertures. This is a short e-mail discussion I had with Ashwin Wagadarikar, a Ph.D. student there, about a year ago. It was mostly about the reasoning that went into the single and dual disperser systems of the Coded Aperture Snapshot Spectral Imagers.


Me:
I have read both articles and here is the thing I cannot seem to fully understand: why do you need two dispersive elements? To provide another layer of mixing?
Ashwin:
The two dispersive elements in the dual disperser architecture are arranged in opposition, so that the two dispersions cancel each other out. However, given that there is an aperture code placed after the first disperser, what the detector essentially measures is a well registered spatial scene with the code modulation on top of each object in the scene corresponding to the localized spectrum of the object.

The well registered spatial scene means that it is very easy to focus the camera on the objects.

As you will see with our single disperser papers, it is not necessary to have both dispersers to have a snapshot spectral imager. There are differences between the two instruments, which can influence the choice of which instrument to use for what spectral imaging application. These differences are discussed in greater detail in the paper that is under review with Applied Optics, but not so much in the SPIE paper.
Me:
So my take-away is that the main reason for the second disperser has more to do with convenience than with a compressed sensing mixing requirement. Hence there is no imperative need for a second disperser.
Ashwin:
I suppose you can look at it from both perspectives. The way the dispersers and the coded aperture are arranged in the dual disperser results in a purely spectral mixing at each spatial location. Having just one disperser will result in both spatial AND spectral mixing, which is more 'aggressive' coding. The dual disperser arrangement luckily gives us that convenient result of having a spatially well registered image on the detector.

Ultimately, using either design, we're going to have to recover a 3 dimensional dataset of information from just one 2 dimensional array of measurements. In the case of the dual disperser, we are able to recover an (x, y, lambda) datacube from just (x, y) measurements.
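Here is a toy discretization of the two forward models as I understand them from this exchange (my own sketch; the shift direction, the periodic wrap and the sizes are arbitrary). The point is that the dual disperser sums the datacube over wavelength at a fixed pixel, while the single disperser also shears it spatially:

```python
import numpy as np

rng = np.random.default_rng(5)
nx, ny, nl = 64, 64, 16                     # spatial grid and number of bands
cube = rng.random((nx, ny, nl))             # the (x, y, lambda) datacube
code = (rng.random((nx, ny + nl)) < 0.5).astype(float)  # binary aperture code

# Dual disperser: the dispersions cancel, so pixel (i, j) stays registered and
# only its spectrum is mixed; each band k just sees a shifted copy of the code.
y_dd = np.zeros((nx, ny))
for k in range(nl):
    y_dd += code[:, k:k + ny] * cube[:, :, k]

# Single disperser: the coded cube is sheared before detection, so pixel (i, j)
# mixes both space and spectrum (the more 'aggressive' coding). np.roll is a
# toy periodic stand-in for the shear.
y_sd = np.zeros((nx, ny))
for k in range(nl):
    y_sd += np.roll(code[:, :ny] * cube[:, :, k], k, axis=1)

print("dual disperser frame:", y_dd.shape, "single disperser frame:", y_sd.shape)
```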

To give some perspective as to why this design is interesting: the average cost of a hyperspectral imager is $100,000.

Friday, September 26, 2008

CS: A Small Discussion with Ramesh Raskar and the Camera Culture Lab at MIT.

[updated since first published 2 hours ago]

I had a small enlightening e-mail exchange with Ramesh Raskar (one of the authors of several fascinating new imaging hardware designs featured here and on this blog) the other day about the connection between his work and other coded aperture work, such as that of Gerry Skinner (CS: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging). I think we agree on some parts, and he made a clear point as to why he is using lenses as opposed to a lensless set-up:

...Gerry is so right about being careful about taking linear combination of images. A lot has been learned in coded apertures.
The excitement about the capture side of CS in imaging unfortunately tends to skip the issue of whether there is a realistic gain with respect to reconstruction noise and problems due to diffraction.
In general, coded aperture doesn't work when the point spread function is extremely large and what you are imaging is an area source. For astronomy, the PSF is as large as the sensor but one is imaging only point sources.
That is exactly the reason we designed coded apertures using lenses. They limit the PSF to a smaller region, allowing us to maintain a reasonable SNR gain even after accounting for reconstruction noise.
Another use was coded aperture for lightfield capture, where the masks were close to the sensor (heterodyned lightfields), again limiting the PSF to a small number of pixels.
...please also refer to Roberto Accorsi and Prof. Berthold K. P. Horn (MIT), who analyzed the effect of coded aperture (lensless) imaging for point-like versus area scenes:

* Roberto Accorsi, Francesca Gasparini and Richard C. Lanza, "Optimal coded aperture patterns for improved SNR in nuclear medicine imaging", Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Volume 474, Issue 3, December 2001, Pages 273-284
* Roberto Accorsi, "Analytic derivation of the Contrast to Noise Ratio in Coded Aperture Imaging", Personal Communications

On the reason he is not using any of the solver currently used in Compressive Sensing, Ramesh said the following:
...coded aperture with single shot doesn't really conform to CS and in our case we had an equal number of observations/unknowns...
I note that in "when the point spread function is extremely large and what you are imaging is an area source" we are hitting on the issue of the incoherence of dictionaries with the object being imaged. Also, CS is really about mixing several pieces of information with the attendant knowledge that this information is sparse in some fashion. Right now, coded aperture has indeed never used the sparsity of the target as its main engine of discovery. But this is changing, as witnessed by the extraordinary hardware development at Duke.

Ramesh now teaches at MIT and you can check his Computational Camera and Photography course presentation here. He also heads the Camera Culture Lab, and he is looking to hire Graduate Students, MEng, PostDocs and UROPs starting Fall 2008 (I'll add this to the CSJobs section).

The Camera Culture Lab's presentation has the following introductory text:

We focus on creating tools to better capture and share visual information. The goal is to create an entirely new class of imaging platforms that have an understanding of the world that far exceeds human ability and produce meaningful abstractions that are well within human comprehensibility.

The group conducts multi-disciplinary research in modern optics, sensors, illumination, actuators, probes and software processing. This work ranges from creating novel feature-revealing computational cameras and new lightweight medical imaging mechanisms, to facilitating positive social impact via the next billion personalized cameras.

With more than a billion people now using networked, mobile cameras, we are seeing a rapid evolution in activities based on visual exchange. The capture and analysis of visual information plays an important role in photography, art, medical imaging, tele-presence, worker safety, scene understanding and robotics. But current computational approaches analyze images from cameras that have only limited abilities. Our goal is to go beyond post-capture software methods and exploit unusual optics, modern sensors, programmable illumination, and bio-inspired processing to decompose sensed values into perceptually critical elements. A significant enhancement in the next billion cameras to support scene analysis, and mechanisms for superior metadata tagging for effective sharing will bring about a revolution in visual communication.

Project topics include (i) computational photography via novel feature revealing cameras; (ii) femtosecond analysis of light transport with sophisticated illumination; (iii) Second Skin, a bio-i/o platform for motion capture via wearable imperceptible fabric; and (iv) universal encoder for sharing and consumption of visual media.

Keywords: Computational imaging, Signal processing, Applied optics, Computer graphics and vision, Hardware electronics, Art, Online photo collections, Visual social computing.

Lots of good stuff, I wish I could be part of that adventure.

Thursday, September 25, 2008

CS: A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures (Multi-CPUs, GPUs and the Cell processor)

[For those of you who do not know about compressive sensing, it is the technique behind, among other implementations, the single pixel camera or the "cheap" hyperspectral imager (CASSI). One of the main challenges of this technology is the slow reconstruction process whereby an image is created out of the measurements made by the camera.]

Alexandre Borghi, Jerome Darbon, Sylvain Peyronnet, Tony F. Chan and Stanley Osher just released a paper on the possibility of reconstructing a signal from compressive sensing measurements using multi-CPUs, GPUs and the Cell processor, entitled A Simple Compressive Sensing Algorithm for Parallel Many-Core Architectures. The abstract reads:
In this paper we consider the l1-compressive sensing problem. We propose an algorithm specifically designed to take advantage of shared memory, vectorized, parallel and many-core microprocessors such as the Cell processor, new generation Graphics Processing Units (GPUs) and standard vectorized multi-core processors (e.g. quad core CPUs). Besides its implementation is easy. We also give evidence of the efficiency of our approach and compare the algorithm on the three platforms, thus exhibiting pros and cons for each of them.
It is an intriguing paper, as most papers dealing with GPUs aim at implementing very generic solvers. In this case, the algorithm of the solver is tailored to the hardware available. The figure shows the time it takes to reconstruct a signal where m/n = 12.5%. The number of non-zero elements is m/10. The measurement matrix is a partial DCT.

[ To find out more about Compressive Sensing, please check either the Rice Reference site or Compressive Sensing: The Big Picture ]


Wednesday, September 24, 2008

CS: GPR, Primal Dual Pursuit, Dictionary building, Uncertainty Relations, Edge Detection, Minimum Sum of Distances Estimator, NIPS'08 workshop


Today we have a flurry of new papers coming out and each of them should be of interest to some of the readership of this blog:

Three Novel Edge Detection Methods for Incomplete and Noisy Spectral Data by Eitan Tadmor, Jing Zou. The abstract reads:
We propose three novel methods for recovering edges in piecewise smooth functions from their possibly incomplete and noisy spectral information. The proposed methods utilize three different approaches: #1. The randomly-based sparse Inverse Fast Fourier Transform (sIFT); #2. The Total Variation-based (TV) compressed sensing; and #3. The modified zero crossing. The different approaches share a common feature: edges are identified through separation of scales. To this end, we advocate here the use of concentration kernels (Tadmor, Acta Numer. 16:305–378, 2007), to convert the global spectral data into an approximate jump function which is localized in the immediate neighborhoods of the edges. Building on these concentration kernels, we show that the sIFT method, the TV-based compressed sensing and the zero crossing yield effective edge detectors, where finitely many jump discontinuities are accurately recovered. One- and two-dimensional numerical results are presented.
Minimum Sum of Distances Estimator: Robustness and Stability by Yoav Sharon, John Wright, and Yi Ma. The abstract reads:

We consider the problem of estimating a state x from noisy and corrupted linear measurements y = Ax + z + e, where z is a dense vector of small-magnitude noise and e is a relatively sparse vector whose entries can be arbitrarily large. We study the behavior of the l1 estimator x̂ = argmin_x ||y - Ax||_1, and analyze its breakdown point with respect to the number of corrupted measurements ||e||_0. We show that under mild conditions, the breakdown point does not depend on the noise level. We introduce a novel algorithm for computing the breakdown point for any given A, and provide a simple bound on the estimation error when the number of corrupted measurements is less than the breakdown point. We apply our algorithm to design a robust state estimator for an autonomous vehicle, and show how it can significantly improve performance over the Kalman filter.

This is an interesting application, one that would have served our entry in the DARPA Grand Challenge well.

We also have two dissertations from Georgia Tech under the supervision of Justin Romberg.

The first one is by Ali Cafer Gurbuz, in a dissertation entitled Compressive Sensing Ground Penetrating Radar Subsurface Imaging. The abstract reads:

The problem of sensing a medium by several sensors and retrieving interesting features is a very general one. The basic framework of the problem is generally the same for applications from MRI, tomography, Radar SAR imaging to subsurface imaging, even though the data acquisition processes, sensing geometries and sensed properties are different. In this thesis we introduced a new perspective to the problem of remote sensing and information retrieval by studying the problem of subsurface imaging using GPR and seismic sensors. We have shown that if the sensed medium is sparse in some domain then it can be imaged using many fewer measurements than required by the standard methods. This leads to much lower data acquisition times and better images representing the medium. We have used the ideas from Compressive Sensing, which show that a small number of random measurements about a signal is sufficient to completely characterize it, if the signal is sparse or compressible in some domain. Although we have applied our ideas to the subsurface imaging problem, our results are general and can be extended to other remote sensing applications. A second objective in remote sensing is information retrieval which involves searching for important features in the computed image of the medium. In this thesis we focus on detecting buried structures like pipes, and tunnels in computed GPR or seismic images. The problem of finding these structures in high clutter and noise conditions, and finding them faster than the standard shape detecting methods like the Hough transform is analyzed. One of the most important contributions of this thesis is a framework where the sensing and the information retrieval stages are unified using compressive sensing. Instead of taking lots of standard measurements to compute the image of the medium and search the necessary information in the computed image, a much smaller number of measurements as random projections are taken. The data acquisition and information retrieval stages are unified by using a data model dictionary that connects the information to the sensor data.
a subject similar to that treated by Andriyan Suksmono.

And then there is the dissertation of Muhammad Salman Asif, entitled Primal Dual Pursuit: A homotopy based algorithm for the Dantzig selector. The abstract reads:

Consider the following system model: y = Ax + e, where x is an n-dimensional sparse signal, y is the measurement vector in a much lower dimension m, A is the measurement matrix and e is the error in our measurements. The Dantzig selector estimates x by solving the following optimization problem: minimize ||x||₁ subject to ||A'(Ax - y)||∞ ≤ ε (DS). This is a convex program and can be recast as a linear program and solved using any modern optimization method, e.g., interior point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "Primal-Dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving the Dantzig selector (DS). It computes the solution to (DS) for a range of successively relaxed problems, by starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε reduces to the desired value, which gives the final solution. The homotopy path that the solution of (DS) takes with varying ε is piecewise linear. At some critical values of ε along this path, either some new elements enter the support of the signal or some existing elements leave the support. We derive the optimality and feasibility conditions which are used to update the solutions at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case. We show that if our signal is S-sparse, then we can find all its S elements in exactly S steps using about S² log n random measurements, with very high probability.
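Since the abstract notes that (DS) can be recast as a linear program, a one-shot LP solve makes a handy reference point. Here is a sketch with scipy (my own toy sizes; this generic solve is precisely the kind of approach the thesis's homotopy algorithm is designed to beat):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n, s = 72, 256, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)
eps = 0.05

# LP in the split variables [x+, x-] >= 0:
#   minimize 1'(x+ + x-)   subject to   -eps <= A'(A(x+ - x-) - y) <= eps
G = A.T @ A
A_ub = np.vstack([np.hstack([G, -G]),      #  A'(Ax - y) <= eps
                  np.hstack([-G, G])])     # -A'(Ax - y) <= eps
b_ub = np.concatenate([eps + A.T @ y, eps - A.T @ y])
res = linprog(np.ones(2 * n), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("Dantzig selector error:", np.linalg.norm(x_hat - x_true))
```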

We also have sparse dictionary building methods from two groups. In the context of Compressive Sensing, these constructions should help in reconstructing signals:

Supervised Dictionary Learning by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. The abstract reads:

It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.
and Dictionary Learning for Sparse Approximations with the Majorization Method by Mehrdad Yaghoobi-Vaighan, Thomas Blumensath and Mike Davies. The abstract reads:

Sparse approximation methods can be used successfully where an appropriate generative model for compressible signals is known. If the model is unknown, it can be adapted by using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point, or the accumulation points forming a continuum, is guaranteed and holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation (e.g. Expectation Maximization (EM)) problems. This paper shows that the majorization method can be used for the dictionary learning problem too. The proposed method is compared with other methods on both synthetic and real data and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods not only in terms of average performance but also in terms of computation time.
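For readers new to dictionary learning, the alternation both papers build on looks roughly like the sketch below: sparse-code the training set against the current dictionary, then update the dictionary under a constraint. I use a crude correlation-based coder and a MOD-style least-squares update with unit-norm atoms; this is a generic baseline of my own, not the supervised model of the first paper nor the majorization method of the second:

```python
import numpy as np

rng = np.random.default_rng(7)
n, K, T, k = 20, 40, 500, 3
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
# Synthetic training set: T signals, each a combination of k random atoms.
Y = np.column_stack([D_true[:, rng.choice(K, k, replace=False)] @ rng.standard_normal(k)
                     for _ in range(T)])

def sparse_code(D, y, k):
    """Crude sparse coding: keep the k atoms most correlated with y,
    then solve least squares on that support."""
    support = np.argsort(np.abs(D.T @ y))[-k:]
    sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    x = np.zeros(D.shape[1])
    x[support] = sol
    return x

D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)
for _ in range(20):
    X = np.column_stack([sparse_code(D, Y[:, t], k) for t in range(T)])
    D = Y @ np.linalg.pinv(X)                      # MOD-style dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12         # enforce unit-norm atoms
print("training residual:", np.linalg.norm(Y - D @ X))
```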
In a similar vein, when one wants to decompose a signal from dictionaries of functions that are incoherent, Yonina Eldar just released Uncertainty Relations for Analog Signals on ArXiv. The abstract reads:
In the past several years there has been a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely-generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited low-pass comb that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite dimensional problem.


Finally, here is a workshop on l1 techniques in machine learning; it has been added to the CS calendar.

Optimization for Machine Learning
NIPS*2008 Workshop
December 12-13, 2008, Whistler, Canada

URL: http://opt2008.kyb.tuebingen.mpg.de/


  • Deadline for submission of papers: 17th October 2008
  • Notification of acceptance: 7th November 2008
  • Final version of submission: 20th November 2008
  • Workshop date: 12th or 13th December 2008



Abstract
--------

Classical optimization techniques have found widespread use in machine learning. Convex optimization has occupied center stage, and significant effort continues to be devoted to it. New problems constantly emerge in machine learning, e.g., structured learning and semi-supervised learning, while at the same time fundamental problems such as clustering and classification continue to be better understood. Moreover, machine learning is now very important for real-world problems with massive datasets, streaming inputs, the need for distributed computation, and complex models. These challenging characteristics of modern problems and datasets indicate that we must go beyond the "traditional optimization" approaches common in machine learning.

What is needed is optimization "tuned" for machine learning tasks. For example, techniques such as non-convex optimization (for semi-supervised learning, sparsity constraints), combinatorial optimization and relaxations (structured learning), stochastic optimization (massive datasets), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs) are relevant in this setting. These techniques naturally draw inspiration from other fields, such as operations research, polyhedral combinatorics, theoretical computer science, and the optimization community.

Motivated by these concerns, we would like to address these issues in the framework of this workshop.

Background and Objectives
-------------------------

The closest in spirit to our workshop are the previously held workshops on 'Mathematical Programming in Machine Learning / Data Mining' from 2005-2007. These workshops were quite extensive and provided a solid platform for encouraging exchange between machine learners and optimization researchers. Another relevant workshop was the BigML NIPS*2007 workshop, which focused on algorithmic challenges faced in large-scale machine learning tasks, with a focus on parallelization or online learning.

Our workshop addresses the following major issues, some of which have not been previously tackled as a combined optimization and machine learning effort. In particular, the aim of the workshop is to:

+ Bring together experts from machine learning, optimization, operations research, and statistics

+ Focus on problems of interest to the NIPS audience (some basic examples are given below)

+ Identify a set of important open problems and issues that lie at the intersection of both machine learning and optimization

Call for Participation
----------------------

We invite high quality submissions for presentation as talks or poster presentations during the workshop. We are especially interested in participants who can contribute in the following areas:

* Non-Convex Optimization, example problems in ML include
- Problems with sparsity constraints
- Sparse PCA
- Non-negative matrix and tensor approximation
- Non-convex quadratic programming

* Combinatorial Optimization, example problems in ML include
- Estimating MAP solutions to discrete random fields
- Clustering and graph-partitioning
- Semi-supervised and multiple-instance learning
- Feature and subspace selection

* Stochastic, Parallel and Online Optimization, example problems in ML include
- Massive data sets
- Distributed learning algorithms

* Algorithms and Techniques, especially with a focus on an underlying application
- Polyhedral combinatorics, polytopes and strong valid inequalities
- Linear and higher-order relaxations
- Semidefinite programming relaxations
- Decomposition for large-scale, message-passing and online learning
- Global and Lipschitz optimization
- Algorithms for non-smooth optimization
- Approximation Algorithms

*Note: Generic methods such as neural networks, simulated annealing, and swarm-optimization methods (ant-colony optimization, genetic algorithms) lie outside the scope of this workshop.
Credit: NASA/JPL/Space Science Institute, Rhea's Roughness, September 22, 2008

Tuesday, September 23, 2008

CS: Two workshops, a book and three conferences: SPARS 2009, Strobl09, MRI Unbound.

Remi Gribonval just mentioned to me SPARS 2009, a workshop on Signal Processing with Adaptive Sparse/Structured Representations.

When: April 07-10, 2009
Where: Saint-Malo (France)



SPARS'09 is the second edition of the international workshop dedicated to sparsity in signal processing, whose first edition was held in Rennes in 2005 (http://spars05.irisa.fr).

Over the last five years, theoretical advances in sparse representations have highlighted their potential to impact all fundamental areas of signal processing, from blind source separation to feature extraction and classification, denoising, and detection. In particular, these techniques are at the core of compressed sensing, an emerging approach which proposes a radically new viewpoint on signal acquisition compared to Shannon sampling. There are also strong connections between sparse signal models and kernel methods, whose algorithmic success on large datasets relies deeply on sparsity.

The purpose of the SPARS 09 workshop is to present and discuss novel ideas, works and results, both experimental and theoretical, related to this rapidly evolving area of research.

SPARS 09 will be a single track workshop with contributed oral and poster presentations, and will feature invited lectures by the following plenary speakers:

  • Anna Gilbert, University of Michigan, USA
  • Justin Romberg, Georgia Tech, USA
  • Ron De Vore, University of South Carolina, USA
  • Martin Vetterli, EPFL, Switzerland (to be confirmed)


Important dates
  • September 15, 2008 : call for papers
  • November 15, 2008 : submission deadline for extended abstracts
  • January 9, 2009 : author notification + opening of registrations
  • January 31, 2009 : deadline for camera ready papers

Contributions are expected on the following topics (non-exhaustive list):
  • Sparse coding, vector quantization and dictionary learning
  • Sparse approximation algorithms: performance and complexity analysis, new methodologies, ...
  • Compressed sensing
  • Simultaneous processing of multiple signals/images
  • Sparse/structured signal representations, visualization
  • Compression and coding
  • Feature extraction, classification, detection
  • Source separation
  • Sparsity measures in approximation theory, information theory and statistics
  • Applications to image, audio, video, medical, multimedia and multichannel data processing

Any communication proposal must consist of:
* The paper title;
* The names of the authors with their complete addresses (mail and email, phone, fax), the name of the principal author being underlined;
* The answers to the following three questions (with at most 5 lines per question):
  • statement of the problem
  • originality of the work
  • new results
* A summary of two pages minimum and three pages maximum (single column), including figures. Final paper format: 4 pages maximum, double column, in PDF, A4, font >= 9pt.

All submissions will be managed through the online system which will be made available on the website of the workshop.

The workshop is open in priority to participants who will give a presentation; however, participation without a presentation should be possible, depending on the number of participants. If you are interested, please contact the organizers in advance to let them know about your intention to participate.


While we are on the subject of sparsity and adaptivity, I just noticed that Stephane Mallat is about to release A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way. Returning to meetings, Laurent Duval points me to Strobl09: Conference on Time-Frequency

Where: Strobl, Austria
When: 15 - 20. Jun. 2009.

The conference will cover mathematical and computational aspects of harmonic analysis. Topics will include time-frequency analysis, pseudodifferential operators, wireless communications and time-varying systems, compressed sensing, sampling theory, signal transforms and wavelet theory, and function spaces.

These and other meetings/workshop are listed on the CS calendar.

Found on the interwebs:
ISMRM Workshop on Data Sampling and Image Reconstruction: MRI Unbound.
When: January 25-29, 2009
Where: Sedona, Arizona

I like the fact that they seem to organize a Reconstruction Challenge; this is a very good idea. Talking about very good ideas, Dave Donoho has finally updated his webpage, and here is one interesting paper, where he points out how Sparselab has been useful for people to learn about compressed sensing. I'll make a mention of the technical ones later, but I think I covered them all.


And finally, there are two conferences that have mentioned Compressed Sensing in their lists of interesting topics. I am sure that more and more conferences will do this in the future, so I will probably mention only those for which CS and related subjects are the main focus. In the meantime, we have:


IMMERSCOM 2009, the 2nd International Conference on Immersive Telecommunications
Website: http://www.immerscom.org
When: May 27-29, 2009
Where: University of California, Berkeley


INTERNET 2009, The First International Conference on Evolving Internet
When: August 25-31, 2009
Where: Cap Esterel, France

Monday, September 22, 2008

CS: Compressive Sampling via Random Modulation Pre-Integration

Valerio Cambareri, a reader of this blog and an engineering student at the University of Bologna, is 15 days away from getting his Dottore in Ingegneria Elettronica (Bachelor of Engineering) and is furiously finishing the writing of his undergraduate thesis. The subject area he investigated is Compressive Sampling via an RMPI (Random Modulation Pre-Integration) System Architecture. He has begun to feature some of his results at:




Let us note the use of the seemingly very fast smoothed l_0 algorithm by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten mentioned here earlier. Valerio tells me that he will eventually put his thesis as well as the attendant Matlab codes on his website after his graduation. Let us wish Valerio good luck on his defense.

Saturday, September 20, 2008

CSCommunity: If You Like It, Link to It.

World Renowned Compressive Sensing Experts say the following about this blog:


"...your blog is doing a good community service..."

"...you have a very nice blog..."

"...I have read your blog many times regarding compressed sensing.."

"...you're providing a wonderful resource to the compressed sensing community.."

"...I visit your blog every day and I must say that I
truly appreciate your efforts in making it one of the easiest ways to keep
in touch with the latest developments in the field..."

"..Your blog on CS is very cool.."

"...Wonderful blog..."

"..A propos, votre blog est vraiment tres tres bien..."

".. Still their rotary steerable system is a piece of s**t..."



errr...., my mistake, the last comment was not from a CS researcher nor on the subject of Compressive Sensing. Seriously, sending the link to colleagues or featuring it on your page is a good way for them to know more about CS and for your graduate students to get a feel for how the field is evolving day by day. The address is:


There is also the living document called the Big Picture and the more community-oriented Compressive Sensing 2.0.

Finally, Laurent Duval has an update on whether Mike's Dog really ate his frog, and it looks like the meme is picking up.

Friday, September 19, 2008

CS: l_0, What Is It Good For ?


On Wikimization, one can read a presentation by Christine Law showing an interesting ability to perform large subsampling using a reconstruction code based on the l_0 norm, as opposed to all the other schemes using the l_1 or l_p norms (with p less than 1 but not equal to zero). Actually, not all other schemes use l_1 or l_p: I mentioned earlier the article by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten entitled Fast Sparse Representation based on Smoothed l0 norm. They now have a new paper entitled A fast approach for overcomplete sparse decomposition based on smoothed L0 norm. The abstract reads:

In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined Sparse Component Analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm tries to directly minimize the L0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.

They also have set up a website entitled: Smoothed L0 (SL0) Algorithm for Sparse Decomposition where they introduce their algorithm and attendant code:

What is the SL0 algorithm?

SL0 (Smoothed L0) is an algorithm for finding the sparsest solutions of an underdetermined system of linear equations As=x. One of its main applications is in Compressive Sensing (CS).

SL0 is a very fast algorithm. For example, it is about 2 to 3 orders of magnitude faster than L1-magic.

SL0 tries to directly minimize the L0 norm. This is contrary to most other existing algorithms (e.g. Basis Pursuit), which replace the L0 norm with other cost functions (like L1). Note also that the equivalence of minimizing L1 and L0 is only asymptotic, and does not always hold (for a counter-example, see here).
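For the curious, the algorithm is compact enough to sketch in a few lines (a minimal rendition of the published pseudo-code; the parameter values below are my own guesses, so use the authors' SL0.m for anything serious):

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, L=3):
    """Minimal SL0 sketch: replace ||s||_0 by a smooth Gaussian surrogate,
    take small steps on it, and project back onto the solution set
    {s : As = x} while gradually shrinking sigma."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)
    s = A_pinv @ x                        # minimum-l2 initial solution
    sigma = 2 * np.abs(s).max()
    while sigma > sigma_min:
        for _ in range(L):
            delta = s * np.exp(-s ** 2 / (2 * sigma ** 2))
            s = s - mu * delta            # steepest descent on the surrogate
            s = s - A_pinv @ (A @ s - x)  # project back onto As = x
        sigma *= sigma_decrease
    return s

rng = np.random.default_rng(8)
m, n = 100, 400
A = rng.standard_normal((m, n))
s_true = np.zeros(n)
s_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
print("SL0 error:", np.linalg.norm(sl0(A, A @ s_true) - s_true))
```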


The code for SL0.m can now be found on the authors' site here. Back in April, G. Hosein Mohimani initially forwarded me the code implementing the algorithm. It is also available here, but it should be considered an older version. I have changed the local code site accordingly by pointing to this new site. The reconstruction section of the Big Picture has also been changed and points to this new site as well.

Thursday, September 18, 2008

CS: On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

If you recall, it looked like the ability to check whether a measurement matrix is acceptable or not for the purpose of recovering a sparse signal through an l_1 minimization method was pretty bleak. Anatoli Juditsky and Arkadii Nemirovski seem to have found a way out of this conundrum with a new preprint entitled On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization. The abstract reads:
We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good" -- to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We establish also instructive links between our approach and the basic concepts of the Compressed Sensing theory, like Restricted Isometry or Restricted Eigenvalue properties.

wow.

Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, clouds on Mars as seen from Phoenix on sol 111.

Wednesday, September 17, 2008

CS: Random Projections and a talk.

Laurent Jacques pointed out that I messed up in the previous entry. I have changed that and now want to mention two papers/preprints focused on random projections.

Graph Laplacian Tomography from Unknown Random Projections by Ronald Coifman, Yoel Shkolnisky, Fred Sigworth and Amit Singer.

The abstract reads:

We introduce a graph Laplacian-based algorithm for the tomographic reconstruction of a planar object from its projections taken at random unknown directions. A Laplace type operator is constructed on the data set of projections, and the eigenvectors of this operator reveal the projection orientations. The algorithm is shown to successfully reconstruct the Shepp-Logan phantom from its noisy projections. Such a reconstruction algorithm is desirable for the structuring of certain biological proteins using cryo-electron microscopy.
and a follow-up of that paper.

Cryo-EM Structure Determination through Eigenvectors of Sparse Matrices by Ronald Coifman, Yoel Shkolnisky, Fred Sigworth and Amit Singer. The abstract reads:

Recovering the three-dimensional structure of proteins is important for understanding their functionality. We describe a spectral graph algorithm for reconstructing the three-dimensional structure of molecules from their cryo-electron microscopy images taken at random unknown orientations. The key idea of the algorithm is designing a sparse operator defined on the projection images, whose eigenvectors reveal their orientations. The special geometry of the problem rendered by the Fourier projection-slice theorem is incorporated into the construction of a weighted graph whose vertices are the radial Fourier lines and whose edges are linked with the common line property. The radial lines are associated with points on the sphere and are networked through spider like connections. The graph organizes the radial lines on the sphere in a global manner that reveals the projection directions. This organization is derived from a global computation of a few eigenvectors of the graph’s adjacency matrix. Once the directions are obtained, the molecule can be reconstructed using classical tomography methods. The presented algorithm is direct (as opposed to iterative refinement schemes), does not require any prior model for the reconstructed object, and shown to have favorable computational and numerical properties. Moreover, the algorithm does not impose any assumption on the distribution of the projection orientations. Physically, this means that the algorithm successfully reconstructs molecules that have unknown spatial preference. We also introduce extensions of the algorithm, based on the spectral properties of the operator, which significantly improve its applicability to realistic data sets. These extensions include: particle selection, to filter corrupted projections; center determination to estimate the relative shift of each projection; and, a method to resolve the heterogeneity problem, in cases where a mix of different molecules is being imaged concurrently.
Petar Maynoukov has a summary of a presentation by one of the authors.
Unrelated: Dror Baron will give a talk tomorrow at the Technion CS department's Pixel Club Seminar. The title is Compressed Sensing Meets Information Theory.

Date: Thursday, 18.9.2008, 11:30. Place: Room 337-8, Taub Bld.

Abstract of the talk:
Sensors, signal processing hardware, and algorithms are under increasing pressure to accommodate ever larger data sets; ever faster sampling and processing rates; ever lower power consumption; and radically new sensing modalities. Fortunately, there have been enormous increases in computational power. This progress has motivated Compressed Sensing (CS), an emerging field based on the revelation that a sparse signal can be reconstructed from a small number of linear measurements. The implications of CS are promising, and enable the design of new kinds of cameras and analog-to-digital converters. Information theory has numerous insights to offer CS; I will describe several investigations along these lines. First, unavoidable analog measurement noise dictates the minimum number of measurements required to reconstruct the signal. Second, we leverage the remarkable success of LDPC channel codes to design low-complexity CS reconstruction algorithms. Third, distributed compressed sensing (DCS) provides new distributed signal acquisition algorithms that exploit both intra- and inter-signal correlation structures in multi-signal ensembles. DCS is immediately applicable in sensor networks.

Linear measurements play a crucial role not only in compressed sensing but in disciplines such as finance, where numerous noisy measurements are needed to estimate various statistical characteristics. Indeed, many areas of science and engineering seek to extract information from linearly derived measurements in a computationally feasible manner. Advances toward a unified theory of linear measurement systems will enable us to effectively process the vast amounts of data being generated in our dynamic world.


Credit Photo: Pool Photo, AP. Also, one can check the slide show from ABC on the devastation of Hurricane Ike.

Tuesday, September 16, 2008

CS: A video, Two talks, A Randomized Algorithm for PCA


I mentioned it before, but the impressive results of Compressive Structured Light for Recovering Inhomogeneous Participating Media by Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, and Ravi Ramamoorthi are presented very nicely in a video located here. It was added to the video section of the CS pages.



Thomas Blumensath and Mike Davies have a revised version of "Sampling Theorems for Signals from the Union of Linear Subspaces".

Vladimir Rokhlin, Arthur Szlam, and Mark Tygert just released A Randomized Algorithm for Principal Component Analysis. The abstract reads:
Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, existing efficient algorithms do not guarantee good accuracy for the approximations they produce, unless one or both dimensions of the matrix being approximated are small. We describe an efficient algorithm for the low-rank approximation of matrices that produces accuracy very close to the best possible, for matrices of arbitrary sizes. We illustrate our theoretical results via several numerical examples.
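
The generic randomized range-finder idea behind this line of work can be sketched in a few lines of Python. The actual algorithm of the paper is more refined (and comes with error guarantees), so treat this only as a sketch:

    import numpy as np

    def randomized_pca(A, k, oversample=10, power=2, rng=None):
        """Low-rank approximation via a generic randomized range finder."""
        rng = rng or np.random.default_rng(0)
        m, n = A.shape
        # Capture the range of A with a small random test matrix.
        Omega = rng.standard_normal((n, k + oversample))
        Y = A @ Omega
        # A few power iterations sharpen accuracy when the singular values
        # decay slowly (re-orthonormalizing in between is advisable in practice).
        for _ in range(power):
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)
        # Project A onto the captured subspace and take an exact small SVD.
        B = Q.T @ A
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k]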

There are two upcoming talks listed in the CS Calendar:

Applied Math Seminar at the Computer Science Department at Yale.
Speaker: Shai Dekel, Ph.D., Chief Scientist, Imaging Solutions, GE Healthcare IT
Titles: "Adaptive compressed image sensing based on wavelet-trees" and "On the equivalence of the modulus of smoothness and the K-functional over convex domains".

When/where: Thursday, September 18th, 2008, 3:30PM, Room 500 AKW

Abstract:
In this talk I will give two 30-minute talks presenting the above recent results. The first is a compressed sensing algorithm that actually works on large images (to the best of my knowledge, all known CS algorithms 'choke' on images larger than 512x512). The second topic is a classic problem in approximation theory. The abstract is here.



On September 15, 2008, Rafael Carrillo presents a talk entitled Robust Sampling and Reconstruction Methods for Sparse Signals in the Presence of Impulsive Noise at the University of Delaware, 11:15 AM, Evans Hall, Room 204.

Abstract of the talk:

Recent results in compressed sensing have shown that a sparse or compressible signal can be reconstructed from a few incoherent measurements with the reconstruction formulated as an optimization problem or implemented through iterative algorithms. Compressive sensing systems are not immune to noise, which is always present in practical acquisition systems. When the underlying signal is corrupted by heavy tailed or impulsive noise, commonly employed linear measurements are severely affected, as sampling operators coupled with current geometric and greedy reconstruction algorithms fail to recover a fair approximation of the signal. Similarly, when the sensed measurements are corrupted with such noise, current approaches also fail to yield faithful estimates of the original signal. In this work, we propose robust methods for sampling and reconstructing signals in the presence of impulsive noise. To solve the problem of noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad filter family. Myriad-based measurements offer robustness in impulsive environments while, at the same time, enabling use of standard reconstruction algorithms derived for linear measurements. To recover sparse signals from noisy measurements, we propose a geometric reconstruction algorithm based on L1 minimization employing a nonlinear constraint. Specifically, a Lorentzian norm constraint on the error measure defines the reconstruction feasible set. Analysis of the proposed methods shows that even in harsh environments, when the noise possesses infinite variance, we have a finite reconstruction error and, furthermore, these methods yield an approximate reconstruction. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing and reconstruction techniques in heavy-tailed environments, while providing comparable performance in less demanding, light-tailed environments.
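
A common definition of the Lorentzian norm mentioned above is $\sum_i \log(1 + u_i^2/\gamma^2)$; assuming that convention (the paper may normalize differently), the robustness mechanism is easy to see numerically:

    import numpy as np

    def lorentzian_norm(u, gamma=1.0):
        # Grows only logarithmically in large residuals, so a few impulsive
        # errors cannot dominate the fidelity term the way they do in L2.
        return np.sum(np.log1p((u / gamma) ** 2))

    rng = np.random.default_rng(0)
    r = rng.standard_normal(100)
    r[::20] += 1000.0                       # inject gross outliers
    print(np.sum(r**2), lorentzian_norm(r)) # L2 explodes, Lorentzian barely moves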




View of Gilchrist, Texas before Hurricane Ike.

Credit Photo: NOAA, photos taken by a NOAA aircraft after Hurricane Ike. This is a view of Gilchrist, Texas.

Monday, September 15, 2008

I don't like Ike

I have made passing references in figures/maps/images to Ike a week ago and then on Friday, Saturday, and Monday. While the Texas A&M main campus was spared, Houston, the location of Rice, was not.

Rich Baraniuk's house was not so lucky. He tells me nobody got hurt. Good!

CS: Sparsity and Persistence: Mixed Norms Provide Simple Signal Models with Dependent Coefficients

We mentioned mixed norms recently. Here is version 3, the latest version, of Sparsity and Persistence: Mixed Norms Provide Simple Signal Models with Dependent Coefficients by Matthieu Kowalski and Bruno Torrésani. The abstract reads:

Sparse regression often uses $\ell_p$ norm priors (with p less than 2). This paper demonstrates that the introduction of mixed norms in such contexts allows one to go one step further in signal models, and promote some different, structured forms of sparsity. It is shown that the particular cases of the $\ell_{1,2}$ and $\ell_{2,1}$ norms lead to new group shrinkage operators. Mixed norm priors are shown to be particularly efficient in a generalized basis pursuit denoising approach, and are also used in a context of morphological component analysis. A suitable version of the Block Coordinate Relaxation algorithm is derived for the latter. The group-shrinkage operators are then modified to overcome some limitations of the mixed norms. The proposed group shrinkage operators are tested on simulated signals in specific situations, to illustrate their different behaviors. Results on real data are also used to illustrate the relevance of the approach.
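
For reference, the standard $\ell_{2,1}$ group shrinkage (soft-thresholding) operator looks as follows; the paper's operators modify this baseline, so this is only the textbook version, in which each group is scaled by $\max(0, 1 - \lambda/\|x_g\|_2)$:

    import numpy as np

    def group_shrink(x, groups, lam):
        """l_{2,1} group soft-thresholding: shrink each group's l2 norm by lam."""
        out = np.zeros_like(x)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > lam:
                out[g] = x[g] * (1.0 - lam / norm)  # group survives, shrunk
            # otherwise the whole group is zeroed: structured sparsity
        return out

    x = np.array([3.0, 4.0, 0.1, -0.1])
    print(group_shrink(x, groups=[[0, 1], [2, 3]], lam=1.0))
    # first group shrunk but kept, second group zeroed out as a block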

On a different note, Bruno Torrésani has a presentation in French, made while on some island, entitled Parcimonie, ondelettes et *-lettes ("Sparsity, wavelets and *-lets"), which is an introduction to why compressed sensing is needed (see the end of the presentation).

Credit: NASA, ISS017-E-015170 (4 Sept. 2008) --- Hurricane Ike was still a Category 4 storm on the morning of Sept. 4 when this photo was taken from the International Space Station's vantage point of 220 miles above the Earth.

Saturday, September 13, 2008

Only if Mike's dog really ate his frog

I just read and watched this presentation by Eric Weinstein entitled Sheldon Glashow Owes me a Dollar (and 17 years of interest): What happens in the marketplace of ideas when the endless frontier meets the efficient frontier?, where he presents his thoughts on how a market-type approach could be applied to science production and, in particular, to disruptive ideas.

"if Mike's dog really ate his frog" is from Attempts to hedge out peer review presented in slide 73-79 of the pdf presentation document. Eric, this was a nice talk. There is a now a second reference in Google to this sentence.

(via BackReaction blog)

Leaving Houston

I have talked about Leaving Houston before, when under pressure from the elements. Here are data that I did not know existed in 2005: Doppler radar data over Houston (it looks like the storm is avoiding Texas A&M University)





and a view of what a 4-meter or 6-meter flood can do to the coast, as well as a live view of the weather.



Thank you Pedro for the one-week heads-up.

Friday, September 12, 2008

CS: Fast Bayesian Matching Pursuit, new version of L_q reconstruction scheme

Lee Potter, Phil Schniter, and Justin Ziniel propose a new reconstruction technique potentially of interest to the Compressive Sensing community entitled Fast Bayesian Matching Pursuit. It is featured in: Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models. The abstract reads:

A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability mixing parameters and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal potential ambiguity among multiple candidate solutions arising from observation noise or correlation among columns in the regressor matrix. Algorithm complexity is linear in the number of unknown coefficients, the number of observations and the number of nonzero coefficients. If hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.
The Matlab implementation of the Fast Bayesian Matching Pursuit code used in the paper will be featured here shortly, according to the site.
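
While we wait for the code, here is a brute-force toy version of the Bayesian idea: place a Gaussian prior on the active coefficients, assume a uniform prior over supports of known size, weight each candidate support by its posterior probability, and contrast the model-averaged MMSE estimate with the single best (MAP) model. FBMP's whole point is to explore this model space recursively in linear time instead of exhaustively, so the enumeration below is purely conceptual:

    import itertools
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    n, m, k, sx, sn = 8, 5, 1, 1.0, 0.1   # tiny sizes so enumeration is feasible

    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x = np.zeros(n); x[2] = 1.5
    y = A @ x + sn * rng.standard_normal(m)

    weights, estimates = [], []
    for s in itertools.combinations(range(n), k):    # candidate supports
        As = A[:, list(s)]
        C = sx**2 * As @ As.T + sn**2 * np.eye(m)    # marginal covariance of y
        weights.append(multivariate_normal.pdf(y, mean=np.zeros(m), cov=C))
        xs = sx**2 * As.T @ np.linalg.solve(C, y)    # E[x_s | y, support s]
        est = np.zeros(n); est[list(s)] = xs
        estimates.append(est)

    w = np.array(weights) / np.sum(weights)          # posterior over supports
    x_mmse = w @ np.array(estimates)                 # model-averaged MMSE estimate
    x_map = estimates[int(np.argmax(w))]             # single best model (MAP)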

Also, Simon Foucart mentions to me that he has updated his paper entitled Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0 < q \le 1$, written with Ming-Jun Lai. As mentioned before, the code is here.
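
For readers who want to play with $\ell_q$ minimization, a generic iteratively reweighted least squares sketch is given below. This is not the authors' algorithm or code, just one common way to attack the nonconvex problem $\min \|x\|_q$ subject to $Ax = y$:

    import numpy as np

    def irls_lq(A, y, q=0.5, iters=50, eps=1.0):
        """Generic IRLS sketch for min ||x||_q s.t. Ax = y (0 < q <= 1)."""
        x = np.linalg.pinv(A) @ y            # least-squares starting point
        for _ in range(iters):
            w = (x**2 + eps) ** (q / 2 - 1)  # smoothed lq weights
            Winv = np.diag(1.0 / w)
            # Weighted least-norm solution; satisfies Ax = y at every step.
            x = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, y)
            eps = max(eps / 10, 1e-9)        # gradually tighten the smoothing
        return x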


Since the beginning of this week, we have had several additions on the reconstruction side of things, so I have updated the list of reconstruction codes in the big picture accordingly.

Finally, here is Ike's predicted track as of yesterday and how it relates to both Texas A&M University and Rice University. Better information can be found here.





Credit: NASA/JPL/University of Arizona, HiRISE camera image of Mars, Ius Chasma's Floor (PSP_009368_1720).
