Tuesday, November 30, 2010

CS: The Long Post of the Week

Following up on yesterday's 5D camera, here are the paper and video introducing the subject: Looking Around the Corner using Transient Imaging by Ahmed Kirmani, Tyler Hutchison, James Davis, and Ramesh Raskar. The abstract reads:
We show that multi-path analysis using images from a time-of-flight (ToF) camera provides a tantalizing opportunity to infer about 3D geometry of not only visible but hidden parts of a scene. We provide a novel framework for reconstructing scene geometry from a single viewpoint using a camera that captures a 3D time-image I(x, y, t) for each pixel. We propose a framework that uses the time-image and transient reasoning to expose scene properties that may be beyond the reach of traditional computer vision. We corroborate our theory with free space hardware experiments using a femtosecond laser and an ultrafast photo detector array. The ability to compute the geometry of hidden elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.
While Ramesh talks about photons meeting each other, I recently heard Ori Katz talk about the following paper (see below); he mentioned the possibility of having "twin" photons, but that is another story. In the meantime, here is Quantum Inspired Imaging with Compressive Sensing by Ori Katz, Yaron Bromberg, and Yaron Silberberg. The abstract reads:
We demonstrate depth-resolved computational ghost imaging using a single single-pixel detector and a spatially modulated beam. We further develop an advanced reconstruction algorithm based on compressive-sensing, demonstrating a 10-fold reduction in image acquisition time.
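To make the ghost-imaging connection concrete, here is a minimal sketch in Python of computational ghost imaging cast as a compressive sensing problem. Everything below is my own arbitrary choice, not the authors' setup: the sizes, the signed patterns (real modulated beams are nonnegative) and the ISTA solver.

```python
import numpy as np

# Toy computational ghost imaging as a CS problem: each random illumination
# pattern yields one bucket-detector reading; a sparse object is recovered
# from fewer patterns than pixels.
rng = np.random.default_rng(0)
n = 16 * 16                       # object resolution (flattened 16x16 scene)
m = 80                            # number of illumination patterns
A = rng.standard_normal((m, n))   # row i = i-th spatial light pattern

x_true = np.zeros(n)              # sparse "object": a few bright pixels
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true                    # single-pixel (bucket) detector readings

# ISTA for min_x ||A x - y||^2 / 2 + lam * ||x||_1
lam = 0.1
L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = x - (A.T @ (A @ x - y)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("support found:", sorted(np.flatnonzero(np.abs(x) > 0.1)))
print("true support :", sorted(np.flatnonzero(x_true)))
```

The 10-fold acquisition speed-up claimed in the abstract corresponds to exactly this regime: many fewer patterns (m) than pixels (n).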
What I gathered from Ori's presentation is that compressed sensing could have entirely defused a sterile discussion between specialists on whether ghost imaging is a purely quantum effect (it is not, since CS provides a classical explanation of the experiment). A good half of the remaining papers presented below are behind a paywall. Enjoy:
Compressive sampling is a sampling technique for sparse signals. The advantage of compressive sampling is that signals are compactly represented by a small number of measured values. This paper adopts an analog neural network technique, Lagrange programming neural networks (LPNNs), to recover data in compressive sampling. We propose LPNN dynamics to handle three scenarios in compressive sampling, including the standard recovery of a sparse signal, the recovery of a non-sparse signal, and noisy measurement values. Simulation examples demonstrate that our approach effectively recovers the signals from the measured values in both noise-free and noisy environments.

A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a set of signals to be represented. Can we expect that the representation found by such a dictionary for a previously unseen example from the same source will have L_2 error of the same magnitude as those for the given examples? We assume signals are generated from a fixed distribution, and study this question from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L_2 error in representation when the dictionary is used. For the case of l_1 regularized coefficient selection we provide a generalization bound of the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the number of elements in the dictionary, lambda is a bound on the l_1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m)) under an assumption on the level of orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results further yield fast rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher complexity. We provide similar results in a general setting using kernels with weak smoothness requirements.

The simultaneous inversion for source location and mechanism of microseismic events is a significant problem in the fields of applied geophysics and petroleum engineering. Methodologies for real-time source moment tensor estimation may become of significant importance for the monitoring of hydraulic fractures. We examine an algorithm for a quasi-real time implementation of source location estimation and seismic moment tensor inversion of microseismic events. The algorithm requires the inversion of a dictionary of Green functions containing the multi-component response of each potential seismic source in the subsurface. The problem entails the inversion of a prohibitively large matrix that, at a first glance, does not appear amenable to a real-time implementation. Compressive sensing, however, can be used to reduce the size of the dictionary of Green functions by up to 90% of its original size, making a near real-time execution computationally tractable. The algorithm is validated with a small magnitude earthquake (18 June 2002, Caborn, Indiana) with a well-studied source mechanism and hypocenter location.
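If I understand the dictionary-reduction trick correctly, it can be sketched along the following lines. This is a toy least-squares version with made-up sizes; the actual paper deals with multi-component responses and moment tensors rather than a plain source pick.

```python
import numpy as np

# Toy version of the dictionary-reduction idea: compress a tall dictionary
# of Green functions with a random projection before inverting.
rng = np.random.default_rng(1)
n_samples, n_sources = 5000, 200
G = rng.standard_normal((n_samples, n_sources))  # Green-function dictionary
d = G[:, 42]                                     # data from source #42

k = 500                                          # ~10% of the original rows
R = rng.standard_normal((k, n_samples)) / np.sqrt(k)
x, *_ = np.linalg.lstsq(R @ G, R @ d, rcond=None)
print("estimated source index:", int(np.argmax(np.abs(x))))  # expect 42
```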

We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability.
I could not get the text of this paper, but it sounds interesting: Adaptive Dantzig density estimation by K. Bertin, E. Le Pennec, and V. Rivoirard.

In this paper we propose a new wavelet transform applicable to functions defined on graphs, high dimensional data and networks. The proposed method generalizes the Haar-like transform proposed in [1], and it is similarly defined via a hierarchical tree, which is assumed to capture the geometry and structure of the input data. It is applied to the data using a multiscale filtering and decimation scheme, which can employ different wavelet filters. We propose a tree construction method which results in efficient representation of the input function in the transform domain. We show that the proposed transform is more efficient than both the one-dimensional (1D) and two-dimensional (2D) separable wavelet transforms in representing images. We also explore the application of the proposed transform to image denoising, and show that combined with a subimage averaging scheme, it achieves denoising results which are similar to the ones obtained with the K-SVD algorithm.
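As a toy illustration of the Haar-like idea, one level of such a transform pairs up "sibling" samples into smooth averages and (hopefully sparse) differences. Adjacent samples stand in for tree siblings here, which is my own simplification; the paper's transform handles arbitrary trees and other filters.

```python
import numpy as np

# One level of a Haar-like transform: averages carry the coarse signal,
# differences carry the (sparse) detail.
def haar_level(x):
    a, b = x[0::2], x[1::2]
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)  # approximation, detail

x = np.array([4.0, 4.0, 4.0, 4.0, 9.0, 9.0, 1.0, 1.0])
approx, detail = haar_level(x)
print(approx)  # coarse version of the signal
print(detail)  # all zeros: piecewise-constant data compresses well
```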

Reconstruction of multidimensional signals from the samples of their partial derivatives is known to be an important problem in imaging sciences, with its fields of application including optics, interferometry, computer vision, and remote sensing, just to name a few. Due to the nature of the derivative operator, the above reconstruction problem is generally regarded as ill-posed, which suggests the necessity of using some a priori constraints to render its solution unique and stable. The ill-posed nature of the problem, however, becomes much more conspicuous when the set of data derivatives happens to be incomplete. In this case, a plausible solution to the problem seems to be provided by the theory of compressive sampling, which looks for solutions that fit the measurements on one hand, and have the sparsest possible representation in a predefined basis, on the other hand. One of the most important questions to be addressed in such a case would be of how incomplete the data can be for the reconstruction to remain useful. With this question in mind, the present note proposes a way to augment the standard constraints of compressive sampling by additional constraints related to some natural properties of the partial derivatives. It is shown that the resulting scheme of derivative compressive sampling (DCS) is capable of reliably recovering the signals of interest from much fewer data samples as compared to the standard CS. As an example application, the problem of phase unwrapping is discussed.

In a traditional signal processing system sampling is carried out at a frequency which is at least twice the highest frequency component found in the signal. This is in order to guarantee that complete signal recovery is later on possible. The sampled signal can subsequently be subjected to further processing leading to, for example, encryption and compression. This processing can be computationally intensive and, in the case of battery operated systems, impractically power hungry. Compressive sensing has recently emerged as a new signal sampling paradigm gaining huge attention from the research community. According to this theory it can potentially be possible to sample certain signals at a lower than Nyquist rate without jeopardizing signal recovery. In practical terms this may provide multi-pronged solutions to reduce some systems' computational complexity. In this work, an information-theoretic analysis of real EEG signals is presented that shows the additional benefits of compressive sensing in preserving data privacy. Through this it can then be established generally that compressive sensing not only compresses but also secures while sampling.

Comparison and analysis of nonlinear algorithms for compressed sensing in MRI by Y. Yu, M. Hong, F. Liu, H. Wang, and S. Crozier. The abstract reads:

Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, the interior-point algorithm (l1_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.

Bounds from a Card Trick
by Travis Gagie. The abstract reads:
We describe a new variation of a mathematical card trick, whose analysis leads to new lower bounds for data compression and estimating the entropy of Markov sources.

This paper gives an overview of the eigenvalue problems encountered in areas of data mining that are related to dimension reduction. Given some input high-dimensional data, the goal of dimension reduction is to map them to a low-dimensional space such that certain properties of the initial data are preserved. Optimizing the above properties among the reduced data can be typically posed as a trace optimization problem that leads to an eigenvalue problem. There is a rich variety of such problems and the goal of this paper is to unravel relations between them as well as to discuss effective solution techniques. First, we make a distinction between projective methods that determine an explicit linear projection from the high-dimensional space to the low-dimensional space, and nonlinear methods where the mapping between the two is nonlinear and implicit. Then, we show that all of the eigenvalue problems solved in the context of explicit projections can be viewed as the projected analogues of the so-called nonlinear or implicit projections. We also discuss kernels as a means of unifying both types of methods and revisit some of the equivalences between methods established in this way. Finally, we provide some illustrative examples to showcase the behavior and the particular characteristics of the various dimension reduction methods on real world data sets.
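The trace-optimization connection is easy to check numerically. Here is a small sketch (plain PCA on a synthetic covariance of my own choosing) showing that maximizing trace(W^T C W) over orthonormal W is attained by the top eigenvectors of C:

```python
import numpy as np

# The trace of W^T C W over orthonormal W is maximized by the leading
# eigenvectors of C; the achieved value equals the sum of top eigenvalues.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 10)) * np.linspace(3.0, 0.1, 10)  # anisotropic data
C = np.cov(X, rowvar=False)

evals, evecs = np.linalg.eigh(C)   # eigenvalues in ascending order
W = evecs[:, -2:]                  # top-2 eigenvectors
print("achieved trace :", np.trace(W.T @ C @ W))
print("sum of top two :", evals[-2:].sum())   # the two values match
```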
I could not help but remember a previous entry on a similar subject: Trace: What is it good for? - How Compressed Sensing relates to Dimensionality Reduction

Finally, I found the following announcement for the LMS Invited Lecturer 2011:

Emmanuel Candes, Stanford

Centre for Mathematical Sciences, Cambridge, March 21-25 2011


Emmanuel Candes will give an eight-lecture minicourse, at a level suitable for graduate students, on Compressed Sensing. This is a subject very much at the interface of pure and applied mathematics and the lectures should interest a wide audience.

There will also be one-hour lectures by: Anders Hansen (Cambridge), Vincent Rivoirard (Paris-Dauphine), Carola Schoenlieb (Cambridge) and Jared Tanner (Edinburgh).

College accommodation will be available. Funding is available for Research students from UK universities and a limited amount of funding is available for others. Please email lms2011@maths.cam.ac.uk for more details.

Monday, November 29, 2010

CS: 5D lighting camera, Cyclic Matching Pursuit with a Time-frequency Dictionary, SMALL Workshop on Sparse Dictionary Learning

Today we have an exciting post: a technology that could use some compressive sensing, a newly released reconstruction code, and an announcement for a workshop in London.

First, Ramesh Raskar works on a 5D camera (3D in space and 2D in angle). Much of the story is on these pages:
       http://web.media.mit.edu/~raskar//femto/
       http://www.bbc.co.uk/news/technology-11544037


This is a camera that can look around corners. There are clear opportunities for compressive sensing here, given the inherent linear projection involved in the measurements. I am sure their team would like to hear from CS researchers.

Bob Sturm thinks I provided some motivation for his entry Paper of the Day (Po'D): Cyclic Matching Pursuit with a Time-frequency Dictionary Edition, where he releases the attendant code. The paper of interest is the following: Cyclic Matching Pursuits with Multiscale Time-frequency Dictionaries by Bob Sturm and Mads Christensen. The abstract reads:
We generalize cyclic matching pursuit (CMP), propose an orthogonal variant, and examine their performance using multiscale time-frequency dictionaries in the sparse approximation of signals. Overall, we find that the cyclic approach of CMP produces signal models that have a much lower approximation error than existing greedy iterative descent methods such as matching pursuit (MP), and are competitive with models found using orthogonal MP (OMP), and orthogonal least squares (OLS). This implies that CMP is a strong alternative to the more computationally complex approaches of OMP and OLS for modeling high-dimensional signals.
The code and the discussion on this paper can be found here.
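For readers who want the flavor of the cyclic idea without downloading the attendant code, here is my own minimal sketch (emphatically not Bob's implementation; the real CMP works with multiscale time-frequency dictionaries and more careful updates): run plain matching pursuit, then cyclically revisit each selected atom, return its contribution to the residual, and re-pick the best atom in its place.

```python
import numpy as np

# Matching pursuit followed by cyclic refinement passes over the selected atoms.
def mp_cyclic(D, y, n_atoms, n_cycles=3):
    idx, coef, r = [], [], y.copy()
    for _ in range(n_atoms):                  # standard matching pursuit
        k = int(np.argmax(np.abs(D.T @ r)))
        c = D[:, k] @ r
        idx.append(k); coef.append(c)
        r = r - c * D[:, k]
    for _ in range(n_cycles):                 # cyclic refinement passes
        for j in range(n_atoms):
            r = r + coef[j] * D[:, idx[j]]    # remove atom j from the model
            k = int(np.argmax(np.abs(D.T @ r)))
            c = D[:, k] @ r
            idx[j], coef[j] = k, c            # replace it with the best atom
            r = r - c * D[:, k]
    return idx, coef, r

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
y = D[:, 7] + 0.5 * D[:, 99]
print(sorted(mp_cyclic(D, y, n_atoms=2)[0]))  # expect [7, 99]
```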

Finally, Maria Jafari let me know of the SMALL Workshop on Sparse Dictionary Learning in London

Dear Igor,

We would like to invite you to participate in a free two-day SMALL Workshop on Sparse Dictionary Learning (SMALL as in Sparse Models, Algorithms, and Learning for Large-scale data, the EU project behind the Workshop), which will take place on 6 and 7 January 2011, at Queen Mary University of London.
The readers of Nuit Blanche might also be interested in participating, so we would be very grateful if you could post this on your blog.

The structure of the workshop will include presentations from eight distinguished invited speakers (see below), poster sessions during lunch and a discussion session focused on future directions of research at the end of the second day.

The workshop will be free to attend, however we do not have funding available to cover travel expenses for attendees.

We still have space for a few posters that can be presented during the lunch/networking session. If you wish to submit a poster, please notify us by sending a title and a 100 word abstract.
Notification of acceptance will be after the registration deadline.

Please confirm by Friday 3rd December 2010 if you will be able to participate in our workshop.
Best wishes

Maria Jafari and Mark Plumbley



Workshop: Sparse Dictionary Learning

Sparse signal models have been successfully applied by the signal processing community to a variety of applications. They rely on the selection of a sparsifying dictionary that matches the signal characteristics, and thus yields a sparse representation. While the dictionary is often selected from a pool of pre-defined dictionaries, e.g. wavelets and DCT, this does not always exploit the properties of the signal. An alternative is to learn the sparsifying dictionary from the data. This flexible approach to sparse modeling has the potential of yielding efficient methods that scale well with dimension, and incorporate knowledge about the nature and structure of the data.

In this workshop, we aim to present existing work on dictionary learning methods, discuss how they address the need to develop fast and structured algorithms, and consider future directions for this exciting and challenging field.


Invited speakers:

Francis Bach
Mike Davies
Miki Elad
Kjersti Engan
Holger Rauhut
Karin Schnass
Pierre Vandergheynst
John Wright
---------------------------------
The SMALL project, “Sparse models, algorithms and learning for large scale data (SMALL)”, is a collaboration between INRIA/IRISA (France), Queen Mary University of London (UK), University of Edinburgh (UK), EPFL (Switzerland), and Technion (Israel).



Invited Speakers

John Wright (Microsoft Research Asia)
Dictionary Learning via L1 Minimization: Well-posedness and Correct Recovery
We discuss the problem of decomposing a given data matrix as a product of some unknown basis and a matrix of sparse coefficients. This problem finds applications in source separation, and is also attracting significant interest in the sparse representation community. We discuss circumstances under which this problem is well-posed, as well as situations under which it can be solved by efficient algorithms. In particular, we show that under natural assumptions, the problem is indeed globally well-posed even with fairly small sets of observations (size proportional to the size of the dictionary to be recovered times a logarithmic oversampling factor). We further show that under similar circumstances, L1 norm minimization locally recovers the desired solution. This extends results of Gribonval and Schnass to the case of nonsquare dictionaries. We discuss circumstances under which this recovery might actually be expected to be globally optimal, and discuss several example applications in hyperspectral imaging.

Holger Rauhut (University of Bonn)
Compressive Sensing and Structured Random Matrices
Compressive sensing is a recent paradigm in signal processing and sampling theory that predicts that sparse signals can be recovered from a small number of linear and non-adaptive measurements using convex optimization or greedy algorithms. Quite remarkably, all good constructions of the so-called measurement matrix known so far are based on randomness. While Gaussian random matrices provide optimal recovery guarantees, such unstructured matrices are of limited use in applications. Indeed, structure often allows for fast matrix-vector multiplies. This is crucial in order to speed up recovery algorithms and to deal with large scale problems. The talk discusses models of structured random matrices that are useful in certain applications, and presents corresponding recovery guarantees. An important type of structured random matrix arises in connection with sampling sparse expansions in terms of bounded orthogonal systems (such as the Fourier system). The second type of structured random matrices to be discussed are partial random circulant matrices, that is, matrices arising from convolution. In particular, we present recent results with J. Romberg and J. Tropp on the restricted isometry property of such matrices.

Kjersti Engan (University of Stavanger )
Dictionary Learning by RLS-DLA with Applications
The Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) is an online/adaptive approach, i.e. it updates the dictionary for every new training vector. By including a forgetting factor in the algorithm, we get a search-then-converge scheme that is much less dependent on the initial dictionary than some other dictionary learning algorithms. This talk will give a presentation of the algorithm, and some of its properties will be discussed.
Experiments on texture classification using Frame Texture Classification Method (FTCM) with dictionaries learned by RLS-DLA will be presented. We will also show experiments from other applications like compression and denoising of images.

Francis Bach (INRIA-ENS)
Structured Sparsity-inducing Norms through Submodular Functions
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovasz extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.

Karin Schnass (RICAM)
Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
The talk will be about the problem of learning a dictionary providing sparse representations for a given signal class via $\ell_1$-minimisation. The problem can also be seen as factorising a $d \times N$ matrix $Y=(y_1 \ldots y_N)$, $y_n \in \mathbb{R}^d$, of training signals into a $d \times K$ dictionary matrix $D$ and a $K \times N$ coefficient matrix $X=(x_1 \ldots x_N)$, $x_n \in \mathbb{R}^K$, which is sparse. The exact question studied here is when a dictionary-coefficient pair $(D,X)$ can be recovered as a local minimum of a (nonconvex) $\ell_1$-criterion with input $Y=DX$. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples $N$ grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. $N \approx C K \log K$, in contrast to previous approaches requiring combinatorially many samples.

Mike Davies (University of Edinburgh)
Rank Aware Algorithms for Joint Sparse Recovery
This talk will revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has links with work on direction of arrival estimation in the field of array signal processing. Inspired by these links, we consider a new family of MMV algorithms based on the well-known MUSIC method in array processing, highlighting the role of the rank of the unknown signal matrix in determining the difficulty of the recovery problem. We will show that the recovery conditions for our new algorithms take into account the observed rank of the signal matrix. This is in contrast to previously proposed techniques such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques, which we show to be effectively blind to the rank information. Numerical simulations demonstrate that our new rank aware techniques are significantly better than existing methods in dealing with multiple measurements.

Michael Elad (The Technion)
Exploiting Statistical Dependencies in Sparse Representations
In the commonly used sparse representation modeling, the atoms are assumed to be independent of each other when forming the signal. In this talk we shall introduce a statistical model called Boltzmann Machine (BM) that enables such dependencies to be taken into account. Adopting a Bayesian point of view, we first treat the pursuit problem - given a signal, and assuming that the model parameters and the dictionary are known, find its sparse representation. We derive the exact MAP estimation, and show that just like in the independent case, this leads to an exponential search problem. We derive two algorithms for its evaluation: a greedy approximation approach for the general case, and an exact estimation that corresponds to a unitary dictionary and banded interaction matrix. We also consider the estimation of the model parameters, learning these parameters directly from training data. We show that given the signals' representations, this problem can be posed as a convex optimization task by using the Maximum Pseudo-Likelihood (MPL).

Pierre Vandergheynst (EPFL)
Transductive Learning with Spectral Graph Wavelets
In this talk I will quickly review the construction and properties of frames of spectral graph wavelets for functions defined on the vertices of a weighted undirected graph. I then leverage the existing state of the art algorithms in sparse recovery to formulate various versions of transductive learning on graphs. Comparing analysis and synthesis based algorithms reveals that the structure of the frame plays a very important role and that sparsity is nicely connected to a notion of smoothness on graphs. I take these lessons one step further to propose better optimization criteria for this problem and finish with a list of interesting open questions for the SMALL consortium.


Saturday, November 27, 2010

Long Term View

While 157 people viewed this entry on their feed readers on the first day, 18 people were still viewing it fourteen days later. The sum over the whole period is about 470 views, roughly three times the first day's count, a ratio that is pretty constant for most entries.

Work in Progress

I am following up on an idea about tracking research that has been very well described by Muthu before. For those of you interested, I am trying to put together several pages, each focused on a particular subject related to compressive sensing, here. Right now, they are mostly series of links to Nuit Blanche entries where papers of interest are featured. Any input is welcome. One of the reasons I do this instead of putting something on Wikipedia is that I don't want to give away my editing process to the masses :-)



Credit: NASA/JPL/Space Science Institute. Rhea (there is oxygen on Rhea).

Friday, November 26, 2010

No Comment

(from here via here)

Using Kinect for a compressive sensing hack? And a dictionary attack

As some of you know, I am very much interested in hardware hacking as a way to showcase compressive sensing on the cheap. A few days ago, I mentioned the numerous hacks surrounding the release of the Kinect, a USB camera that connects to the Xbox and provides 3D depth-map information. As it turns out, here is what Wikipedia has on the reverse engineering undertaken so far:

The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, and allows the Kinect sensor to see in 3D under any ambient light conditions.[9][22] The sensing range of the depth sensor is adjustable, with the Kinect software capable of automatically calibrating the sensor based on gameplay and the player's physical environment, such as the presence of furniture.[23]
Described by Microsoft personnel as the primary innovation of Kinect,[24][25][26][27] the software technology enables advanced gesture recognition, facial recognition, and voice recognition.[28] According to information supplied to retailers, the Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player.[29]
Through reverse engineering efforts,[30] it has been determined that the Kinect sensor outputs video at a frame rate of 30 Hz, with the RGB video stream at 8-bit VGA resolution (640 × 480 pixels) with a Bayer color filter, and the monochrome video stream used for depth sensing at 11-bit VGA resolution (640 × 480 pixels with 2,048 levels of sensitivity). The Kinect sensor has a practical ranging limit of 1.2–3.5 metres (3.9–11 ft) distance when used with the Xbox software. The area required to play Kinect is roughly 6 m², although the sensor can maintain tracking through an extended range of approximately 0.7–6 metres (2.3–20 ft). The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot is capable of tilting the sensor as much as 27° either up or down. The microphone array features four microphone capsules,[31] and operates with each channel processing 16-bit audio at a sampling rate of 16 kHz.[29]
The description is not really conclusive about the actual underlying technology used to obtain the depth map. So I turned to Daniel Reetz (you may recall our previous interactions) for some answers, and here is what he said:
Hey Igor,
The projector is not modulated in time at all. It is just a laser continuously shining through some holographic diffuser. The diffuser material creates a fixed pattern of speckle. The speckle is essentially “in focus” from the front of the Kinect sensor all the way to the furthest back surface that it can see.
So there seems to be no time-domain subsampling to be had from the projector. The camera which images the projected images to do depth estimation is rather high resolution (1280×920, offhand), and seems to be being read out at 30hz. The resulting depth map has only 320×240 pixels, but not necessarily 320×240 actual resolution.
So the answer is a little complicated. The projected speckle image is unchanging. In this case, I took a bunch of pictures of this non-changing pattern so that the images could be averaged together for better speckle counting.
I can’t wait to get back to LA where I can do the speckle generation demonstration I have in mind, and spend some more time with these patents/papers to get to the core of this technology.
Thanks Daniel !

So it looks like time modulation is out the window, but I keep wondering whether this set-up can be used for some compressive sensing experiments. You can buy the Kinect with an Xbox here or here. Most hacks performed so far are documented on the KinectHacks blog.

In a different direction, I found this project fascinating: Why blurring sensitive information is a bad idea. It touches on dictionary attacks for images, a subject not unknown on this blog, where one compares known elements of a dictionary, muddled through a blurring (random?) function, against an unknown scene.
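A minimal sketch of that attack, assuming the blur kernel is known (or estimated) and using random images as stand-ins for rendered characters or numbers:

```python
import numpy as np

# Dictionary attack on a blurred image: blur every candidate from a
# dictionary of known renderings and keep the one closest to the target.
rng = np.random.default_rng(4)
candidates = rng.random((100, 32, 32))   # hypothetical known renderings

def blur(img, k=5):
    kernel = np.ones(k) / k               # separable box blur
    out = img
    for axis in (0, 1):
        out = np.apply_along_axis(lambda v: np.convolve(v, kernel, "same"), axis, out)
    return out

target = blur(candidates[37])             # the "redacted" image
scores = [np.linalg.norm(blur(c) - target) for c in candidates]
print("best match:", int(np.argmin(scores)))   # expect 37
```

The point of the original project is precisely that blurring is a many-to-few map that is trivially invertible when the set of plausible plaintexts is small.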

Thursday, November 25, 2010

Nobody Cares About You and Your Algorithm

The business model of Q&A sites like StackOverflow revolves around recognizing that there is a difference between replying to a question and answering that question. Equally, there is a difference between publishing a paper on an idea and releasing this idea to the world in the form of an implementation.

In a number of cases, as I look forward to kicking the tires of some idea featured in a paper, the feedback I get from the authors is that the implementation 'is easy', i.e. that it should be easy for me to take the pseudo-code and implement it in Matlab/Octave.

Sure... all your peers could write an implementation of your algorithm. But between the two of us, do you really think your peers will get famous re-implementing it? You and I both know the answer.

If Nuit Blanche is to be used as a way of promoting your algorithm or your ideas (as it should), you need to provide an implementation, even though 'it is easy'. Otherwise your idea, as ingenious as it may be, will die. Become the giant that allows others to stand on your shoulders.

See, it's always a question of context: pulsing a nuclear reactor is easy. Pulsing a nuclear reactor when you have no training in the ins and outs of the process... not so much. Performing experiments is easy. Performing experiments in a zero-gravity environment where everybody around you barfs their brains out while the ceiling drips condensation water on your data acquisition computer keyboard... not so much.

Yes, in some idealized world things are easy; in some idealized world, your readers would know how to install Sedumi on their computers. Except, when was the last time you lived in that idealized world? Most comments on this simple post revolve around software installation issues as a precondition for running a code. If people have problems installing a package that comes with installation how-tos, do you really think 'it is easy' for them to rewrite an implementation of your algorithm?

Ideas are just that: ideas. You need other people to take them forward, and the only way to do that is by releasing an actual implementation that works. If you don't, your ideas will become irrelevant. Don't get me wrong: you'll get a good publication, your peers will think highly of you, some may even grant you tenure or a better job, some might even put your name up for prizes, but the fact of the matter is, the rest won't care. Worse, your idea will be of no consequence to new entrants in the field, be they students or curious researchers from other fields.

Even if you think that the constants of your implementation are too large, or that the asymptotics are bad, don't fear, somebody, somewhere, thinks it's really cool because it solves her problem.

The Nuit Blanche effect is nothing compared to the Slashdot effect, yet it provides you with the ability to reach thousands of potentially interested and focused people and/or future employers. More than a hundred people will come to your site in a few days as a result of being featured here. And trust me, these people care. Help Nuit Blanche help you become a f@%$*$ rock star: make some implementation of your ideas available.

[Stats for the link pointing to TFOCS solver last week.]

Wednesday, November 24, 2010

CS: The long entry of the week

Some of you are going to take a little break on Thursday, but in case you get bored waiting for the next plane, or because it is going to be a slow news day (except if North Korea continues doing what it just started), you may want to read the following papers. Enjoy.

First, here are some lecture notes entitled Compressed Sensing and Sparse Signal Processing by Wu-Sheng Lu. I have put a link to them on the CS course page.

Just found on arxiv:
 Purpose: We develop an iterative image-reconstruction algorithm for application to low-intensity computed tomography (CT) projection data, which is based on constrained, total-variation (TV) minimization. The algorithm design focuses on recovering structure on length scales comparable to a detector-bin width.
Method: Recovering the resolution on the scale of a detector bin requires that pixel size be much smaller than the bin width. The resulting image array contains many more pixels than data, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained, TV-minimization, as suggested by compressive sensing. The presented pseudo-code for solving constrained, TV-minimization is designed to yield an accurate solution to this optimization problem within 100 iterations.
Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire, to test resolution. The proposed algorithm is compared with filtered back-projection (FBP).
Conclusion: The algorithm may have some advantage over FBP in that the resulting noise-level is lowered at equivalent contrast levels of the wire.
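For readers who want to play with the idea, here is a toy, unconstrained cousin of the approach: my own smoothed-TV gradient descent on a 1D piecewise-constant signal. The paper's pseudo-code solves the constrained problem in 2D/3D; every constant below is an arbitrary choice for illustration.

```python
import numpy as np

# Gradient descent on ||Ax - y||^2 / 2 + mu * TV_eps(x), with TV smoothed as
# sum_i sqrt((x_{i+1}-x_i)^2 + eps) so that it is differentiable.
rng = np.random.default_rng(5)
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0        # piecewise-constant "object"
A = rng.standard_normal((32, n)) / np.sqrt(32)   # undersampled measurements
y = A @ x_true

def tv_grad(x, eps=1e-3):
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps)    # derivative of each smoothed TV term
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

x, step, mu = np.zeros(n), 0.05, 0.2
for _ in range(3000):
    x -= step * (A.T @ (A @ x - y) + mu * tv_grad(x))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```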
On HAL:
We propose a framework for blind multiple filter estimation from convolutive mixtures, exploiting the time-domain sparsity of the mixing filters and the disjointness of the sources in the time-frequency domain. The proposed framework includes two steps: (a) a clustering step, to determine the frequencies where each source is active alone; (b) a filter estimation step, to recover the filter associated to each source from the corresponding incomplete frequency information. We show how to solve the filter estimation step (b) using convex programming, and we explore numerically the factors that drive its performance. Step (a) remains challenging, and we discuss possible strategies that will be studied in future work.

An ALPS View of Sparse Recovery by Volkan Cevher. The abstract reads:
We provide two compressive sensing (CS) recovery algorithms based on iterative hard-thresholding. The algorithms, collectively dubbed as algebraic pursuits (ALPS), exploit the restricted isometry properties of the CS measurement matrix within the algebra of Nesterov’s optimal gradient methods. We theoretically characterize the estimation and convergence guarantees of ALPS for signals that are sparse on ortho-bases as well as on tight-frames. Simulation results demonstrate a great potential for ALPS in terms of phase-transition, noise robustness, and CS reconstruction.


Fast Hard Thresholding with Nesterov’s Gradient Method by Volkan Cevher, Sina Jafarpour. The abstract reads:
We provide an algorithmic framework for structured sparse recovery which unifies combinatorial optimization with the non-smooth convex optimization framework by Nesterov [1, 2]. Our algorithm, dubbed Nesterov iterative hard-thresholding (NIHT), is similar to the algebraic pursuits (ALPS) in [3] in spirit: we use the gradient information in the convex data error objective to navigate over the nonconvex set of structured sparse signals. While ALPS feature a priori estimation and convergence guarantees, we were only able to provide an online estimation guarantee for NIHT (e.g., the guarantees require the algorithm execution). Experiments show however that NIHT can empirically outperform ALPS and other state-of-the-art convex optimization-based algorithms in sparse recovery.
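Here is my own back-of-the-envelope sketch of the Nesterov-plus-hard-thresholding idea: momentum step, gradient step, then keep the s largest entries. The step size and momentum schedule are simplified stand-ins; see the papers for the actual ALPS/NIHT algorithms and their guarantees.

```python
import numpy as np

# Accelerated iterative hard thresholding: project the momentum-extrapolated
# gradient step onto the set of s-sparse vectors at every iteration.
def niht(A, y, s, iters=200):
    n = A.shape[1]
    x, x_prev = np.zeros(n), np.zeros(n)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    for t in range(1, iters + 1):
        v = x + (t - 1.0) / (t + 2.0) * (x - x_prev)   # Nesterov-style momentum
        g = v + mu * (A.T @ (y - A @ v))               # gradient step
        x_prev = x
        x = np.zeros(n)
        keep = np.argsort(np.abs(g))[-s:]              # hard threshold: keep s largest
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
print(np.round(niht(A, A @ x_true, s=3)[[3, 50, 120]], 3))  # ~ [1., -2., 0.5]
```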
Of note: besides the "traditional" comparison with the Donoho-Tanner phase transition, the paper also shows the same DT phase transition under an additional positivity constraint.
We introduce the Multiplicative Update Selector and Estimator (MUSE) algorithm for sparse approximation in underdetermined linear regression problems. Given $f = \Phi\alpha + n$, the MUSE provably and efficiently finds a $k$-sparse vector $\hat{\alpha}$ such that $\|\Phi\hat{\alpha} - f\|_1 \leq \|n\|_1 + O(1/\sqrt{k})$, for any $k$-sparse vector $\alpha$, any measurement matrix $\Phi$, and any noise vector $n$. We cast the sparse approximation problem as a zero-sum game over a properly chosen new space; this reformulation provides salient computational advantages in recovery. When the measurement matrix $\Phi$ provides stable embedding to sparse vectors (the so-called restricted isometry property in compressive sensing), the MUSE also features guarantees on $\|\alpha - \hat{\alpha}\|_2$. Simulation results demonstrate the scalability and performance of the MUSE in solving sparse approximation problems based on the Dantzig Selector.
A $K$-sparse vector $x \in \mathbb{R}^N$ produces measurements via linear dimensionality reduction as $u = \Phi x + n$, where $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$), and $n \in \mathbb{R}^M$ consists of independent and identically distributed, zero-mean Gaussian entries with variance $\sigma^2$. An algorithm, after its execution, determines a vector $\hat{x}$ that has $K$ nonzero entries and satisfies $\|u - \Phi\hat{x}\| \leq \epsilon$. How far can $\hat{x}$ be from $x$? When the measurement matrix $\Phi$ provides stable embedding to $2K$-sparse signals (the so-called restricted isometry property), they must be very close. This paper therefore establishes an analytic bound to characterize the distance $\|\hat{x} - x\|$ based on the online meta-information. Our bound improves the pre-run algorithmic recovery guarantees, and is quite useful in exploring various data error and solution sparsity trade-offs. We also evaluate the performance of several sparse recovery algorithms in the context of our bound.

We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.

We consider bearing estimation of multiple narrow-band plane waves impinging on an array of sensors. For this problem, bearing estimation algorithms such as minimum variance distortionless response (MVDR), multiple signal classification, and maximum likelihood generally require the array covariance matrix as sufficient statistics. Interestingly, the rank of the array covariance matrix is approximately equal to the number of the sources, which is typically much smaller than the number of sensors in many practical scenarios. In these scenarios, the covariance matrix is low-rank and can be estimated via matrix completion from only a small subset of its entries. We propose a distributed matrix completion framework to drastically reduce the inter-sensor communication in a network while still achieving near-optimal bearing estimation accuracy. Using recent results in noisy matrix completion, we provide sampling bounds and show how the additive noise at the sensor observations affects the reconstruction performance. We demonstrate via simulations that our approach sports desirable tradeoffs between communication costs and bearing estimation accuracy.
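Since the covariance matrix is low-rank, it can indeed be completed from a subset of its entries. Here is a hedged sketch using plain singular value thresholding on a synthetic rank-2 matrix; the paper's distributed, noise-aware scheme for sensor networks is considerably more involved.

```python
import numpy as np

# Complete a low-rank matrix from ~50% of its entries by proximal-gradient
# singular value thresholding: data step on observed entries, then shrink
# the singular values to promote low rank.
rng = np.random.default_rng(11)
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = rng.random((n, n)) < 0.5                                # observed entries

X, tau = np.zeros((n, n)), 2.0
for _ in range(300):
    U, sv, Vt = np.linalg.svd(X + mask * (M - X), full_matrices=False)
    X = (U * np.maximum(sv - tau, 0.0)) @ Vt    # shrink singular values
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))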

Tuesday, November 23, 2010

Spectrum Unfolding and L1 Minimization?

Nuclear detectors are a very challenging design problem because, unlike the light detectors featured in computational photography, all the elements of the detectors interact with the particles being tracked. This is the case in particular for neutron detectors. As I was reading on the subject (in Radiation Detection and Measurement) for the purpose of evaluating a compressive equivalent to current technology, I came across a problem that is well studied in this area, called spectrum unfolding: fitting the energy spectrum recorded by detectors as a linear combination of simple spectra of known interactions of the detector with specific particles at specific energies. This situation is also encountered in hyperspectral unmixing: how can a signal be decomposed as a series of different spectra found in some dictionary? To give more context, here is a good introduction to the subject in this presentation entitled Unfolding techniques for neutron spectrometry by Marcel Reginatto.



Of interest is that none of the "newer" methods of investigation include the reconstruction solvers used in compressive sensing. Surely, L1 minimization would provide good results here as well. The next step would be to design a sensor that permits either a faster way of performing this spectrum unfolding or provides additional spatial information, a feature that is currently not available. More on that later...
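In the meantime, to make the unfolding-as-sparse-recovery connection explicit, here is a toy sketch: the measured pulse-height spectrum y is modeled as R x, where column j of the response matrix R is the detector response to a mono-energetic source, and the incident spectrum x is nonnegative and sparse. The Gaussian responses below are synthetic stand-ins of my own making; real response matrices come from calibration measurements or transport calculations.

```python
import numpy as np

# Sparsity-promoting spectrum unfolding with a nonnegative ISTA:
# min ||R x - y||^2 / 2 + lam * sum(x), subject to x >= 0.
rng = np.random.default_rng(7)
n_bins, n_energies = 128, 32
bins = np.arange(n_bins)
centers = np.linspace(10, 120, n_energies)
R = np.exp(-0.5 * ((bins[:, None] - centers[None, :]) / 2.0) ** 2)

x_true = np.zeros(n_energies); x_true[[10, 20]] = [5.0, 2.0]   # two lines
y = R @ x_true + 0.05 * rng.standard_normal(n_bins)

lam = 0.5
L = np.linalg.norm(R, 2) ** 2
x = np.zeros(n_energies)
for _ in range(1000):
    g = x - (R.T @ (R @ x - y)) / L
    x = np.maximum(g - lam / L, 0.0)      # soft threshold + nonnegativity
print("lines found at indices:", sorted(np.flatnonzero(x > 0.5)))
```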

Other papers and webpages of interest:

Monday, November 22, 2010

CS: Two or three postdocs at University of Arizona

Amit Ashok just sent me the following:

The application procedure is available at www.uacareertrack.com (job number 46112).

Position title: Postdoctoral Research Associate (ECE Department at the University of Arizona)
Position summary:
The DARPA Knowledge Enhanced Compressive Measurement (KECoM) project has the primary goals of 1) developing a mathematical framework for incorporating prior knowledge of tasks and observed signals into the design of compressive sensors, and 2) applying this framework to relevant sensing modalities. The UA anticipates a KECoM award to support investigation of information-theoretic techniques for sensor optimization under prior knowledge, as well as application to a compressive spread-spectrum receiver, a compressive radar transmitter/receiver, and a compressive spectral imager.
* Two to three postdoctoral researchers will be hired for the DARPA KECoM effort
* Occasional travel may be required
* Original postdoctoral appointment is for one year with renewal possible for up to a total of three years

Duties and responsibilities:
* Provide advanced technical expertise in RF compressive receiver design/engineering OR compressive spectral imager design/engineering
* Provide advanced technical expertise in compressive sensor modeling AND/OR hardware demonstration systems
* Day-to-day technical supervision of other UA program technical staff
* Periodic reporting and documentation as required by program timeline

Additional minimum qualifications:
* Ph.D. in Electrical Engineering, Optical Sciences/Engineering, Mathematics, Applied Physics, OR related discipline
* Demonstrated experience in RF receiver design and modeling, spectral imager design and modeling, AND/OR signal processing or related disciplines
* Demonstrated ability for independent research with associated publication record
* Demonstrated ability to work in a team environment
* Demonstrated ability to communicate effectively, both verbally and in writing

Preferred qualifications:
* Experience in compressive sensing
* Experience in spread-spectrum/SIGINT receiver design and implementation
* Experience in radar target recognition
* Experience in information theory
* Experience with MATLAB
* Experience in numerical optimization techniques
* Experience with one or more high-level programming languages (C, C++, etc.)

The application procedure is available at www.uacareertrack.com (job number 46112).

Sunday, November 21, 2010

CS: Bad Image reconstruction, 3D cameras, hacks and calibration.

If there is anything annoying about the videos of one hundred naked citizens being imaged by X-ray scanners at airports, it's the fact that, after all the money spent, the reconstruction solvers are just plain terrible!

I just came across an article entitled Single-shot depth-camera design by Yung-Lin Chen, Chuan-Chung Chang, Ludovic Angot, Chir-Weei Chang and Chung-Hao Tien, which starts with: Adding an axially symmetric phase-coded mask and applying an appropriate metric to characterize image blur can provide distance estimates for individual images using a single camera.
The concept is not new (see here, here or here for a start), but it looks like, unlike previous attempts, depth is well calibrated here.

In the end though, while getting depth from one camera is very nice, how is this feat going to compete with a cheap two-camera system being hacked by everybody, as is the case with the numerous Kinect hacks? Of specific interest are the attempts at understanding how depth is computed in these entries:

All this to say that the most important part of transfer function engineering or reverse engineering revolves around the difficult business of calibration.

Saturday, November 20, 2010

No Comment


From here.

Science at Work network of blogs

In The Modeler's Known Unknowns and Unknown Knowns, I did not make it clear, but eventually will, that the knowledge of the probability of an event depends on where one stands in an organization's hierarchy.

In a different direction, Etienne Côme provided a connection tree of the different "machine learning" blogs. I put machine learning in quotes because it really is a larger mapping of blogs that are into machine learning, theoretical computer science, statistics, math and so on (the translation is here). From the small sample I could survey, it really is a tree of blogs spanning a chunk of what I would call Science at Work, as most of these blogs don't seem to care about science and technology outreach to the masses but are rather focused on a specific, targeted audience of specialists. For that reason, it is no wonder that the connections are structurally weak, as Etienne points out. To get the full pdf, you need to go there.

RMM: The Modeler's Known Unknowns and Unknown Knowns

I just wrote a new entry on the mathematical modeler's known unknowns and unknown knowns. Check it out!

Friday, November 19, 2010

CS: RIPless theory of compressed sensing, Secrecy, PADDLE, Sparse PCA, non-asymptotic analysis of random matrices, Multiview CS, a postdoc and a PhD studentship and more.

So it looks like brandishing the RIP argument was becoming too much of a burden :-) Here is a new paper on the subject: A probabilistic and RIPless theory of compressed sensing by Emmanuel J. Candes and Yaniv Plan. The abstract reads:
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models - e.g. Gaussian, frequency measurements - discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) - they make use of a much weaker notion - or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
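The Fourier example in the last sentence is easy to simulate. Here is a small sketch (a complex-valued ISTA with constants that are my own arbitrary choices, not the authors' setup): recover an s-sparse signal from about s log n randomly selected, noisy Fourier coefficients via l1 minimization.

```python
import numpy as np

# Sparse recovery from randomly subsampled, noisy Fourier measurements.
rng = np.random.default_rng(8)
n, s = 512, 5
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = 3.0 * rng.standard_normal(s)

m = int(4 * s * np.log(n))                     # ~ s log n measurements
rows = rng.choice(n, m, replace=False)
noise = 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = np.fft.fft(x_true)[rows] + noise

lam, L = 1.0, float(n)                         # F^H F = n I, so ||A||^2 <= n
x = np.zeros(n, dtype=complex)
for _ in range(300):
    r = np.zeros(n, dtype=complex)
    r[rows] = np.fft.fft(x)[rows] - y          # residual on the sampled rows
    g = x - n * np.fft.ifft(r) / L             # gradient step (A^H = n * ifft)
    mag = np.maximum(np.abs(g), 1e-12)
    x = np.where(mag > lam / L, (1.0 - lam / (L * mag)) * g, 0.0)  # complex soft threshold
print("true support :", sorted(support))
print("found support:", sorted(np.flatnonzero(np.abs(x) > 0.5)))
```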
One may note the reliance on a previous paper by David Gross on quantum state tomography. Today we have another quantum paper (a significant revision, actually):

The resources required to characterise the dynamics of engineered quantum systems-such as quantum computers and quantum sensors-grow exponentially with system size. Here we adapt techniques from compressive sensing to exponentially reduce the experimental configurations required for quantum process tomography. Our method is applicable to dynamical processes that are known to be nearly-sparse in a certain basis and it can be implemented using only single-body preparations and measurements. We perform efficient, high-fidelity estimation of process matrices on an experiment attempting to implement a photonic two-qubit logic-gate. The data base is obtained under various decoherence strengths. We find that our technique is both accurate and noise robust, thus removing a key roadblock to the development and scaling of quantum technologies.
We have also several other papers from arxiv:

Perfect Secrecy Using Compressed Sensing by Mahmoud Ramezani Mayiami, Babak Seyfe, Hamid G. Bafghi. The abstract reads:
In this paper we consider compressed sensing-based encryption and propose the conditions under which perfect secrecy is obtained. We prove that when the Restricted Isometry Property (RIP) holds and the number of measurements is more than twice the sparsity level, i.e. M \geq 2k, the perfect secrecy condition introduced by Shannon is achievable if the message block is not equal to zero or we have infinite block length.

PADDLE: Proximal Algorithm for Dual Dictionaries LEarning by Curzio Basso, Matteo Santoro, Alessandro Verri, Silvia Villa. The abstract reads:
Recently, considerable research efforts have been devoted to the design of methods to learn from data overcomplete dictionaries for sparse coding. However, learned dictionaries require the solution of an optimization problem for coding new data. In order to overcome this drawback, we propose an algorithm aimed at learning both a dictionary and its dual: a linear mapping directly performing the coding. By leveraging on proximal methods, our algorithm jointly minimizes the reconstruction error of the dictionary and the coding error of its dual; the sparsity of the representation is induced by an $\ell_1$-based penalty on its coefficients. The results obtained on synthetic data and real images show that the algorithm is capable of recovering the expected dictionaries. Furthermore, on a benchmark dataset, we show that the image features obtained from the dual matrix yield state-of-the-art classification performance while being much less computationally intensive.

PADDLE is available from this page:

PADDLE is a Python package for learning dictionaries with frame-like properties, as well as achieving sparse coding of the training data. In particular, it provides algorithms for
  • learning a dictionary together with a dual of itself, and
  • learning a dictionary close to a tight-frame.
Of note: another Python package called L1L2Py.

L1L2Py is a Python package to perform feature selection by means of \ell_1\ell_2 regularization with double optimization.

Introduction to the non-asymptotic analysis of random matrices by Roman Vershynin. The abstract reads:
This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprung off from the development of geometric functional analysis in the 1970s-2000s. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. A few basic applications are covered in this text, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing. These notes are written particularly for graduate students and beginning researchers in different areas, including functional analysts, probabilists, theoretical statisticians, electrical engineers, and theoretical computer scientists.
Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. Unfortunately, this problem is also combinatorially hard and we discuss convex relaxation techniques that efficiently produce good approximate solutions. We then describe several algorithms solving these relaxations as well as greedy algorithms that iteratively improve the solution quality. Finally, we illustrate sparse PCA in several applications, ranging from senate voting and finance to news data.
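As a taste of the problem, here is a deliberately naive sketch: truncated power iteration, one of the simplest sparse PCA heuristics. The convex relaxations and greedy refinements surveyed in the paper are more principled than this illustration.

```python
import numpy as np

# Sparse PCA by truncated power iteration: multiply by the covariance,
# keep only the k largest entries, renormalize, repeat.
def sparse_pc(C, k, iters=100, seed=9):
    v = np.random.default_rng(seed).standard_normal(C.shape[0])
    for _ in range(iters):
        v = C @ v
        v[np.argsort(np.abs(v))[:-k]] = 0.0   # zero all but the k largest
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(10)
z = np.zeros(20); z[:4] = 1.0                 # sparse ground-truth component
X = rng.standard_normal((300, 20)) + 3.0 * np.outer(rng.standard_normal(300), z)
v = sparse_pc(np.cov(X, rowvar=False), k=4)
print("selected variables:", sorted(np.flatnonzero(v)))   # expect [0, 1, 2, 3]
```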


There is a studentship sponsored by Philips Inc. The announcement is on the compressive sensing jobs page.

Thomas Strohmer sent me the following postdoc opportunity (also on the compressive sensing jobs page):



POST-DOCTORAL POSITION IN MATHEMATICS, University of California, Davis

The Department of Mathematics at the University of California, Davis, is soliciting applications for a Postdoctoral Scholar position with a starting date between March 2011 and October 2011.
To be considered for the Postdoctoral Scholar position, the Department seeks applicants with a strong knowledge base in Sparse Approximations, Compressive Sensing and/or Optimization. Applicants are required to have completed their Ph.D. by August 31, 2011 (or before the starting date, if the starting date is earlier). The position requires working on research related to two defense-based projects (DTRA/NSF and DARPA), led by Professor Thomas Strohmer. The research is concerned with developing theory and algorithms for sparse recovery in connection with threat detection and radar. The candidate should also have excellent programming skills in Matlab. The annual salary of this position is $60,000, plus some travel funds. The position carries no teaching duties, but teaching may be possible upon request. The appointment is renewable for a total of up to three years, assuming satisfactory performance.

The UC Davis Math and Applied Math programs have been ranked among the nation’s top programs by the National Research Council in its most recent report. Additional information about the Department may be found at http://www.math.ucdavis.edu/. Our postal address is Department of Mathematics, University of California, One Shields Avenue, Davis, CA 95616-8633.
Applications will be accepted until the positions are filled. To guarantee full consideration, the application should be received by December 15, 2010 by submitting the AMS Cover Sheet and supporting documentation (CV, publication list, letters of reference, brief research statement) electronically through http://www.mathjobs.org/. The University of California is an affirmative action/equal opportunity employer.

David Brady is the co-founder and a chief scientist of Centice. He's done some of the early implementations of compressive sensing hardware and even wrote a book about how CS gets into optics (Optical Imaging and Spectroscopy). Prasant Potuluri, the CTO of Centice, also did compressive sensing in his thesis on Multiplexed Optical Sensors for Reference Structure Tomography and Compressive Spectroscopy. Well, why do I write this? Because Centice just raised funding (Innovation Ventures-Backed Centice Verifies Most of $1.3M Equity Offering), which really underscores the fact that you can do compressive sensing and be a successful entrepreneur :-). Congratulations, David and Prasant!

Finally, we have some papers on multi-view sensors using compressive sensing:

A Multi-Sensor Compressed Sensing Receiver: Performance Bounds and Simulated Results by Benjamin Miller, Joel Goodman, Keith Forsythe, John Z. Sun and Vivek K. Goyal. The abstract reads:
Multi-sensor receivers are commonly tasked with detecting, demodulating and geolocating target emitters over very wide frequency bands. Compressed sensing can be applied to persistently monitor a wide bandwidth, given that the received signal can be represented using a small number of coefficients in some basis. In this paper we present a multi-sensor compressive sensing receiver that is capable of reconstructing frequency-sparse signals using block reconstruction techniques in a sensor-frequency basis. We derive performance bounds for time-difference and angle of arrival (AoA) estimation of such a receiver, and present simulated results in which we compare AoA reconstruction performance to the bounds derived.
The other one is behind a paywall:

Compressed sensing joint reconstruction for multi-view images by X. Li, Z. Wei, and L. Xiao. The abstract reads:
The problem of compressed sensing joint reconstruction of multi-view images in camera networks is considered. Noting that the neighbouring images are visually similar, the multi-view correlation is captured by the sparse prior of the difference images between two contiguous multi-view images. Thus the joint reconstruction is formulated as an unconstrained optimisation problem, which contains a quadratic fidelity term and two regularisation terms encouraging the sparse priors for multi-view images and their difference images, respectively. Moreover, an effective iterative algorithm is presented to solve the optimisation problem. Experimental results with the real multi-view images show that the proposed method can perform joint reconstruction with greater accuracy than CS image-by-image reconstruction.




Credits: Montage by Emily Lakdawalla. Ida, Dactyl, Braille, Annefrank, Gaspra, Borrelly: NASA / JPL / Ted Stryk. Steins: ESA / OSIRIS team. Eros: NASA / JHUAPL. Itokawa: ISAS / JAXA / Emily Lakdawalla. Mathilde: NASA / JHUAPL / Ted Stryk. Lutetia: ESA / OSIRIS team / Emily Lakdawalla. Halley: Russian Academy of Sciences / Ted Stryk. Tempel 1, Hartley 2: NASA / JPL / UMD. Wild 2: NASA / JPL. All asteroids and comets visited by spacecraft as of November 2010
