Tuesday, May 31, 2011

CS: Large Scale Compressive Sensing, Robust PCA for voting in Ensemble algorithms and Spectrum Sensing

After yesterday's entry, Danny Bickson sent me the following:

One paper that is related to compressed sensing is featured here:
we are discussing how to implement parallel large scale compressed sensing where there are two feasible approaches: parallel stochastic gradient descent or parallel coordinate descent. We chose to implement the latter option (but we also compare it to the first). The paper is here: http://arxiv.org/abs/1105.5379

We have already featured the paper, but you will want to read Danny's blog entry.

Another entry following up on the past few days is Bob's blog entry on the voting mechanism to be used in an ensemble approach. Go read it, I'll wait.



In the comment, I mentioned an additional way we could perform this voting between different solvers' results:
Here is another suggestion: we could also put all the solutions as columns of a new matrix B and look for a factorization of that matrix as B = A + E + Z, where A is a low rank matrix (rank 1), E is a sparse matrix and Z is a "noisy" matrix ( http://perception.csl.uiuc.edu/matrix-rank/sample_code.html ), or as B = A + E (Robust PCA, http://www-stat.stanford.edu/~candes/papers/RobustPCA.pdf ). Of note is the comparison between different algorithms: http://perception.csl.uiuc.edu/matrix-rank/sample_code.html#Comparison
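For the curious, here is a small numerical sketch of what that voting could look like. This is not the UIUC sample code linked above, just a generic inexact augmented Lagrangian iteration for the B = A + E split written with numpy; the solver names in the usage comment at the end are hypothetical placeholders.

import numpy as np

def svd_shrink(M, tau):
    # Singular value thresholding: the prox operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft thresholding: the prox operator of the l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_pca(B, lam=None, mu=None, n_iter=500, tol=1e-7):
    # Split B into a low-rank part A and a sparse part E (B ~ A + E).
    m, n = B.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / np.abs(B).sum()   # common heuristic
    A = np.zeros_like(B); E = np.zeros_like(B); Y = np.zeros_like(B)
    for _ in range(n_iter):
        A = svd_shrink(B - E + Y / mu, 1.0 / mu)
        E = soft(B - A + Y / mu, lam / mu)
        R = B - A - E
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(B):
            break
    return A, E

# Hypothetical usage: the columns of B are the reconstructions returned by
# different solvers for the same problem.
# B = np.column_stack([x_spgl1, x_omp, x_ista, x_amp])
# A, E = robust_pca(B)
# consensus = A.mean(axis=1)   # the (nearly) rank-1 part acts as the vote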
Of related interest:






Gonzalo Vazquez Vilar continues his summary of the spectrum sensing work presented at ICASSP in Cooperative spectrum sensing and resource allocation at ICASSP 2011

Monday, May 30, 2011

Compressive Sensing Literature This Week.

With the end of the semester at several universities, we have some theses showing up on the radar screen. We also have a new crop of blog entries, papers and presentations. At the end, some of the papers are only marginally connected to compressive sensing, but they are interesting nonetheless. Finally, there is an announcement for a workshop. First things first, Danny Bickson has a blog, it is:
http://bickson.blogspot.com/

Following up on last week, here are two other summaries of the Sublinear Algorithms conference in Italy:




Now onto the papers/preprints:

Parallel Coordinate Descent for L1-Regularized Loss Minimization by Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Carlos Guestrin. The abstract reads:
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1.
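Since the parallelism is the hard part, here is only the easy part: a minimal single-threaded sketch of the coordinate descent update Shotgun builds on, written with numpy for the objective 0.5*||Ax - y||^2 + lam*||x||_1. Updating many coordinates at once, which is Shotgun's actual contribution, is not attempted here.

import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_cd(A, y, lam, n_sweeps=100, tol=1e-8):
    # Cyclic coordinate descent for  min_x 0.5*||Ax - y||_2^2 + lam*||x||_1
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()                      # residual y - A @ x
    col_sq = (A ** 2).sum(axis=0)     # per-column squared norms
    for _ in range(n_sweeps):
        x_old = x.copy()
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            aj = A[:, j]
            rho = aj @ r + col_sq[j] * x[j]   # correlation with x_j taken out
            x_new = soft(rho, lam) / col_sq[j]
            r += aj * (x[j] - x_new)          # keep the residual up to date
            x[j] = x_new
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x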
Compressive Identification of Linear Operators by Reinhard Heckel, Helmut Bölcskei. The abstract reads:
We consider the problem of identifying a linear deterministic operator from an input-output measurement. For the large class of continuous (and hence bounded) operators, under additional mild restrictions, we show that stable identifiability is possible if the total support area of the operator's spreading function satisfies D <= 1/2. This result holds for arbitrary (possibly fragmented) support regions of the spreading function, does not impose limitations on the total extent of the support region, and, most importantly, does not require the support region of the spreading function to be known prior to identification. Furthermore, we prove that asking for identifiability of only almost all operators, stable identifiability is possible if D <= 1. This result is surprising as it says that there is no penalty for not knowing the support region of the spreading function prior to identification.


A signal recovery method for compressive sensing under noisy measurements is proposed. The problem is formulated as a nonconvex nonsmooth constrained optimization problem that uses the smoothly clipped absolute deviation (SCAD) function to promote sparsity. Relaxation is employed by means of a series of local linear approximations (LLAs) of the SCAD in a constrained formulation. The relaxation is shown to converge to a minimum of the original nonconvex constrained optimization problem. In order to solve each nonsmooth convex relaxation problem, a second-order cone programming (SOCP) formulation is used, which can be applied by using standard state-of-the-art SOCP solvers such as SeDuMi. Experimental results demonstrate that signals recovered using the proposed method exhibit reduced ℓ∞ reconstruction error when compared with competing methods such as ℓ1-Magic. Simulations demonstrate that significant reduction in the reconstruction error can be achieved with computational cost that is comparable to that required by the ℓ1-Magic algorithm.
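As a reminder, and assuming the authors use the standard form of Fan and Li, the SCAD penalty and its derivative (which supplies the weights of each local linear approximation) are

p_\lambda(t) =
\begin{cases}
\lambda |t|, & |t| \le \lambda,\\
\dfrac{2a\lambda|t| - t^2 - \lambda^2}{2(a-1)}, & \lambda < |t| \le a\lambda,\\
\dfrac{(a+1)\lambda^2}{2}, & |t| > a\lambda,
\end{cases}
\qquad
p'_\lambda(t) = \lambda\left\{ \mathbf{1}_{\{t \le \lambda\}} + \frac{(a\lambda - t)_+}{(a-1)\lambda}\,\mathbf{1}_{\{t > \lambda\}} \right\}, \quad t > 0,

with a > 2 (a = 3.7 is the usual choice). Each LLA step then solves a convex problem in which the penalty on x_j is the weight p'_\lambda(|x_j^{(k)}|) times |x_j|.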

We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature.

The attendant (code) is here.
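If you want to play with the idea before fetching the code, here is a toy IRLS-style sketch for matrix completion, under my own simplifying assumptions (a crude geometric schedule for epsilon and a dense weight matrix, so none of the paper's Woodbury speed-up).

import numpy as np

def irls_complete(M_obs, mask, n_iter=50, eps=1.0, eps_decay=0.9):
    # Toy IRLS-style matrix completion.  The weighted least-squares problem
    # decouples over the columns, so each one is a small linear solve.
    # M_obs : matrix holding the observed entries (the rest is ignored)
    # mask  : boolean array, True where an entry was observed
    n1, n2 = M_obs.shape
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        # W = (X X^T + eps^2 I)^(-1/2), built from an eigendecomposition
        G = X @ X.T + (eps ** 2) * np.eye(n1)
        evals, V = np.linalg.eigh(G)
        W = (V * evals ** -0.5) @ V.T
        X_new = X.copy()
        for j in range(n2):
            o = mask[:, j]
            f = ~o
            if not f.any():
                continue
            # minimize x^T W x over the unobserved entries of column j
            x_f = -np.linalg.solve(W[np.ix_(f, f)], W[np.ix_(f, o)] @ M_obs[o, j])
            X_new[f, j] = x_f
            X_new[o, j] = M_obs[o, j]
        X = X_new
        eps *= eps_decay   # crude schedule; the paper ties eps to the spectrum of X
    return X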


We show that sparse spherical harmonic expansions can be efficiently recovered from a small number of randomly chosen samples on the sphere. To establish the main result, we verify the restricted isometry property of an associated preconditioned random measurement matrix using recent estimates on the uniform growth of Jacobi polynomials.

We consider the synthesis problem of Compressed Sensing: given s and an M × n matrix A, extract from A an m × n submatrix A_m, with m as small as possible, which is s-good, that is, every signal x with at most s nonzero entries can be recovered from the observation A_m x by ℓ1 minimization: x = argmin_u {‖u‖_1 : A_m u = A_m x}. We show that under reasonable assumptions the synthesis problem can be reformulated as the problem of entry-wise approximation, within a given accuracy, of the n × n matrix W = Y^T A, with Y ∈ R^{M×n} given, by a matrix of the form Y_m^T A_m, with A_m comprised of m rows of A. We propose randomized algorithms for efficiently solving the latter problem with accuracy guarantees E{‖W − Y_m^T A_m‖_∞} ≤ O(1) L(Y, A) √(ln(n)/m). Here L(Y, A) is an easy-to-specify quantity which in good cases is a moderate absolute constant (e.g., L(A, A) = 1 when A is the Hadamard matrix, and similarly for the matrix of the Fourier transform on any finite Abelian group). We also supply derandomized versions of the approximation algorithms which do not require random sampling of matrices and attain the same accuracy bounds. We further demonstrate that in terms of approximation accuracy our algorithms are optimal up to logarithmic in n factors. Finally, we provide preliminary numerical results on the performance of our algorithms for the synthesis problem.

For compressed sensing with jointly sparse signals, we present a new signal model and two new joint iterative greedy-pursuit recovery algorithms. The signal model is based on the assumption of a jointly shared support-set and the joint recovery algorithms have knowledge of the size of the shared support-set. Through experimental evaluation, we show that the new joint algorithms provide significant performance improvements compared to regular algorithms which do not exploit a joint sparsity.
Estimation of the level set of a function (i.e., regions where the function exceeds some value) is an important problem with applications in digital elevation maps, medical imaging, and astronomy. In many applications, however, the function of interest is acquired through indirect measurements, such as tomographic projections, coded-aperture measurements, or pseudo-random projections associated with compressed sensing. This paper describes a new methodology and associated theoretical analysis for rapid and accurate estimation of the level set from such projection measurements. The proposed method estimates the level set from projection measurements without an intermediate function reconstruction step, thereby leading to significantly faster computation. In addition, the coherence of the projection operator and McDiarmid’s inequality are used to characterize the estimator’s performance. 

A key theorem regarding the Restricted Isometry Property for short sampling trajectories contained a normalization error which renders the statement of the theorem incorrect. This note describes the original claim and proof, the normalization issue and how it impacts the proof, and several avenues for future investigation.


Consider an m by N matrix Phi with the Restricted Isometry Property of order k and level delta, that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1 +- delta under application of Phi. We show that by randomizing the column signs of such a matrix Phi, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N into R^m without distorting the norm of any point in the set by more than a factor of 1 +- delta. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) appearing in Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also have a direct application in the area of compressed sensing for redundant dictionaries.
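A quick toy check of the construction, under my own assumptions: a subsampled Hadamard matrix stands in for the RIP matrix and scipy provides the Hadamard transform.

import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N, m, p = 1024, 128, 20                    # ambient dim, embedding dim, number of points

H = hadamard(N) / np.sqrt(N)               # orthonormal Hadamard matrix
rows = rng.choice(N, size=m, replace=False)
Phi = np.sqrt(N / m) * H[rows, :]          # subsampled, rescaled rows (RIP-type matrix)
signs = rng.choice([-1.0, 1.0], size=N)    # the random column-sign flip (the matrix D)

X = rng.standard_normal((N, p))            # a fixed set of p points
Y = Phi @ (signs[:, None] * X)             # embed each point as Phi D x

ratios = np.linalg.norm(Y, axis=0) / np.linalg.norm(X, axis=0)
print("norm ratios, min/max:", ratios.min(), ratios.max())   # should hover around 1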



Presentations:

The three theses are:

SONAR Imaging using Compressive Sensing by Taylor Williams. The abstract reads:
Sonar imaging is a technique that can locate objects in a scene from their acoustic reflections. It utilizes multiple measurements from different angles to create a 2D image. Conventional imaging algorithms such as backprojection (BP) require a large number of measurements to produce accurate images. Theory indicates that an alternative approach called compressive sensing (CS) can create accurate images using just a fraction of the measurements. CS requires that the scene is compressible and that each individual measurement provides information about the scene, not just a particular location. The sonar imaging experiment must be designed to meet these requirements. The purpose of this study is to show that the compressive sensing framework can be used in sonar imaging with results comparable to conventional methods. A sonar test stand was built that can measure acoustic reflections from a scene. A pseudo-random noise (PN) code was used for transmission. Four speakers and 16 microphones were mounted randomly perturbed from a circle surrounding the scene. Software was developed to create an image from this data through FBP. Also, a CS algorithm was developed to reconstruct an image from limited measurements. This algorithm uses a random subset of samples from each measurement. Initial results show that FBP can effectively be used to image a scene using acoustic waves. The CS algorithm yields a similar quality image using less than 10% of the measurements. These results show that CS can be used in sonar imaging to greatly reduce the number of measurements that need to be collected. The findings are directly portable to radar imaging, a field with a high level of research for both military and civilian uses

Magnetic resonance imaging (MRI) is a powerful tool for studying the anatomy, physiology, and metabolism of biological systems. Despite the fact that MRI was introduced decades ago and has already revolutionized medical imaging, current applications are still far from utilizing the full potential of the MR signal. Traditional MRI data acquisition and image reconstruction methods are based on simple Fourier inversion, leading to undesirable trade-offs between image resolution, signal-to-noise ratio (SNR), and data acquisition time. Classical approaches to addressing these trade-offs have relied on improved imaging hardware and more efficient pulse sequences. In contrast, our work addresses the limitations of MR using relatively less-explored signal processing approaches, which have recently become practical because of increasing computational capabilities. This dissertation concerns the use of constrained imaging models to guide the design of both data acquisition and image reconstruction, leading to improved imaging performance in the context of both noise-limited and resolution-limited scenarios. To address noise limitations for high-resolution imaging, we introduce a quasi-Bayesian edge-preserving smoothness prior for modeling correlated image sequences. The prior models the correlated edge structures that are observed in the image sequence, and is used within a penalized maximum likelihood framework to reduce image noise while preserving high-resolution anatomical structure. In contrast to many constrained imaging methods, we demonstrate that the proposed method is relatively simple to analyze and is robust to model inaccuracy when reconstruction parameters are chosen appropriately. Resolution and SNR analysis shows that the proposed formulations lead to substantial improvements in SNR with only a moderate decrease in spatial resolution. An examination of resolution and SNR trade-offs is presented, which serves as a guide for the optimal design of data acquisition and image reconstruction procedures in this context. To address limited spatial resolution in high-SNR scenarios, we design specialized data acquisition and image reconstruction procedures to enable image reconstruction from sparsely-sampled data. Specifically, we leverage prior information that the image has sparse or low-rank structure to significantly reduce sampling requirements in two different contexts. In the first context, we assume that the image is sparse in a known transform domain, and develop a novel non-Fourier data acquisition scheme to enable high-quality reconstruction from undersampled data. The second context is specific to spatiotemporal imaging, and it is assumed that the temporal evolution of the spatiotemporal image is highly correlated at different spatial positions. This correlation leads to the formulation of a novel low-rank matrix recovery problem, which we demonstrate can be solved efficiently and effectively using special algorithms. Applications of the proposed techniques are illustrated with simulated and experimental data from a variety of different MR imaging scenarios.

And an older Master's thesis: Image compression and recovery using compressive sampling and particle swarm optimization by Benjamin Van Ruitenbeek. The abstract reads:
We present a novel method for sparse signal recovery using Particle Swarm Optimization and demonstrate an application in image compression. Images are compressed with compressive sampling, and then reconstructed with particle swarm techniques. Several enhancements to the basic particle swarm algorithm are shown to improve signal recovery accuracy. We also present techniques specifically for reconstructing sparse image data and evaluate their performance.
Not strictly related to CS:
In this paper we present a multi-scale dictionary learning paradigm for sparse and redundant signal representations. The appeal of such a dictionary is obvious - in many cases data naturally comes at different scales. A multi-scale dictionary should be able to combine the advantages of generic multiscale representations (such as Wavelets), with the power of learnt dictionaries, in capturing the intrinsic characteristics of a family of signals. Using such a dictionary would allow representing the data in a more efficient, i.e. sparse, manner, allowing applications to take a more global look at the signal. In this work we aim to achieve this goal without incurring the costs of an explicit dictionary with large atoms. The K-SVD using Wavelets approach presented here applies dictionary learning in the analysis domain of a fixed multi-scale operator. This way, sub-dictionaries at different data scales, consisting of small atoms, are trained. These dictionaries can then be efficiently used in sparse coding for various image processing applications, potentially outperforming both single-scale trained dictionaries and multi-scale analytic ones. In this paper we demonstrate this construction and discuss its potential through several experiments performed on fingerprint and coastal scenery image
Audio Inpainting by Amir Adler, Valentin Emiya, Maria G. Jafari, Michael Elad, Remi Gribonval and Mark D. Plumbley. The abstract reads:
We propose the Audio Inpainting framework that recovers audio intervals distorted due to impairments such as impulsive noise, clipping, and packet loss. In this framework, the distorted samples are treated as missing, and the signal is decomposed into overlapping time-domain frames. The restoration problem is then formulated as an inverse problem per audio frame. Sparse representation modeling is employed per frame, and each inverse problem is solved using the Orthogonal Matching Pursuit algorithm together with a discrete cosine or a Gabor dictionary. The performance of this algorithm is shown to be comparable or better than state-of-the-art methods when blocks of samples of variable durations are missing. We also demonstrate that the size of the block of missing samples, rather than the overall number of missing samples, is a crucial parameter for high quality signal restoration. We further introduce a constrained Matching Pursuit approach for the special case of audio declipping that exploits the sign pattern of clipped audio samples and their maximal absolute value, as well as allowing the user to specify the maximum amplitude of the signal. This approach is shown to outperform state-of-the-art and commercially available methods for audio declipping.
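To make the framework a little more concrete, here is a bare-bones frame-wise sketch with a plain DCT dictionary and a homemade OMP. This is not the authors' code, and the dictionary size and sparsity level below are arbitrary choices.

import numpy as np

def dct_dictionary(n, n_atoms):
    # Overcomplete DCT dictionary with unit-norm columns (atoms).
    k = np.arange(n_atoms)
    t = np.arange(n)[:, None]
    D = np.cos(np.pi * (t + 0.5) * k / n_atoms)
    return D / np.linalg.norm(D, axis=0)

def omp(A, y, n_nonzero, tol=1e-6):
    # Plain Orthogonal Matching Pursuit for y ~ A x with x sparse.
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x

def inpaint_frame(frame, observed, D, n_nonzero=32):
    # Fit a sparse DCT model on the reliable samples only, then use it to
    # synthesize the missing (or clipped) samples of the frame.
    x = omp(D[observed, :], frame[observed], n_nonzero)
    restored = frame.copy()
    restored[~observed] = (D @ x)[~observed]
    return restored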

Finally here is an announcement for a meeting at the intractability center:
Workshop: Counting, Inference and Optimization on Graphs
November 2 – 5, 2011
The aim of this multidisciplinary workshop is to bring together various communities who work on counting, inference, and optimization problems related to graphs. These communities include
- Theoretical computer science
- Information/coding theory
- Statistical physics
- Statistical inference
During this 3.5-day-long workshop at the Princeton Center for Computational Intractability, topics will include holographic algorithms, complexity dichotomy theorems, capacity of constrained coding schemes, graphical models and belief propagation algorithms, and the computation of partition functions in classical and quantum physics.



Credit: NASA. This is one of the most amazing shots I have seen of the International Space Station. You can see the Space Shuttle, the European ATV and the Russian Soyuz all docked to it (you will probably not see this after June). Other photos taken by Ron, an astronaut, during one of his extra-vehicular activities can be found here. (via NASA Watch)

Saturday, May 28, 2011

Great Thoughts Friday: Kickstarter as a way to fund some of these crazy projects

I was thinking about this yesterday but did not get around to writing this entry before today, so it really is a Great Thoughts Friday entry.

Some of you may not know about Kickstarter. It is a site where one features projects that request funding from people on the interwebs. Examples from the technology or photography sections include:











Following some of the themes mentioned here, what could be some of the crazy projects that could be funded through Kickstarter?

  • an iPad/iPod app that automatically downloads all the pdfs featured on Nuit Blanche for offline viewing
  • fly a high altitude balloon to get a hemispherical view of the horizon
  • development of a platform to be flown on a high altitude balloon to allow for an on board camera to be directed from the ground
  • support for travel and cost of living, if needed, for Sparseman if he sends something to the Google Exacycle program
  • build a multispectral imager out of an iPhone4.
  • ?????


Ensemble, We Can Be the Dumbest: The Google Exacycle Program.

In French, ensemble means together, so the title of this entry is really meant as a semi-translation of yesterday's "Together We Can Be the Dumbest" entry. Bob points out that his ensemble method and the DUMBEST algorithm don't seem to match, whereas I point out that they seem to be the same thing. In short, once you have a certain number of measurements, all of us have noticed empirically that the solutions found by two solvers differ, and only knowledge of the original solution lets us say which algorithm is better than the other. The problem with this approach is that if you were to use a random lens imager, you probably couldn't make out the real solution. A system that uses as much as possible of what you have already gathered is an important improvement over what I would call oracle-based systems, or over spending resources on getting an additional measurement. Hence the need to use what you already have efficiently by feeding subsets of the measurements through several solvers.

I note that this idea is so important that folks like Yilun Wang and Wotao Yin have recognized it and made a solver out of it (ISD). They even made it clear on their website when they came up with the idea, something I don't see often. Similarly, the use of all the measurements you already have also seems to be present in the Kalman-filter-and-beyond work of Namrata Vaswani et al.

The figure is from Recovery of Compressively Sensed Sparse Signals using Cyclic OLS [1]

Anyway, in about the same way that an ensemble solver was first on the leaderboard of the Netflix prize (but did not win), we ought to collectively investigate how the current flurry of solvers can help us do better reconstructions and map better Donoho-Tanner (DT) phase transitions. This should not work only for sparse signal reconstruction: we ought to consider the same approach for the current crop of structured sparsity, MMV, robust PCA and matrix completion solvers. The real question is: is the reason an acronym like DUMBEST won't get funded the same reason why somebody investigating ensemble behavior won't garner much support? If it is, then all y'all ought to consider the Google Exacycle program. Imagine that even if you are in love with one algorithm (yours :-)), an ensemble approach could be to merge the results of that same solver run with different parameters (and we know how important some of these parameters sometimes are) in an approach parallel to epsilon photography. The possibilities are so endless that the 100 million CPU hours now don't look as large as one would think. The other part of it is that I don't know of one problem in engineering and applied math that

  • has so many repercussions on so many problems, while at the same time
  • is embarrassingly parallel.

If nobody tells me they are sending something to the Google Exacycle program, I'll send a placeholder. This is important.

Friday, May 27, 2011

Together We Can Be the Dumbest

I used to call it the DUMBEST algorithm, Bob calls it an ensemble algorithm [1], this is so refined, so elegant ... I like it.


Credit: NASA

Through the Non Looking-Glass, and What Alice Found There

The story goes that Lewis Carroll gave a copy of "Alice in Wonderland" to Queen Victoria, who then asked him to send her his next book since she had fancied the first one. The joke is that the next book was "An Elementary Treatise on Determinants, With Their Application to Simultaneous Linear Equations and Algebraic Equations". Lewis Carroll then went on to write a follow-on to Alice in Wonderland called "Through the Looking-Glass, and What Alice Found There", which features a chess board.


And then it all aligned in one fell swoop, a little like Alice falling into a different magic world of wonders. That chess board reminded me of the grid used in the compressive coded aperture work mentioned yesterday in Rebecca Willett's work [2][3].



The simultaneous linear equations reminded me of the obvious connection to compressive sensing and underdetermined linear systems of equations. The determinant functional of the second book reminded me of the current solution technique using the rank proxy log(det) [1] to recover images in a quadratic compressive sensing approach, and finally, the looking glass is nothing short of a lens. What is linking all these elements? Quite simply: lens-free imaging [5][6][7].

And so what did Alice find through the Non Looking-Glass, you ask? Initially there are some successes, but eventually we have to deal with the pesky details. Actually, those were details when using lenses; they are now your main issues, and the way you solve them is what makes you famous. What are these pesky details?

  • Since we are getting non-human-readable images, we need to perform a calibration to figure out if what we see is what we want. The issue is that we have to perform a combinatorial number of trials since the transfer function is now not symmetric and has no known features [9].
  • Noise gets in the way, as found by the folks who have done coded aperture for the past fifty years [4], since the PSF is now widespread. You lose resolution compared with a lens system, but this is because you are decreasing noise.

Let us also note that these issues are similar to the ones we face when looking at radiation other than light [8].

[6] Lens-free optical tomographic microscope with a large imaging volume on a chip by Serhan O. Isikman, Waheb Bishara, Sam Mavandadi, Frank W. Yu, Steve Feng, Randy Lau, and Aydogan Ozcan
[7] Lensfree on-chip holography facilitates novel microscopy applications by Aydogan Ozcan, Serhan Isikman, Onur Mudanyali, Derek Tseng and Ikbal Sencan

Thursday, May 26, 2011

CS: Institute for Advanced Study's Women and Mathematics Courses

The Institute for Advanced Study's Women and Mathematics series of lectures and seminars featured the following interesting presentations this year:

Sublinear Algorithms 2011 Summaries


Jelani Nelson blogs some summaries about Sublinear Algorithms 2011 in Italy. Here is what we have so far:







Credit: NASA / JPL / UA, Spirit's specular reflection
When HiRISE took a photo of Spirit's resting spot in Gusev crater on March 31, 2011, it happened to be in exactly the right position to catch a specular reflection (a glint) from the rover's solar panels. Apart from being pretty, it signifies that there was not a thick coating of dust on the panels.

Wednesday, May 25, 2011

Informal meeting in Bay Area on CS, Spectrum Sensing at ICASSP, The last photo of Spirit on Mars.



Eric Tramel is wondering if anyone would be interested in an informal Bay Area compressed sensing meetup. If you are interested, answer him.



Credit: NASA. The last photo taken by Spirit. The Spirit rover on Mars is likely not responding to today's commands from Earth, which will make this the last time we try to get in touch with it. Godspeed, Spirit.

Tuesday, May 24, 2011

Compressive Sampling and Sparse Reconstruction: ICASSP 2011 / ICML 2011 Papers

Gonzalo Vazquez Vilar is in Prague for ICASSP 2011 and he blogs about the sessions on compressive sensing. Go read his impressions.

The ICML 2011 papers are out; which ones should I feature on Nuit Blanche? Send me your suggestions via Twitter or in the comments.

CS: T-SBL/T-MSBL Codes available

Zhilin Zhang sent me the following e-mail about the T-SBL/T-MSBL algorithms featured in [1], [2] and [3]:

Hi, Igor,
How are you! I finally finished the code-writing job. Now the codes of T-SBL/T-MSBL are available and can be downloaded from the link:
For reproducibility, it includes some demos which can re-produce the experiment results in my paper.
I also post the code of tMFOCUSS (developed in my ICASSP 2011 paper) in my homepage. It can be downloaded from the link:
Also, I wrote a new version of the MSBL code, which can be downloaded from here:
Although our lab alumni, David, posted his code in his homepage, that code is not suitable for algorithm comparison in most cases (since the values of some parameters are not suitable). The new version has chosen better values for these parameters and added many comments and advices.
I'll very appreciate if you can mention these codes via your blog and add them into the code collection.

Best regards,
Zhilin
Zhilin noticed that the algorithms were too cumbersome to use because of parameter tuning, so he spent the week-end rewriting part of them. Here is his new entry on the subject on his blog: New Versions of T-MSBL/T-SBL are Available for Download, which features the new codes.

Thanks Zhilin, I'll feature your code soon in the reconstruction section of the Big Picture in Compressive Sensing.




[1] Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning by Zhilin Zhang, Bhaskar D. Rao.
[2] Z.Zhang, B.D.Rao, Iterative Reweighted Algorithms for Sparse Signal Recovery with Temporally Correlated Source Vectors, ICASSP 2011. Downloaded here
[3] Z.Zhang, B.D.Rao, Exploiting Correlation in Sparse Signal Recovery Problems: Multiple Measurement Vectors, Block Sparsity, and Time-Varying Sparsity.


Image Credit: NASA/JPL/Space Science Institute, N00171687.jpg was taken on May 21, 2011 and received on Earth May 22, 2011. The camera was pointing toward TITAN at approximately 2,313,374 kilometers away, and the image was taken using the CL1 and GRN filters.

Monday, May 23, 2011

Around the blogs in 80 hours

Here is an excerpt of the blog entries I have not covered yet:

Dirk Lorenz has a blog: http://regularize.wordpress.com/ . His first entries include:



Dick: Aiming For The Moon With All Rockets Blazing? A Critique Of Breast Cancer Deadline 2020
Brian: Controlling the welding process with machine vision
Zhilin: Codes of T-SBL and T-MSBL are being updated
Djalil: Circular law: known facts and conjectures
John Langford has a take on the 8f workshop he attended in Research Directions for Machine Learning and Algorithms
Gonzalo


Greg:


Frank: Continuity and discontinuity of Shannon entropy

Bob:


Yaroslav: Neural Networks making a come-back?
Victoria: Generalize clinicaltrials.gov and register research hypotheses before analysis
Michael Trick’s Operations Research Blog: That’s got to be true… doesn’t it?


Image Credit: NASA/JPL/Space Science Institute, W00067554.jpg was taken on May 09, 2011 and received on Earth May 10, 2011. The camera was pointing toward TITAN at approximately 203,110 kilometers away, and the image was taken using the CB3 and CL2 filters.

Sunday, May 22, 2011

Hugh McLeod on How To Be Creative

I love Hugh McLeod's manifesto on How to Be Creative. Many of these insights are readily applicable to some of the processes we are all involved in. Unfortunately, according to the license, one cannot alter, transform, or build upon this work. It's a shame.

Friday, May 20, 2011

Great Thoughts Friday: Unsupervised Feature Learning of Exotic Camera Systems

The issue of sparse coding is not new to our readers since it falls within the dictionary learning topic of interest in the reconstruction stage of compressive sensing. Thanks to Bob, I was reminded of the following video of Andrew Ng about unsupervised learning (one way of performing dictionary learning). As Andrew shows, some of these techniques are getting better and better.


By necessity, people are using the same databases. But if we take the argument further, some of these databases are somehow dependent on the very camera parameters used to take these shots, and one could make the argument that the calibration issue is hidden behind some well-thought-out benchmark databases. While the need for decomposing images is important, we often do not know much about the features that were learned (an issue Andrew points to at the very end of his video/slides). Given all this, I wonder if exotic camera systems such as multiple spherical catadioptric cameras (as featured in Axial-Cones: Modeling Spherical Catadioptric Cameras for Wide-Angle Light Field Rendering by Yuichi Taguchi, Amit Agrawal, Ashok Veeraraghavan, Srikumar Ramalingam and Ramesh Raskar) or the random lens imager, which produce highly redundant information about the scene of interest, can provide better classification rates than the current techniques?


If we take this argument further, what would then be a good way of comparing these exotic camera systems for classification purposes ?...

Note:
For those of you interested: at the end of Andrew Ng's presentation, there is a link to this course and handouts of the corresponding course at Stanford. It features two videos.

Random Earthquake Thoughts

The following are random thoughts that may be good for someone in a Great Thoughts Friday mode. Large earthquakes are few and could really be considered sparse objects, but since we are talking about time series, shouldn't some of the tools used to detect them in high dimensional time series use the rank minimization techniques mentioned this week? By high dimensional time series, I don't just mean acceleration traces from different geographical locations but rather more complex data from other types of sensor networks.

After reading a little more about the potential connection between earthquakes and ionospheric measurements in Ionospheric Precursors of Earthquakes: Recent Advances in Theory and Practical Applications by Sergey Pulinets, I decided to watch the ionospheric TEC measurements from JPL, updated every five minutes, for a day.




The ionospheric TEC maps are constructed from tomographic measurements performed between GPS satellites and ground stations. Over a day, I noticed something obvious: direct sunlight on a region increases the electron content in the ionosphere over that region, albeit not uniformly. Where is the non-uniformity coming from? I am not sure, but if one follows the theme of the paper above, these non-uniformities may be linked to solar and/or geomagnetic activity and to gas releases in the case of earthquakes, and therefore they might be co-located with earth faults.




Both the TEC map and the night map seem to correlate. Now I wonder how these maxima average over time and whether they correlate with the known faults, or


(Earthquake Density Maps for the World from the USGS)


one wonders if any of these maxima can also be correlated with geostationary observations of Earth or with activity on the Sun. Over the past day, I didn't see an obvious connection with the last two.

Let us note that the general public does not seem to have access to previous ionospheric TEC maps.

In a different direction, some folks are also looking at earthquake prediction based on a network analysis of past events. The paper is Earthquake networks based on similar activity patterns by Joel N. Tenenbaum, Shlomo Havlin, H. Eugene Stanley. The abstract reads:
Earthquakes are a complex spatiotemporal phenomenon the underlying mechanism for which is still not fully understood despite decades of research and analysis. We propose and develop a network approach to earthquake events. In this network, a node represents a spatial location while a link between two nodes represents similar activity patterns in the two different locations. The strength of a link is proportional to the strength of the cross-correlation. We apply our network approach to a Japanese earthquake catalog spanning the 14-year period 1985-1998. We find strong links representing large correlations between patterns in locations separated by more than 1000 km. We also find significant similarities in the network structure upon comparing different time periods.

The results are not overly convincing but I wonder if this type of analysis would not be amenable to a diffusion wavelet approach.
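For those who want to play with the idea, here is a toy version of the linking rule as I read it from the abstract; the binning, normalization, lag range and threshold below are my own assumptions, not the paper's.

import numpy as np

def activity_links(counts, max_lag=10, threshold=0.5):
    # counts: array of shape (n_cells, n_timebins) holding event counts per
    # spatial cell and time bin.  Returns a list of (i, j, peak_corr) links.
    n_cells, T = counts.shape
    z = counts - counts.mean(axis=1, keepdims=True)
    z = z / (counts.std(axis=1, keepdims=True) + 1e-12)
    links = []
    for i in range(n_cells):
        for j in range(i + 1, n_cells):
            best = 0.0
            for lag in range(-max_lag, max_lag + 1):
                a = z[i, max(0, lag):T + min(0, lag)]
                b = z[j, max(0, -lag):T - max(0, lag)]
                best = max(best, float((a * b).mean()))
            if best > threshold:
                links.append((i, j, best))
    return links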

Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from

Thursday, May 19, 2011

Compressive Sensing Literature This Week.

On the left is a photo of the last Rendezvous Pitch Maneuver (the back flip) ever to be performed by Endeavour.... Sigh. Anyway, today we have a long entry. First, here are summaries of the 8f workshop: day 2 and day 3.

Now from the arxiv, the Rice repository and other sources, enjoy!

Xampling at the Rate of Innovation by Tomer Michaeli, Yonina Eldar. The abstract reads:
We address the problem of recovering signals from samples taken at their rate of innovation. Our only assumption is that the sampling system is such that the parameters defining the signal can be stably determined from the samples, a condition that lies at the heart of every sampling theorem. Consequently, our analysis subsumes previously studied nonlinear acquisition devices and nonlinear signal classes. In particular, we do not restrict attention to memoryless nonlinear distortions or to union-of-subspace signal models. This allows treatment of various finite-rate-of-innovation (FRI) signals that were not studied to date, including, for example, frequency shift keying (FSK) transmissions. Our strategy relies on minimizing the error between the measured samples and those corresponding to our signal estimate. This least-squares (LS) objective is generally non-convex and might possess many local minima. Nevertheless, we prove that under the stability hypothesis, any optimization method designed to trap a stationary point of the LS criterion necessarily converges to the true solution. We demonstrate our approach in the context of recovering finite-duration and periodic pulse streams in settings that were not previously treated. Furthermore, in situations for which other algorithms are applicable, we show that our method is preferable in terms of noise robustness.
From the mostly quantum blog: Efficient Measurement of Quantum Dynamics via Compressive Sensing by A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M. P. Almeida, A. Fedrizzi, and A. G. White. The abstract reads:
The resources required to characterize the dynamics of engineered quantum systems—such as quantum computers and quantum sensors—grow exponentially with system size. Here we adapt techniques from compressive sensing to exponentially reduce the experimental configurations required for quantum process tomography. Our method is applicable to processes that are nearly sparse in a certain basis and can be implemented using only single-body preparations and measurements. We perform efficient, high-fidelity estimation of process matrices of a photonic two-qubit logic gate. The database is obtained under various decoherence strengths. Our technique is both accurate and noise robust, thus removing a key roadblock to the development and scaling of quantum technologies.

In this paper, we propose an empirical review of the conditions under which the compressed sensing framework allows to achieve exact image reconstruction. After a short presentation of the theoretical results related to this subject, we investigate the relevance and the limits of these theoretical results through several numerical reconstructions of some bench signals. In particular, we discuss the quantitative and qualitative artifacts that affect the reconstructed signal when reducing the number of measurements in the Fourier domain. Finally, we conclude our study by extending our results to some real microscopic images.

Our aim in this article is to reconstruct a signal from undersampled data in the situation where the signal is sparse in terms of a tight frame. We present a condition, which is independent of the coherence of the tight frame, that guarantees accurate recovery of signals which are sparse in the tight frame, from undersampled data with minimal $l_1$-norm of transform coefficients. This improves the result in [1]. Also, $l_q$-minimization $(0 < q \le 1)$ approaches are introduced. We show that under a suitable condition, there exists a value $q_0 \in (0,1]$ such that for any $q \in (0, q_0)$, each solution of the $l_q$-minimization approximates the true signal well. In particular, when the tight frame is an identity matrix or an orthonormal basis, all results obtained in this paper appeared in [13] and [26].

In many linear regression problems, explanatory variables are activated in groups or clusters; group lasso has been proposed for regression in such cases. This paper studies the nonasymptotic regression performance of group lasso using ℓ1/ℓ2 regularization for arbitrary (random or deterministic) design matrices. In particular, the paper establishes under a statistical prior on the set of nonzero coefficients that the ℓ1/ℓ2 group lasso has a near-optimal regression error for all but a vanishingly small set of models. The analysis in the paper relies on three easily computable metrics of the design matrix – coherence, block coherence, and spectral norm. Remarkably, under certain conditions on these metrics, the ℓ1/ℓ2 group lasso can perform near-ideal regression even if the model order scales almost linearly with the number of rows of the design matrix. This is in stark contrast with prior work on the regression performance of the ℓ1/ℓ2 group lasso that only provides linear scaling of the model order for the case of random design matrices.
The attendant slides are here.
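Since the three metrics really are easy to compute, here is a small sketch of how one might evaluate them for a column-grouped design matrix; the exact normalization of the block coherence is my own assumption and may differ from the paper's.

import numpy as np

def design_metrics(A, groups):
    # A: design matrix with unit-norm columns; groups: list of index arrays,
    # one per group.  Returns (coherence, block coherence, spectral norm).
    G = A.T @ A
    n = A.shape[1]
    coherence = np.max(np.abs(G - np.eye(n)))      # largest off-diagonal inner product
    block_coh = 0.0
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            if a == b:
                continue
            block = G[np.ix_(ga, gb)]
            block_coh = max(block_coh, np.linalg.norm(block, 2) / len(ga))
    spectral = np.linalg.norm(A, 2)
    return coherence, block_coh, spectral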

The Delsarte-Goethals frame has been proposed for deterministic compressive sensing of sparse and compressible signals. Its performance in compressive imaging applications falls short of that obtained for arbitrary sparse vectors. Prior work has proposed specially tailored signal recovery algorithms that partition the recovery of the input vector into clustered and unclustered portions. In this paper we present a formal analysis of the Delsarte-Goethals frame that shows that its hampered performance in compressive imaging is due to the presence of clustered sparse vectors in its null space. Such a clustered structure is characteristic in commonly-used raster scanning of 2-D wavelet representations. We also show that a simple randomized raster scanning of the wavelet coefficient vector can remedy these shortcomings, allowing the use of standard recovery algorithms. Additionally, we propose an alternative deterministic raster scanning that achieves similar recovery performance. Experimental results confirm the improvements in recovery performance for both the noiseless and noisy measurement regimes.
The attendant slides are here.

Fast and accurate unveiling of power line outages is of paramount importance not only for preventing faults that may lead to blackouts, but also for routine monitoring and control tasks of the smart grid, including state estimation and optimal power flow. Existing approaches are either challenged by the combinatorial complexity issues involved, and are thus limited to identifying single- and double-line outages; or, they invoke less pragmatic assumptions such as conditionally independent phasor angle measurements available across the grid. Using only a subset of voltage phasor angle data, the present paper develops a near real-time algorithm for identifying multiple line outages at the affordable complexity of solving a quadratic program via block coordinate descent iterations. The novel approach relies on reformulating the DC linear power flow model as a sparse overcomplete expansion, and leveraging contemporary advances in compressive sampling and variable selection using the least-absolute shrinkage and selection operator (Lasso). Analysis and simulated tests on the standard IEEE 118-bus system confirm the effectiveness of lassoing line changes in the smart power grid.
The advent of Compressive Sensing has provided significant mathematical tools to enhance the sensing capabilities of hardware devices. In this paper we apply Compressive Sensing to improve over-the-air ultrasonic sensing capabilities. We demonstrate that using an appropriate scene model it is possible to pose three-dimensional surface reconstruction of a scene as a sparse recovery problem. By transmitting incoherent wideband ultrasonic pulses and receiving their reflections a sensor array can sense the scene and reconstruct it using standard CS reconstruction algorithms. We further demonstrate that it is possible to construct virtual arrays that exploit the sensors’ motion. Thus we can obtain three-dimensional scene reconstruction using a linear mobile array.
This paper considers the task of recovering the support of a sparse, high-dimensional vector from a small number of measurements. The procedure proposed here, which we call the Sign-Sketch procedure, is shown to be a robust recovery method in settings where the measurements are corrupted by various forms of uncertainty, including additive Gaussian noise and (possibly unbounded) outliers, and even subsequent quantization of the measurements to a single bit. The Sign-Sketch procedure employs sparse random measurement matrices, and utilizes a computationally efficient support recovery procedure that is a variation of a technique from the sketching literature. We show here that O(max{k log(n − k), k log k}) non-adaptive linear measurements suffice to recover the support of any unknown n-dimensional vector having no more than k nonzero entries, and that our proposed procedure requires at most O(n log n) total operations for both acquisition and inference.

Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multichannel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications.
Compressed sensing aims at recovering a sparse signal x ∈ R^N from few nonadaptive, linear measurements Φ(x) given by a measurement matrix Φ. One of the fundamental recovery algorithms is l_1 minimisation. In this paper we investigate the situation when our measurement Φ(x) is contaminated by arbitrary noise under the assumption that the matrix Φ satisfies the restricted isometry property. This complements results from [4] and [8].
Compressed sensing (CS) has primarily two modes of acquiring measurements of sparse signals. One is by taking inner product measurements described by an underdetermined linear system of equations y = Ax, where y ∈ R^m represents the measurements gathered about a sparse signal x ∈ R^n of interest. In this setting, the matrix A ∈ R^{m×n} is chosen to possess a particular property, namely the restricted isometry property, and the measurements are acquired by computing inner products between x and the rows of A. Alternatively, one can acquire CS measurements by sampling x at random locations (random point evaluations). In this case, an underdetermined linear system also relates the measurements to a higher dimensional representation, but the measurements are acquired differently—random samples are not acquired as inner products.

A Compressed Sensing Wire-Tap Channel by Galen Reeves, Naveen Goela, Nebojsa Milosavljevic, Michael Gastpar. The abstract reads:
A multiplicative Gaussian wire-tap channel inspired by compressed sensing is studied. Lower and upper bounds on the secrecy capacity are derived, and shown to be relatively tight in the large system limit for a large class of compressed sensing matrices. Surprisingly, it is shown that the secrecy capacity of this channel is nearly equal to the capacity without any secrecy constraint provided that the channel of the eavesdropper is strictly worse than the channel of the intended receiver. In other words, the eavesdropper can see almost everything and yet learn almost nothing. This behavior, which contrasts sharply with that of many commonly studied wiretap channels, is made possible by the fact that a small number of linear projections can make a crucial difference in the ability to estimate sparse vectors.

The LASSO is a variable subset selection procedure in statistical linear regression based on $\ell_1$ penalization of the least-squares operator. Its behavior crucially depends, both in practice and in theory, on the ratio between the fidelity term and the penalty term. We provide a detailed analysis of the fidelity vs. penalty ratio as a function of the relaxation parameter. Our study is based on a general position condition on the design matrix which holds with probability one for most experimental models. Along the way, the proofs of some well known basic properties of the LASSO are provided from this new generic point of view.

This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion that quantifies “data mismatch”. Sparsity is imposed by the introduction of appropriately designed weighted ℓ1 balls and the related projection operator is also derived. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted ℓ1 balls. The resulting scheme exhibits linear dependence, with respect to the unknown system’s order, on the number of multiplications/additions and an O(L log_2 L) dependence on sorting operations, where L is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the LASSO algorithm and two very recently developed adaptive sparse schemes that fuse arguments from the LMS/RLS adaptation mechanisms with those imposed by the Lasso rationale.
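For intuition, the projection onto a single hyperslab {z : |a.z - b| <= eps} has a simple closed form; here is a minimal sketch of just that piece (the projection onto the weighted ℓ1 ball, the other ingredient of the algorithm, is left out).

import numpy as np

def project_hyperslab(x, a, b, eps):
    # Project x onto the hyperslab {z : |a.z - b| <= eps}.
    r = a @ x - b
    if r > eps:
        return x - a * (r - eps) / (a @ a)
    if r < -eps:
        return x - a * (r + eps) / (a @ a)
    return x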

Compressive sensing (CS) has gotten attention as a promising signal processing technique to reduce the information rate of sparse signals [1]. One line of CS-related research is to devise low complexity recovery algorithms, since the conventional L1-norm based recovery algorithms still have high computational complexity for practical applications. Recently, a few researchers have made an attempt to apply probabilistic message passing (PMP) ideas to CS recovery [2], [3], since PMP has provided a successful solution for low complexity decoding while showing suboptimal performance in channel coding problems, such as low-density parity check codes [4]. Motivated by such previous works, in this paper we propose a new least square estimation (LSE) based CS recovery algorithm by applying PMP, called PMP-LSE. It is well known that CS recovery is basically an underdetermined system and it can be reformed as an overdetermined system with the support set information (SI). Therefore, in the proposed algorithm, PMP undertakes to find the SI of the signal to reform the recovery to an overdetermined case, and then LSE completes the recovery using the SI. Mainly, PMP-LSE has two strong benefits. First, PMP-LSE shows outstanding performance with noisy measurements by removing the noise effect from elements belonging to the non-support set. Second, PMP-LSE prevents the recovery from diverging. Under certain conditions, PMP based algorithms fail in the recovery due to divergence caused by a large number of iterations. In the algorithm, however, the possibility of divergence decreases greatly since PMP is only used to search the SI with a few iterations.

The theory of (tight) wavelet frames has been extensively studied in the past twenty years and they are currently widely used for image restoration and other image processing and analysis problems. The success of wavelet frame based models, including the balanced approach [20, 8] and the analysis based approach [13, 32, 49], is due to their capability of sparsely approximating piecewise smooth functions like images. Motivated by the balanced approach and the analysis based approach, we shall propose a wavelet frame based l_0 minimization model, where the l_0 norm of the frame coefficients is penalized. We adapt the penalty decomposition (PD) method of [40] to solve the proposed optimization problem. Numerical results showed that the proposed model solved by the PD method can generate images with better quality than those obtained by either the analysis based approach or the balanced approach in terms of restoring sharp features as well as maintaining smoothness of the recovered images. Some convergence analysis of the PD method will also be provided.


This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
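The workhorse inside such ADM schemes is the proximal operator of the mixed norm, i.e. group-wise soft thresholding; here is a minimal sketch for non-overlapping groups (overlapping groups and incomplete covers, which the paper also handles, need more care).

import numpy as np

def prox_l21(x, groups, tau):
    # Group soft thresholding: the prox of  tau * sum_g ||x_g||_2
    # for non-overlapping groups (the shrinkage step inside ADM iterations).
    out = x.copy()
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        scale = max(0.0, 1.0 - tau / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * x[g]
    return out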

Following the Compressed Sensing (CS) paradigm, this paper studies the problem of recovering sparse or compressible signals from (scalar) non-uniformly quantized measurements. We show that a simple adaptation of the Basis Pursuit DeQuantizer introduced earlier, that is, a sign sensitive weighting of their ℓp-norm fidelity constraint, yields good SNR improvements in the signal reconstruction. As a good indication of this improvement origin, we prove theoretically that a similar decoder, using a particular side-position-to-level oracle, displays a reduction of the reconstruction error when both the number of measurements and the moment p of the constraint increase. This follows the oversampling principle underlined in our previous study for uniformly quantized CS, with an additional gain provided by the non-uniform quantization. We conclude this paper by showing the efficiency of the approach on 1-D and 2-D signal examples.

The following are abstracts or articles behind a firewall or secured pdfs!!! Some people really don't want to share their insights.

A new version of the following paper showed up on my radar screen:
A  poster:

Finally, here is a presentation in Germany to be taken in the context of the recent entry on rank minimization:

At HCM Uni Bonn, 2.15 - 3.45pm, May 26, 2011, LWK 0.006
Extending the idea of compressive sensing to tensors by Zeljka Stojanac (Bonn), Hausdorff Center for Mathematics, University of Bonn
Compressive Sensing has become a major tool in science, especially in the past decade. Because of its many applications, the extension of the idea to the matrix case has already been done. It has been shown that finding the sparsest solution corresponds to finding the matrix of minimal rank and l1-minimization corresponds to the nuclear norm minimization. However, the extension to the tensor case has not been done yet. At the beginning of the seminar we will give a short introduction to tensors. Then, we will introduce several tensor decompositions (and corresponding notions of rank) that naturally arise and show some of their properties. We will also try to give an intuition as to why they could be used in Compressive Sensing.


Credit: NASA, Endeavour's last Rendezvous Pitch Maneuver (RPM)
