Tuesday, October 30, 2012

Nuit Blanche's Month in Review (October 2012)

In order to provide some focus to the blog, I have started these Month In Review posts. This is the second installment; the first one is here. We'll see where this experiment takes us. Speaking of experiments, we also started something on Reddit as well as the Sunday Morning Insights series. For more on those, please see below. Here is how this post is broken down:
  • General ideas
  • Reproducible Research - Codes -
  • Jobs
  • Experiments: Month in Review / Reddit / Sunday Morning Insights
  • Life of the community / The Business of Science 
  • Misc
General ideas

This past month saw the blog cross one million five hundred thousand page views. For what this means in terms of reach, check this entry out for more information.

If there is one lesson from the long list of preprints and papers showing up in What's New in Compressive Sensing and Matrix Factorization This Month, it's that ADMM is becoming a mainstay of reconstruction solvers.
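For readers who have not looked under the hood, the ADMM recipe for, say, the LASSO boils down to a cached linear solve, a soft threshold and a dual update. Here is a minimal sketch (the function name, variable names and parameter defaults are mine, not those of any particular solver):

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by ADMM (sketch)."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)
    Aty = A.T @ y
    # Cache one Cholesky factorization; every x-update reuses it.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: a ridge-like linear solve
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        # z-update: soft thresholding, the proximal map of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the consensus constraint x = z
        u += x - z
    return z
```

The same splitting carries over to many of the reconstruction solvers mentioned this month by swapping the soft threshold for another proximal operator.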

I don't know if this is a trend, but this October we also saw interest among practitioners in developing domain-specific dictionaries:

We'll see where that leads us, especially in light of advances in our understanding of the analysis framework (see Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model and Sunday Morning Insight: The Linear Boltzmann Equation and Co-Sparsity for more).

I was also particularly impressed by muon tomography this past month and wrote about it twice (Imaging Damaged Reactors and Volcanoes and Muon Tomography as a Moore's Law Enabled Technology), mostly because better algorithms and faster silicon provide a better image of what's around us.

I also came back to a very interesting paper in Pushing the Boundaries in Compressive Sensing (and a connection with this video) and noted the push toward unifying group testing and compressive sensing (Semi-Quantitative Group Testing: a General Paradigm with Applications in Genotyping), a subject of paramount importance for the future. We also had some crazy ideas (Imaging with a View or Imagine a GoPro2 at Supersonic Speed). There is still time for your students (if you are US based) to get on HASP. What about getting a camera to fall at Mach 1 and take a movie that parallels that of Curiosity landing on Mars?

Other entries and papers of interest included a paper somehow connected to Randomized Numerical Linear Algebra (Invariance of Principal Components under Low-Dimensional Random Projection of the Data), the use of adaptive compressive sensing on actual experimental data (A Data-Adaptive Compressed Sensing Approach to Polarimetric SAR Tomography of Forested Areas) and the limits of super-resolution (Quantum limits of super-resolution of optical sparse objects via sparsity constraint).

Reproducible Research - Codes -

In the spirit of reproducible research, we also had a flurry of new implementations / codes made available by their authors:

We also had a large set of job announcements:

Experiments: Month in Review / Reddit / Sunday Morning Insights

The first installment for the Month in Review (September 2012) is here.

At the beginning of this month, we started an experiment on Reddit; so far we have 42 readers, which is good. I have mentioned this before, but anybody can post a link there as long as it is somehow relevant to the subject at hand. Civil discussions are encouraged.

While this experiment might be a good thing in terms of bursting our peer filter, I also like the fact that several entries are getting a new life. Since the Blogger search engine is pretty calamitous, this is a way to draw attention to a subject that is not necessarily old but cannot easily be found with the Blogger layout. We'll see how this goes.

Sunday Morning Insight's goal is to provide some, you guessed it, insight on a subject you might have read about several times on the blog. Here are the first three installments:

Videos and Slides:

This past month, Nuit Blanche also featured a few videos and slides from conferences. I realize this is a time sink, but it is worth it:

Life of the community / The Business of Science (meetings/ talks / deadlines....)


Credit: NASA/GOES, NOAA's GOES-13 satellite captured this visible image of the massive Hurricane Sandy on Oct. 28 at 1302 UTC (9:02 a.m. EDT) (via Space.com)

Join our Reddit Experiment: join the CompressiveSensing subreddit and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

Monday, October 29, 2012

Postdoc Positions, Graph Analysis or Compressed Sensing, Sandia, CA

Saw this on NA Digest today (a different call from the one we saw in August):
From: Tammy Kolda tgkolda@sandia.gov
Date: Fri, 26 Oct 2012 17:38:42 -0400
Subject: Postdoc Positions, Graph Analysis or Compressed Sensing, Sandia/CA
Job Title: Postdoc, Graph Analysis or Compressed Sensing
Job ID: 641951
Summary: We currently have two openings for postdoctoral researchers in the following areas: (1) massive-scale graph analysis using MapReduce and (2) compressed sensing applied to problems of data compromise and robustness. The successful applicant will be expected to conduct innovative research, to develop open-source software, to present his or her findings at leading conferences and workshops, and to publish his or her results in leading journals. This postdoctoral position is for motivated and enthusiastic individuals with a background in computer science, mathematics, statistics, engineering, or related areas.
Required: (1) Advanced degree (Ph.D. or equivalent) in computer science, mathematics, statistics, engineering, or a related area, with a strong record of academic performance; (2) ability to work in a collaborative research environment; (3) evidence of relevant research expertise in the form of technical publications, presentations, software, and/or knowledge of applications; (4) software development competence in Java, C++, or a related language; (5) proficiency in solving problems, prioritizing work, and making decisions; (6) excellent written and oral communication skills.
Desired: (1) Expertise in one or more of the following areas: graph and network analysis, compressed sensing, linear algebra, parallel algorithm design, applied statistics, numerical optimization, computational combinatorics, cyber defense, power systems, image and network surveillance; (2) software engineering proficiency, especially Hadoop MapReduce and/or high-performance computing experience; (3) experience with standard research tools such as LaTeX, MATLAB, PowerPoint; (4) a background in solving practical problems in science and engineering that involve encounters with real-world data; and (5) evidence of professional service to the community, such as engagement in student service activities or seminar/workshop organization.
Supervisor: Tamara G. Kolda, tgkolda@sandia.gov
For more information, see http://goo.gl/IjY5I


Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model - implementation -

In a previous entry (Sunday Morning Insight: The Linear Boltzmann Equation and Co-Sparsity), we mentioned that anytime you were describing a field that followed some sort of physical law, like

Lu = 0 

L being the Boltzmann or the Maxwell operator, and you added some boundary conditions, then you were in effect making a statement on co-sparsity: i.e., the non-zero elements represent either some boundary conditions or a better approximation of the full operator (the linear Boltzmann equation replacing the diffusion operator at the boundaries). This is profound because it connects the generic work happening in sampling to the real world of physics and engineering (see structured life).
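To make the point concrete, here is a small numerical illustration (my own toy construction, not taken from the Sunday Morning Insight entry): take a harmonic field and, as the analysis operator, stack the 5-point Laplacian on the interior cells with the Dirichlet values on the boundary cells. The analysis coefficients vanish everywhere except at the boundary, i.e., the field is cosparse under the discretized physical operator:

```python
import numpy as np

n = 16
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u = (x**2 - y**2).astype(float)   # harmonic: its Laplacian is exactly zero

# Analysis operator L: interior rows apply the 5-point Laplacian stencil,
# boundary rows just read off the boundary values (Dirichlet conditions).
coeffs = []
for i in range(n):
    for j in range(n):
        if 0 < i < n - 1 and 0 < j < n - 1:
            coeffs.append(u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1] - 4*u[i, j])
        else:
            coeffs.append(u[i, j])
coeffs = np.array(coeffs)

# Only boundary rows are nonzero: the field is highly cosparse under L.
print(np.count_nonzero(coeffs), "nonzeros out of", coeffs.size)  # 58 nonzeros out of 256
```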

Unless I am mistaken, this is the third dictionary-learning implementation released in the wild dedicated to learning the analysis operator, in effect learning the equivalent discretization of the operator of interest with its attendant boundary conditions. The first two were featured in Noise Aware Analysis Operator Learning for Approximately Cosparse Signals and in 90% missing pixels and you reconstructed that?! Analysis Operator Learning and Its Application to Image Reconstruction. The paper illustrating what this new solver can do is Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model by Ron Rubinstein, Tomer Peleg and Michael Elad:
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator – hereafter referred to as the analysis dictionary – multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps: This includes tailored pursuit algorithms – the Backward Greedy and the Optimized Backward Greedy algorithms, and a penalty function that defines the objective for the dictionary update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.

The implementation is here.
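For the curious, the Backward Greedy pursuit at the heart of the analysis model can be caricatured in a few lines: grow the cosupport one analysis row at a time, each time re-projecting the estimate onto the null space of the selected rows. This is only a hedged sketch of the idea (names and structure are mine), not the authors' implementation linked above:

```python
import numpy as np

def backward_greedy(Om, y, cosupport_size):
    """Grow the cosupport one analysis row at a time; after each pick,
    re-project the estimate onto the null space of the selected rows.
    Assumes the selected rows stay linearly independent."""
    Lam = []                        # indices of analysis rows forced to zero
    x = y.copy()
    for _ in range(cosupport_size):
        c = np.abs(Om @ x)
        c[Lam] = np.inf             # never reselect a row
        Lam.append(int(np.argmin(c)))
        _, _, Vt = np.linalg.svd(Om[Lam])   # full SVD of the picked rows
        N = Vt[len(Lam):].T                 # orthonormal null-space basis
        x = N @ (N.T @ y)                   # project y onto {x : Om[Lam] x = 0}
    return x, sorted(Lam)
```

On an exactly cosparse signal, this toy version recovers both the signal and its cosupport.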


Sunday, October 28, 2012

Three PostDocs at the Seismic Laboratory for Imaging and Modelling (UBC)

Felix Herrmann just sent me the following:

Dear colleagues,

We have three open postdoctoral positions at the Seismic Laboratory for Imaging and Modelling in the following areas:
  • computational and theoretical seismology: seismic modelling, wave-equation based imaging and inversion
  • observational seismology: development of practical data acquisition scenarios and workflows for full-waveform inversion
  • compressive sensing: design and implementation of novel acquisition, sparse/low-rank recovery algorithms, and directional transforms including curvelets
  • scientific computing & inverse problems: PDE-constrained optimization and direct and indirect solvers for the Helmholtz equation and
  • optimization & machine learning: large-scale convex and stochastic optimization, etc.
For more information, please follow the link or go to our ad at mathjobs (Position ID: UBC-SLIMPDF [#4250]), where candidates can submit their applications. Please forward this information to potential candidates. Thank you.
Kind regards,
Felix J. Herrmann
Director of UBC-Seismic Laboratory for Imaging and Modeling (SLIM)
phone: (+1) 604-822-8628
https://www.slim.eos.ubc.ca


Sunday Morning Insight: Muon Tomography as a Moore's Law Enabled Technology

In Predicting the Future, the Steamrollers, one realizes that Moore's law is interesting, but somehow we cannot point to a specific instance of why it really is. Let's shine some light in that direction.

When David Brady talks about Coded Aperture X-Ray Scatter Imaging, one gets the impression that at some point one could use scattering as supplemental information to, potentially, extract hyperspectral information in CT scans. That information is not going to be easy to extract from bulk measurements, but one senses that this is because it involves more complex modeling. That modeling is only accessible with better and faster computational power.

In a similar line of thought, it is interesting to see how the improvement of computational power has also transformed muon tomography imaging. In Imaging Damaged Reactors and Volcanoes, we noted how the use of a more advanced reconstruction algorithm [18] permitted the imaging of relevant information.

The starting point of muon tomography can be traced back to Luis Alvarez's [22] negative results on the possibility of the Egyptian pyramids holding another chamber [23]. At that point in time, the technology could only reconstruct images using an attenuation argument. While detectors were getting better, the real breakthrough lay in the possibility of utilizing scattering simulations from Monte Carlo codes like MCNP or GEANT4. With that information, one can evaluate whether or not the material being interrogated by muons contains high-Z material. And indeed, in the initial study for the detection of potential Fukushima corium [18], scattering seems to provide the additional bit of information that attenuation studies alone cannot. Current reconstruction algorithms taking Coulomb scattering into account can be found in [5,6].
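As a concrete example of the kind of quantity these scattering-based reconstructions [5,6] start from, consider the point-of-closest-approach (PoCA) of the incoming and outgoing muon tracks: it estimates where the scattering occurred and by how much. A hedged sketch (my own toy geometry; non-parallel rays assumed):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between the incoming ray p1 + t*d1 and
    the outgoing ray p2 + s*d2, plus the scattering angle between them.
    Assumes non-parallel rays. Large angles concentrated in a voxel
    hint at high-Z material sitting there."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                              # > 0 for non-parallel rays
    t = (b * (d2 @ r) - c * (d1 @ r)) / denom
    s = (a * (d2 @ r) - b * (d1 @ r)) / denom
    closest = 0.5 * ((p1 + t * d1) + (p2 + s * d2))    # midpoint of the gap
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))     # scattering angle (rad)
    return closest, angle
```

Binning these angles per voxel is the starting point of the statistical reconstructions in [7].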

For those interested, here is a rough survey of the type of problems being investigated with muon tomography. Let us note that some of the attenuation-only experiments are very recent, as they focus on the mapping of civil engineering structures and the voids underneath them, a subject of considerable interest:
  • the detection of smuggled nuclear warheads [3, 8,9,13]
  • the detection of orphan nuclear materials in scrap metal factories [1,9]
  • the detection of empty spaces in the pyramids (Egyptian or Mayan pyramids) [2,5] -attenuation only -
  • the detection of density inhomogeneities in volcanoes [12,14] -attenuation only -
  • the detection of potential corium at Fukushima [10,11]
  • the detection of voids in civil engineering structures [15,16,17] -attenuation only -
  • Nuclear Waste Imaging and Spent Fuel Verification [26]

[2] Imaging Maya Pyramids with Cosmic Ray Muons by Roy Schwitters
[3] Imaging with 1 ft3 Muon Tomography Station and Analysis of Future Station Geometries by Nathan Mertins, Michael Staib, William Bittner, Marcus Hohlmann
[4] Cosmic Ray Muon Radiography by Larry Schultz 
[5] A Detector for Muon Tomography: Data Acquisition and Preliminary Results Eric T. Wright 
[6] Advances in Cosmic Ray Muon Tomography Reconstruction Algorithms by  Richard Claude Hoch
[7] Statistical Reconstruction for Cosmic Ray Muon Tomography Larry J. Schultz, Member, Gary S. Blanpied, Konstantin N. Borozdin, Andrew M. Fraser, Nicolas W. Hengartner, Alexei V. Klimenko, Christopher L. Morris, Chris Orum, and Michael J. Sossong
[8] Enabling Port Security using Passive Muon Radiography, Nicolas Hengartner, Bill Priedhorski, Konstantin Borozdin, Alexi Klimenco, Tom Asaki, Rick Chartrand, Larry Shultz, Andrew Green,  Richard Shirato.
[9] Applications of Muon Tomography for the Detection of Hidden Nuclear Substances in Containers by M. Benettoni, P. Checchia, E. Conti, F. Gonella, G. Nebbia, G. Mariotti, S. Pesente, S. Vanini, G. Viesti, G, G. Bonomi, A. Zenoni,  P. Calvini, , S. Squarcia
[10] Discussion - Next Step for Fukushima Daiichi Muon Tomography by Miyadera, Haruo
[11] Our Next Two Steps for Fukushima Daiichi Muon Tomography by Jeffrey D. Bacon,  Konstantin N. Borozdin,  Michael Brockwell, Edward C. Milner,  Haruo Miyadera,  Christopher Morris,  John O. Perry, Zarija Lukic. Koji Masuda
[12] DIAPHANE web site devoted to the "Development of cosmic-ray muons tomography in geophysics"
[13] COSMIC-RAY MUON TOMOGRAPHY AND ITS APPLICATION TO THE DETECTION OF HIGH-Z MATERIALS by Konstantin Borozdin, Thomas Asaki, Rick Chartrand, Mark Galassi, Andrew Greene, Nicolas Hengartner, Gary Hogan, Alexei Klimenko, Christopher Morris, William Priedhorsky, Alexander Saunders, Richard Schirato, Larry Schultz, Matthew Sottile, Gary Blanpied 
[23] Luis W. Alvarez, Jared A. Anderson, F. El Bedwei, James Burkhard, Ahmed Fakhry, Adib Girgis, Amr Goneid, Fikhry Hassan, Dennis Iverson, Gerald Lynch, Zenab Miligy, Ali Hilmy Moussa, Mohammed-Sharkawi, Lauren Yazolino, "Search for Hidden Chambers in the Pyramids," Science 167, 832 (1969).
[24] Designs for Muon Tomography Station Prototypes Using GEM Detectors, Leonard V. Grasso III
[25] Imaging of high-Z material for nuclear contraband detection with a minimal prototype of a Muon Tomography station  based on GEM detectors  by Kondo Gnanvo, Leonard V. Grasso III, Marcus Hohlmann, Judson B. Locke, Amilkar S. Quintero, Debasis Mitra
[26] Nuclear Waste Imaging and Spent Fuel Verification by Muon Tomography by G. Jonkmans, V. N. P. Anghel, C. Jewett, M. Thompson


Friday, October 26, 2012

What's New in Compressive Sensing and Matrix Factorization This Month

I'll come back to some of them in the future, but in the meantime, here is the large batch for the month. Enjoy!

Slides: Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling by Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval, and Mike E. Davies

Mehrdad Yaghoobi, Mike E. Davies

The problem of analysis operator learning can be formulated as a constrained optimisation problem. This problem has been approximately solved using projected gradient or geometric gradient descent methods. We will propose a relaxation for the constrained analysis operator learning in this paper. The relaxation has been suggested here to, a) reduce the computational complexity of the optimisation and b) include a larger set of admissible operators. We will show here that an appropriate relaxation can be useful in presenting a projection-free optimisation algorithm, while preventing the problem to become ill-posed. The relaxed optimisation objective is not convex and it is thus not always possible to find the global optimum. However, when a rich set of training samples are given, we empirically show that the desired synthetic analysis operator is recoverable, using the introduced sub-gradient descent algorithm.

Mehrdad Yaghoobi, Sangnam Nam, Rémi Gribonval and Mike E. Davies

Abstract—We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant constraint, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator, when provided with a clean training set, of sufficient size. We also find an analysis operator for images, using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.

Shaun I. Kelly, Mehrdad Yaghoobi, and Mike E. Davies

Abstract—We investigate the effects of phase errors on undersampled synthetic aperture radar (SAR) systems. We show that the standard methods of auto-focus, which are used as a postprocessing step, are typically not suitable. Instead of applying auto-focus as a post-processor we propose using a stable algorithm, which is based on algorithms from the dictionary learning literature, that corrects phase errors during the reconstruction and is found empirically to recover sparse SAR images.

Jianing V. Shi, Wotao Yin and Stanley J. Osher

The ℓ1-regularized logistic regression is an important linear classifier in statistical learning, providing an attractive route for feature selection. The hyperparameter λ affects the level of sparsity. For arbitrary data, the optimal level of sparsity that yields the best classification performance is usually not known a priori. Current methodologies for seeking the optimal level of sparsity include the grid search method and the Bayesian approach. The grid search method typically requires constructing a regularization path, by solving a sequence of minimization problems with varying values of hyperparameter. Such a procedure can be time consuming. In this paper, we introduce a fast procedure that generates a new regularization path without tuning the hyperparameter. Our algorithm can efficiently sample the regularization path at finely grained points, through an iterative algorithm based on Bregman divergence. We first derive a new regularization path by replacing the ℓ1-norm by Bregman divergence, and contrast it with the grid search method. The direct Bregman method requires solving each subproblem accurately, and turns out to be computationally inefficient. Therefore we further derive a linearized Bregman algorithm, which is algebraically simple and computationally efficient. Finally we demonstrate some empirical results for the linearized Bregman algorithm on benchmark data and study feature selection as an inverse problem. Compared with the grid search method, the linearized Bregman algorithm generates a regularization path with comparable accuracy, in a much more computationally efficient manner.
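The basic linearized Bregman iteration the paper builds on is indeed algebraically simple: a shrink, then an accumulation of the residual. Here is a hedged sketch of one common form of the iteration (not the authors' path-generating algorithm, and the parameter names are mine):

```python
import numpy as np

def linearized_bregman(A, y, mu=1.0, delta=0.5, n_iter=3000):
    """One common form of the linearized Bregman iteration, which solves
    min mu*||x||_1 + (1/(2*delta))*||x||_2^2  subject to  Ax = y.
    Needs delta * ||A A^T|| < 2 for stability."""
    n = A.shape[1]
    v = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink step
        v += A.T @ (y - A @ x)                                    # Bregman/dual update
    return x
```

Each iteration costs only two matrix-vector products, which is what makes the approach attractive compared to re-solving a LASSO per grid point.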

Yiran Shen, Wen Hu, Junbin Liu, Mingrui Yang, Bo Wei and Chun Tung Chou

Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes. These models make them accurate but the computational requirement of these complex models is often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computational efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. By using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of the traditional background subtraction methods. Moreover, real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory resources than the conventional approaches. Finally, we demonstrated the feasibility of the proposed method by the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.
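The key idea is easy to sketch: project the background and each incoming frame with the same random matrix, and do the comparison in the low-dimensional compressed domain. The following toy example is my own construction, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64               # pixels per frame
m = 64                    # compressed dimension, ~1.5% of n
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection

background = rng.uniform(0.0, 1.0, n)
compressed_bg = Phi @ background                 # computed once

def foreground_energy(frame):
    # Random projections approximately preserve norms, so
    # ||Phi @ (frame - background)|| tracks ||frame - background||
    # while touching only m numbers instead of n pixels.
    return np.linalg.norm(Phi @ frame - compressed_bg)

frame = background.copy()
frame[500:700] += 0.8     # a bright foreground blob of 200 pixels

print(foreground_energy(background))  # 0: nothing changed
print(foreground_energy(frame))       # large: an object is present
```

On an embedded platform, Phi would of course be something cheaper than a dense Gaussian matrix; this is only to show why the compressed comparison works.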

Prasant Misra, Mingrui Yang, Wen Hu, Sanjay Jha

Cross-correlation is a popular signal processing technique used for obtaining reliable range information. Recently, a practical and efficient implementation of cross-correlation (via sparse approximation) was demonstrated on resource-constrained wireless sensor network platforms, where the key idea was to compress the received signal samples and transfer them to a central device, where the range information was retrieved by ℓ1-minimization. Although this mechanism yields accurate ranging results, its applicability is limited due to its slow execution speed and inaccurate recovery of the correlation-peak magnitude, which implicitly provides the useful measure of signal-to-noise ratio. In this work, we propose Fast Gradient Projection (F-GP), a new ℓ1-minimization algorithm, which overcomes the existing limitations and provides fast and accurate ranging.

Qisong Wu, David Carlson, Wenzhao Lian, Mingyuan Zhou, Colin R. Stoetzner, Daryl Kipke, Douglas Weber, Joshua Vogelstein, David Dunson and Lawrence Carin

Abstract—A new model is developed for feature learning and clustering of electrophysiological (ephys) data across multiple recording periods. The model is applicable to situations in which the detected spikes may be clipped (constituting missing data). It is demonstrated that joint feature (dictionary) learning and clustering allows one to perform forensics on the characteristics of the data (distinguishing single-unit spikes from non-local phenomena and artifacts). We explicitly model the number of spikes within a measurement interval, addressing a time-evolving firing rate. Further, we model the number of clusters, mitigating limitations of methods like the Dirichlet process. Model properties are discussed, state-of-the-art results are presented on public data, and the methodology is demonstrated on new measured (experimental) ephys data.

We tackle the multi-party speech recovery problem through modeling the acoustic of the reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustic from the unknown competing speech sources relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting joint sparsity model formulated upon spatio-spectral sparsity of concurrent speech representation. The acoustic parameters are then incorporated for separating individual speech signals through either structured sparse recovery or inverse filtering the acoustic channels. The experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition.

Nicolas Dobigeon, Adrian Basarab, Denis Kouamé and Jean-Yves Tourneret

Compressed sensing has recently drawn much interest for ultrasound imaging. In particular, exploiting the sparsity of ultrasound images in the frequency domain, a specific random sampling of ultrasound images can be used advantageously for designing efficient Bayesian image reconstruction methods. We showed in a previous work that assigning independent Bernoulli Gaussian priors to the ultrasound image in the frequency domain provides Bayesian reconstruction errors similar to a classical ℓ1 minimization technique. However, the advantage of Bayesian methods is to estimate the sparsity level of the image by using a hierarchical algorithm. This paper goes a step further by exploiting the spatial correlations between the image pixels in the frequency domain. A new Bayesian model based on a correlated Bernoulli Gaussian model is proposed for that purpose. The parameters of this model can be estimated by sampling the corresponding posterior distribution using an MCMC method. The resulting algorithm provides very low reconstruction errors even when reducing significantly the number of measurements via random sampling.

Spark plays a great role in studying the uniqueness of sparse solutions of underdetermined linear equations. In this article, we derive a new lower bound on the spark. As an application, we obtain a new criterion for the uniqueness of sparse solutions of linear equations.
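As a reminder, the spark of a matrix is the smallest number of linearly dependent columns, and a k-sparse solution of Ax = y is unique whenever k < spark(A)/2. Computing it exactly is combinatorial, but a brute-force toy version (my own sketch) makes the definition concrete:

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A.
    Brute force (exponential), fine only for toy sizes."""
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return np.inf  # all column subsets are independent

# Columns 0, 1 and 3 are dependent (col 3 = col 0 + col 1), so spark = 3,
# and any 1-sparse solution of Ax = y is guaranteed unique (1 < 3/2).
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
print(spark(A))  # 3
```

This is why lower bounds on the spark, like the one in the paper, matter: the exact quantity is out of reach for realistic sizes.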

The LASSO is a recent technique for variable selection in the regression model $y = X\beta + \epsilon$, where $X\in \R^{n\times p}$ and $\epsilon$ is a centered Gaussian i.i.d. noise vector $\mathcal N(0,\sigma^2I)$. The LASSO has been proved to perform exact support recovery for regression vectors when the design matrix satisfies certain algebraic conditions and $\beta$ is sufficiently sparse. Estimation of the vector $X\beta$ has also been extensively studied for the purpose of prediction under the same algebraic conditions on $X$ and under sufficient sparsity of $\beta$. Among many others, the coherence is an index which can be used to study these nice properties of the LASSO. More precisely, a small coherence implies that most sparse vectors, with fewer nonzero components than the order $n/\log(p)$, can be recovered with high probability if their nonzero components are larger than the order $\sigma \sqrt{\log(p)}$. However, many matrices occurring in practice do not have a small coherence and thus, most results which have appeared in the literature cannot be applied. The goal of this paper is to study a model for which precise results can be obtained. In the proposed model, the columns of the design matrix are drawn from a Gaussian mixture model and the coherence condition is imposed on the much smaller matrix whose columns are the mixture's centers, instead of on $X$ itself. Our main theorem states that $X\beta$ is as well estimated as in the case of small coherence, up to a correction parametrized by the maximal variance in the mixture model.
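Since the coherence keeps coming up as the easily computable proxy in these results, here is the usual definition in code (a short sketch for reference):

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the largest absolute inner product between
    distinct normalized columns of A."""
    U = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(U.T @ U)                 # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)            # ignore self inner products
    return G.max()

print(coherence(np.eye(4)))  # 0.0: orthonormal columns
```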

Recovery of sparse signals from compressed measurements constitutes an ℓ0-norm minimization problem, which is impractical to solve. A number of sparse recovery approaches have appeared in the literature, including ℓ1-minimization techniques, greedy pursuit algorithms, Bayesian methods and nonconvex optimization techniques, among others. This manuscript introduces a novel two-stage greedy approach, called the Forward-Backward Pursuit (FBP). FBP is an iterative approach where each iteration consists of consecutive forward and backward stages. The forward step first expands the support estimate by the forward step size, while the following backward step shrinks it by the backward step size. The forward step size is larger than the backward step size, hence the initially empty support estimate is expanded at the end of each iteration. Forward and backward steps are iterated until the residual power of the observation vector falls below a threshold. This structure of FBP does not necessitate the sparsity level to be known a priori, in contrast to the Subspace Pursuit or Compressive Sampling Matching Pursuit algorithms. FBP recovery performance is demonstrated via simulations, including recovery of random sparse signals with different nonzero coefficient distributions in noisy and noise-free scenarios, in addition to the recovery of a sparse image.
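From that description, FBP is straightforward to prototype: expand the support by a few atoms, re-fit, prune, repeat. A hedged sketch based on my own reading of the abstract, with arbitrary default step sizes, not the authors' code:

```python
import numpy as np

def fbp(A, y, alpha=3, beta=2, eps=1e-6, max_iter=50):
    """Forward-Backward Pursuit sketch: each iteration expands the support
    estimate by alpha atoms, then prunes beta, so the support grows by
    alpha - beta per iteration with no prior knowledge of the sparsity."""
    n = A.shape[1]
    T = np.array([], dtype=int)
    r = y.copy()
    for _ in range(max_iter):
        # forward: add the alpha columns most correlated with the residual
        corr = np.abs(A.T @ r)
        corr[T] = -1.0
        T = np.union1d(T, np.argsort(corr)[-alpha:])
        w, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        # backward: drop the beta atoms with the smallest coefficients
        T = T[np.argsort(np.abs(w))[beta:]]
        w, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        r = y - A[:, T] @ w
        if np.linalg.norm(r) < eps * np.linalg.norm(y):
            break
    x = np.zeros(n)
    x[T] = w
    return x
```

Note how the stopping rule is a residual threshold, not a target sparsity, which is exactly the point made in the abstract.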

This letter proposes a novel method for accelerating iterative detection for spatially coupled (SC) systems. An SC system is constructed by one-dimensional coupling of many subsystems, which are classified into training and propagation parts. An irregular structure is introduced into the subsystems in the training part so that information in that part can be detected successfully. The obtained reliable information may spread over the whole system via the subsystems in the propagation part. In order to allow the subsystems in the training part to collaborate, shortcuts between them are created to accelerate iterative detection for that part. As an example of SC systems, SC code-division multiple-access (CDMA) systems are considered. Density Evolution for SC CDMA systems shows that the proposed method can provide a significant reduction in the number of iterations for highly loaded systems, compared to conventional methods.

Orthogonal Matching Pursuit (OMP) is a simple, yet empirically competitive algorithm for sparse recovery. Recent developments have shown that OMP guarantees exact recovery of $K$-sparse signals in $K$ iterations if the observation matrix $\Phi$ satisfies the Restricted Isometry Property (RIP) with Restricted Isometry Constant (RIC) $\delta_{K+1}<\frac{1}{\sqrt{K}+1}$. On the other hand, OMP empirically promises higher recovery rates when it runs for more than $K$ iterations. In order to support this theoretically, we extend the theoretical analysis of OMP to cover more than $K$ iterations. We develop exact recovery guarantees for $K$-sparse signals in more than $K$ iterations when $\Phi$ satisfies an RIP condition which depends on the number of correct and false indices in the support estimates of intermediate iterations. In addition, we present an upper bound on the number of false indices in the support estimate for the derived RIP condition to be less restrictive than $\delta_{K+1}<\frac{1}{\sqrt{K}+1}$. Moreover, we provide recovery simulations which demonstrate the performance improvement when more than $K$ iterations are allowed. Finally, we empirically analyse the number of false indices in the support estimate, which indicates that these do not violate the developed upper bound in practice.
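For readers who want to experiment with the more-than-$K$-iterations regime, here is plain textbook OMP (not the authors' implementation) with the iteration count simply left as a free parameter, so it can be run past the sparsity level:

```python
import numpy as np

def omp(y, A, n_iter):
    """Standard OMP run for a fixed number of iterations, possibly more
    than the sparsity level K. Each iteration adds the atom most
    correlated with the current residual, then re-fits by least squares."""
    m, n = A.shape
    support = []
    residual = y.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:      # skip duplicates once the residual is tiny
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x
```

Running it with, say, `n_iter = 2 * K` is harmless in the noiseless case: once the true support is captured the residual vanishes and extra atoms receive (numerically) zero coefficients.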

We propose a proximal approach to deal with convex optimization problems involving nonlinear constraints. A large family of such constraints, proven to be effective in the solution of inverse problems, can be expressed as the lower level set of a sum of convex functions evaluated over different, but possibly overlapping, blocks of the signal. For this class of constraints, the associated projection operator generally does not have a closed form. We circumvent this difficulty by splitting the lower level set into as many epigraphs as functions involved in the sum. A closed half-space constraint is also enforced, in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set. 
In this paper, we focus on a family of constraints involving linear transforms of l_1,p balls. Our main theoretical contribution is to provide closed form expressions of the epigraphical projections associated with the Euclidean norm and the sup norm. The proposed approach is validated in the context of image restoration with missing samples, by making use of TV-like constraints. Experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems.

From a mathematical point of view self-organization can be described as patterns to which certain dynamical systems modeling social dynamics tend spontaneously to be attracted. In this paper we explore situations beyond self-organization, in particular how to externally control such dynamical systems in order to eventually enforce pattern formation also in those situations where this wished phenomenon does not result from spontaneous convergence. Our focus is on dynamical systems of Cucker-Smale type, modeling consensus emergence, and we question the existence of stabilization and optimal control strategies which require the minimal amount of external intervention for nevertheless inducing consensus in a group of interacting agents. We provide a variational criterion to explicitly design feedback controls that are componentwise sparse, i.e. with at most one nonzero component at every instant of time. Controls sharing this sparsity feature are very realistic and convenient in practice. Moreover, the maximally sparse ones are instantaneously optimal in terms of the decay rate of a suitably designed Lyapunov functional, measuring the distance from consensus. As a consequence we provide a mathematical justification to the general principle according to which "sparse is better", in the sense that a policy maker, who is not allowed to predict future developments, should always find it more favorable to intervene with stronger action on the fewest possible instantaneous optimal leaders rather than trying to control more agents with minor strength in order to achieve group consensus. We then establish local and global sparse controllability properties to consensus and, finally, we analyze the sparsity of solutions of the finite time optimal control problem where the minimization criterion is a combination of the distance from consensus and of the l1-norm of the control.

We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in-between (interpolation problem). 
By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the $\ell_1$ norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator.

This paper considers the problem of reconstructing sparse or compressible signals from one-bit quantized measurements. We study a new method that uses a log-sum penalty function, also referred to as the Gaussian entropy, for sparse signal recovery. Also, in the proposed method, sigmoid functions are introduced to quantify the consistency between the acquired one-bit quantized data and the reconstructed measurements. A fast iterative algorithm is developed by iteratively minimizing a convex surrogate function that bounds the original objective function, which leads to an iterative reweighted process that alternates between estimating the sparse signal and refining the weights of the surrogate function. Connections between the proposed algorithm and other existing methods are discussed. Numerical results are provided to illustrate the effectiveness of the proposed algorithm.
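To make the two ingredients of the method concrete, here is a hedged illustration of an objective that combines a sigmoid consistency term between the one-bit data and the re-measured signs with a log-sum (Gaussian entropy) sparsity penalty. The exact weighting `lam` and smoothing constant `eps` are our assumptions, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_bit_objective(x, A, b, lam=0.1, eps=1e-3):
    """Illustrative objective for one-bit CS: b holds the +/-1 quantized
    measurements, sigmoid(b * Ax) scores sign consistency, and the
    log-sum term promotes sparsity. Weights are illustrative only."""
    consistency = -np.sum(np.log(sigmoid(b * (A @ x)) + 1e-12))
    logsum = np.sum(np.log(np.abs(x) + eps))
    return consistency + lam * logsum
```

The algorithm in the paper then minimizes such an objective by iteratively bounding it with a convex surrogate and reweighting, which this snippet does not attempt to reproduce.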

The orthogonal multi-matching pursuit (OMMP) is a natural extension of orthogonal matching pursuit (OMP). We denote the OMMP with the parameter $M$ as OMMP(M), where $M\geq 1$ is an integer. The main difference between OMP and OMMP(M) is that OMMP(M) selects $M$ atoms per iteration, while OMP only adds one atom to the optimal atom set. In this paper, we study the performance of OMMP under the RIP. In particular, we show that, when the measurement matrix A satisfies $(9s, 1/10)$-RIP, there exists an absolute constant $M_0\leq 8$ so that OMMP(M_0) can recover $s$-sparse signals within $s$ iterations. We furthermore prove that, for slowly-decaying $s$-sparse signals, OMMP(M) can recover them within $O(\frac{s}{M})$ iterations for a large class of $M$. In particular, for $M=s^a$ with $a\in [0,1/2]$, OMMP(M) can recover slowly-decaying $s$-sparse signals within $O(s^{1-a})$ iterations. The result implies that OMMP can substantially reduce the computational complexity.

Abstract— Reconstruction of biochemical reaction networks (BRN) and genetic regulatory networks (GRN) in particular is a central topic in systems biology which raises crucial theoretical challenges in system identification. Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and rational functions are typically used to model biochemical reaction networks. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data quite difficult. In this paper, we present a network reconstruction algorithm that can deal with ODE model descriptions containing polynomial and rational functions. Rather than identifying the parameters of linear or nonlinear ODEs characterised by pre-defined equation structures, our methodology allows us to determine the structure of the nonlinear ODEs together with their associated parameters. To solve the network reconstruction problem, we cast it as a compressive sensing (CS) problem and use sparse Bayesian learning (SBL) algorithms as a computationally efficient and robust way to obtain its solution.

We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and $\ell_1$-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, MRI, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.

Calibrationless Parallel Imaging Reconstruction Based on Structured Low-Rank Matrix Completion by Peter J. Shin, Peder E.Z. Larson, Michael A. Ohliger, Michael Elad, John M. Pauly, Daniel B. Vigneron, Michael Lustig

A calibrationless parallel imaging reconstruction method, termed simultaneous autocalibrating and k-space estimation (SAKÉ), is presented. It is a data-driven, coil-by-coil reconstruction method that does not require fully sampled calibrating signals. In SAKÉ, an under-sampled multi-channel dataset is structured into a single matrix and data reconstruction is formulated as a structured low-rank matrix completion problem. An iterative solution that implements a projection-onto-sets algorithm with singular value hard-thresholding is described. Reconstruction results are demonstrated for undersampled, multi-channel Cartesian and non-Cartesian data with no calibration data. These exhibit excellent image quality comparable to those obtained with calibration data. Finally, improved image quality is demonstrated by combining SAKÉ with wavelet-based compressed sensing. This method could benefit MR applications where acquiring accurate calibration data is limiting or not possible at all.

Takayuki Yamada, Doohwan Lee, Hideki Toshinaga, Kazunori Akabane, Yo Yamaguchi, and Kazuhiro Uehara 

Abstract—The “Flexible Wireless System (FWS)” has been proposed as a networked system for the “User-Centric Wireless Network (UCWN)”. The UCWN will allow users to make network connections easily at all times without being conscious of any upgrades or differences in wireless systems. The FWS is a unified wireless platform that simultaneously deals with various types of wireless signals. It consists of flexible access points and a wireless signal processing platform. Various types of wireless signals are received at a distributed flexible access point and transferred to a server in the wireless signal processing platform through the wired access line. Transferred signals are separated and demodulated at the server. To achieve highly flexible and efficient radio wave data transfer between the access point and the FWS server, we consider compression of transfer data. In this paper, we propose 1-bit compressed sensing with smoothed edge detection, which enhances compression and reconstruction performance. This paper shows the performance of the proposed method by using computer simulations to explore the method’s validity for transferring compressed radio wave data.

In this paper, an algorithm for sparse learning via Maximum Margin Matrix Factorization (MMMF) is proposed. The algorithm is based on an L1 penalty and the Alternating Direction Method of Multipliers. Experiments show that, with sparse factors, the method can obtain results as good as those with dense factors.
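The ℓ1-plus-ADMM machinery this abstract relies on (and which, as noted at the top of this post, keeps showing up in reconstruction solvers) is easy to demonstrate on a toy problem. The sketch below applies it to a plain LASSO rather than to MMMF factors, so it illustrates the mechanism, not the paper's algorithm; all parameter values are illustrative:

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=500):
    """Toy ADMM for min_x 0.5*||y - Ax||^2 + lam*||x||_1.
    x-update: ridge-type linear solve; z-update: soft thresholding
    (the proximal map of the l1 norm); u: scaled dual variable."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Factor (A^T A + rho I) once; it is reused at every iteration.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z
```

The same alternation (smooth subproblem, proximal step, dual update) is what makes ADMM attractive for matrix factorization: each factor update stays simple even when the overall problem is not.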

Lester Mackey

Motivated by the constrained factorization problems of sparse principal components analysis (PCA) for gene expression modeling, low-rank matrix completion for recommender systems, and robust matrix factorization for video surveillance, this dissertation explores the modeling, methodology, and theory of matrix factorization. We begin by exposing the theoretical and empirical shortcomings of standard deflation techniques for sparse PCA and developing alternative methodology more suitable for deflation with sparse “pseudo-eigenvectors.” We then explicitly reformulate the sparse PCA optimization problem and derive a generalized deflation procedure that typically outperforms more standard techniques on real-world datasets. We next develop a fully Bayesian matrix completion framework for integrating the complementary approaches of discrete mixed membership modeling and continuous matrix factorization. We introduce two Mixed Membership Matrix Factorization (M3F) models, develop highly parallelizable Gibbs sampling inference procedures, and find that M3F is both more parsimonious and more accurate than state-of-the-art baselines on real-world collaborative filtering datasets. Our third contribution is Divide-Factor-Combine (DFC), a parallel divide-and-conquer framework for boosting the scalability of a matrix completion or robust matrix factorization algorithm while retaining its theoretical guarantees. Our experiments demonstrate the near-linear to super-linear speed-ups attainable with this approach, and our analysis shows that DFC enjoys high-probability recovery guarantees comparable to those of its base algorithm. Finally, inspired by the analyses of matrix completion and randomized factorization procedures, we show how Stein’s method of exchangeable pairs can be used to derive concentration inequalities for matrix-valued random elements. 
As an immediate consequence, we obtain analogues of classical moment inequalities and exponential tail inequalities for independent and dependent sums of random matrices. We moreover derive comparable concentration inequalities for self-bounding matrix functions of dependent random elements.

Signal reconstruction in compressive sensing involves finding a sparse solution that satisfies a set of linear constraints. Several approaches to this problem have been considered in existing reconstruction algorithms. They each provide a trade-off between reconstruction capabilities and required computation time. In an attempt to push the limits of this trade-off, we consider a smoothed l0 norm (SL0) algorithm in a noiseless setup. We argue that using a set of carefully chosen parameters in our proposed adaptive SL0 algorithm may result in significantly better reconstruction capabilities in terms of phase transition while retaining the same required computation time as existing SL0 algorithms. A large set of simulations further supports this claim. Simulations even reveal that the theoretical l1 curve may be surpassed in major parts of the phase space.

Compressive Sensing (CS) is a burgeoning technique being applied to diverse areas including wireless sensor networks (WSNs). In WSNs, it has been studied in the context of data gathering and aggregation, particularly aimed at reducing data transmission cost and improving power efficiency. Existing CS based data gathering work in WSNs assumes fixed and uniform compression thresholds across the network, regardless of the data field characteristics. In this paper, we present a novel data aggregation architecture model that combines a multi-resolution structure with compressed sensing. The compression thresholds vary over the aggregation hierarchy, reflecting the underlying data field. Compared with previous relevant work, the proposed model shows significant energy savings in theoretical analysis. We have also implemented the proposed CS-based data aggregation framework on the SIDnet SWANS platform, a discrete event simulator commonly used for WSN simulations. Our experiments show substantial energy savings, ranging from 37% to 77% for different nodes in the network, depending on their position in the hierarchy.

This paper develops a novel framework to compute a projected Generalized Stein Unbiased Risk Estimator (GSURE) for a wide class of sparsely regularized solutions of inverse problems. This class includes arbitrary convex data fidelities with both analysis and synthesis mixed L1-L2 norms. Computing the GSURE requires the (weak) derivative of a solution w.r.t.~the observations. However, as the solution is not available in analytical form but rather through iterative schemes such as proximal splitting, we propose to iteratively compute the GSURE by differentiating the sequence of iterates. This provides us with a sequence of differential mappings, which, hopefully, converge to the desired derivative and allow us to compute the GSURE. We illustrate this approach on total variation regularization with Gaussian noise and on sparse regularization with Poisson noise, to automatically select the regularization parameter.

Low-dimensional manifolds and sparse representations are two well-known concise models suggesting that each data point can be described by a few characteristics. Manifold learning is usually investigated for dimension reduction by preserving some expected local geometric structures from the original space to a low-dimensional space. The structures are generally determined by using pairwise distance, e.g., Euclidean distance. Alternatively, sparse representation denotes a data point as a linear combination of the points from the same subspace. In practical applications, however, nearby points in terms of pairwise distance may not belong to the same subspace, and vice versa. Consequently, it is interesting and important to explore how to get a better representation by integrating these two models together. To this end, this paper proposes a novel coding algorithm, called Locality-Constrained Collaborative Representation (LCCR), which improves the robustness and discrimination of data representation by incorporating locality based on pairwise distance into the coding process. In addition, the proposed objective function has an analytical solution, and it does not involve local minima. The empirical studies based on three public facial databases, ORL, AR and Extended Yale B, demonstrate that LCCR outperforms three state-of-the-art models, sparse representation based classification (SRC) \cite{Wright2009}, $\ell^2$-FR \cite{Shi2011} and CRC-RLS \cite{Zhang2011}, in the context of recognizing human faces from frontal views with varying expression and illumination, as well as various corruptions and occlusions.

This paper considers sequential adaptive estimation of sparse signals under a constraint on the total sensing effort. The advantage of adaptivity in this context is the ability to focus more resources on regions of space where signal components exist, thereby improving performance. A dynamic programming formulation is derived for the allocation of sensing effort to minimize the expected estimation loss. Based on the method of open-loop feedback control, allocation policies are then developed for a variety of loss functions. The policies are optimal in the two-stage case and improve monotonically thereafter with the number of stages. Numerical simulations show gains up to several dB as compared to recently proposed adaptive methods, and dramatic gains approaching the oracle limit compared to non-adaptive estimation. An application to radar imaging is also presented.

Natural stimuli are highly redundant, possessing significant spatial and temporal correlations. While sparse coding has been proposed as an efficient strategy employed by neural systems to encode sensory stimuli, the underlying mechanisms are still not well understood. Most previous approaches model the neural dynamics by the sparse representation dictionary itself and compute the representation coefficients offline. In reality, faced with the challenge of constantly changing stimuli, neurons must compute the sparse representations dynamically in an online fashion. Here, we describe a leaky linearized Bregman iteration (LLBI) algorithm which computes the time varying sparse representations using a biologically motivated network of leaky rectifying neurons. Compared to previous attempt of dynamic sparse coding, LLBI exploits the temporal correlation of stimuli and demonstrate better performance both in representation error and the smoothness of temporal evolution of sparse coefficients.
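As context for LLBI, here is a sketch of the plain (non-leaky) linearized Bregman iteration at its core: an auxiliary variable accumulates residual feedback, and the sparse code is its soft threshold. The leak term, the temporal stimulus correlations, and the spiking-network implementation described in the abstract are omitted; parameter choices are ours:

```python
import numpy as np

def linearized_bregman(y, A, lam=0.5, step=None, n_iter=2000):
    """Linearized Bregman iteration for sparse coding of y in dictionary A:
    v accumulates gradient feedback A^T(y - Au); the code u is the
    soft-thresholded version of v, so small accumulated evidence is
    suppressed and u stays sparse along the trajectory."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        u = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)  # soft threshold
        v = v + step * (A.T @ (y - A @ u))
    return u
```

The neural reading the paper develops is that `v` plays the role of membrane potentials and the thresholding mimics rectifying units; the leaky, online version tracks time-varying inputs.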

We consider a generalization of the multiple measurement vector (MMV) problem, where the measurement matrices are allowed to differ across measurements. This problem arises naturally when multiple measurements are taken over time, e.g., and the measurement modality (matrix) is time-varying. We derive probabilistic recovery guarantees showing that---under certain (mild) conditions on the measurement matrices---l2/l1-norm minimization and a variant of orthogonal matching pursuit fail with a probability that decays exponentially in the number of measurements. This allows us to conclude that, perhaps surprisingly, recovery performance does not suffer from the individual measurements being taken through different measurement matrices. What is more, recovery performance typically benefits (significantly) from diversity in the measurement matrices; we specify conditions under which such improvements are obtained. These results continue to hold when the measurements are subject to (bounded) noise.

This paper continues the Wu-Shamai-Verd\'u program [1] on characterizing the degrees of freedom (DoF) of interference channels (ICs) through R\'enyi information dimension. Concretely, we find a general formula for the DoF of vector ICs, encompassing multiple-input multiple-output (MIMO) ICs, time- and/or frequency-selective ICs, and combinations thereof, as well as constant single-antenna ICs considered in [1]. As in the case of constant single-antenna ICs, achieving full DoF requires the use of singular input distributions. Strikingly, in the vector case it suffices to enforce singularity on the joint distribution of individual transmit vectors. This can be realized through signaling in subspaces of the ambient signal space, which is in accordance with the idea of interference alignment, and, most importantly, allows the scalar components of the transmit vectors to have non-singular distributions. We recover the result by Cadambe and Jafar on the non-separability of parallel ICs [2] and we show that almost all parallel ICs are separable. Finally, our results extend the main finding in [1] to the complex case.

Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise, specifically, the representation error decays as 1/sqrt(t) for Gaussian white noise.

Abstract—We present a compressive sensing (CS) approach for secret key agreement based on UWB channel reciprocity. More generally, we show that the CS problem can be used for solving the distributed source coding problem. We also show that the proposed CS-based secret key agreement protocol (CS-SKAP) provides perfect secrecy, a better key agreement probability, and sometimes a longer secret key length, compared to the traditional syndrome-based distributed source coding techniques.

Felix Krahmer and Rachel Ward

To date, the theory for compressive sampling with frequency measurements has only been developed for bases that, like the canonical or `pixel' basis, are incoherent with the Fourier basis. In many applications, such as Magnetic Resonance Imaging (MRI) or inverse scattering, one instead acquires images that are sparse in transform domains such as spatial finite differences or wavelets, which are not incoherent with the Fourier basis. For these applications, overwhelming empirical evidence and heuristic arguments have suggested that superior image reconstruction can be obtained through certain variable density sampling strategies which concentrate on lower frequencies. Here we fortify these empirical studies with theoretical reconstruction guarantees, showing that sampling frequencies according to suitable power-law densities enables image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results hinge on proving that the coherence between the Fourier and Haar wavelet bases is sufficiently concentrated on low frequencies that an incoherent preconditioned system results by resampling the Fourier basis appropriately.
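The "concentrate on lower frequencies" strategy is easy to mimic numerically. The toy sampler below draws a frequency mask from a power-law density peaked at DC; the exact density the theorem requires differs from this, so treat it as purely illustrative:

```python
import numpy as np

def variable_density_mask(n, m, power=1.0, seed=0):
    """Draw m of n DFT frequencies without replacement, with probability
    decaying as a power law in the (wrap-around) distance to DC, so low
    frequencies are sampled far more often than a uniform mask would."""
    rng = np.random.default_rng(seed)
    k = np.minimum(np.arange(n), n - np.arange(n))  # distance to DC
    p = 1.0 / (1.0 + k) ** power
    p /= p.sum()
    return np.sort(rng.choice(n, size=m, replace=False, p=p))
```

With `power=1.0` and `m = n/4`, roughly three quarters of the selected frequencies land in the lowest quarter of the spectrum, versus one quarter for uniform sampling.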

A proximity algorithm solving indicator functions based l1-norm minimization problems in compressive sampling by Feishe Chen (current student), Lixin Shen, Bruce W. Suter, and Yuesheng Xu

Abstract: An accurate and efficient algorithm for an ℓ1-norm minimization problem is highly needed and is crucial for the success of sparse signal recovery in compressive sampling, a recent development in the field of data analysis. Most existing algorithms in the literature give an approximated solution to the problem. We tackle the ℓ1-norm minimization problem by equivalently reformulating it via an indicator function which describes the constraints of the problem. It turns out that the resulting model can be solved efficiently and accurately by using an elegant proximity operator based algorithm. We establish the convergence analysis of the resulting algorithm. Numerical experiments show that the proposed algorithm performs well for sparse signals with magnitudes over a high dynamic range. Furthermore, it performs significantly better than the well-known algorithm NESTA in terms of the quality of restored signals and the computational complexity measured in CPU time consumed.

Image reconstruction in fluorescence diffuse optical tomography (FDOT) is a highly ill-posed inverse problem due to a large number of unknowns and limited measurements. In FDOT, the fluorophore distribution is often sparse in the imaging domain, since most fluorophores are designed to accumulate in relatively small regions. Compressive sensing theory has shown that sparse signals can be recovered exactly from only a small number of measurements when the forward sensing matrix is sufficiently incoherent. In this Letter, we present a method of preconditioning the FDOT forward matrix to reduce its coherence. The reconstruction results using real data obtained from a phantom experiment show visual and quantitative improvements due to preconditioning in conjunction with convex relaxation and greedy-type sparse signal recovery algorithms.

Variations on a theorem of Candès, Romberg and Tao

The CRT theorem reconstructs a signal from a sparse set of frequencies, a paradigm of compressed sensing. The signal is assumed to be carried by a small number of points, s, in a large cyclic set of order N; the frequencies consist of Cs log N points chosen randomly in Z/NZ; the reconstruction is based on a minimal extrapolation in the Wiener algebra of Z/NZ of the restriction of the Fourier transform of the signal to the chosen set of frequencies. The probability of reconstructing the signal is nearly 1 when C is large. The statement should be modified when we want all signals carried by s points to be reconstructed in that way. The CRT approach is based on random matrices; here the approach is classical Fourier analysis.

Yongjun Kwak, Seunghoon Nam, Mehmet Akçakaya, Tamer A. Basha, Beth Goddu, Warren J. Manning, Vahid Tarokh, Reza Nezafat

Phase contrast (PC) cardiac MR is widely used for the clinical assessment of blood flow in cardiovascular disease. One of the challenges of PC cardiac MR is the long scan time, which limits both spatial and temporal resolution. Compressed sensing reconstruction with accelerated PC acquisitions is a promising technique to increase the scan efficiency. In this study, we sought to use the sparsity of the complex difference of the two flow-encoded images as an additional constraint term to improve the compressed sensing reconstruction of the corresponding accelerated PC data acquisition. Using retrospectively under-sampled data, the proposed reconstruction technique was optimized and validated in vivo on 15 healthy subjects. Then, prospectively under-sampled data was acquired on 11 healthy subjects and reconstructed with the proposed technique. The results show that there is good agreement between the cardiac output measurements from the fully sampled data and the proposed compressed sensing reconstruction method using complex difference sparsity up to acceleration rate 5. In conclusion, we have developed and evaluated an improved reconstruction technique for accelerated PC cardiac MR that uses the sparsity of the complex difference of the two flow-encoded images.

Introduction: Fast iterative thresholding methods [1,2] have been extensively studied as alternatives to convex optimization for high-dimensional large-sized problems in compressed sensing (CS) [3]. A common large-sized problem is dynamic contrast enhanced (DCE) MRI, where the dynamic measurements possess data redundancies that can be used to estimate non-zero signal locations. In this work, we present a novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) that combines a non-zero location assumption and an approximate message passing term to previous methods [4]. The method can reduce computational complexity and improve reconstruction accuracy. We demonstrate the proposed reconstruction using 4D breast DCE MRI data, of a size that is often challenging for constrained reconstructions.

In conventional group testing, the goal is to detect a small subset of defective items D in a large population N by pooling \textit{arbitrary} subsets of N into different pools. The result of each group test T is a binary output depending on whether the group contains a defective item or not. The main challenge is to minimize the number of pools required to identify the set D. Motivated by applications in network monitoring and infection propagation, we consider the problem of group testing with graph constraints. As opposed to conventional group testing where \textit{any} subset of items can be pooled, here a test is admissible if it induces a connected subgraph H⊂G. In contrast to the non-adaptive pooling process used in previous work, we first show that by exploiting an adaptive strategy, one can dramatically reduce the number of tests. More specifically, for \textit{any} graph G, we devise a 2-approximation algorithm (and hence order optimal) that locates the set of defective items D. To obtain a good compromise between adaptive and non-adaptive strategies, we then devise a multi-stage algorithm. In particular, we show that if the set of defective items is uniformly distributed, then an l-stage pooling strategy can identify the defective set in O(l·|D|·|N|^{1/l}) tests, on average. In particular, for l=log(|N|) stages, the number of tests reduces to 4|D|log(|N|), which in turn is order optimal.
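The adaptive advantage under graph constraints can already be seen in the simplest possible case: a single defective item on a path graph, where every admissible test pools a contiguous interval (a connected subgraph of the path). A binary search then needs only about log2(n) connected tests. This toy is ours, not the paper's algorithm:

```python
def find_defective_on_path(n, defective):
    """Adaptive graph-constrained group testing on a path of n items with
    one defective. Each test pools the contiguous interval [lo, mid] (a
    connected subgraph) and reports whether it contains a defective item;
    binary search localizes the defective in about log2(n) tests."""
    lo, hi = 0, n - 1
    tests = 0
    while lo < hi:
        mid = (lo + hi) // 2
        tests += 1
        # One pooled test on the connected interval [lo, mid].
        if any(i in defective for i in range(lo, mid + 1)):
            hi = mid
        else:
            lo = mid + 1
    return lo, tests
```

A non-adaptive scheme restricted to connected pools cannot match this in general, which is the gap the paper's multi-stage strategies interpolate across.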

Online ℓ1-Dictionary Learning with Application to Novel Document Detection, Shiva Prasad Kasiviswanathan, Huahua Wang, Arindam Banerjee, Prem Melville

Abstract: Given their pervasive use, social media, such as Twitter, have become a leading source of breaking news. A key task in the automated identification of such news is the detection of novel documents from a voluminous stream of text documents in a scalable manner. Motivated by this challenge, we introduce the problem of online ℓ1-dictionary learning where, unlike traditional dictionary learning, which uses squared loss, the ℓ1-penalty is used for measuring the reconstruction error. We present an efficient online algorithm for this problem based on the alternating directions method of multipliers, and establish a sublinear regret bound for this algorithm. Empirical results on news-stream and Twitter data show that this online ℓ1-dictionary learning algorithm for novel document detection gives more than an order of magnitude speedup over the previously known batch algorithm, without any significant loss in quality of results. Our algorithm for online ℓ1-dictionary learning could be of independent interest.

Robust Doppler radar demodulation via compressed sensing, W. Xu, C. Gu, C. Li and M. Sarrafzadeh

The microwave Doppler radar sensor enables a non-contact approach for measuring movement in various applications. One of the most challenging issues is radar signal demodulation because it requires accurate DC offset compensation. Existing works either require a complicated setup procedure or are sensitive to environmental changes. In this letter, we discuss a compressed sensing based approach to effectively demodulate radar signals. Through ℓ1 minimization, the proposed method can reliably demodulate noisy signals with large measurement residuals. To validate the algorithm, we run three sets of experiments to evaluate the demodulation performance. Experimental results show that our proposed method is promising in both simulation and real-case studies.

Rare events can potentially occur in many applications. When manifested as opportunities to be exploited, risks to be ameliorated, or certain features to be extracted, such events become of paramount significance. Due to their sporadic nature, the information-bearing signals associated with rare events often lie in a large set of irrelevant signals and are not easily accessible. This paper provides a statistical framework for detecting such events so that an optimal balance between detection reliability and agility, as two opposing performance measures, is established. The core component of this framework is a sampling procedure that adaptively and quickly focuses the information-gathering resources on the segments of the dataset that bear the information pertinent to the rare events. Particular focus is placed on Gaussian signals with the aim of detecting signals with rare mean and variance values.

The fluorescence microscope is one of the most important tools in modern clinical diagnosis and biological science. However, its expense, size and limited field-of-view (FOV) are becoming bottlenecks in key applications such as large-scale phenotyping and low-resource-setting diagnostics. Here we report a low-cost, compact chip-scale fluorescence-imaging platform, termed the Fluorescence Talbot Microscopy (FTM), which utilizes the Talbot self-imaging effect to enable efficient fluorescence imaging over a large and directly-scalable FOV. The FTM prototype has a resolution of 1.2 microns and an FOV of 3.9 mm x 3.5 mm. We demonstrate the imaging capability of FTM on fluorescently labeled breast cancer cells (SK-BR-3) and HEK cells expressing green fluorescent protein.

Estimating the level set of a signal from measurements is a task that arises in a variety of fields, including medical imaging, astronomy, and digital elevation mapping. Motivated by scenarios where accurate and complete measurements of the signal may not be available, we examine here a simple procedure for estimating the level set of a signal from highly incomplete measurements, which may additionally be corrupted by additive noise. The proposed procedure is based on box-constrained Total Variation (TV) regularization. We demonstrate the performance of our approach, relative to existing state-of-the-art techniques for level set estimation from compressive measurements, via several simulation examples.
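The pipeline is easy to illustrate in 1D: fill in missing samples with a box-constrained TV-regularized fit, then threshold at the level of interest. This sketch is not the paper's solver; it uses a crude smoothed-TV gradient descent with projection onto the box, and all parameter values (`lam`, `step`, the 30% sampling rate) are illustrative.

```python
import numpy as np

def tv_inpaint_box(y, mask, lam=0.5, lo=0.0, hi=1.0, iters=500, step=0.2, eps=1e-3):
    """Box-constrained, TV-regularized fit to incomplete noisy samples.
    Uses gradient descent on a smoothed TV term, clipping to [lo, hi]."""
    x = np.clip(y * mask, lo, hi)
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d ** 2 + eps)        # gradient of smoothed |.|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w                    # d/dx[i]   of |x[i+1]-x[i]|
        tv_grad[1:] += w                     # d/dx[i+1] of |x[i+1]-x[i]|
        g = mask * (x - y) + lam * tv_grad   # data fidelity + TV
        x = np.clip(x - step * g, lo, hi)    # projection onto the box
    return x

n = 100
truth = (np.arange(n) > 40).astype(float)    # step signal; level set is the plateau
rng = np.random.default_rng(0)
mask = (rng.random(n) < 0.3).astype(float)   # keep only ~30% of the samples
y = mask * (truth + 0.1 * rng.standard_normal(n))
xhat = tv_inpaint_box(y, mask)
level_set = xhat > 0.5                       # threshold at the level of interest
```

The box constraint is what keeps the thresholding step meaningful: without it the TV fit can drift outside the signal's known range.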

The Bethe approximation, discovered in statistical physics, gives an efficient algorithm called belief propagation (BP) for approximating a partition function. BP empirically gives an accurate approximation for many problems, e.g., low-density parity-check codes, compressed sensing, etc. Recently, Vontobel gave a novel characterization of the Bethe approximation using graph covers. In this paper, a new approximation based on the Bethe approximation is proposed. The new approximation is derived from Vontobel's graph-cover characterization and is expressed using the edge zeta function, which is related to the Hessian of the Bethe free energy, as shown by Watanabe and Fukumizu. Under certain conditions, the new approximation is proved to be asymptotically better than the Bethe approximation.

The alternating direction method of multipliers (ADMM) has sparked recent interest as an efficient optimization tool for solving imaging inverse problems, such as deconvolution and reconstruction. ADMM-based approaches achieve state-of-the-art speed, by adopting a divide and conquer strategy that splits a hard problem into simpler, efficiently solvable sub-problems (e.g., using fast Fourier or wavelet transforms, or proximity operators with low computational cost). In deconvolution problems, one of these sub-problems involves a matrix inversion (i.e., solving a linear system), which can be performed efficiently (in the discrete Fourier domain) if the observation operator is circulant, that is, under periodic boundary conditions. This paper proposes an ADMM approach for image deconvolution in the more realistic scenario of unknown boundary conditions. To estimate the image and its unknown boundary, we model the observation operator as a composition of a cyclic convolution with a spatial mask that excludes those pixels where the cyclic convolution is invalid, i.e., the unknown boundary. The proposed method can also handle, at no additional cost, problems that combine inpainting (recovery of missing pixels) and deblurring. We show that the resulting algorithm inherits the convergence guarantees of ADMM and illustrate its state-of-the-art performance on non-cyclic deblurring (with and without inpainting of interior pixels) under total-variation (TV) regularization.
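The key modeling idea, the observation operator as a mask composed with a cyclic convolution, is simple to write down. A sketch of the forward model y = M(Hx) in numpy (the image, kernel, and 2-pixel mask border here are illustrative, not from the paper):

```python
import numpy as np

def observe(x, kernel, mask):
    """y = M(Hx): cyclic convolution H (pointwise product in the FFT domain)
    followed by a mask M that zeroes the pixels where the periodic
    boundary assumption is invalid."""
    Hx = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(kernel, s=x.shape)))
    return mask * Hx

n = 32
rng = np.random.default_rng(0)
x = rng.random((n, n))                                # stand-in test image
kernel = np.zeros((n, n)); kernel[:3, :3] = 1 / 9     # 3x3 box blur
mask = np.zeros((n, n)); mask[2:-2, 2:-2] = 1         # discard a 2-pixel border
y = observe(x, kernel, mask)
```

Because H stays circulant, the linear-system sub-problem inside ADMM can still be solved in the Fourier domain; only the cheap masking step accounts for the unknown boundary.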

Consider the problem of reconstructing a multidimensional signal from partial information. Without any additional assumptions, this problem is ill-posed. However, for signals such as natural images or movies, the minimal total variation estimate consistent with the measurements often produces a good approximation to the underlying signal, even if the number of measurements is far smaller than the ambient dimensionality. While reconstruction guarantees and optimal measurement designs have been established for related L1-minimization problems, the theory for total variation minimization had remained elusive until recently, when guarantees for two-dimensional images were established. This paper extends the recent theoretical results to signals of arbitrary dimension d > 1. To be precise, we show that a multidimensional signal can be reconstructed from O(s d log(N^d)) linear measurements using total variation minimization to within a factor of the best s-term approximation of its gradient. The reconstruction guarantees we provide are necessarily optimal up to polynomial factors in the spatial dimension d and a logarithmic factor in the signal dimension N^d. The proof relies on bounds in approximation theory concerning the compressibility of wavelet expansions of bounded-variation functions.

We propose a compressive sensing algorithm that exploits geometric properties of images to recover images of high quality from few measurements. The image reconstruction is done by iterating the following two steps: 1) estimation of normal vectors of the image level curves and 2) reconstruction of an image fitting the normal vectors, the compressed sensing measurements and the sparsity constraint. The proposed technique can naturally extend to non-local operators and graphs to exploit the repetitive nature of textured images in order to recover fine detail structures. In both cases, the problem is reduced to a series of convex minimization problems that can be efficiently solved with a combination of variable splitting and augmented Lagrangian methods, leading to fast and easy-to-code algorithms. Extensive experiments show a clear improvement over related state-of-the-art algorithms in the quality of the reconstructed images and the robustness of the proposed method to noise, different kinds of images and reduced measurements.

In Compressive Sensing, the Restricted Isometry Property (RIP) ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. It is by now well-known that Gaussian (or, more generally, sub-Gaussian) random matrices satisfy the RIP under certain conditions on the number of measurements. Their use can be limited in practice, however, due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. These issues have recently motivated considerable effort towards studying the RIP for structured random matrices. In this paper, we study the RIP for block diagonal measurement matrices where each block on the main diagonal is itself a sub-Gaussian random matrix. Our main result states that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on certain properties of the basis in which the signals are sparse. In the best case, these matrices perform nearly as well as dense Gaussian random matrices, despite having many fewer nonzero entries.
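It is easy to build such a block diagonal matrix and check near-isometry empirically. The sketch below (all dimensions illustrative; Gaussian blocks as one instance of sub-Gaussian) scales each block so that E||Φx||² = ||x||², then measures the deviation |‖Φx‖²/‖x‖² − 1| on random sparse vectors:

```python
import numpy as np

def block_diagonal_gaussian(m, n, J, rng):
    """J Gaussian blocks of size m x n on the main diagonal, with entries
    scaled by 1/sqrt(m) so that E||Phi x||^2 = ||x||^2."""
    Phi = np.zeros((J * m, J * n))
    for j in range(J):
        Phi[j * m:(j + 1) * m, j * n:(j + 1) * n] = \
            rng.standard_normal((m, n)) / np.sqrt(m)
    return Phi

rng = np.random.default_rng(0)
m, n, J, s = 20, 50, 4, 5
Phi = block_diagonal_gaussian(m, n, J, rng)

# Empirical near-isometry check on random s-sparse vectors
errs = []
for _ in range(200):
    x = np.zeros(J * n)
    idx = rng.choice(J * n, size=s, replace=False)
    x[idx] = rng.standard_normal(s)
    errs.append(abs(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2 - 1))
print(Phi.shape, np.mean(errs))
```

Note the storage advantage the abstract alludes to: this 80 x 200 matrix has only J·m·n = 4000 nonzeros instead of 16000 for a dense Gaussian matrix, and applying it block-by-block parallelizes trivially.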

Christina Teflioudi, Faraz Makari, Rainer Gemulla

Abstract—We discuss parallel and distributed algorithms for large-scale matrix completion on problems with millions of rows, millions of columns, and billions of revealed entries. We focus on in-memory algorithms that run on a small cluster of commodity nodes; even very large problems can be handled effectively in such a setup. Our DALS, ASGD, and DSGD++ algorithms are novel variants of the popular alternating least squares and stochastic gradient descent algorithms; they exploit thread-level parallelism, in-memory processing, and asynchronous communication. We provide some guidance on the asymptotic performance of each algorithm and investigate the performance of both our algorithms and previously proposed MapReduce algorithms in large-scale experiments. We found that DSGD++ outperforms competing methods in terms of overall runtime, memory consumption, and scalability. Using DSGD++, we can factor a matrix with 10B entries on 16 compute nodes in around 40 minutes.
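The serial core that DSGD++-style algorithms parallelize is the plain SGD update on revealed entries; the distributed variants partition the entries into interchangeable blocks so threads and nodes can run this loop concurrently without conflicting on rows of U or V. A single-threaded sketch on a toy problem (this is not the paper's DSGD++ implementation; all hyperparameters are illustrative):

```python
import numpy as np

def sgd_mf(entries, n_rows, n_cols, rank=5, lr=0.05, reg=0.02, epochs=60, seed=0):
    """Serial SGD matrix factorization: minimize sum of (r - U[i].V[j])^2
    plus L2 regularization over the revealed entries (i, j, r)."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_rows, rank))
    V = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, r in entries:
            err = r - U[i] @ V[j]
            U[i], V[j] = (U[i] + lr * (err * V[j] - reg * U[i]),
                          V[j] + lr * (err * U[i] - reg * V[j]))
    return U, V

# Toy problem: observe ~60% of the entries of a random rank-2 matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
obs = [(i, j, A[i, j]) for i in range(30) for j in range(20) if rng.random() < 0.6]
U, V = sgd_mf(obs, 30, 20)
rmse = np.sqrt(np.mean([(r - U[i] @ V[j]) ** 2 for i, j, r in obs]))
print(rmse)
```

Since updating entry (i, j) touches only row U[i] and row V[j], entries whose row and column sets are disjoint can be processed in parallel, which is the structural observation the block-partitioned algorithms exploit.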

Matrix Completion by Franz J. Király, Louis Theran, Ryota Tomioka

Image Credit: NASA/JPL/Space Science Institute,
Full-Res: W00076440.jpg
W00076440.jpg was taken on October 18, 2012 and received on Earth October 19, 2012. The camera was pointing toward SATURN-RINGS at approximately 339,982 miles (547,148 kilometers) away, and the image was taken using the CL1 and CL2 filters.