Tuesday, November 12, 2013

Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

In Sunday Morning Insight: The Map Makers, we saw how phase transitions are changing our landscape. In particular, they provide a sound acid test for the robustness of the assumptions used in reconstruction solvers, but we also saw that they can be used for other purposes. If you recall the end of that entry, I mentioned a specific problem

Many different models of the human visual system could probably be evaluated through this acid test.

that came up during an informal discussion at the last Paris Machine Learning Meetup last month (yes, there are very interesting side conversations on top of great speakers at those meetups). In today's paper, the authors use a compressive sensor and the sharp phase transition of its reconstruction to evaluate how many measurements that sensor needs in order to accurately recover the spiking of a neuronal population. Wow.

Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions by Eftychios A. Pnevmatikakis, Liam Paninski
We propose a compressed sensing (CS) calcium imaging framework for monitoring large neuronal populations, where we image randomized projections of the spatial calcium concentration at each timestep, instead of measuring the concentration at individual locations. We develop scalable nonnegative deconvolution methods for extracting the neuronal spike time series from such observations. We also address the problem of demixing the spatial locations of the neurons using rank-penalized matrix factorization methods. By exploiting the sparsity of neural spiking we demonstrate that the number of measurements needed per timestep is significantly smaller than the total number of neurons, a result that can potentially enable imaging of larger populations at considerably faster rates compared to traditional raster-scanning techniques. Unlike traditional CS setups, our problem involves a block-diagonal sensing matrix and a non-orthogonal sparse basis that spans multiple timesteps. We study the effect of these distinctive features in a noiseless setup using recent results relating conic geometry to CS. We provide tight approximations to the number of measurements needed for perfect deconvolution for certain classes of spiking processes, and show that this number displays a “phase transition,” similar to phenomena observed in more standard CS settings; however, in this case the required measurement rate depends not just on the mean sparsity level but also on other details of the underlying spiking process.
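To make the setup concrete, here is a minimal toy sketch in Python of the kind of model the abstract describes: calcium traces that decay with an AR(1) dynamic and are driven by sparse nonnegative spikes, each timestep observed only through a few random projections (a block-diagonal sensing matrix overall), and the spikes recovered by a single nonnegative least squares solve once the traces are rewritten in terms of the spikes. The sizes, the decay constant gamma, and the dense NNLS formulation are illustrative assumptions on my part, not the scalable solvers developed in the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

N, T = 40, 60          # neurons, timesteps (toy sizes)
n = 12                 # compressive measurements per timestep, n << N
gamma = 0.95           # per-timestep decay of the calcium indicator (AR(1) model)

# Sparse nonnegative spikes and the calcium traces they induce
S = (rng.random((N, T)) < 0.02) * rng.uniform(0.5, 1.5, (N, T))
C = np.zeros((N, T))
for t in range(T):
    C[:, t] = (gamma * C[:, t - 1] if t > 0 else 0) + S[:, t]

# One random projection per timestep: y_t = A_t c_t (block-diagonal sensing overall)
A = rng.standard_normal((T, n, N)) / np.sqrt(n)
Y = np.stack([A[t] @ C[:, t] for t in range(T)])

# Since c_t = sum_{tau <= t} gamma^(t - tau) s_tau, the measurements are linear in the
# spikes, so the whole problem becomes one big nonnegative least squares in vec(S).
# (Dense toy formulation only; the paper develops scalable solvers for this step.)
decay = np.array([[gamma ** (t - tau) if tau <= t else 0.0 for tau in range(T)]
                  for t in range(T)])
M = np.zeros((n * T, N * T))
for t in range(T):
    for tau in range(t + 1):
        M[t * n:(t + 1) * n, tau * N:(tau + 1) * N] = decay[t, tau] * A[t]

s_hat, _ = nnls(M, Y.ravel())
S_hat = s_hat.reshape(T, N).T
print("max |S_hat - S| =", np.abs(S_hat - S).max())
```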
I note the following surprise from the paper:

We show that as the number of measurements increases, the probability of successful recovery undergoes a phase transition, and study the resulting phase transition curve (PTC), i.e., the number of measurements per timestep required for accurate deconvolution as a function of the number of spikes. Our analysis uses recent results that connect CS with conic geometry through the “statistical dimension” (SD) of descent cones (Amelunxen et al., 2013). We demonstrate that in many cases of interest, the SD provides a very good estimate of the PTC. Moreover, our analysis shows that the exact location of the spikes (and not just their number) has an effect on the required number of measurements; this is a somewhat surprising and non-standard feature of our non-orthogonal setting
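To get a feel for what such a phase transition looks like empirically, here is a small Monte Carlo sketch. It uses the textbook setting, a single k-sparse nonnegative vector observed through a Gaussian matrix and recovered by nonnegative least squares, rather than the paper's block-diagonal, non-orthogonal model, so the location of the transition will differ; all sizes are toy values of my choosing.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

N = 100        # ambient dimension (toy)
k = 10         # number of nonzero (spiking) entries
trials = 20

# Sweep the number of measurements and count exact recoveries.
for n in range(5, 55, 5):
    successes = 0
    for _ in range(trials):
        x = np.zeros(N)
        x[rng.choice(N, size=k, replace=False)] = rng.uniform(0.5, 1.5, size=k)
        A = rng.standard_normal((n, N)) / np.sqrt(n)
        x_hat, _ = nnls(A, A @ x)      # nonnegativity is the only prior used here
        successes += np.allclose(x_hat, x, atol=1e-6)
    print(f"n = {n:2d} measurements: {successes}/{trials} exact recoveries")
```

As n sweeps past a fairly narrow band, the empirical success rate jumps from near 0 to near 1; the point of the paper is to predict where that band sits for the calcium imaging model, via the statistical dimension of the relevant descent cones.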



