Monday, September 25, 2017

Second-Order Optimization for Non-Convex Machine Learning, FLAG, GIANT and Non-Convex Optimization Under Inexact Hessian

I like the fact that the hyperparameter tuning required by SGD could be reduced.


Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study by Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
The resurgence of deep learning, as a highly effective machine learning paradigm, has brought back to life the old optimization question of non-convexity. Indeed, the challenges related to the large-scale nature of many modern machine learning applications are severely exacerbated by the inherent non-convexity in the underlying models. In this light, efficient optimization algorithms which can be effectively applied to such large-scale and non-convex learning problems are highly desired. In doing so, however, the bulk of research has been almost completely restricted to the class of 1st-order algorithms. This is despite the fact that employing the curvature information, e.g., in the form of the Hessian, can indeed help with obtaining effective methods with desirable convergence properties for non-convex problems, e.g., avoiding saddle-points and convergence to local minima. The conventional wisdom in the machine learning community is that the application of 2nd-order methods, i.e., those that employ the Hessian as well as gradient information, can be highly inefficient. Consequently, 1st-order algorithms, such as stochastic gradient descent (SGD), have been at center stage for solving such machine learning problems. Here, we aim at addressing this misconception by considering efficient and stochastic variants of Newton's method, namely, sub-sampled trust-region and cubic regularization, whose theoretical convergence properties have recently been established in [Xu 2017]. Using a variety of experiments, we empirically evaluate the performance of these methods for solving non-convex machine learning applications. In doing so, we highlight the shortcomings of 1st-order methods, e.g., high sensitivity to hyper-parameters such as step-size and undesirable behavior near saddle-points, and showcase the advantages of employing curvature information as an effective remedy.
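To make this concrete, here is a minimal sketch (ours, not the authors' code) of a cubic-regularized Newton step: the step approximately minimizes the model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$, here with a deliberately naive gradient-descent subproblem solver and a toy objective whose origin is a saddle point. All parameter values and the test function are illustrative assumptions.

```python
import numpy as np

def cubic_newton_step(g, H, sigma, n_inner=200, lr=0.05):
    """Approximately minimize the cubic model
       m(s) = g's + 0.5 s'Hs + (sigma/3)||s||^3
    by plain gradient descent on s (a deliberately simple inexact solver)."""
    s = np.zeros_like(g)
    for _ in range(n_inner):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return s

# Toy non-convex objective f(x) = 0.25||x||^4 - 0.5||x||^2, saddle at the origin.
f_grad = lambda x: (x @ x) * x - x
f_hess = lambda x: (x @ x) * np.eye(x.size) + 2 * np.outer(x, x) - np.eye(x.size)

x = np.array([0.1, -0.05])  # start near the saddle point, where SGD tends to stall
for _ in range(50):
    x = x + cubic_newton_step(f_grad(x), f_hess(x), sigma=1.0)
print(np.linalg.norm(x))  # approaches 1, i.e., a true minimizer of f
```

The negative curvature at the saddle is exactly what the cubic model exploits to push the iterate away from it, which is the behavior the paper contrasts with 1st-order methods.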

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization by Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

For distributed computing environments, we consider the canonical machine learning problem of empirical risk minimization (ERM) with quadratic regularization, and we propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, and then it sends this direction to the main driver. The driver, then, averages all the ANT directions received from workers to form a Globally Improved ANT (GIANT) direction. GIANT naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communication. GIANT is highly communication efficient in that, for $d$-dimensional data uniformly distributed across $m$ workers, it has 4 or 6 rounds of communication and $O(d \log m)$ communication complexity per iteration. Theoretically, we show that GIANT's convergence rate is faster than that of first-order methods and existing distributed Newton-type methods. From a practical point of view, a highly beneficial feature of GIANT is that it has only one tuning parameter---the number of iterations of the local solver for computing an ANT direction. This is indeed in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, which have several tuning parameters and whose performance can be greatly affected by the specific choices of such parameters. In this light, we empirically demonstrate the superior performance of GIANT compared with other competing methods.
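The averaging idea is simple enough to simulate on a single machine. Below is a minimal sketch of a GIANT-style iteration for ridge-regularized least squares; the toy data, shard layout, and all parameter values are our assumptions, not the authors' implementation.

```python
import numpy as np

def giant_step(shards, w, lam):
    """One simulated GIANT iteration for ridge-regularized least squares."""
    n_total = sum(len(y) for _, y in shards)
    # Global gradient (one round of communication in the real method).
    g = sum(X.T @ (X @ w - y) for X, y in shards) / n_total + lam * w
    # Each worker solves a local Newton system on its shard: the ANT direction.
    dirs = [np.linalg.solve(X.T @ X / len(y) + lam * np.eye(w.size), g)
            for X, y in shards]
    # The driver averages the ANT directions into the GIANT direction.
    return w - np.mean(dirs, axis=0)

rng = np.random.default_rng(0)
n, d, m = 10000, 20, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
shards = [(X[i::m], y[i::m]) for i in range(m)]  # data split uniformly across m workers
w = np.zeros(d)
for _ in range(10):
    w = giant_step(shards, w, lam=1e-3)
print(np.linalg.norm(w - w_true))  # small after a handful of iterations
```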


The celebrated Nesterov's accelerated gradient method offers great speed-ups compared to the classical gradient descent method, as it attains the optimal first-order oracle complexity for smooth convex optimization. On the other hand, the popular AdaGrad algorithm competes with mirror descent under the best regularizer by adaptively scaling the gradient. Recently, it has been shown that accelerated gradient descent can be viewed as a linear combination of gradient descent and mirror descent steps. Here, we draw upon these ideas and present a fast linearly-coupled adaptive gradient method (FLAG) as an accelerated version of AdaGrad, and show that our algorithm can indeed offer the best of both worlds. Like Nesterov's accelerated algorithm and its proximal variant, FISTA, our method has a convergence rate of $1/T^2$ after $T$ iterations. Like AdaGrad, our method adaptively chooses a regularizer, in a way that performs almost as well as the best choice of regularizer in hindsight.
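FLAG couples Nesterov-style acceleration with the AdaGrad scaling; as a reminder of the building block it accelerates, here is a minimal sketch of plain AdaGrad on a badly scaled quadratic (the problem and parameter values are our own illustrative choices, not FLAG itself):

```python
import numpy as np

def adagrad(grad, x0, lr=0.5, T=500, eps=1e-8):
    """Plain AdaGrad: per-coordinate steps scaled by the accumulated squared
    gradients -- the adaptive regularizer that FLAG builds on."""
    x, G = x0.copy(), np.zeros_like(x0)
    for _ in range(T):
        g = grad(x)
        G += g * g                         # accumulate squared gradients
        x -= lr * g / (np.sqrt(G) + eps)   # adaptively scaled step
    return x

# Badly scaled quadratic: AdaGrad equalizes the effective per-coordinate step.
A = np.diag([100.0, 1.0])
print(adagrad(lambda x: A @ x, np.array([1.0, 1.0])))  # both coordinates -> 0
```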


We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using an approximate solution of the corresponding sub-problems, we provide iteration complexity bounds for achieving $\epsilon$-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
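For the finite-sum case, the construction is as simple as it sounds. Here is a sketch of a uniformly sub-sampled Hessian and a check of its quality on a toy least-squares problem; the sample size and the test problem are our assumptions.

```python
import numpy as np

def subsampled_hessian(hess_i, n, sample_size, rng):
    """Uniformly sub-sampled Hessian for a finite sum f = (1/n) sum_i f_i:
    H_S = (1/|S|) sum_{i in S} H_i, an inexact Hessian in the sense above."""
    S = rng.choice(n, size=sample_size, replace=False)
    return sum(hess_i(i) for i in S) / sample_size

# Toy finite sum: f_i is least squares on row a_i, so H_i = a_i a_i^T.
rng = np.random.default_rng(1)
n, d = 5000, 10
A = rng.normal(size=(n, d))
hess_i = lambda i: np.outer(A[i], A[i])
H_full = A.T @ A / n
H_sub = subsampled_hessian(hess_i, n, sample_size=500, rng=rng)
print(np.linalg.norm(H_sub - H_full, 2) / np.linalg.norm(H_full, 2))  # small
```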


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, September 21, 2017

CSHardware: InView Multi-Pix Camera Demonstrates 1FPS SWIR Imaging



It is rare to see a hardware embodiment of CS and DL ideas in the sensing area and in production. We mentioned the developments at InView a while back. Here is a new announcement using compressive sensing technology and neural networks in the SWIR sensing realm. From the press release:
"...Having already harnessed the computational power of the famous Single-Pixel Camera architecture of the InView210 SWIR imager, InView has now enhanced its speed and image processing capability by incorporating a small array of pixels and new compressive computational methods. InView takes advantage of parallel measurements, matrix processing and efficient reconstruction algorithms to produce the highest resolution SWIR images at rates of just a few seconds per frame. As shown below, multi-pixel Compressive Sensing magnifies the resolution of a small pixel array. On the left, is a low-resolution image directly measured from a 64 x 64 InGaAs pixel array. When that same 64 x 64 array is used with compressive sensing, the image is transformed computationally into a detailed 512 x 512 image...."
The rest is here.



Tuesday, September 19, 2017

Stabilizing GAN Training with Multiple Random Projections - implementation -

Iacopo recently pointed out the following to us. How can you use the fact that the true data distribution often lies on a lower-dimensional manifold when training generative adversarial networks? Random projections look like the answer!


Training generative adversarial networks is unstable in high dimensions when the true data distribution lies on a lower-dimensional manifold. The discriminator is then easily able to separate nearly all generated samples, leaving the generator without meaningful gradients. We propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. We show that individual discriminators then provide stable gradients to the generator, and that the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.

Source code and models are here: http://www.cse.wustl.edu/~ayan/rpgan/
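To illustrate the scheme, here is a self-contained PyTorch sketch (not the authors' code): one generator is trained against K discriminators, each behind its own fixed random projection. The toy data, sizes, and architectures are our assumptions.

```python
import torch, torch.nn as nn

torch.manual_seed(0)
d_data, d_proj, d_z, K, B = 64, 8, 16, 4, 64   # illustrative sizes

G = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_data))
# One discriminator per fixed random low-dimensional projection of the data.
projs = [torch.randn(d_data, d_proj) / d_proj ** 0.5 for _ in range(K)]
Ds = [nn.Sequential(nn.Linear(d_proj, 32), nn.ReLU(), nn.Linear(32, 1))
      for _ in range(K)]
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_Ds = [torch.optim.Adam(D.parameters(), lr=1e-3) for D in Ds]
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(B, 1), torch.zeros(B, 1)

def real_batch():  # stand-in data living on a 2-d manifold in 64-d space
    t = torch.randn(B, 2)
    return torch.cat([t, torch.zeros(B, d_data - 2)], dim=1)

for step in range(200):
    x, fake = real_batch(), G(torch.randn(B, d_z))
    for P, D, opt in zip(projs, Ds, opt_Ds):   # each D sees only its projection
        loss_D = bce(D(x @ P), ones) + bce(D(fake.detach() @ P), zeros)
        opt.zero_grad(); loss_D.backward(); opt.step()
    # The generator must fool all K low-dimensional discriminators at once.
    loss_G = sum(bce(D(fake @ P), ones) for P, D in zip(projs, Ds)) / K
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```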




Friday, September 15, 2017

Deep Null Space, Deep Factorization and the Last Image of Cassini


Deep Null Space property: what an aptly named property for a blog entry featuring the last image of Cassini taken before it entered Saturn's atmosphere.

We have followed Cassini since 2004 here on Nuit Blanche.

In a different direction, Deep Factorization is another aspect of The Great Convergence. Here are instances of it in the following three papers:

We study a deep matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a vector of parameters satisfying a sparsity constraint. We provide sharp conditions on the structure of the model that guarantee the stable recovery of the factors from the knowledge of X and the model for the factors. This is crucial in order to interpret the factors and the intermediate features obtained when applying a few factors to a datum. When K = 1, the paper provides compressed sensing statements; K = 2 covers (for instance) non-negative matrix factorization, dictionary learning, low-rank approximation, and phase recovery. The particularity of this paper is to extend the study to deep problems. As an illustration, we detail the analysis and provide (entirely computable) guarantees for the stable recovery of a (non-neural) sparse convolutional network.


We study a deep matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters satisfying a model (for instance sparsity, grouped sparsity, non-negativity, constraints defining a convolution network...). We call the problem deep or multi-layer because the number of factors is not limited. In the practical situations we have in mind, we can typically have K = 10 or 100. This work aims at identifying conditions on the structure of the model that guarantee the stable recovery of the factors from the knowledge of X and the model for the factors. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a necessary and sufficient condition called the Deep Null Space Property (because of the analogy with the usual Null Space Property in the compressed sensing framework) which guarantees that even an inaccurate optimization algorithm for the factorization stably recovers the factors. We illustrate the theory with a practical example where the deep factorization is a convolutional network.
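To fix ideas, here is a small sketch of the generative model both abstracts study: a product of K factors, each a fixed linear operator applied to a sparse parameter vector. Sizes and operators are our illustrative choices; the recovery guarantees themselves are the papers' contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, p, s = 16, 3, 40, 3  # factor size, depth, parameters per factor, sparsity

# Fixed linear operators A_k mapping parameter vectors to n x n factors.
ops = [rng.normal(size=(n * n, p)) for _ in range(K)]

def make_factor(A):
    """One factor: a fixed linear operator applied to an s-sparse vector."""
    h = np.zeros(p)
    h[rng.choice(p, size=s, replace=False)] = rng.normal(size=s)
    return (A @ h).reshape(n, n), h

factors, params = zip(*(make_factor(A) for A in ops))
X = np.linalg.multi_dot(factors)  # the observed matrix: a product of K factors
# The papers ask: when do X and the model (ops, sparsity) determine the h_k,
# up to scale rearrangement, stably even under inexact optimization?
```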

Speech signals are a complex intermingling of various informative factors, and this information blending makes decoding any of the individual factors extremely difficult. A natural idea is to factorize each speech frame into independent factors, though it turns out to be even more difficult than decoding each individual factor. A major encumbrance is that the speaker trait, a major factor in speech signals, has been suspected to be a long-term distributional pattern and so not identifiable at the frame level. In this paper, we demonstrate that the speaker factor is also a short-time spectral pattern and can be largely identified with just a few frames using a simple deep neural network (DNN). This discovery motivated a cascade deep factorization (CDF) framework that infers speech factors in a sequential way, where factors previously inferred are used as conditional variables when inferring other factors. Our experiment on an automatic emotion recognition (AER) task demonstrated that this approach can effectively factorize speech signals, and using these factors, the original speech spectrum can be recovered with high accuracy. This factorization and reconstruction approach provides a novel tool for many speech processing tasks.





Image Credit: NASA/JPL-Caltech/Space Science Institute
File name: W00110282.jpg, https://saturn.jpl.nasa.gov/raw_images/426594
Taken: Sep. 14, 2017 7:59 PM
Received: Sep. 15, 2017 7:04 AM

The camera was pointing toward SATURN, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System.


Random Subspace with Trees for Feature Selection Under Memory Constraints / Learning Mixture of Gaussians with Streaming Data


Probably the last image of Titan taken by the Cassini spacecraft. Taken: Sep. 12, 2017 9:26 PM. Received: Sep. 13, 2017 10:19 AM. Image Credit: NASA/JPL-Caltech/Space Science Institute


As our capabilities to produce features from data get larger every day, we are now reaching the stage where we have to learn/infer under a streaming constraint: i.e., we get to see each feature once and then have to produce some inference. The first paper does this in the random forest setting, while the second paper looks at it when building a mixture of Gaussians (relevant: Compressive Statistical Learning with Random Feature Moments, Sketching for Large-Scale Learning of Mixture Models, SketchMLbox). Enjoy!



Dealing with datasets of very high dimension is a major challenge in machine learning. In this paper, we consider the problem of feature selection in applications where the memory is not large enough to contain all features. In this setting, we propose a novel tree-based feature selection approach that builds a sequence of randomized trees on small subsamples of variables, mixing variables already identified as relevant by previous models with variables randomly selected among the other variables. As our main contribution, we provide an in-depth theoretical analysis of this method in the infinite sample setting. In particular, we study its soundness with respect to common definitions of feature relevance and its convergence speed under various variable dependence scenarios. We also provide some preliminary empirical results highlighting the potential of the approach.
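A simplified version of the idea is easy to prototype. Below is a sketch (our simplification, not the authors' exact algorithm) that repeatedly fits trees on a memory-bounded subset mixing previously retained features with fresh random ones; data, sizes, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def random_subspace_selection(X, y, q=25, keep=5, n_rounds=40, seed=0):
    """Each round fits trees on at most q features (kept + fresh random ones)
    and retains the top `keep` by importance: memory never exceeds q features."""
    rng = np.random.default_rng(seed)
    kept = np.array([], dtype=int)
    for _ in range(n_rounds):
        fresh = rng.choice(X.shape[1], size=q - len(kept), replace=False)
        subset = np.union1d(kept, fresh)
        model = ExtraTreesClassifier(n_estimators=50, random_state=0)
        model.fit(X[:, subset], y)
        order = np.argsort(model.feature_importances_)[::-1]
        kept = subset[order[:keep]]  # retain the most relevant features so far
    return kept

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 200))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)  # only features 0, 1, 2 matter
print(sorted(random_subspace_selection(X, y)))  # should surface 0, 1 and 2
```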



In this paper, we study the problem of learning a mixture of Gaussians with streaming data: given a stream of $N$ points in $d$ dimensions generated by an unknown mixture of $k$ spherical Gaussians, the goal is to estimate the model parameters using a single pass over the data stream. We analyze a streaming version of the popular Lloyd's heuristic and show that the algorithm estimates all the unknown centers of the component Gaussians accurately if they are sufficiently separated. Assuming each pair of centers is $C\sigma$ apart with $C=\Omega((k\log k)^{1/4}\sigma)$, where $\sigma^2$ is the maximum variance of any Gaussian component, we show that asymptotically the algorithm estimates the centers optimally (up to constants); our center separation requirement matches the best known result for spherical Gaussians (Vempala and Wang). For finite samples, we show that a bias term based on the initial estimate decreases at an $O(1/{\rm poly}(N))$ rate while the variance decreases at the nearly optimal rate of $\sigma^2 d/N$.
Our analysis requires seeding the algorithm with a good initial estimate of the true cluster centers, for which we provide an online PCA based clustering algorithm. Indeed, the asymptotic per-step time complexity of our algorithm is the optimal $d\cdot k$ while the space complexity of our algorithm is $O(dk\log k)$.
In addition to the bias and variance terms which tend to $0$, the hard-thresholding based updates of the streaming Lloyd's algorithm are agnostic to the data distribution and hence incur an approximation error that cannot be avoided. However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with the estimation error decreasing at a nearly optimal rate, and tending to $0$ for $N\rightarrow \infty$.
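The hard-assignment update the analysis covers is essentially online k-means. Here is a minimal sketch, with a seeding that stands in for the paper's online-PCA initialization; all data and constants are our own toy choices.

```python
import numpy as np

def streaming_lloyd(stream, centers, t0=100):
    """Streaming Lloyd's heuristic: each point updates only its nearest center
    (hard thresholding), via an online running-mean update."""
    counts = np.full(len(centers), t0, dtype=float)  # pseudo-counts damp early steps
    for x in stream:
        j = np.argmin(np.linalg.norm(centers - x, axis=1))
        counts[j] += 1
        centers[j] += (x - centers[j]) / counts[j]
    return centers

rng = np.random.default_rng(0)
true = np.array([[5.0, 0.0], [-5.0, 0.0], [0.0, 5.0]])  # well-separated centers
stream = (true[rng.integers(3)] + rng.normal(size=2) for _ in range(20000))
init = true + rng.normal(scale=0.5, size=true.shape)  # good seeding, e.g. via PCA
print(streaming_lloyd(stream, init.copy()))  # close to the true centers
```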




Thursday, September 14, 2017

Overviews: Deep Learning on Reinforcement Learning, Music Generation and Recommender Systems

Cassini is taking its last image right now.



Today we have three overviews/reviews/tutorials on different aspects of Deep Learning. The first one is about Reinforcement Learning, the second is a book on music generation, and the third is on recommender systems (as taught at the latest RecSys meeting at Lake Como).

We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet. After listing a collection of RL resources, we present a brief summary, and close with discussions. 


This book is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. First, we propose a methodology based on four dimensions for our analysis:
- objective: What musical content is to be generated? (e.g., melody, accompaniment...)
- representation: What are the information formats used for the corpus and for the expected generated output? (e.g., MIDI, piano roll, text...)
- architecture: What type of deep neural network is to be used? (e.g., recurrent network, autoencoder, generative adversarial networks...)
- strategy: How to model and control the process of generation? (e.g., direct feedforward, sampling, unit selection...)
For each dimension, we conduct a comparative analysis of various models and techniques. For the strategy dimension, we propose a tentative typology of possible approaches and mechanisms. This classification is bottom-up, based on the analysis of many existing deep-learning based systems for music generation, which are described in this book. The last part of the book includes discussion and prospects.








Deep Learning and Inverse Problems


Photojournal: PIA21345
September 11, 2017
Credit
NASA/JPL-Caltech/Space Science Institute


Much like what is happening in compressive sensing, where sparse reconstruction solvers are being learned as if they were deep neural networks (LISTA, ...), the more general field of inverse problems (with a larger variety of regularizers) is also falling into this Great Convergence vortex (see previous entries here or here). Today we have the following two approaches:

We propose a new method that uses deep learning techniques to accelerate the popular alternating direction method of multipliers (ADMM) solution for inverse problems. The ADMM updates consist of a proximity operator, a least squares regression that includes a big matrix inversion, and an explicit solution for updating the dual variables. Typically, inner loops are required to solve the first two sub-minimization problems due to the intractability of the prior and the matrix inversion. To avoid such drawbacks or limitations, we propose an inner-loop free update rule with two pre-trained deep convolutional architectures. More specifically, we learn a conditional denoising auto-encoder which imposes an implicit data-dependent prior/regularization on ground-truth in the first sub-minimization problem. This design follows an empirical Bayesian strategy, leading to so-called amortized inference. For matrix inversion in the second sub-problem, we learn a convolutional neural network to approximate the matrix inversion, i.e., the inverse mapping is learned by feeding the input through the learned forward network. Note that training this neural network does not require ground-truth or measurements, i.e., it is data-independent. Extensive experiments on both synthetic data and real datasets demonstrate the efficiency and accuracy of the proposed method compared with the conventional ADMM solution using inner loops for solving inverse problems.
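The structure of the proposal is easier to see in code. Here is a schematic, runnable sketch (ours) of the inner-loop-free ADMM iteration, with simple stand-ins where the paper trains networks: an "inverse net" for the x-update and a "denoiser" for the z-update. The choices A = identity, rho = 1, and the toy signal are all our assumptions.

```python
import numpy as np

def inner_loop_free_admm(y, inverse_net, denoiser, n_iter=20):
    """Schematic ADMM (rho = 1) for  min_x 0.5||Ax - y||^2 + prior(x),
    with both inner loops replaced by learned maps:
      x-update: a network standing in for (A'A + rho I)^{-1}(A'y + rho(z - u))
      z-update: a denoising auto-encoder standing in for the proximal operator."""
    x = inverse_net(y)                 # learned inversion as a warm start
    z, u = x.copy(), np.zeros_like(x)
    for _ in range(n_iter):
        x = inverse_net(y + z - u)     # no inner least-squares loop
        z = denoiser(x + u)            # no inner proximal loop
        u = u + x - z                  # dual update (closed form, as usual)
    return z

# Stand-ins so the sketch runs (A = I, rho = 1, a smoothing "denoiser"):
rng = np.random.default_rng(0)
x_true = np.repeat(rng.normal(size=8), 8)          # piecewise-constant signal
y = x_true + 0.3 * rng.normal(size=64)
inverse_net = lambda v: v / 2.0                     # exact (A'A + I)^{-1} for A = I
denoiser = lambda v: np.convolve(v, np.ones(5) / 5, mode="same")
x_hat = inner_loop_free_admm(y, inverse_net, denoiser)
print(np.linalg.norm(y - x_true), np.linalg.norm(x_hat - x_true))
```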

Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN), which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its capabilities. We propose a learning framework, called Recurrent Inference Machines (RIM), in which we turn algorithm construction the other way round: given data and a task, train an RNN to learn an inference algorithm. Because RNNs are Turing complete [1, 2], they are capable of implementing any inference algorithm. The framework allows for an abstraction which removes the need for domain knowledge. We demonstrate in several image restoration experiments that this abstraction is effective, allowing us to achieve state-of-the-art performance on image denoising and super-resolution tasks and superior across-task generalization.
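And here is a toy PyTorch sketch of the RIM idea (our illustration, with made-up sizes): an RNN cell receives the current estimate and the gradient of the data-fidelity term and emits the update, so the inference algorithm itself is learned rather than hand-designed.

```python
import torch, torch.nn as nn

class RIMCell(nn.Module):
    """One step of a Recurrent Inference Machine (schematic): the update is
    produced by an RNN fed the current estimate and the gradient of the
    data-fidelity term, rather than by a hand-designed rule."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.gru = nn.GRUCell(2 * d, hidden)
        self.out = nn.Linear(hidden, d)
    def forward(self, x, grad, h):
        h = self.gru(torch.cat([x, grad], dim=-1), h)
        return x + self.out(h), h  # learned additive update

torch.manual_seed(0)
d, T = 10, 5
A, y = torch.randn(8, d), torch.randn(8)  # toy linear inverse problem y ~ A x
cell, x, h = RIMCell(d), torch.zeros(1, d), torch.zeros(1, 32)
for _ in range(T):  # inference unrolled in time, as in the paper
    grad = ((A @ x.squeeze(0) - y) @ A).unsqueeze(0)  # grad of 0.5||Ax - y||^2
    x, h = cell(x, grad, h)
# Training would backpropagate a loss such as ||x - x_true||^2 through all T steps.
```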





Wednesday, September 13, 2017

Paris Machine Learning #1 Season 5: Code Mining, Mangas, Drug Discovery, Open Law, RAMP


So Season 5 of the Paris Machine Learning meetup starts today, woohoo! The video of the stream for the meetup can be found below.

 
 



Thanks to Deep Algo for hosting this meetup and sponsoring the food and drinks afterwards. 

LightOn is sponsoring the streaming.

Schedule:

6:30 PM doors opening ; 6:45 PM : talks beginning ; 9:00 PM : talks ending
10:00 PM : end

TALKS:

Short: Franck Bardol, Igor Carron, We know what the AI world did Last Summer 
Short: Xavier Lagarrigue, Presentation of Deep Algo, La Piscine
As part of the Journées nationales de l'ingénieur, IESF is organizing, together with the CDS, a computer vision challenge on classifying different species of pollinators. A prize will be awarded at the JNI on October 19 at UNESCO. The small number of examples in most classes makes the challenge technically interesting (one-shot learning / domain adaptation). Link: http://bee-o-diversity-challenge.strikingly.com/
Short: Open Law: AI and law, a training dataset, Olivier Jeulin, Lefebvre-sarrut.eu
Abstract: The association Open Law Le droit ouvert*, with the support of the CNIL and the Cour de Cassation, has decided to create a training dataset in the legal domain. The objective is to zone appellate court decisions (discourse parsing). Annotation is under way, and the dataset will be made public in early December.
http://openlaw.fr/travaux/communs-numeriques/ia-droit-datasets-dapprentissage
15 minutes presentations:

Challenges in code mining, Information theoretic approach, Jérôme Forêt, Head of R&D de Deep Algo, Deep Algo - English-

The mission of Deep Algo is to make code understandable by anyone. This involves automatic extraction of the business logic from a code base. One of the challenges is to understand the developer's intentions that led to a specific organization of this business logic.
Using Posters to Recommend Anime and Mangas by Jill-Jênn Vie (livestream from Japan) - English-
The classic recommendation problem is the following: given a user and the items (mangas) that they like, how can we recommend new items (mangas) that they are also likely to enjoy? Typically this is done via collaborative filtering, i.e. people with similar taste also enjoy other mangas, so we recommend these to the original user. A very common problem occurs when you have a new or obscure manga, aka the cold-start problem. There are no reviews to use for this manga, so a cooler option is to build a system that actually understands the content it recommends. We propose extracting visual information from the posters of these little-known mangas, using a deep neural net called Illustration2Vec. The theory is that users that like mangas with "girl with sword" will also like other mangas that have "girl with sword" or perhaps "girl with bow" but probably not "multiple boys in a swimming pool".
Site: http://research.mangaki.fr ; relevant arXiv preprint: Using Posters to Recommend Anime and Mangas in a Cold-Start Scenario, https://arxiv.org/abs/1709.01584
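For readers who want the gist of the talk above in code, here is a tiny, entirely made-up illustration of tag-based cold-start scoring: the tags, scores, and titles are invented, and the real system extracts the tag vectors from posters with Illustration2Vec.

```python
import numpy as np

# Each poster is described by visual tag probabilities (as Illustration2Vec
# would produce); a brand-new manga with no ratings is matched to a user's
# liked items by tag similarity instead of collaborative filtering.
tags = ["girl with sword", "girl with bow", "swimming pool", "multiple boys"]
posters = {                      # made-up tag scores for known mangas
    "manga_A": np.array([0.9, 0.1, 0.0, 0.1]),
    "manga_B": np.array([0.1, 0.8, 0.1, 0.0]),
    "manga_C": np.array([0.0, 0.0, 0.9, 0.8]),
}
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def score_new_item(new_poster, liked):
    """Average similarity between a cold-start item's poster and liked posters."""
    return np.mean([cos(new_poster, posters[m]) for m in liked])

new_manga = np.array([0.8, 0.3, 0.0, 0.0])       # "girl with sword"-ish poster
print(score_new_item(new_manga, ["manga_A"]))    # high: recommend it
print(score_new_item(new_manga, ["manga_C"]))    # low: do not
```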

Early-stage drug discovery requires a constant supply of new molecules, to be fed into High Throughput Screening robots. To increase this supply, virtual molecules can be generated on demand with neural networks. In this talk, I present a Reinforcement Learning generative model, and a variant using Generative Adversarial Networks. I also present two challenges that both approaches face: 1. multitasking between different objectives, and 2. generating chemically diverse molecules. Finally, I sketch how these generative models could become a useful proof-of-work for a 'Drugcoin' crypto-currency, in place of the 'useless' Hashcash proof-of-work of Bitcoin.

Motivated by the shortcomings of traditional data challenges, we have developed a unique concept and platform, called Rapid Analytics and Model Prototyping (RAMP), based on modularization and code submission.
Open code submission allows participants to build on each other's ideas, provides the organizers with a fully functioning prototype, and makes it possible to build complex machine learning workflows while keeping the contributions simple. Besides running public data challenges, the tool may also be useful for managing the building of data science workflows internally in a data science team. In the presentation I will focus on what you can use the tool for if you are a data scientist, a student, or a data science instructor. Links: https://www.ramp.studio , https://github.com/paris-saclay-cds/ramp-workflow , https://medium.com/@balazskegl



Tuesday, September 12, 2017

NIPS 2017 accepted papers


So it looks like it is going to be difficult to get a ticket to NIPS if you do not buy one in the coming days!

The workshops are still open for submissions. The list is here.

Accepted papers are now showing up on
Oh! And if you are a company: the sponsorship for NIPS is already oversubscribed. They cannot take your money anymore.






Modeling random projection for tensor objects




In this investigation, we discuss high-order data structures (called tensors) for efficient information retrieval and show, in particular, how well dimensionality reduction techniques work while preserving the Euclidean distance between items of information. High-order data structures require a large amount of space. One of the effective approaches comes from dimensionality reduction techniques such as Latent Semantic Indexing (LSI) and Random Projection (RP), which allow us to reduce time and space complexity dramatically. These reduction techniques can be applied to high-order data structures. Here we examine High Order Random Projection (HORP), which provides efficient information retrieval while keeping dimensionality reduction feasible.
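One simple reading of high-order random projection is a mode-wise random projection with Johnson-Lindenstrauss scaling. The sketch below (our interpretation, not necessarily the authors' exact construction) checks that pairwise distances survive the reduction:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, ks = (20, 20, 20), (8, 8, 8)  # original and reduced mode sizes

# One random matrix per mode, scaled so norms are preserved in expectation.
P = [rng.normal(size=(k, n)) / np.sqrt(k) for k, n in zip(ks, shape)]

def project(T):
    """Mode-wise random projection of an order-3 tensor."""
    return np.einsum('ai,bj,ck,ijk->abc', P[0], P[1], P[2], T)

A, B = rng.normal(size=shape), rng.normal(size=shape)
d_before = np.linalg.norm(A - B)
d_after = np.linalg.norm(project(A) - project(B))
print(d_before, d_after)  # distances preserved up to a modest distortion
```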






