Thursday, December 20, 2018

LightOn: Forward We Go !


It's been a while as I have been a bit busy. I will be back to a more regular schedule, but in the meantime, we just raised funds from Quantonation and Anorak for LightOn, which should allow us to go forward in building an optical technology for Machine Learning. Here are the announcements:

We just had some coverage on VentureBeat, and you can follow our announcements and progress directly on Twitter and LinkedIn.

Wednesday, November 14, 2018

Paris ML E#2 S#6: Consciousness, Code Analysis, Can a machine learn like a child?



For this second regular meetup of the season, we will talk about consciousness, health, code, and how machines learn like children... Thanks to Samsung for hosting us!



Here is the program so far:

+ Presentation by Gilles Mazars, Samsung AI Labs,


From my recent paper published in Brain: https://doi.org/10.1093/brain/awy267 Determining the state of consciousness in patients with disorders of consciousness is a challenging practical and theoretical problem. Recent findings suggest that multiple markers of brain activity extracted from the EEG may index the state of consciousness in the human brain. Furthermore, machine learning has been found to optimize their capacity to discriminate different states of consciousness in clinical practice. ... Our findings demonstrate that EEG markers of consciousness can be reliably, economically and automatically identified with machine learning in various clinical and acquisition contexts.
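To make the "automatically identified with machine learning" part concrete, a cross-validated classifier over precomputed EEG markers might look like the sketch below. Random placeholder data stands in for real per-patient marker vectors; this is purely illustrative and not the paper's pipeline:

```python
# Generic sketch: discriminating states of consciousness from precomputed
# EEG markers with a cross-validated classifier. Random placeholder data
# stands in for real per-patient marker vectors; not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_markers = 200, 28        # e.g. spectral, complexity, connectivity markers
X = rng.standard_normal((n_patients, n_markers))
y = rng.integers(0, 2, n_patients)     # 0 = unresponsive, 1 = minimally conscious

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean())                   # ~0.5 (chance) on random data, by construction
```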


During this talk, Eiso Kant demonstrates how different machine learning techniques can be used to learn from source code and provide developers with novel insights into their code. This talk includes several demos that show the power of MLonCode.
+ Autonomous developmental learning: can a machine learn like a child?, Pierre Oudeyer

Abstract: Current approaches to artificial intelligence and machine learning are still fundamentally limited in comparison with the autonomous learning capabilities of children. Even impressive systems like AlphaGo require huge amounts of trial and error and the help of an engineer to deal with other games or tasks. On the contrary, children learn fast and robustly a wide and open-ended repertoire of skills, without needing any form of intervention by an engineer. I will present a research program that has studied computational modeling of child development and learning mechanisms in the last decade. I will explain approaches to model curiosity-driven autonomous learning, with algorithmic models enabling machines to sample and explore their own goals, self-organizing a learning curriculum without any external supervision. I will show how this has helped scientists better understand aspects of human development, and how this has opened novel approaches to address the current limits of machine learning. I will illustrate this research with experiments where robots autonomously learn repertoires of complex tasks. I will then conclude by illustrating how these approaches can be applied successfully in the domain of educational technologies, making it possible to personalize sequences of exercises for human learners while maximizing both learning efficiency and intrinsic motivation.
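As a toy illustration of one ingredient of this program, goal sampling driven by learning progress, here is a hedged sketch. It is illustrative only; the actual curiosity-driven architectures studied by Oudeyer's group are considerably richer:

```python
# Toy curiosity-driven goal sampling: the agent tracks its recent prediction
# error for each goal region and preferentially samples goals where the error
# is improving (learning progress), not where it is merely high or low.
import numpy as np

rng = np.random.default_rng(0)
n_goals = 5
errors = [[] for _ in range(n_goals)]          # per-goal error history

def learning_progress(hist, window=10):
    """Progress = drop in mean error between the two most recent windows."""
    if len(hist) < 2 * window:
        return 1.0                             # treat unexplored goals as promising
    old = np.mean(hist[-2 * window:-window])
    new = np.mean(hist[-window:])
    return max(old - new, 0.0)

for step in range(1000):
    lp = np.array([learning_progress(h) for h in errors])
    probs = (lp + 1e-3) / (lp + 1e-3).sum()    # sample goals proportionally to progress
    g = rng.choice(n_goals, p=probs)
    # Fake "practice": error on goal g decays with the amount of practice, plus noise.
    errors[g].append(np.exp(-0.01 * len(errors[g])) + 0.05 * rng.random())

print([len(h) for h in errors])                # how the agent allocated its practice
```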



Wednesday, October 10, 2018

Paris Machine Learning #1 S6 at Vente-Privée

So after two "Hors Série" meetups, we decided to start Season 6 of the Paris Machine Learning meetup tonight. We'll talk about the algorithms used at Vente Privée, one of the most ambitious companies in France, but also about Quantum Computing and Machine Learning, and finally how SNCF is becoming algorithm/data driven. The streaming and the presentations will be accessible below:








We will thus hold the first regular meetup of the season at Vente-privée. As we saw two years ago, it is a unique setting. A guided tour of the premises will most likely be organized.

A big thank-you to Vente Privée for hosting us!

AGENDA :
Doors open 6:45PM // talks 7-9PM // networking 9-10:30PM

Program

Jéremie Jakubowicz, Vente Privee, Data Science at Vente Privee
In this talk we will reveal what's been happening within the Data Science Team at Vente Privee this year...

At vente-privee we customize a lot of things, and this talk describes the mechanism behind catalog customization. When a customer enters a sale, we create a section filled with items recommended for this specific customer, based on their previous purchases and other criteria.
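For readers curious about the general mechanism (this is a generic sketch, not Vente Privée's actual system), a minimal item-item recommendation based on past purchases could look like this:

```python
# Minimal item-item recommendation from a binary purchase matrix: score items
# by their similarity to what the customer already bought. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_customers, n_items = 1000, 50
purchases = (rng.random((n_customers, n_items)) < 0.05).astype(float)

# Item-item cosine similarity computed from co-purchases.
norms = np.linalg.norm(purchases, axis=0) + 1e-9
sim = (purchases.T @ purchases) / np.outer(norms, norms)

def recommend(customer, k=5):
    scores = purchases[customer] @ sim          # aggregate similarity to past buys
    scores[purchases[customer] > 0] = -np.inf   # do not re-recommend owned items
    return np.argsort(scores)[-k:][::-1]        # top-k item indices

print(recommend(0))
```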

The quantum computing paradigm applied to automated machine learning: an efficient alternative to hyperparameter search.


The role of the Fab Big Data and of the data science and engineering team within the SNCF group. A few representative ongoing projects:
  • Adhesion: better understanding, locating and characterizing loss-of-adhesion phenomena (the contact between wheel and rail).
  • Energy: analysis of electrical energy consumption and consumption forecasting.
  • Exploratory projects:
  • Active learning
  • Interpretability of ML models




Monday, October 08, 2018

Job: Postdoctoral Researcher in Small Data Deep Learning and Explainable Machine Learning, Livermore, CA

Bhavya just sent me the following:

Hi Igor, 
I would like to ask you a favor. We are looking for a Postdoctoral Researcher interested in small data deep learning and explainable machine learning. I was wondering whether it is possible to list the opening on your blog. Information on the available position is below.
We are looking for a Postdoctoral Researcher with expertise in statistics, machine learning, convex/non-convex optimization and/or uncertainty quantification. The Postdoctoral Researcher will support ongoing efforts on small-data deep learning and related topics, such as transfer learning, generative modeling, self-supervised or unsupervised learning, and explainable ML. This position is in the Computation Directorate within the Center for Applied Scientific Computing (CASC) Division at Lawrence Livermore National Lab, Livermore, CA.
Essential Duties
  • Research, design, implement and apply a variety of advanced data science methods in multiple application areas (such as material science, high energy physics, predictive medicine, cybersecurity) in a collaborative scientific environment.
  • Document research by publishing papers at conferences/journals such as NIPS, ICML, ICLR, IJCAI, AAAI, AISTATS, ACL, CVPR, JMLR or similar.

Qualifications
  • Ph.D. in statistics or computer science or a related field.
  • Experience in modern machine learning environments (TensorFlow, PyTorch, etc.).
  • Proficiency in one or more of the following machine learning areas: deep learning, reinforcement learning, and Bayesian nonparametrics.
  • Knowledge of C/C++, Python.

If interested, please contact me directly at kailkhura1@llnl.gov.
Regards,
Bhavya







A Neural Architecture for Bayesian Compressive Sensing over the Simplex via Laplace Techniques



Steffen just sent me the following:

Dear Igor,

I'm a long-time reader of your blog and wanted to share our recent paper on a relation between compressed sensing and neural network architectures. The paper introduces a new network construction based on the Laplace transform that results in activations such as ReLU and gating/threshold functions. It would be great if you could distribute the link on nuit-blanche. 
The paper/preprint is here:
https://ieeexplore.ieee.org/document/8478823
https://www.netit.tu-berlin.de/fileadmin/fg314/limmer/LimSta18.pdf 
Many thanks and best regards,
Steffen 
Dipl.-Ing. Univ. Steffen Limmer
Raum HFT-TA 412
Technische Universität Berlin
Institut für Telekommunikationssysteme
Fachgebiet Netzwerk-Informationstheorie
Einsteinufer 25, 10587 Berlin

Thanks Steffen ! Here is the paper:


This paper presents a theoretical and conceptual framework to design neural architectures for Bayesian compressive sensing of simplex-constrained sparse stochastic vectors. First we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution over the simplex) as the problem of computing the centroid of a polytope that is equal to the intersection of the simplex and an affine subspace determined by compressive measurements. Then we use multidimensional Laplace techniques to obtain a closed-form solution to this computation problem, and we show how to map this solution to a neural architecture comprising threshold functions, rectified linear (ReLU) and rectified polynomial (ReP) activation functions. In the proposed architecture, the number of layers is equal to the number of measurements which allows for faster solutions in the low-measurement regime when compared to the integration by domain decomposition or Monte-Carlo approximation. We also show by simulation that the proposed solution is robust to small model mismatches; furthermore, the proposed architecture yields superior approximations with less parameters when compared to a standard ReLU architecture in a supervised learning setting.
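To make the paper's comparison point concrete, here is a naive Monte-Carlo baseline for the MMSE estimate it computes in closed form: sample the simplex uniformly and weight the samples by how well they explain the measurements. All parameters below are illustrative:

```python
# Naive Monte-Carlo baseline for the MMSE estimate discussed above:
# approximate the centroid of {x in simplex : Ax ~= y} by sampling the
# simplex uniformly and weighting samples by a softened measurement fit.
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 3                            # signal dimension, number of measurements
A = rng.standard_normal((m, n))
x_true = rng.dirichlet(np.ones(n))      # Dirichlet(1,...,1) = uniform on the simplex
y = A @ x_true

samples = rng.dirichlet(np.ones(n), size=200_000)
residuals = samples @ A.T - y
log_w = -0.5 * np.sum(residuals**2, axis=1) / 1e-2   # soft constraint width
log_w -= log_w.max()                    # stabilize before exponentiating
w = np.exp(log_w)

x_mmse = w @ samples / w.sum()          # weighted centroid ~ MMSE estimate
print(np.linalg.norm(x_mmse - x_true))
```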










Sunday, October 07, 2018

Sunday Morning Video (in French): Les travaux de Grothendieck sur les espaces de Banach, Gilles Pisier (Lectures grothendieckiennes)

This video, in French, mentions the connection between Grothendieck's work and some of the subject areas covered on Nuit Blanche (see here, here and here).


Grothendieck's thesis and his subsequent article entitled "Résumé de la théorie métrique des produits tensoriels topologiques" (1956) had an enormous impact on the development of the geometry of Banach spaces over the past 60 years. We will review this "Résumé", focusing on the result that Grothendieck himself called the fundamental theorem of the metric theory of tensor products, now known as "Grothendieck's inequality" or "Grothendieck's theorem". This result has recently made a rather unexpected appearance in several fields a priori quite remote from Grothendieck's concerns. One involves C*-algebras and operator spaces (or "non-commutative Banach spaces"), another Bell's inequalities and their "violation" in quantum mechanics, and the last relates the Grothendieck constant to the P=NP problem and to graph theory.

Here is a review that covers some of what is mentioned in the video: 


Probably the most famous of Grothendieck's contributions to Banach space theory is the result that he himself described as "the fundamental theorem in the metric theory of tensor products". That is now commonly referred to as "Grothendieck's theorem" (GT in short), or sometimes as "Grothendieck's inequality". This had a major impact first in Banach space theory (roughly after 1968), then, later on, in C∗-algebra theory, (roughly after 1978). More recently, in this millennium, a new version of GT has been successfully developed in the framework of "operator spaces" or non-commutative Banach spaces. In addition, GT independently surfaced in several quite unrelated fields: in connection with Bell's inequality in quantum mechanics, in graph theory where the Grothendieck constant of a graph has been introduced and in computer science where the Grothendieck inequality is invoked to replace certain NP hard problems by others that can be treated by "semidefinite programming" and hence solved in polynomial time. In this expository paper, we present a review of all these topics, starting from the original GT. We concentrate on the more recent developments and merely outline those of the first Banach space period since detailed accounts of that are already available, for instance the author's 1986 CBMS notes.
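For reference, the inequality both abstracts revolve around can be stated as follows in its classical real form, with K_G the universal Grothendieck constant:

```latex
% Grothendieck's inequality (real case): a universal constant K_G controls
% the Hilbert-space bilinear form by its values on signs.
\[
\left| \sum_{i,j} a_{ij} \langle x_i, y_j \rangle \right|
\;\le\; K_G \max_{s_i,\, t_j \in \{-1,+1\}} \left| \sum_{i,j} a_{ij}\, s_i t_j \right|,
\qquad x_i, y_j \in H,\ \|x_i\| = \|y_j\| = 1 .
\]
```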






Wednesday, September 19, 2018

WiMLDS and Paris Machine Learning meetup Hors série #1: Scalable Automatic Machine Learning with H2O with Erin LeDell

We're back with Season 6 of the Paris Machine Learning meetup!
Tonight, the Women in Machine Learning & Data Science (WiMLDS) meetup and the Paris Machine Learning Group are hosting an exceptional "Hors Série" meetup featuring Erin LeDell and Jo-Fai Chow. We will be hosted and sponsored by Ingima!

The meetup will be live streamed for those who can’t be there. Slides are also available below:




19:30 – Introduction by Ingima, the Paris WiMLDS + Paris ML Group teams


19:40 – “Scalable Automatic Machine Learning with H2O” (keynote format) by Erin LeDell, Chief Machine Learning Scientist at H2O.ai.

Abstract:
This presentation will provide a history and overview of the field of Automatic Machine Learning (AutoML), followed by a detailed look inside H2O's AutoML algorithm. H2O AutoML provides an easy-to-use interface which automates data pre-processing, training and tuning a large selection of candidate models (including multiple stacked ensemble models for superior model performance). The result of the AutoML run is a "leaderboard" of H2O models which can be easily exported for use in production. AutoML is available in all H2O interfaces (R, Python, Scala, web GUI) and due to the distributed nature of the H2O platform, can scale to very large datasets. The presentation will end with a demo of H2O AutoML in R and Python, including a handful of code examples to get you started using automatic machine learning on your own projects.
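For the curious, the Python workflow described above looks roughly like the following sketch (the file name and column names are placeholders):

```python
# Minimal H2O AutoML sketch in Python, mirroring the workflow described
# above: import data, run AutoML over many candidate models, inspect the
# leaderboard. The CSV path and column names are placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

train = h2o.import_file("train.csv")
x = [c for c in train.columns if c != "label"]  # predictor columns
y = "label"                                     # response column
train[y] = train[y].asfactor()                  # mark as classification target

aml = H2OAutoML(max_models=20, seed=1)          # trains and tunes many candidate models
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard.head())                   # ranked candidate models
preds = aml.leader.predict(train)               # best model, ready for export
```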

Bio:
Dr. Erin LeDell is the Chief Machine Learning Scientist at H2O.ai. Erin has a Ph.D. in Biostatistics with a Designated Emphasis in Computational Science and Engineering from University of California, Berkeley. Her research focuses on automatic machine learning, ensemble machine learning and statistical computing. She also holds a B.S. and M.A. in Mathematics. Before joining H2O.ai, she was the Principal Data Scientist at Wise.io (acquired by GE Digital in 2016) and Marvin Mobile Security (acquired by Veracode in 2012), and the founder of DataScientific, Inc.


Abstract:
Joe Chow (H2O.ai) recently teamed up with IBM and Aginity to create a proof of concept "Moneyball" app for the IBM Think conference in Vegas. The original goal was just to prove that different tools (e.g. H2O, Aginity AMP, IBM Data Science Experience, R and Shiny) could work together seamlessly for common business use-cases. Little did Joe know, the app would be used by Ari Kaplan (the real "Moneyball" guy) to validate the future performance of some baseball players. Ari recommended one player to a Major League Baseball team. The player was signed the next day with a multimillion-dollar contract. This talk is about Joe's journey to a real "Moneyball" application.
20:50 – Networking / Cocktail

During the event, you can share content using #WiMLDSParis and @WiMLDS_Paris or #ParisML and @ParisMLgroup

After the meetup, the video will be shared on: http://parismlgroup.org/about.php & https://medium.com/@WiMLDS_Paris

---
Host information :

The room can welcome 90 people. First come, first served!
Keep in mind the session will be streamed.




Friday, September 14, 2018

Highly Technical Reference Page: The Rice University Compressive Sensing page.




Rich sent this to me a few days ago:

Hi Igor -  
i hope all goes well. FYI, the Rice CS Archive is back online after being down for more than a year thanks to some Russian hackers who thought we had something to do with the 2018 election. it’s available here:
richb 
Richard G. Baraniuk
Victor E. Cameron Professor of Electrical and Computer Engineering
Founder and Director, OpenStax
Rice University 
The Rice page is one of the first pages that got me thinking I should list all those Highly Technical Reference Pages in one fell swoop.

Thursday, September 13, 2018

“And we’re back for Season 6” Paris Machine Learning Newsletter, September 2018 (in French)



Contents
  1. The editorial from Franck, Jacqueline and Igor, "And we're back for Season 6"
  2. Things We Really Like!
  3. Last season.

1. The editorial from Franck, Jacqueline and Igor, "And we're back for Season 6"

Jacqueline Forien is joining us as a meetup organizer.

Season 5 amounted to 8 "hors série" and 9 regular meetups, and more than 7,200 members, making this one of the largest meetups in the world on this topic. We saw plenty of things last year, on the policy side but also in the meetups; we will come back to that later in another newsletter. What you should know is that NIPS, the reference conference in AI, sold out its tickets in 11 minutes and 38 seconds. From experience, that is faster than ticket sales for BTS when they come to Bercy in October. What is certain is that these Machine Learning gatherings are experiences that should endure, which is why all the presentations and videos of our meetups are in our archives and are listed further down in this newsletter.

This past season would not have been possible without the following companies and associations:

A big thank-you for their involvement in a dynamic AI community here in Paris and in Europe.

Our first meetup will be held in coordination with Women in Machine Learning and Data Science; to register, it's here: #Hors-série — Paris WiMLDS & Paris ML Meetup

Our meetup dates for Season 6:
  • Hors série #1 19/09
  • #2 10/10
  • #3 14/11
  • #4 12/12
  • #5 09/01
  • #6 13/02
  • #7 13/03
  • #8 10/04
  • #9 15/05
  • #10 12/06

If you would like to host or sponsor us, feel free to contact us through this form or via our website.

You can follow us on Twitter @ParisMLgroup.



2. Things We Really Like!

Chloé Azencott, one of the meetup's speakers, has just published a Machine Learning book in French. It is Introduction au Machine Learning, and it is full of code examples.

Conferences and meetups we like!

++++Important: France is AI conference: the 3rd edition of our annual conference, October 17-18, 2018 at Station F.+++: The Eventbrite registration link with promo code MEETUPS100 offers 100 free seats. Beyond the first 100, seats can be obtained at a 50% discount with code MEETUPS50.

The new meetups on the scene:

Those starting up again:

3. Last season


Last season (Season 5) consisted of 8 "hors série" and 9 regular meetups, for a total of 95 meetups over 5 seasons. Here are the links to the presentations and videos from those meetups:

Regular meetups

Hors série

That's all for today!


PS: Don't forget that you can also follow the Paris Machine Learning Meetup on Twitter, LinkedIn, Facebook and Google+.

You can browse the archives of previous meetups.

We are also working on a new website: MLParis.org

The Paris Machine Learning Meetup has 7,200 members, making it one of the largest in the world, with more than 95 gatherings already and 10 dates scheduled for this Season 6.
  • If you are a student, postdoc or researcher, the meetup is a great platform for talking about your work before presenting it at conferences such as NIPS/ICML/ICLR/COLT/UAI/ACL/KDD;
  • For startups, it is a good way to present your projects or to recruit the future superstars of your AI/Data Science team;
  • And for everyone, it is an easy way to keep up with the latest developments in the field and to have unique exchanges with the speakers and other attendees.

As always, first come, first served. The number of seats in the rooms is limited; beyond capacity, we will not be able to let you in. You can track how full the event is by following #MLParis on Twitter.




Wednesday, September 12, 2018

Manopt 5.0 toolbox release: Optimization on Manifolds - implementation -



Nicolas just sent me the following:
Dear Igor,

Bamdev (cc) and I just released Manopt 5.0, our Matlab toolbox for optimization on manifolds:

We would be delighted if you could announce this major release on your blog once again.

Manopt is a toolbox for optimization on manifolds, with major applications in machine learning and computer vision (low rank constraints, orthogonal matrices, rotations, positive definite matrices, ...). Of course, Manopt can also optimize over linear spaces (and it's quite good at it).

The toolbox is user friendly, requiring little knowledge about manifolds to get started. See our tutorial and the many examples in the release:



Highlight -- this release brings:

Thanks!
Nicolas and Bamdev


Thanks Nicolas  !
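For readers new to the topic, here is what "optimization on a manifold" boils down to, using Manopt's classic hello-world problem: maximizing the Rayleigh quotient on the unit sphere, whose maximizer is the dominant eigenvector. This is a bare NumPy sketch, not Manopt itself (Manopt is a Matlab toolbox):

```python
# Bare-bones Riemannian gradient ascent on the unit sphere: maximize x'Ax
# by projecting the Euclidean gradient onto the tangent space and retracting
# back onto the sphere. Illustrative only; Manopt automates all of this.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2                       # symmetric matrix

x = rng.standard_normal(50)
x /= np.linalg.norm(x)                  # start on the sphere

for _ in range(500):
    egrad = 2 * A @ x                   # Euclidean gradient of x'Ax
    rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
    x = x + 0.01 * rgrad                # gradient ascent step
    x /= np.linalg.norm(x)              # retraction: back onto the sphere

print(x @ A @ x, np.max(np.linalg.eigvalsh(A)))  # should be close
```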



Monday, August 20, 2018

SPORCO: Convolutional Dictionary Learning - implementation -



Brendt sent me the following a few days ago: 

Hi Igor,
We have two new papers on convolutional dictionary learning as well as some recent related code. Could you please post an announcement on Nuit Blanche?
Brendt
Sure Brendt ! It is already mentioned in the Advanced Matrix Factorization Jungle Page as this is an awesome update to the previous announcement.



"Convolutional Dictionary Learning: A Comparative Review and New Algorithms", available from http://dx.doi.org/10.1109/TCI.2018.2840334 and https://arxiv.org/abs/1709.02893, reviews existing batch-mode convolutional dictionary learning algorithms and proposes some new ones with significantly improved performance. Implementations of all of the most competitive algorithms are included in the Python version of the SPORCO library at https://github.com/bwohlberg/sporco .

"First and Second Order Methods for Online Convolutional Dictionary Learning", available from http://dx.doi.org/10.1137/17M1145689 and https://arxiv.org/abs/1709.00106, extends our previous work and proposes some new algorithms for online convolutional dictionary learning that we believe outperform existing alternatives. Implementations of all of the new algorithms are included in the
Matlab version of the SPORCO library at http://purl.org/brendt/software/sporco and the first order algorithm is also included in the Python version of the SPORCO library at https://github.com/bwohlberg/sporco . A very recent addition to the Python version is the ability to exploit the SPORCO-CUDA extension to greatly accelerate the learning process.
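A minimal usage sketch of the Python version of SPORCO might look like the following; the module and class names follow my reading of the library's documented layout at the time, so do check the repository's examples for the authoritative API:

```python
# Hedged sketch of batch convolutional dictionary learning with SPORCO's
# Python package; module/class names follow the library's documented layout
# (sporco.dictlrn.cbpdndl), but consult the repo examples for the current API.
import numpy as np
from sporco.dictlrn import cbpdndl

# Training images S: H x W x K array of K greyscale images (random here,
# purely as a placeholder for real, highpass-filtered training data).
S = np.random.randn(64, 64, 8).astype(np.float32)

# Initial dictionary D0: 16 random 8x8 filters.
D0 = np.random.randn(8, 8, 16).astype(np.float32)

lmbda = 0.1                                  # sparsity weight
opt = cbpdndl.ConvBPDNDictLearn.Options(
    {'Verbose': True, 'MaxMainIter': 50})    # alternate sparse coding / dict update

d = cbpdndl.ConvBPDNDictLearn(D0, S, lmbda, opt)
D1 = d.solve()                               # learned convolutional dictionary
```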



Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.


Convolutional sparse representations are a form of sparse representation with a structured, translation invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data that can be used. Very recently, however, a number of authors have considered the design of online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost with training set size than batch methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new one, with better performance, and that supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods.



Thursday, July 26, 2018

CfP: Call for Papers: Special Issue on Information Theory Applications in Signal Processing

Sergio just sent me the following:
Dear Igor,
Could you please announce in nuit blanche the following call for contributions to our Special Issue.
Best regards,
Sergio
Sure Sergio !
Dear colleagues, 
We are currently leading a Special Issue entitled "Information Theory Applications in Signal Processing" for the journal Entropy (ISSN 1099-4300, IF 2.305). A short prospectus is given at the volume website: 
We would like to invite you to contribute a review or full research paper for publication in this Special Issue after standard peer-review procedure in Open access form.
The official deadline for submission is 30 November 2018, but you may send your manuscript at any time before then. We can organize a very fast peer review; if accepted, the paper will be published immediately. Please also feel free to distribute this call for papers to colleagues and collaborators.
You can contact the assistant editor, Ms. Alex Liu (alex.liu@mdpi.com), with any questions or doubts.
Thank you in advance for considering our invitation.
Sincerely,
Guest Editors:
Dr. Sergio Cruces (http://personal.us.es/sergio/)
Dr. Rubén Martín-Clemente (http://personal.us.es/ruben/)
Dr. Wojciech Samek (http://iphome.hhi.de/samek/)





Monday, July 23, 2018

Rank Minimization for Snapshot Compressive Imaging - implementation -



Yang just sent me the following:

Hi Igor,

I am writing regarding a paper on compressive sensing you may find of interest, co-authored with Xin Yuan, Jinli Suo, David Brady, and Qionghai Dai. We got exciting results on snapshot compressive imaging (SCI), i.e., encoding each frame of an image sequence with a spectral-, temporal-, or angular-variant random mask and summing them pixel-by-pixel to form a one-shot measurement. Snapshot compressive hyperspectral, high-speed, and light-field imaging are among the representative applications.

We combine rank minimization, which exploits the nonlocal self-similarity of natural scenes (widely acknowledged in image/video processing), with an alternating minimization approach to solve this problem. Results on both simulation and real data from four different SCI systems, where measurement noise is dominant, demonstrate that our proposed algorithm leads to significant improvements (>4dB in PSNR) and more robustness to noise compared with current state-of-the-art algorithms.

Paper arXiv link: https://arxiv.org/abs/1807.07837.
Github repository link: https://github.com/liuyang12/DeSCI.

Here is an animated demo for visualization and comparison with the state-of-the-art algorithms, i.e., GMM-TP (TIP'14), MMLE-GMM (TIP'15), MMLE-MFA (TIP'15), and GAP-TV (ICIP'16).
Thanks,
Yang (y-liu16@mails.tsinghua.edu.cn)


Thanks Yang !

Snapshot compressive imaging (SCI) refers to compressive imaging systems where multiple frames are mapped into a single measurement, with video compressive imaging and hyperspectral compressive imaging as two representative applications. Though exciting results of high-speed videos and hyperspectral images have been demonstrated, the poor reconstruction quality precludes SCI from wide applications. This paper aims to boost the reconstruction quality of SCI via exploiting the high-dimensional structure in the desired signal. We build a joint model to integrate the nonlocal self-similarity of video/hyperspectral frames and the rank minimization approach with the SCI sensing process. Following this, an alternating minimization algorithm is developed to solve this non-convex problem. We further investigate the special structure of the sampling process in SCI to tackle the computational workload and memory issues in SCI reconstruction. Both simulation and real data (captured by four different SCI cameras) results demonstrate that our proposed algorithm leads to significant improvements compared with current state-of-the-art algorithms. We hope our results will encourage the researchers and engineers to pursue further in compressive imaging for real applications.
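As a quick illustration of the SCI forward model described in Yang's email (per-frame random masks summed into a single snapshot), here is a toy sketch; the reconstruction side is what DeSCI solves:

```python
# Toy illustration of the snapshot compressive imaging (SCI) forward model:
# each frame of a video is modulated by a different random mask and all
# modulated frames are summed into one 2-D measurement. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 32, 32                    # frames, height, width
video = rng.random((T, H, W))          # the underlying scene x_1..x_T
masks = rng.integers(0, 2, (T, H, W))  # per-frame binary random masks

measurement = np.sum(masks * video, axis=0)   # one snapshot y = sum_t m_t * x_t
print(measurement.shape)                      # (32, 32): T frames -> 1 image
```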


Thursday, July 19, 2018

CSJob: PhD and Postdoc positions KU Leuven: Optimization frameworks for deep kernel machines


Johan let me know of the following positions in his group:

Dear Igor,
could you please announce this on nuit blanche.
many thanks,
Johan


Sure thing Johan !

PhD and Postdoc positions KU Leuven: Optimization frameworks for deep kernel machines
The research group KU Leuven ESAT-STADIUS is currently offering 2 PhD and 1 Postdoc (1 year, extendable) positions within the framework of the KU Leuven C1 project Optimization frameworks for deep kernel machines (promotors: Prof. Johan Suykens and Prof. Panos Patrinos).
Deep learning and kernel-based learning are among the very powerful methods in machine learning and data-driven modelling. From an optimization and model representation point of view, training of deep feedforward neural networks occurs in a primal form, while kernel-based learning is often characterized by dual representations, in connection to possibly infinite dimensional problems in the primal. In this project we aim at investigating new optimization frameworks for deep kernel machines, with feature maps and kernels taken at multiple levels, and with possibly different objectives for the levels. The research hypothesis is that such an extended framework, including both deep feedforward networks and deep kernel machines, can lead to new important insights and improved results. In order to achieve this, we will study optimization modelling aspects (e.g. variational principles, distributed learning formulations, consensus algorithms), accelerated learning schemes and adversarial learning methods.
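For background, the primal/dual contrast invoked above is the classical one for kernel machines: the same model admits a primal representation through a feature map and, by the representer theorem, a dual representation through kernel evaluations on the training data:

```latex
% Primal form over a (possibly infinite-dimensional) feature map, and the
% equivalent dual form over kernel evaluations, with k(x,x') = <phi(x), phi(x')>.
\[
f(x) = w^\top \varphi(x) + b
\qquad\Longleftrightarrow\qquad
f(x) = \sum_{i=1}^{N} \alpha_i\, k(x_i, x) + b .
\]
```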
The PhD and Postdoc positions in this KU Leuven C1 project (promotors: Prof. Johan Suykens and Prof. Panos Patrinos) relate to the following  possible topics:
-1- Optimization modelling for deep kernel machines
-2- Efficient learning schemes for deep kernel machines
-3- Adversarial learning for deep kernel machines
For further information and on-line applying, see
https://www.kuleuven.be/personeel/jobsite/jobs/54740654 (PhD positions) and
https://www.kuleuven.be/personeel/jobsite/jobs/54740649 (Postdoc position)
(click EN for English version).
The research group ESAT-STADIUS http://www.esat.kuleuven.be/stadius at the university KU Leuven Belgium provides an excellent research environment being active in the broad area of mathematical engineering, including data-driven modelling, neural networks and machine learning, nonlinear systems and complex networks, optimization, systems and control, signal processing, bioinformatics and biomedicine.






Friday, July 13, 2018

Phase Retrieval Under a Generative Prior


Vlad just sent me the following: 
Hi Igor,

I am writing regarding a paper you may find of interest, co-authored with Paul Hand and Oscar Leong. It applies a deep generative prior to phase retrieval, with surprisingly good results! We can show recovery occurs at optimal sample complexity for Gaussian measurements, which in a sense resolves the sparse phase retrieval O(k^2 log n) bottleneck.

https://arxiv.org/pdf/1807.04261.pdf


Best,

-Vlad

Thanks Vlad ! Here is the paper:

Phase Retrieval Under a Generative Prior by Paul Hand, Oscar Leong, Vladislav Voroninski
The phase retrieval problem asks to recover a natural signal y_0 in R^n from m quadratic observations, where m is to be minimized. As is common in many imaging problems, natural signals are considered sparse with respect to a known basis, and the generic sparsity prior is enforced via ℓ1 regularization. While successful in the realm of linear inverse problems, such ℓ1 methods have encountered possibly fundamental limitations, as no computationally efficient algorithm for phase retrieval of a k-sparse signal has been proven to succeed with fewer than O(k^2 log n) generic measurements, exceeding the theoretical optimum of O(k log n). In this paper, we propose a novel framework for phase retrieval by 1) modeling natural signals as being in the range of a deep generative neural network G : R^k -> R^n and 2) enforcing this prior directly by optimizing an empirical risk objective over the domain of the generator. Our formulation has provably favorable global geometry for gradient methods, as soon as m = O(k d^2 log n), where d is the depth of the network. Specifically, when suitable deterministic conditions on the generator and measurement matrix are met, we construct a descent direction for any point outside of a small neighborhood around the unique global minimizer and its negative multiple, and show that such conditions hold with high probability under Gaussian ensembles of multilayer fully-connected generator networks and measurement matrices. This formulation for structured phase retrieval thus has two advantages over sparsity based methods: 1) deep generative priors can more tightly represent natural signals and 2) information theoretically optimal sample complexity. We corroborate these results with experiments showing that exploiting generative models in phase retrieval tasks outperforms sparse phase retrieval methods.
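The optimization the abstract describes, empirical risk minimization over the latent space of the generator, can be sketched as follows (untrained random generator and illustrative sizes; this is not the authors' code):

```python
# Sketch of phase retrieval under a generative prior: given phaseless
# measurements b = |A y0|^2 and a generator G (here an untrained random MLP
# standing in for a trained one), recover the latent z by gradient descent
# on the empirical risk over the generator's domain. Illustrative only.
import torch

torch.manual_seed(0)
k, n, m = 8, 100, 60

G = torch.nn.Sequential(               # stand-in generator G: R^k -> R^n
    torch.nn.Linear(k, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, n),
)
A = torch.randn(m, n) / m**0.5         # Gaussian measurement matrix

z_true = torch.randn(k)
y0 = G(z_true).detach()                # "natural" signal in the range of G
b = (A @ y0) ** 2                      # quadratic (phaseless) observations

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = (((A @ G(z)) ** 2 - b) ** 2).mean()   # empirical risk in z
    loss.backward()
    opt.step()

# Success means recovery up to the global sign ambiguity of |.|^2.
err = min((G(z).detach() - y0).norm(), (G(z).detach() + y0).norm())
print(err / y0.norm())                 # small when the descent succeeds
```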



