Monday, December 12, 2016

Some general takeaways from #NIPS2016


I am back home, still recovering from the information overload of NIPS 2016.

A few people have already written some great insights into what happened there. Here are a few things that struck me:
  • With the astounding success of Deep Learning algorithms, other scientific communities have essentially adopted these tools in a matter of two or three years. The main question at the meeting seemed to be: which field is next? The Machine Learning/Deep Learning community was able to elevate itself thanks to high quality datasets, from MNIST all the way to ImageNet, so it is only fair to wonder where this is going with the release of a few datasets during the conference, including Universe from OpenAI (a minimal usage sketch follows this list). Control systems and simulators (forward problems in science) seem to be the next target.
  • The touching tribute to David MacKay brought home that we are not as unidimensional as we think we are.
  • There are certain sub-communities within NIPS that still do not seem to have high quality datasets. I fear they will remain in the back seat for a little while longer. As in compressive sensing before phase transitions were found, any published paper was really just the meeting of a random dataset with a particular algorithm, with no clear way to figure out how that algorithm fit in with the rest. High quality datasets, much like phase transitions, act as acid tests (see the L_1 sketch after this list).
  • I am always dumbfounded to find out that people read Nuit Blanche. I know the stats, but that doesn't take away the genuine element of surprise. Wow, and thank you!
  • Energy issues were bubbling up in different areas, stemming from large hyperparameter searches and the training of learning-to-learn models, but also from the question of how to extract information from the brain.
  • The meeting was big. Upon coming back home, I had a few "What? You were there too?" moments.
  • I made a bet with someone that it will take more than 20 years to come up with a theoretical understanding of some of the recipes currently used in ML/DL. It took longer than that for L_1 and sparsity.
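Since Universe came up above, here is a minimal sketch of what interacting with it looks like. It closely follows the example OpenAI published at release; the environment name ('flashgames.DuskDrive-v0') and the remotes configuration are taken from their README, so treat the details as subject to change:

```python
import gym
import universe  # importing universe registers its environments with gym

# Environment from OpenAI's release example: a Flash driving game.
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # start one local, Docker-backed remote
observation_n = env.reset()

while True:
    # A trivial hard-coded agent: hold the up arrow in every remote.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```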
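On the L_1 point, the phase transition story can be made concrete in a few lines. The sketch below (my own illustration, with arbitrary problem sizes) solves basis pursuit, min ||x||_1 subject to Ax = b, by recasting it as a linear program with x = u - v and u, v >= 0. With the sparsity level well below the Donoho-Tanner phase transition, recovery is exact up to solver tolerance:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                     # signal length, measurements, sparsity
A = rng.standard_normal((m, n))          # Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit as an LP: minimize sum(u + v) s.t. A(u - v) = b, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))  # ~1e-10
```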

Here are some other insightful takeaways:
  • Tomasz pointed out some of the trends.
  • Jack Clark's newsletter before, during and after NIPS.
  • Paul Mineiro's Machined Learnings: NIPS 2016 Reflections.
  • Jeremy Karnowski and Ross Fadely, Insight Artificial Intelligence.
During the meeting, the post-facto Fake News Challenge was launched on Twitter.


