Thursday, June 30, 2016

DropNeuron: Simplifying the Structure of Deep Neural Networks - implementation -

Regularizing DNNs with L1 and L0 norms. 


DropNeuron: Simplifying the Structure of Deep Neural Networks by Wei Pan, Hao Dong, Yike Guo

Deep learning using multi-layer neural network (NN) architectures manifests superb power in modern machine learning systems. The trained Deep Neural Networks (DNNs) are typically large. The question we would like to address is whether it is possible to simplify the NN during the training process to achieve a reasonable performance within an acceptable computational time. We present a novel approach of optimising a deep neural network through regularisation of network architecture. We propose regularisers which support a simple mechanism of dropping neurons during the network training process. The method supports the construction of a simpler deep neural network with performance comparable to that of its unsimplified version. As a proof of concept, we evaluate the proposed method with examples including sparse linear regression, a deep autoencoder and a convolutional neural network. The evaluations demonstrate excellent performance.
The code for this work can be found in this http URL
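The mechanics of "dropping neurons via regularisation" can be sketched in a few lines. This is my own illustration, not the authors' exact regularisers: a group-sparsity penalty on each neuron's outgoing weight row drives whole rows to zero, after which those neurons can be removed from the network.

```python
import numpy as np

def group_lasso_penalty(W):
    """Sum of L2 norms of each row of W: rows (neurons) whose
    outgoing weights shrink to zero can be dropped entirely."""
    return np.sum(np.linalg.norm(W, axis=1))

def prune_neurons(W, tol=1e-6):
    """Return W with all-zero (dropped) rows removed, plus kept indices."""
    keep = np.linalg.norm(W, axis=1) > tol
    return W[keep], np.where(keep)[0]

# Example: two of four hidden neurons have zero outgoing weights
W = np.array([[0.5, -0.2],
              [0.0,  0.0],
              [1.0,  0.3],
              [0.0,  0.0]])
penalty = group_lasso_penalty(W)
W_pruned, kept = prune_neurons(W)
```

In training, adding such a penalty to the loss is what pushes entire rows toward zero; pruning is then a post-processing step.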
 
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

ML Hardware: Spintronic nano-devices for bio-inspired computing / Precise deep neural network computation on imprecise low-power analog hardware

Two ML-related hardware papers today, woohoo !
 


Spintronic nano-devices for bio-inspired computing  by Julie Grollier, Damien Querlioz, Mark D. Stiles
Bio-inspired hardware holds the promise of low-energy, intelligent and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for bio-medical prosthesis. However, one of the major challenges of fabricating bio-inspired hardware is building ultra-high density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bio-inspired computing include tunable fast non-linear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nano-devices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bio-inspired architectures that include one or several types of spintronic nanodevices. In this article we show how spintronics can be used for bio-inspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges towards fully integrated spintronics-CMOS (Complementary metal - oxide - semiconductor) bio-inspired hardware.


Precise deep neural network computation on imprecise low-power analog hardware by Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer

There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-the-art artificial intelligence. Here we propose a power-efficient approach for real-time inference, in which deep neural networks (DNNs) are implemented through low-power analog circuits. Although analog implementations can be extremely compact, they have been largely supplanted by digital designs, partly because of device mismatch effects due to fabrication. We propose a framework that exploits the power of Deep Learning to compensate for this mismatch by incorporating the measured variations of the devices as constraints in the DNN training process. This eliminates the use of mismatch minimization strategies such as the use of very large transistors, and allows circuit complexity and power-consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation indicate at least a 3-fold improvement of processing efficiency over current digital implementations.
 
 

Literature survey on low rank approximation of matrices



Low rank approximation of matrices has been well studied in the literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), interpolative decomposition, etc. are classical deterministic algorithms for low rank approximation. But these techniques are very expensive (O(n^3) operations are required for an n x n matrix). There are several randomized algorithms available in the literature which are not as expensive as the classical techniques (but the complexity is still not linear in n). So it is very expensive to construct the low rank approximation of a matrix if the dimension of the matrix is very large. There are alternative techniques like Cross/Skeleton approximation which give a low-rank approximation with linear complexity in n. In this article we review low rank approximation techniques briefly and give extensive references to many techniques.
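As an illustration of the randomized family the survey covers (this sketch is mine, in the style of the Halko-Martinsson-Tropp algorithm, not code from the article): a rank-k approximation via a random projection costs roughly O(mnk) instead of O(n^3).

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Rank-k approximation via random projection: sketch the range of A
    with a Gaussian test matrix, then SVD the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                           # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# On an exactly rank-5 matrix the factorization is essentially exact
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For matrices with a decaying (rather than exact) spectrum, the oversampling parameter and an optional power iteration control the accuracy.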


This image was taken by Navcam: Right B (NAV_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 1385 (2016-06-29 06:31:23 UTC). 

Image Credit: NASA/JPL-Caltech 



Wednesday, June 29, 2016

FMR: Fast randomized algorithms for covariance matrix computations - implementation -

The full poster is here.

from the poster: 

Sources are available online as part of the open-source package FMR. They can be downloaded for free at the following address https://gforge.inria.fr/projects/fmr 

Dependencies
FMR relies on:
  • ScalFMM [1] for performing fast multipole matrix multiplication in parallel (in shared and distributed memory) 
  • MKL for dense linear algebra and FFT 
  • Scotch or CClusteringLib for partitioning 

Features
The package provides: 
  • routines for generating Gaussian Random Fields based on
    • standard LRA: Cholesky decomposition, SVD or FFT for regular grids 
    • randomized LRA: RandSVD and the Nyström method with uniform or leverage-score-based sampling
  • a variety of correlation kernels: Matérn, spherical model, Oseen-Gauss
  • a Python interface for MDS using randomized SVD or Nyström
  • a Matlab interface for Ensemble Kalman Filtering
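The Nyström approximation listed among the randomized LRA routines can be sketched in a few lines. This is a generic uniform-sampling version (my illustration, not FMR's actual implementation): approximate a kernel matrix from m landmark columns as K ≈ C W⁺ Cᵀ.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Dense Gaussian correlation kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom(kernel_fn, X, m, seed=0):
    """Nystrom low-rank approximation using m uniformly sampled landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel_fn(X, X[idx])          # n x m cross-kernel block
    W = kernel_fn(X[idx], X[idx])     # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(2).standard_normal((100, 3))
K = gaussian_kernel(X, X)
K_approx = nystrom(gaussian_kernel, X, m=50)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Leverage-score sampling, as mentioned in the feature list, replaces the uniform `rng.choice` with sampling proportional to statistical leverage.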





"international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST)




Thomas just sent me the following:

Hi Igor
Could I persuade you to put the following iTWIST workshop appetizer on Nuit Blanche? Interested participants still have a couple of days to register at 200€ before the fee goes up on Friday.
Best,
Thomas


Hell yes, you can persuade me ! Here it is:

iTWIST’16 - INVITATION
The third edition of the "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is coming to Aalborg, the beautiful main city of Northern Jutland in Denmark. The workshop will be located on the campus of Aalborg University with excellent connections to the city center and less than 20 minutes from Aalborg’s international airport. The workshop aims to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For this edition, iTWIST’16 features 9 invited talks, 10 oral paper presentations, and 12 posters.


See full iTWIST’16 program at: http://itwist16.es.aau.dk
Dates: 24th to 26th of August, 2016.

INVITED SPEAKERS

For this edition, the workshop is honored by the participation of the following renowned speakers:

  • Lieven Vandenberghe (UCLA, USA)
    Semidefinite programming methods for continuous sparse optimization
  • Karin Schnass (Innsbruck U., Austria)
    Sparsity, Co-sparsity and Learning
  • Phil Schniter (Ohio State U., USA)
    Bilinear estimation via approximate message passing
  • Rachel Ward (Texas U., Austin, USA)
    Extracting governing equations in chaotic systems from highly corrupted data
  • Florent Krzakala (ENS, France)
    Approximate Message Passing and Low Rank Matrix Factorization Problems
  • Holger Rauhut (RWTH, Germany)
    Low rank tensor recovery
  • Gerhard Wunder (TU-Berlin, Germany)
    Compressive Coded Random Access for 5G Massive Machine-type Communication
  • Petros Boufounos (MERL, USA)
    On representing the right information: embeddings, quantization, and distributed coding
  • Bogdan Roman (Cambridge U., UK)
    Resolution enhancement and targeted sampling in physical imaging

IMPORTANT DATES

Early-bird registration fee: 200€ until 30th of June. Ordinary fee: 250€ from 1st of July.

Aug. 1, 2016 .......................... Registration closes
Aug. 24 – 26, 2016 .................... Workshop (3 days)

LOCAL AND SCIENTIFIC ORGANISING COMMITTEES



SCIENTIFIC ORGANISING COMMITTEE: Thomas Arildsen (AAU, Denmark), Morten Nielsen (AAU, Denmark), Laurent Jacques (UCL, Belgium), Pascal Frossard (EPFL, Switzerland), Pierre Vandergheynst (EPFL, Switzerland), Sandrine Anthoine (I2M, Aix-Marseille U., France), Yannick Boursier (CPPM, Aix-Marseille U., France), Aleksandra Pizurica (Ghent U., Belgium), Christine De Mol (ULB, Belgium), Christophe De Vleeschouwer (UCL, Belgium).
LOCAL ORGANISING COMMITTEE:  Thomas Arildsen (AAU, Denmark)  Morten Nielsen (AAU, Denmark)







Tuesday, June 28, 2016

Thesis: Rich and Efficient Visual Data Representation, Mohammad Rastegari

Here is a new thesis, congratulations Dr. Rastegari !



Rich and Efficient Visual Data Representation by Mohammad Rastegari


Increasing the size of training data in many computer vision tasks has shown to be very effective. Using large scale image datasets (e.g. ImageNet) with simple learning techniques (e.g. linear classifiers) one can achieve state-of-the-art performance in object recognition compared to sophisticated learning techniques on smaller image sets. Semantic search on visual data has become very popular. There are billions of images on the internet and the number is increasing every day. Dealing with large scale image sets is intense per se. They take a significant amount of memory that makes it impossible to process the images with complex algorithms on single CPU machines. Finding an efficient image representation can be a key to attack this problem. A representation being efficient is not enough for image understanding. It should be comprehensive and rich in carrying semantic information. In this proposal we develop an approach to computing binary codes that provide a rich and efficient image representation. We demonstrate several tasks in which binary features can be very effective. We show how binary features can speed up large scale image classification. We present learning techniques to learn the binary features from supervised image sets (with different types of semantic supervision: class labels, textual descriptions). We propose several problems that are very important in finding and using efficient image representations.




Monday, June 27, 2016

LightOn. And so it begins.



 We are live, our name is LightOn and we are at LightOn.io

 You can follow us on Twitter, we have a LinkedIn page and you can also sign up for our newsletter.

You can also follow the LightOn tags on Nuit Blanche 
 

Learning Infinite-Layer Networks: Beyond the Kernel Trick

Infinite-Layer Networks, I like the sound of that.


Learning Infinite-Layer Networks: Beyond the Kernel Trick by Amir Globerson, Roi Livni

Infinite-Layer Networks (ILN) have recently been proposed as an architecture that mimics neural networks while enjoying some of the advantages of kernel methods. ILN are networks that integrate over infinitely many nodes within a single hidden layer. It has been demonstrated by several authors that the problem of learning ILN can be reduced to the kernel trick, implying that whenever a certain integral can be computed analytically they are efficiently learnable.
In this work we give an online algorithm for ILN, which avoids the kernel trick assumption. More generally and of independent interest, we show that kernel methods in general can be exploited even when the kernel cannot be efficiently computed but can only be estimated via sampling.
We provide a regret analysis for our algorithm, showing that it matches the sample complexity of methods which have access to kernel values. Thus, our method is the first to demonstrate that the kernel trick is not necessary as such, and random features suffice to obtain comparable performance.
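The "random features suffice" punchline can be illustrated with the classic random Fourier features approximation of a Gaussian kernel (a Rahimi-Recht-style sketch of my own, not the paper's algorithm): inner products of sampled cosine features estimate the kernel value without ever computing the integral in closed form.

```python
import numpy as np

def random_fourier_features(X, D, sigma=1.0, seed=0):
    """Map X to D random cosine features so that z(x).z(y) approximates
    the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, D)) / sigma   # frequencies from the kernel's spectral density
    b = rng.uniform(0, 2 * np.pi, D)          # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4))
Z = random_fourier_features(X, D=5000)
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
K_hat = Z @ Z.T                               # Monte Carlo estimate of the kernel matrix
max_err = np.abs(K_exact - K_hat).max()
```

The estimation error decays like O(1/sqrt(D)), which is the flavor of guarantee that a regret analysis over sampled kernel values has to absorb.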

 

Sunday, June 26, 2016

Five million page views: The Numbers Game in Long Distance Blogging



The Long Distance Blogging continues. Five million page views roughly amounts to about a million page views per year. Here are the historical figures:
Page views are one thing, here are the figures for unique visits.


Here are some other numbers:
Nuit Blanche community:
Compressive Sensing @LinkedIn (3604)
Advanced Matrix Factorization @Linkedin (1157)

Paris Machine Learning 

Saturday, June 25, 2016

Saturday Morning Video: Machine Learning in Computational Biology Workshop, @NIPS2015

 
 
 
The NIPS2015 Workshops videos are out. In particular, we have that of the Machine Learning in Computational Biology workshop today (I am trying to organize a workshop for this coming NIPS and if it is accepted I'll make sure that all the videos are taken). Enjoy !

Credit photo: Date: 24 June 2016, Satellite: Rosetta, Depicts: Comet 67P/Churyumov-Gerasimenko
Copyright: ESA/Rosetta/NAVCAM, CC BY-SA IGO 3.0

Rosetta navigation camera (NavCam) image taken on 17 June 2016 at 30.8 km from the centre of comet 67P/Churyumov-Gerasimenko. The image measures 2.7 km across and has a scale of about 2.6 m/pixel.
The image has been cleaned to remove the more obvious bad pixels and cosmic ray artefacts, and intensities have been scaled.
Another version of this image, which has been contrast enhanced, is available here.
More images of comet 67P/Churyumov-Gerasimenko can be found in the '67P - by Rosetta' collection.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 IGO License.
 

Friday, June 24, 2016

RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision - implementation -

 
The paper is: RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision by Robert LiKamWa, Yunhui Hou, Yuan Gao, Mia Polansky, Lin Zhong.

Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features. This is largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. This results in RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye reports an 85% reduction in sensor energy, 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.
 
The RedEye repository is at: https://github.com/JulianYG/redeye_sim and features the following:
RedEye is a vision sensor designed to execute early stages of a deep convolutional neural network (ConvNet) in the analog domain. This repo is a modification of Caffe to train, simulate and visualize analog ConvNet processing under noise vs. energy tradeoffs.
 

Compressive light-field microscopy for 3D neural activity recording

Imaging Human Learning thanks to compressive sensing. Yes, devious reader of the blog, I see you nodding on how meta that paper could be framed. It's not a sensor that looks at something in the brain, it's a sensor that decodes a scene using an algorithm that somehow parallels elements of the algorithms being imaged.  That's a different way of looking at The Great Convergence. Without further ado:

Understanding the mechanisms of perception, cognition, and behavior requires instruments that are capable of recording and controlling the electrical activity of many neurons simultaneously and at high speeds. All-optical approaches are particularly promising since they are minimally invasive and potentially scalable to experiments interrogating thousands or millions of neurons. Conventional light-field microscopy provides a single-shot 3D fluorescence capture method with good light efficiency and fast speed, but suffers from low spatial resolution and significant image degradation due to scattering in deep layers of brain tissue. Here, we propose a new compressive light-field microscopy method to address both problems, offering a path toward measurement of individual neuron activity across large volumes of tissue. The technique relies on spatial and temporal sparsity of fluorescence signals, allowing one to identify and localize each neuron in a 3D volume, with scattering and aberration effects naturally included and without ever reconstructing a volume image. Experimental results on live zebrafish track the activity of an estimated 800+ neural structures at 100 Hz sampling rate.
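The spatial-sparsity machinery the abstract leans on can be illustrated with a generic greedy sparse solver. Below is a minimal orthogonal matching pursuit sketch (my illustration, unrelated to the paper's actual reconstruction code): given compressive measurements y = Ax of a sparse signal, greedily identify the few active components.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A that
    best explain y, re-fitting coefficients by least squares each step."""
    r, S = y.copy(), []
    x_S = None
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated column
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ x_S                       # update residual
    x = np.zeros(A.shape[1])
    x[S] = x_S
    return x

# 3 active "neurons" out of 200 unknowns, recovered from 100 measurements
rng = np.random.default_rng(6)
A = rng.standard_normal((100, 200)) / 10.0          # roughly unit-norm columns
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [2.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

In the microscopy setting the "columns" would be measured per-neuron light-field footprints rather than random vectors, which is how scattering and aberration get folded in for free.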


Related:






Thursday, June 23, 2016

Highly Technical Reference Page: Laplacian Linear Equations, Graph Sparsification, Local Clustering, Low-Stretch Trees, etc. + implementation

Here is a new Highly Technical Reference Page entitled Laplacian Linear Equations, Graph Sparsification, Local Clustering, Low-Stretch Trees, etc. by Dan Spielman. Thanks to Rich Seymour's tweet, here is a release of Laplacians.jl, a package built by Dan and collaborators. From the page:
Laplacians is a package containing graph algorithms, with an emphasis on tasks related to spectral and algebraic graph theory. It contains (and will contain more) code for solving systems of linear equations in graph Laplacians, low stretch spanning trees, sparsification, clustering, local clustering, and optimization on graphs.
All graphs are represented by sparse adjacency matrices. This is both for speed, and because our main concerns are algebraic tasks. It does not handle dynamic graphs. It would be very slow to implement dynamic graphs this way.
The documentation may be found in http://danspielman.github.io/Laplacians.jl/about/index.html.
This includes instructions for installing Julia, and some tips for how to start using it. It also includes guidelines for Dan Spielman's collaborators.
For some examples of some of the things you can do with Laplacians, look at
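As a toy analogue of the core task, solving a system in a graph Laplacian (the package itself is Julia; this is a minimal Python sketch of the concept, not Laplacians.jl's API):

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# Path graph on 4 nodes: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = laplacian(A)

# L is singular (constant nullspace), so solve Lx = b for b summing to
# zero via least squares; Laplacian solvers work modulo that nullspace.
b = np.array([1.0, 0.0, 0.0, -1.0])   # inject unit current at node 0, extract at node 3
x, *_ = np.linalg.lstsq(L, b, rcond=None)
residual = np.linalg.norm(L @ x - b)
```

The potential difference x[0] - x[3] equals the effective resistance between the endpoints (3 here, three unit edges in series), the kind of quantity fast Laplacian solvers make cheap on huge graphs.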





Slides, papers some videos: ICML, CVPR, NIPS





As you all know, the proceedings for NIPS 2016 are here. At CVPR, Yann LeCun will present the following: What's Wrong With Deep Learning?. He also posted a video of Larry Jackel at the "Back to the Future" workshop at ICML.
All the papers at ICML are here. You can also follow Hugo Larochelle who streams some talks through periscope.

All the ICML tutorials can be found here. They include Causal Inference for Policy Evaluation by Susan Athey, but also the following:

Deep Reinforcement Learning

David Silver (Google DeepMind)

A major goal of artificial intelligence is to create general-purpose agents that can perform effectively in a wide range of challenging tasks. To achieve this goal, it is necessary to combine reinforcement learning (RL) agents with powerful and flexible representations. The key idea of deep RL is to use neural networks to provide this representational power. In this tutorial we will present a family of algorithms in which deep neural networks are used for value functions, policies, or environment models. State-of-the-art results will be presented in a variety of domains, including Atari games, 3D navigation tasks, continuous control domains and the game of Go.
[slides1] [slides2]

Memory Networks for Language Understanding

Jason Weston (Facebook)

There has been a recent resurgence in interest in the use of the combination of reasoning, attention and memory for solving tasks, particularly in the field of language understanding. I will review some of these recent efforts, as well as focusing on one of my own group's contributions, memory networks, an architecture that we have applied to question answering, language modeling and general dialog. As we try to move towards the goal of true language understanding, I will also discuss recent datasets and tests that have been built to assess these models' abilities to see how far we have come.

Deep Residual Networks: Deep Learning Gets Way Deeper

Kaiming He (Facebook, starting July, 2016)

Deeper neural networks are more difficult to train. Beyond a certain depth, traditional deeper networks start to show severe underfitting caused by optimization difficulties. This tutorial will describe the recently developed residual learning framework, which eases the training of networks that are substantially deeper than those used previously. These residual networks are easier to converge, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with depth of up to 152 layers—8x deeper than VGG nets but still having lower complexity. These deep residual networks are the foundations of our 1st-place winning entries in all five main tracks in ImageNet and COCO 2015 competitions, which cover image classification, object detection, and semantic segmentation.
In this tutorial we will further look into the propagation formulations of residual networks. Our latest work reveals that when the residual networks have identity mappings as skip connections and inter-block activations, the forward and backward signals can be directly propagated from one block to any other block. This leads us to promising results of 1001-layer residual networks. Our work suggests that there is much room to exploit the dimension of network depth, a key to the success of modern deep learning.
[slides]
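The identity-mapping formulation the tutorial describes is simply x_{l+1} = x_l + F(x_l); a minimal sketch (my illustration, not the tutorial's code) makes the "signals propagate directly" point concrete:

```python
import numpy as np

def residual_block(x, W1, W2):
    """Residual block: x + F(x) with F(x) = W2 @ relu(W1 @ x).
    The identity skip connection lets forward and backward signals
    pass from one block to any other unimpeded."""
    return x + W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(4)
d = 8
x = rng.standard_normal(d)

# With zero residual weights the block is an exact identity mapping,
# which is why a very deep stack of such blocks remains trainable.
out_identity = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
out_learned = residual_block(x, 0.1 * rng.standard_normal((d, d)),
                                0.1 * rng.standard_normal((d, d)))
```

Stacking hundreds of these blocks composes small perturbations of the identity rather than products of arbitrary matrices, which is the intuition behind the 1001-layer result.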

Recent Advances in Non-Convex Optimization

Anima Anandkumar (University of California Irvine)

Most machine learning tasks require solving non-convex optimization.  The number of critical points in a non-convex problem grows exponentially with the data dimension. Local search methods such as gradient descent can  get stuck in one of these critical points, and therefore, finding the globally optimal solution is computationally hard. Despite this hardness barrier, we have seen many advances in guaranteed non-convex optimization.  The focus has shifted to characterizing transparent conditions under which the global solution can be found efficiently. In many instances, these conditions turn out to be mild and natural for machine learning applications. This tutorial will provide an overview of the recent theoretical success stories in non-convex optimization. This includes learning latent variable models, dictionary learning, robust principal component analysis, and so on. Simple iterative methods such as spectral methods, alternating projections, and so on, are proven to learn consistent models with polynomial sample and computational complexity.  This tutorial will present main ingredients towards establishing these results. The tutorial with conclude  with open challenges and possible path towards tackling them.

Stochastic Gradient Methods for Large-Scale Machine Learning

Leon Bottou (Facebook AI Research), Frank E. Curtis (Lehigh University), and Jorge Nocedal (Northwestern University)

This tutorial provides an accessible introduction to the mathematical properties of stochastic gradient methods and their consequences for large scale machine learning.  After reviewing the computational needs for solving optimization problems in two typical examples of large scale machine learning, namely, the training of sparse linear classifiers and deep neural networks, we present the theory of the simple, yet versatile stochastic gradient algorithm, explain its theoretical and practical behavior, and expose the opportunities available for designing improved algorithms.  We then provide specific examples of advanced algorithms to illustrate the two essential directions for improving stochastic gradient methods, namely, managing the noise and making use of second order information.
[slides1] [slides2] [slides3]
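The basic algorithm the tutorial analyzes can be sketched in a few lines (my illustration, assuming a simple least-squares objective): one noisy gradient from a single random sample per step, rather than a full pass over the data.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.05, epochs=100, seed=0):
    """Plain stochastic gradient descent on 0.5 * (w.x_i - y_i)^2,
    using one randomly ordered sample per update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]   # single-sample (noisy) gradient
            w -= lr * grad
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                                # noiseless, so SGD can hit w_true exactly
w_hat = sgd_linear_regression(X, y)
```

The tutorial's two improvement directions map directly onto this loop: "managing the noise" means averaging or reducing the variance of `grad`, and "second order information" means preconditioning the update.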

The convex optimization, game-theoretic approach to learning

Elad Hazan (Princeton University) and Satyen Kale (Yahoo Research)

In recent years convex optimization and the notion of regret minimization in games have been combined and applied to machine learning in a general framework called online convex optimization. We will survey the basics of this framework, its applications, main algorithmic techniques and future research directions.

Rigorous Data Dredging: Theory and Tools for Adaptive Data Analysis

Moritz Hardt (Google) and Aaron Roth (University of Pennsylvania)

Reliable tools for inference and model selection are necessary in all applications of machine learning and statistics. Much of the existing theory breaks down in the now common situation where the data analyst works interactively with the data, adaptively choosing which methods to use by probing the same data many times. We illustrate the problem through the lens of machine learning benchmarks, which currently all rely on the standard holdout method. After understanding why and when the standard holdout method fails, we will see practical alternatives to the holdout method that can be used many times without losing the guarantees of fresh data. We then transition into the emerging theory on this topic touching on deep connections to differential privacy, compression schemes, and hypothesis testing (although no prior knowledge will be assumed).

Graph Sketching, Streaming, and Space-Efficient Optimization

Sudipto Guha (University of Pennsylvania) and Andrew McGregor (University of Massachusetts Amherst)

Graphs are one of the most commonly used data representation tools, but existing algorithmic approaches are typically not appropriate when the graphs of interest are dynamic, stochastic, or do not fit into the memory of a single machine. Such graphs are often encountered as machine learning techniques are increasingly deployed to manage graph data and large-scale graph optimization problems. Graph sketching is a form of dimensionality reduction for graph data that is based on using random linear projections and exploiting connections between linear algebra and combinatorial structure. The technique has been studied extensively over the last five years and can be applied in many computational settings. It enables small-space online and data stream computation where we are permitted only a few passes (ideally only one) over an input sequence of updates to a large underlying graph. The technique parallelizes easily and can naturally be applied in various distributed settings. It can also be used in the context of convex programming to enable more efficient algorithms for combinatorial optimization problems such as correlation clustering. One of the main goals of the research on graph sketching is understanding and characterizing the types of graph structure and features that can be inferred from compressed representations of the relevant graphs.
[slides1] [slides2]
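The linearity that makes this work can be shown in a few lines: encode a graph as a vector of edge multiplicities, and a random linear projection of that vector can be updated one edge insertion or deletion at a time and merged across machines by simple addition. This is a toy sketch of the principle (the projection and sizes here are illustrative, not a specific algorithm from the tutorial).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                  # vertices
m = n * (n - 1) // 2                   # possible undirected edges
S = rng.standard_normal((8, m))        # one shared random projection

def edge_index(u, v):
    """Index of edge {u, v} in the flattened edge vector."""
    u, v = min(u, v), max(u, v)
    return u * n - u * (u + 1) // 2 + (v - u - 1)

def sketch(updates):
    """Sketch a stream of (u, v, +1/-1) edge updates."""
    x = np.zeros(m)
    for u, v, delta in updates:
        x[edge_index(u, v)] += delta
    return S @ x

stream_a = [(0, 1, +1), (1, 2, +1)]
stream_b = [(2, 3, +1), (1, 2, -1)]           # deletions are fine too
merged = sketch(stream_a) + sketch(stream_b)  # linearity: merge by adding
assert np.allclose(merged, sketch(stream_a + stream_b))
```

Real graph sketches replace the dense Gaussian projection with structured ones (e.g. ℓ0-samplers) so that connectivity and other combinatorial structure can be recovered from the compressed representation.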

Causal inference for observational studies

David Sontag and Uri Shalit (New York University)

In many fields such as healthcare, education, and economics, policy makers have increasing amounts of data at their disposal. Making policy decisions based on this data often involves causal questions: Does medication X lead to lower blood sugar, compared with medication Y? Does longer maternity leave lead to better child social and cognitive skills? These questions have to be addressed in practice, every day, by scientists working across many different disciplines.
The goal of this tutorial is to bring machine learning practitioners closer to the vast field of causal inference as practiced by statisticians, epidemiologists and economists. We believe that machine learning has much to contribute in helping answer such questions, especially given the massive growth in the available data and its complexity. We also believe the machine learning community could and should be highly interested in engaging with such problems, considering the great impact they have on society in general.
We hope that participants in the tutorial will: a) learn the basic language of causal inference as exemplified by the two most dominant paradigms today: the potential outcomes framework, and causal graphs; b) understand the similarities and the differences between problems machine learning practitioners usually face and problems of causal inference; c) become familiar with the basic tools employed by practicing scientists performing causal inference, and d) be informed about the latest research efforts in bringing machine learning techniques to address problems of causal inference.
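The gap between the questions above and standard supervised learning can be seen in a small potential-outcomes simulation: a naive treated-vs-untreated comparison is biased by confounding, while inverse-propensity weighting (one of the basic tools the tutorial covers) recovers the true effect. The data and numbers below are entirely synthetic, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
severity = rng.uniform(size=n)                  # confounder
treated = rng.uniform(size=n) < 0.2 + 0.6 * severity   # sicker patients treated more
y0 = 1.0 - severity + rng.normal(0, 0.1, n)     # outcome without treatment
y1 = y0 + 0.5                                   # true effect is +0.5 for everyone
y = np.where(treated, y1, y0)                   # only one outcome is observed

# Naive comparison is confounded: treated patients were sicker to begin with.
naive = y[treated].mean() - y[~treated].mean()

# Inverse-propensity weighting with the (here known) propensity score.
e = 0.2 + 0.6 * severity
ipw = np.mean(treated * y / e) - np.mean((~treated) * y / (1 - e))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}, truth: 0.50")
```

In practice the propensity score is unknown and must itself be estimated, which is exactly where machine learning enters the causal-inference picture.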


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, June 21, 2016

Around The Blogs in 78 Summer hours


Andrew mentioned on his twitter feed that if one wants to see his upcoming book, one simply has to register at


While ICML is underway, here are a few noteworthy blog posts:
Ben
Sanjeev
Fabian
Tomasz
Charles


John

Pip
Bob
Anand
Timothy

Dustin

Suresh
Mike
Muthu
Laurent

Igor


This image was taken by Rear Hazcam: Right B (RHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 1377 (2016-06-20 21:46:58 UTC).

Image Credit: NASA/JPL-Caltech




Monday, June 20, 2016

CVPR papers are out !



The full set of CVPR papers is out and viewable here; here is a sample that caught my attention, enjoy!


