Monday, October 06, 2014

Generalized Low Rank Models - implementations in Julia and Spark

Here is a large installment in the Advanced Matrix Factorization series. The code is also available in Julia and Spark (a sign of things to come?)



Generalized Low Rank Models by Madeleine Udell, Corinne Horn, Reza Zadeh, Stephen Boyd

Principal components analysis (PCA) is a well-known technique for approximating a data set represented by a matrix by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose a number of large scale and parallel algorithms for fitting generalized low rank models which allow us to find low rank approximations to large heterogeneous datasets, and provide two codes that implement these algorithms.
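To make the framework concrete, here is a minimal sketch of its simplest special case: with quadratic loss and no regularization, a generalized low rank model reduces to the familiar problem min ||A - XY||_F^2, which recovers PCA and can be fit by alternating least squares. This is an illustration only, not the authors' code; the paper's contribution is swapping in per-column losses and regularizers for heterogeneous data, which this sketch omits.

```python
import numpy as np

def glrm_quadratic(A, k, iters=50):
    """Fit min ||A - X Y||_F^2 over X (m x k), Y (k x n) by
    alternating least squares -- the quadratic-loss GLRM."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    for _ in range(iters):
        # With X fixed, each column of Y solves a least-squares problem;
        # then the same with Y fixed for the rows of X.
        Y = np.linalg.lstsq(X, A, rcond=None)[0]
        X = np.linalg.lstsq(Y.T, A.T, rcond=None)[0].T
    return X, Y

# A rank-1 matrix is recovered exactly (up to numerical error) with k = 1.
A = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
X, Y = glrm_quadratic(A, k=1)
err = np.linalg.norm(A - X @ Y)
```

In the general model described in the abstract, the quadratic loss above would be replaced column-by-column (e.g. hinge loss for Boolean columns, ordinal losses for ordinal columns), with regularizers on X and Y; the alternating-minimization structure stays the same.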

    Implementations in Julia and Spark are available from the authors.


      h/t Giuseppe for the reminder.
       
      Join the CompressiveSensing subreddit or the Google+ Community and post there !
      Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
