Monday, June 29, 2015

Thesis: Learning in High Dimensions with Projected Linear Discriminants by Robert Durrant

 
The enormous power of modern computers has made possible the statistical modelling of data with dimensionality that would have made this task inconceivable only decades ago. However, experience in such modelling has made researchers aware of many issues associated with working in high-dimensional domains, collectively known as "the curse of dimensionality", which can confound practitioners' desires to build good models of the world from these data. When the dimensionality is very large, low-dimensional methods and geometric intuition both break down in these high-dimensional spaces. To mitigate the dimensionality curse we can use low-dimensional representations of the original data that capture most of the information it contained. However, little is currently known about the effect of such dimensionality reduction on classifier performance. In this thesis we develop theory quantifying the effect of random projection (a recent, very promising, non-adaptive dimensionality reduction technique) on the classification performance of Fisher's Linear Discriminant (FLD), a successful and widely-used linear classifier. We tackle the issues associated with small sample size and high dimensionality by using randomly projected FLD ensembles, and we develop theory explaining why our new approach performs well. Finally, we quantify the generalization error of Kernel FLD, a related non-linear projected classifier.
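For readers who want to see the core idea in code, here is a minimal Python sketch of a randomly projected FLD ensemble: project the high-dimensional data down with Gaussian random matrices, fit an FLD in each low-dimensional space, and combine the members' class-probability estimates by averaging. The function names, parameter values, and the averaging rule are illustrative assumptions, not the exact construction analysed in the thesis.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def fit_rp_fld_ensemble(X, y, k=20, n_members=10):
    """Fit n_members FLDs, each in its own k-dimensional random projection."""
    d = X.shape[1]
    members = []
    for _ in range(n_members):
        # Gaussian random projection matrix mapping d -> k dimensions,
        # scaled so that norms are approximately preserved on average.
        R = rng.standard_normal((d, k)) / np.sqrt(k)
        clf = LinearDiscriminantAnalysis().fit(X @ R, y)
        members.append((R, clf))
    return members

def predict_rp_fld_ensemble(members, X):
    """Average the members' class-probability estimates, then take argmax."""
    probs = np.mean([clf.predict_proba(X @ R) for R, clf in members], axis=0)
    return members[0][1].classes_[np.argmax(probs, axis=1)]

# Toy usage: 500 points in d = 1000 dimensions, two classes separated
# along the first coordinate (purely synthetic data for illustration).
X = rng.standard_normal((500, 1000))
y = (X[:, 0] + 0.5 * rng.standard_normal(500) > 0).astype(int)
members = fit_rp_fld_ensemble(X, y, k=20, n_members=15)
acc = np.mean(predict_rp_fld_ensemble(members, X) == y)
print(f"training accuracy: {acc:.2f}")

Note that each member sees only a k-dimensional view of the data, so the within-class covariance it must invert is k-by-k rather than d-by-d, which is exactly what makes the approach attractive in the small-sample, high-dimensional regime the abstract describes.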
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
