Wednesday, November 19, 2014

Sparse Reinforcement Learning via Convex Optimization

How do you learn policies when you have too many features?


Sparse Reinforcement Learning via Convex Optimization by Zhiwei Qin, Weichang Li

We propose two new algorithms for the sparse reinforcement learning problem based on different formulations. The first algorithm is an offline method based on the alternating direction method of multipliers (ADMM) for solving a constrained formulation that explicitly controls the projected Bellman residual. The second algorithm is an online stochastic approximation algorithm that employs the regularized dual averaging (RDA) technique, using the Lagrangian formulation. The convergence of both algorithms is established. We demonstrate the performance of these algorithms through several classical examples.
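To give a feel for the first, offline algorithm, here is a minimal NumPy sketch that solves an l1-regularized least-squares version of the sample-based LSTD fixed-point system with the standard ADMM lasso splitting. This is not the paper's exact constrained formulation on the projected Bellman residual; the function names, matrices and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding, the prox operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def sparse_lstd_admm(Phi, Phi_next, rewards, gamma=0.95, lam=0.1,
                     rho=1.0, n_iters=200):
    """ADMM sketch for an l1-regularized LSTD-style problem:
        min_w 0.5 * ||A w - b||^2 + lam * ||w||_1,
    where A = Phi^T (Phi - gamma * Phi_next) and b = Phi^T r are the
    usual sample-based LSTD quantities (a lasso surrogate, not the
    paper's constrained projected-Bellman-residual formulation).
    """
    A = Phi.T @ (Phi - gamma * Phi_next)
    b = Phi.T @ rewards
    d = A.shape[1]
    M = A.T @ A + rho * np.eye(d)   # system matrix for the w-update
    Atb = A.T @ b
    w = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    for _ in range(n_iters):
        w = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic step
        z = soft_threshold(w + u, lam / rho)         # l1 prox step
        u = u + w - z                                # dual update
    return z  # z carries the exact zeros
```

In the same hedged spirit, the online algorithm can be caricatured by feeding TD(0) semi-gradients into Xiao's l1-regularized dual averaging update, which averages the gradients seen so far and zeroes out any coordinate whose average magnitude stays below the regularization level. The paper instead applies RDA to the Lagrangian of its constrained formulation; everything below is an assumed simplification.

```python
def sparse_td_rda(transitions, d, gamma=0.95, lam=0.05, beta=1.0):
    """Online sketch: TD(0) semi-gradients plugged into l1-RDA.
    `transitions` yields (phi, r, phi_next) tuples of feature vectors
    and rewards; `beta` scales the proximal term beta * sqrt(t)."""
    w = np.zeros(d)
    g_bar = np.zeros(d)                             # running gradient average
    for t, (phi, r, phi_next) in enumerate(transitions, start=1):
        delta = r + gamma * phi_next @ w - phi @ w  # TD error
        g = -delta * phi                            # TD semi-gradient
        g_bar += (g - g_bar) / t                    # update the average
        # Closed-form l1-RDA step: coordinates with |g_bar| <= lam stay zero.
        w = -(np.sqrt(t) / beta) * soft_threshold(g_bar, lam)
    return w
```

On toy data with many irrelevant features, both sketches return weight vectors with exact zeros on the inactive coordinates, which is the behavior the sparse formulations are after.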




Supplementary material.



N00231074.jpg was taken on November 04, 2014 and received on Earth November 04, 2014. The camera was pointing toward SKY, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated.

Image Credit: NASA/JPL/Space Science Institute



Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

