Public code

I'll put here some code I developed over the years. As is often the case, I did not pay much attention to readability when writing it, and making it clean enough for public use is a long and painful process. Hopefully, more code will be added over time.


This is Mark Schmidt's implementation of SAG, the algorithm described in the paper "A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets".
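For readers who want the gist of SAG before diving into the real implementation, here is a minimal Python sketch (not Mark Schmidt's code) for the special case of least-squares losses; the function name and all parameters are illustrative. The idea is to keep in memory the last gradient computed for each training sample and take a step along the average of the stored gradients:

```python
import numpy as np

def sag_least_squares(X, y, step=0.01, n_epochs=50, seed=0):
    # Hypothetical sketch of SAG on f(w) = (1/n) sum_i 0.5*(x_i . w - y_i)^2.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_memory = np.zeros((n, d))   # last gradient seen for each sample
    grad_sum = np.zeros(d)           # running sum of the stored gradients
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        g = (X[i] @ w - y[i]) * X[i]   # fresh gradient for sample i
        grad_sum += g - grad_memory[i] # replace the stale gradient in the sum
        grad_memory[i] = g
        w -= step * grad_sum / n       # step along the average gradient
    return w
```

Unlike plain SGD, each update uses information from all samples (through the memory), which is what gives SAG its linear convergence rate on strongly convex finite sums, at the cost of O(n d) memory in general.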


This is the code used in the paper Local Component Analysis.


This is fast (or so I hope) and simple (that much I know) Matlab code for training neural networks with any number of hidden layers. No fancy statistics or graphs, just pure training. Since I've been using it quite a lot in recent years, I figured I'd release it.
The demo requires the file mnist_small.mat.
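To illustrate what "any number of hidden layers, just pure training" amounts to, here is a minimal Python/NumPy sketch (not the released Matlab code) of a multilayer network trained by full-batch gradient descent on a squared-error loss; the function names, tanh hidden units, and linear output are all choices made for this sketch:

```python
import numpy as np

def init_mlp(sizes, rng):
    # One (weights, bias) pair per layer; sizes = [n_in, h1, ..., n_out].
    return [(rng.standard_normal((m, n)) * np.sqrt(1.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, X):
    # tanh hidden units, linear output; keep all activations for backprop.
    acts = [X]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def backward(params, acts, y):
    # Gradients of the mean squared error, layer by layer from the top.
    grads = []
    delta = (acts[-1] - y) / len(y)
    for i in range(len(params) - 1, -1, -1):
        W, _ = params[i]
        grads.append((acts[i].T @ delta, delta.sum(0)))
        if i > 0:
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)  # tanh derivative
    return grads[::-1]

def train(params, X, y, lr=0.1, n_steps=500):
    # Plain full-batch gradient descent, updating the weights in place.
    for _ in range(n_steps):
        acts = forward(params, X)
        for (W, b), (gW, gb) in zip(params, backward(params, acts, y)):
            W -= lr * gW
            b -= lr * gb
    return params
```

Since the layers are just a list of (weights, bias) pairs, adding hidden layers is a matter of passing a longer `sizes` list; nothing else changes.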