Online passive-aggressive algorithms
One of my favourite papers is the one on online passive-aggressive algorithms by Koby Crammer et al. The paper presents a set of algorithms that learn classifiers and regressors with a simple, closed-form update. The underlying intuition is that when a prediction is good enough, the model is not updated (i.e. it is passive). When the prediction is not good enough, the model is aggressively updated so as not to make the same mistake again. This is done after every data point. Extensions are presented for multi-class prediction, one-class prediction and structured prediction.
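To give a flavour of how simple the closed-form update is, here is a minimal sketch of the binary-classification variant (the PA-I update with an aggressiveness parameter `C`), written in NumPy. The function name and toy data are my own; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One PA-I step for binary classification; y must be -1 or +1."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))  # hinge loss on this example
    if loss == 0.0:
        return w  # passive: the margin is already satisfied
    tau = min(C, loss / np.dot(x, x))  # closed-form step size (PA-I)
    return w + tau * y * x  # aggressive: move w to fix the mistake

# toy usage: a single update from a zero weight vector
w = pa_update(np.zeros(2), np.array([1.0, 2.0]), 1)
```

After the update, the example is predicted with a margin of at least one (unless capped by `C`), which is exactly what the "aggressive" step is designed to achieve.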
What I like so much about the paper is that it is very clear about the objective of the algorithm, and that the derivation of the update is straightforward to follow. It results in a set of algorithms (with regret bounds!) that are very easy to implement, even in low-level programming...