FastML

Machine learning made easy

Adversarial validation, part one

Many data science competitions suffer from the test set being markedly different from the training set (a violation of the “identically distributed” assumption). It is then difficult to make a representative validation set. We propose a method for selecting the training examples most similar to the test examples and using them as a validation set. The core of the idea is training a probabilistic classifier to distinguish training examples from test examples.

In part one, we inspect the ideal case: training and test examples coming from the same distribution, so that the validation error should give a good estimate of the test error and the classifier should generalize well to unseen test examples.
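Here is a minimal sketch of the core idea in Python with scikit-learn, on synthetic stand-in data (the toy DataFrames and the cutoff of 200 rows are our illustrative assumptions, not taken from the post):

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    # Toy stand-ins: in practice these are the competition's train/test features
    rng = np.random.default_rng(0)
    train = pd.DataFrame(rng.normal(size=(1000, 5)))
    test = pd.DataFrame(rng.normal(size=(500, 5)))

    # Label each row by its origin: 0 = train, 1 = test
    X = pd.concat([train, test], ignore_index=True)
    y = np.concatenate([np.zeros(len(train)), np.ones(len(test))])

    # A probabilistic classifier tries to tell the two sets apart
    clf = LogisticRegression(max_iter=1000)
    p = cross_val_predict(clf, X, y, cv=5, method='predict_proba')[:, 1]
    print('train/test AUC:', roc_auc_score(y, p))  # ~0.5 in the ideal case

    # Training rows scored most "test-like" can serve as a validation set
    train_scores = p[:len(train)]
    val_idx = np.argsort(train_scores)[-200:]
    validation = train.iloc[val_idx]

An AUC near 0.5 means the classifier cannot tell the sets apart, which is exactly the ideal case this part examines; when it can, the highest-scored training rows make a more representative validation set.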

Coming out

People often ask how we’ve been able to learn about and cover so many diverse topics in machine learning (using at least three different programming languages - Python, Matlab, and R) and generally achieve such prominence in the community, all in a relatively short time. Today we finally give a definitive answer.

Bayesian machine learning

So you know Bayes’ rule. How does it relate to machine learning? It can be quite difficult to grasp how the puzzle pieces fit together - we know it took us a while. This article is the introduction we wish we had back then.
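As a refresher (our notation, not necessarily the article’s), here is Bayes’ rule in its machine learning guise, with theta standing for model parameters and D for the data:

    P(\theta \mid D) = \frac{P(D \mid \theta) \, P(\theta)}{P(D)}

The posterior over parameters is proportional to the likelihood times the prior; the denominator P(D) is just a normalizing constant.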

What next?

We have a few ideas about what to write about next and are looking for your feedback. Vote in the poll at the bottom of this post.

Predicting sales: Pandas vs SQL

Pandas is a Python library for data manipulation. We show that some rather simple analytics allow us to attain a reasonable score in an interesting Kaggle competition. Along the way, we look at analogies between Pandas and SQL, the standard query language for relational databases.
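To give the flavor of the analogy (with a made-up toy table, not the competition’s data), here is an aggregation in Pandas next to its SQL counterpart:

    import pandas as pd

    # Hypothetical sales data: one row per (store, month)
    sales = pd.DataFrame({
        'store': [1, 1, 2, 2],
        'month': ['2015-01', '2015-02', '2015-01', '2015-02'],
        'sales': [100.0, 120.0, 80.0, 90.0],
    })

    # Pandas: average sales per store...
    avg = sales.groupby('store', as_index=False)['sales'].mean()
    print(avg)

    # ...and the analogous SQL:
    # SELECT store, AVG(sales) FROM sales GROUP BY store;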

An excerpt from The Master Algorithm

Pedro Domingos’ new book, The Master Algorithm, is a readable overview of machine learning. The author discerns and describes five main schools of thought in the field: symbolists, connectionists, evolutionaries, Bayesians and analogizers. Here’s a piece about how Bayesians fit their models, that is, infer parameter values. Even though the context is Bayes nets, the described method is applicable to almost any model.
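The excerpt itself has the details; as a generic taste of Bayesian parameter fitting (our illustration, not the book’s example), here is the classic conjugate update for inferring a coin’s bias from observed flips:

    # Observed data: 10 coin flips, 7 heads
    heads, tails = 7, 3

    # Beta(1, 1) prior over the coin's bias (uniform)
    alpha, beta = 1.0, 1.0

    # With a conjugate prior, Bayes' rule reduces to adding counts:
    # the posterior is Beta(alpha + heads, beta + tails)
    alpha_post, beta_post = alpha + heads, beta + tails

    # Posterior mean estimate of the bias
    print(alpha_post / (alpha_post + beta_post))  # 0.666...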

Evaluating recommender systems

If you dig a little, there’s no shortage of recommendation methods. The question is which one to choose. One of the primary decision factors is the quality of recommendations. You estimate it through validation, and validation for recommender systems can be tricky. There are a few things to consider, including the formulation of the task, the form of available feedback, and the metric to optimize for. We address these issues and present an example.
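As a small taste of the metric side (an illustrative sketch with hypothetical item IDs, not the post’s example), here is precision at k, a common choice for evaluating top-N recommendation:

    def precision_at_k(recommended, relevant, k=10):
        """Fraction of the top-k recommended items the user actually liked."""
        top_k = recommended[:k]
        hits = len(set(top_k) & set(relevant))
        return hits / k

    # Item IDs ranked by a model vs. held-out positives for one user
    recommended = [42, 7, 13, 99, 5]
    relevant = {7, 5, 64}
    print(precision_at_k(recommended, relevant, k=5))  # 0.4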