Machine learning made easy

Two faces of overfitting

Overfitting is one of the primary problems, if not THE primary problem, in machine learning. There are many aspects to it, but in a general sense, overfitting means that estimates of performance on unseen test examples are overly optimistic. That is, a model generalizes worse than expected.

We explain two common cases of overfitting: including information from the test set in training, and a more insidious one: overusing the validation set.
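The second case can be demonstrated with a toy simulation (a minimal sketch, not from the article): when labels carry no signal at all, repeatedly evaluating candidate models on the same validation set and keeping the best one still produces an accuracy well above chance — an optimism that evaporates on a fresh test set.

```python
import numpy as np

rng = np.random.default_rng(0)
n_val, n_test, n_models = 200, 200, 1000

# Labels with no learnable signal: any apparent skill is pure chance.
y_val = rng.integers(0, 2, n_val)
y_test = rng.integers(0, 2, n_test)

# Each "model" just predicts randomly; we score all of them on both sets.
val_scores, test_scores = [], []
for _ in range(n_models):
    val_scores.append((rng.integers(0, 2, n_val) == y_val).mean())
    test_scores.append((rng.integers(0, 2, n_test) == y_test).mean())

# Picking the validation winner overfits the validation set:
# its validation accuracy is inflated, while its test accuracy stays near 0.5.
best = int(np.argmax(val_scores))
print(f"best validation accuracy: {val_scores[best]:.2f}")
print(f"same model on the test set: {test_scores[best]:.2f}")
```

The more candidates you compare on a fixed validation set, the larger the gap between the winner's validation score and its true performance.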


