It took a decade of work to turn a set of fresh ideas into a mature approach. This was the case with empirical hardness models, a new area in algorithmics in which machine learning is used to predict the computational cost of an algorithm on a given problem instance.
This paper provides a methodology for the empirical analysis of algorithms and explains the now well-established concepts of instance distributions, problem features, and algorithm portfolios. A case study on combinatorial auctions illustrates the techniques. Although each special case requires its own choice of distributions and features, the general ideas and methodology apply to any algorithmic problem, especially NP-complete (nondeterministic polynomial-time) ones.
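To make the core idea concrete, here is a minimal sketch (not from the paper) of an empirical hardness model: each instance is summarized by a feature vector, a regression model maps features to log-runtime, and a two-algorithm portfolio picks whichever algorithm the models predict to be faster. The feature dimensions, the linear cost functions, and the instance data are all synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each instance is described by 3 features
# (e.g., size, constraint density); we observe log-runtimes of
# two portfolio algorithms on 200 training instances.
n_train, n_feat = 200, 3
X = rng.uniform(0, 1, size=(n_train, n_feat))

# Synthetic "true" cost models: each algorithm's log-runtime is a
# different linear function of the features, plus noise.
w_a = np.array([2.0, 0.5, -1.0])
w_b = np.array([0.5, 2.0, 1.0])
y_a = X @ w_a + rng.normal(0.0, 0.1, n_train)
y_b = X @ w_b + rng.normal(0.0, 0.1, n_train)

def fit_hardness_model(X, y):
    """Least-squares empirical hardness model: features -> log-runtime."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, x):
    """Predicted log-runtime for one instance's feature vector."""
    return np.append(x, 1.0) @ w

w_hat_a = fit_hardness_model(X, y_a)
w_hat_b = fit_hardness_model(X, y_b)

# Portfolio use: for a new instance, run whichever algorithm the
# models predict to be cheaper on these features.
x_new = np.array([0.9, 0.1, 0.2])
choice = "A" if predict(w_hat_a, x_new) < predict(w_hat_b, x_new) else "B"
```

Linear regression on log-runtime stands in here for whatever learning method one prefers; the methodological point is only the pipeline of distribution, features, learned cost model, and portfolio selection.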
Of course, much more work is needed to make this set of techniques a standard tool for algorithm design. From a machine learning point of view, there are open issues regarding distributions, evaluation, and nonlinear regression techniques that could improve the results. Moreover, some important ideas from machine learning should be brought to this field, including the no-free-lunch theorem, the universal distribution, and meta-learning. Now that the paradigm is mature enough for new, relevant developments, this paper can serve as a very useful survey.