In this talk, we discuss the concept of black-box complexity, which measures how difficult a problem is to optimize for general-purpose optimization algorithms.
We enrich the two existing black-box complexity notions due to Wegener and other authors by the restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions, may be taken into account by the algorithm. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold.
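To make the ranking-based restriction concrete, here is a minimal sketch (not from the talk; all function names are hypothetical) of a (1+1)-style hill climber on the classic OneMax benchmark. The algorithm never receives fitness values; it may only query an oracle that reveals the relative order of two previously evaluated solutions.

```python
import random

def onemax(x):
    """Hidden objective: number of one-bits. Never shown to the algorithm."""
    return sum(x)

def make_ranking_oracle(f):
    """Return a comparator revealing only the relative quality of two
    solutions under f, never the objective values themselves."""
    def better_or_equal(x, y):
        return f(x) >= f(y)
    return better_or_equal

def ranking_based_hill_climber(n, budget, seed=0):
    """A simple elitist search that only uses the ranking oracle."""
    rng = random.Random(seed)
    cmp = make_ranking_oracle(onemax)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        # standard bit mutation with rate 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if cmp(y, x):  # comparison only; no fitness value is returned
            x = y
    return x

best = ranking_based_hill_climber(n=20, budget=5000)
```

Note that many classical heuristics, such as evolutionary algorithms with elitist selection, already fit this comparison-only model, since their selection steps depend only on the ranking of the search points.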
This is joint work with Benjamin Doerr.