While several strong runtime analysis results have appeared in the last 20 years, a powerful complexity theory for such algorithms is yet to be developed.
In the first part of this talk we survey the existing complexity notions for randomized search heuristics. We present new results indicating that additional requirements must be added to these models to obtain more meaningful complexity measures.
In the second part of the talk we enrich the existing complexity notions with the additional restriction that the black-box algorithm may not exploit the actual objective values, but only the relative quality (the ranking) of the previously evaluated solutions. Many randomized search heuristics belong to this class of algorithms.
We show that the new ranking-based model gives more realistic complexity estimates for some problems.
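To make the restriction concrete, here is a minimal Python sketch (not part of the talk; the OneMax benchmark, the (1+1) EA, and all names such as `ranking_oracle` are illustrative assumptions). The oracle wrapper hides the objective values and only reveals which of two queried solutions ranks higher, which is exactly the information a ranking-based algorithm is allowed to use.

```python
import random

def ranking_oracle(f):
    """Wrap an objective f so the algorithm only learns the relative
    order (better / equal / worse) of two queried solutions, never
    f's actual values -- the restriction of the ranking-based model."""
    def compare(x, y):
        fx, fy = f(x), f(y)
        return (fx > fy) - (fx < fy)  # 1, 0, or -1
    return compare

def onemax(x):
    """Illustrative benchmark: the number of ones in the bit string."""
    return sum(x)

def one_plus_one_ea(n, compare, budget=100_000):
    """A (1+1) EA -- a typical ranking-based heuristic: it keeps the
    offspring iff the oracle says it is at least as good as the parent."""
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        child = [1 - b if random.random() < 1 / n else b for b in parent]
        if compare(child, parent) >= 0:  # only the ranking is consulted
            parent = child
    return parent

if __name__ == "__main__":
    n = 50
    best = one_plus_one_ea(n, ranking_oracle(onemax))
    print(sum(best), "ones out of", n)
```

Note that the heuristic never touches `onemax` directly; routing all fitness information through `compare` is what makes it a member of the ranking-based class discussed above.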