Learning-augmented algorithms aim to combine the efficacy of machine learning approaches in handling practical inputs with the performance guarantees offered by traditional worst-case analysis of (usually) online algorithms. For numerous problems, machine learning techniques can readily generate predictions about the structure of the input. Several algorithmic results in the area can be viewed as meta-algorithms that dynamically "combine" a number of different algorithms. In this talk we illustrate two variations of this idea using the Metrical Task System (MTS) problem, which captures online problems such as caching, k-server and convex body chasing.
First, we utilize results from the theory of online algorithms to develop a learning-augmented algorithm that "combines" (i) a prediction-sensitive online algorithm that yields improved performance when the predictions are sufficiently accurate, and (ii) a classical online algorithm that disregards the predictions. Our learning-augmented algorithm asymptotically matches the performance of the better of the two algorithms in hindsight, a standard objective in the field.
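To give a flavor of what such a "combination" can look like, the following is a minimal sketch of one classical idea: simulate both algorithms, follow the one that is cheaper so far, and switch states whenever the followed algorithm's accumulated cost exceeds a constant multiple of the other's. This is illustrative only, not the construction from the talk; the interface (alg.state, alg.serve, the metric dist) and the switching factor are assumptions.

```python
# Illustrative sketch: combining two online algorithms for a metrical task system
# by simulating both and following the (approximately) cheaper one.
class CombineTwo:
    def __init__(self, alg_a, alg_b, dist, factor=2.0):
        self.algs = [alg_a, alg_b]   # both algorithms are simulated in parallel
        self.dist = dist             # metric on the common state space
        self.factor = factor         # switch once followed cost exceeds factor * other's
        self.costs = [0.0, 0.0]      # accumulated simulated cost of each algorithm
        self.follow = 0              # index of the algorithm currently being followed
        self.total = 0.0             # upper bound on the cost the combined algorithm pays

    def serve(self, task_cost):
        """task_cost(state) -> service cost of the current task in that state."""
        steps = []
        for alg in self.algs:
            prev = alg.state
            alg.serve(task_cost)     # each algorithm chooses its next state
            steps.append(self.dist(prev, alg.state) + task_cost(alg.state))
        for i in range(2):
            self.costs[i] += steps[i]

        # Pay the followed algorithm's step cost (movement + service).
        self.total += steps[self.follow]

        # If the followed algorithm has become too expensive, jump to the other one;
        # by the triangle inequality this accounting upper-bounds the true cost.
        other = 1 - self.follow
        if self.costs[self.follow] > self.factor * self.costs[other]:
            self.total += self.dist(self.algs[self.follow].state, self.algs[other].state)
            self.follow = other
        return self.algs[self.follow].state
```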
Second, since the performance of different predictors may vary over time, it may be desirable to instead use as a benchmark a dynamic combination that follows different algorithms at different times. To this end, we design learning-augmented algorithms against the benchmark of the best (in hindsight) unconstrained combination of k algorithms. We show that our algorithm obtains a competitive ratio of O(k^2), and that this is best possible. Finally, we discuss how our algorithms can be adapted to access the predictors in a bandit-like fashion, querying only one algorithm at a time.
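For concreteness, the dynamic benchmark itself is easy to state: given the per-step cost of each of the k algorithms and the cost of moving between their states, the best unconstrained combination in hindsight can be computed offline by dynamic programming. The sketch below assumes, purely for illustration, that switching costs are time-independent; in an actual MTS the switching cost depends on the algorithms' current states.

```python
def best_dynamic_combination(step_costs, switch_cost):
    """
    Offline benchmark sketch: cheapest way to follow k algorithms over T steps,
    switching between them at will.

    step_costs[t][i]  -- cost algorithm i incurs at step t (movement + service)
    switch_cost(j, i) -- cost of jumping from algorithm j's state to algorithm i's
                         (assumed time-independent here, purely for illustration)
    """
    T, k = len(step_costs), len(step_costs[0])
    # dp[i] = cheapest cost of a combination that follows algorithm i at the current step
    dp = [step_costs[0][i] for i in range(k)]
    for t in range(1, T):
        dp = [min(dp[j] + (0.0 if j == i else switch_cost(j, i)) for j in range(k))
              + step_costs[t][i]
              for i in range(k)]
    return min(dp)

# Example: two algorithms, three steps, switching cost 2 between them.
print(best_dynamic_combination([[1, 3], [4, 1], [1, 5]], lambda j, i: 2.0))  # -> 6.0
```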