Our goal is to obtain a d^n \cdot n^{O(1)} time \beta-approximation algorithm for the problem with d as small as possible.
That is, for every fixed \alpha, c, \beta \geq 1, we would like to determine the smallest possible d that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the \alpha-approximate extension algorithm. Our results completely resolve this question:
* For every fixed \alpha, c, \beta \geq 1, a simple algorithm (``approximate monotone local search'') achieves the optimum value of d.
* Given \alpha, c, \beta \geq 1, we can efficiently compute the optimum d up to any precision \eps > 0.
Our technique gives novel results for a wide range of problems including Feedback Vertex Set, Directed Feedback Vertex Set, Odd Cycle Transversal and Partial Vertex Cover.
The monotone local search algorithm we use is a simple adaptation of the algorithms of [Fomin et al., J.\ ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017]. Still, attaining the above results required us to frame the question differently and to overcome two major technical challenges. First, we introduce an oracle-based computational model which allows for a simple derivation of lower bounds that, unexpectedly, show that the running time of the monotone local search algorithm is optimal. Second, while it is easy to express the running time of the monotone local search algorithm in various forms, it is unclear how to actually numerically evaluate it for given values of \alpha, \beta and c. We show how the running time of the algorithm can be evaluated via a convex analysis of a continuous max-min optimization problem, overcoming the limitations of previous approaches to the \alpha=\beta case [Fomin et al., J.\ ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017].
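To make the high-level idea concrete, here is a minimal sketch of the monotone local search loop in Python. It only illustrates the structure of the algorithm: a random subset of the universe is sampled and then handed to an extension routine. The sample size `t`, the number of `repetitions`, and the naive `extend` routine are all illustrative stand-ins; in the actual algorithm these are calibrated to \alpha, \beta and c, and `extend` is the \alpha-approximate extension oracle, which we do not implement here.

```python
import random

def extend(partial, universe, is_feasible):
    """Naive stand-in for the alpha-approximate extension oracle
    (an assumption for illustration): greedily add elements until
    the partial solution becomes feasible."""
    current = set(partial)
    for v in universe:
        if is_feasible(current):
            return current
        current.add(v)
    return current if is_feasible(current) else None

def monotone_local_search(universe, is_feasible, t, repetitions):
    """Sketch of (approximate) monotone local search: repeatedly
    sample a random size-t subset of the universe, try to extend it
    to a feasible solution, and keep the smallest solution found."""
    best = None
    for _ in range(repetitions):
        sample = random.sample(sorted(universe), t)
        candidate = extend(sample, universe, is_feasible)
        if candidate is not None and (best is None or len(candidate) < len(best)):
            best = candidate
    return best
```

As a toy usage, one can run this on Vertex Cover of a small path, where feasibility means every edge has an endpoint in the solution:

```python
edges = [(0, 1), (1, 2), (2, 3)]
universe = {0, 1, 2, 3}
cover = lambda s: all(u in s or v in s for u, v in edges)
solution = monotone_local_search(universe, cover, t=1, repetitions=20)
```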