Abstract: Standard optimization algorithms assume precise knowledge of their inputs and find optimal or near-optimal solutions under this assumption. In real-life applications, however, the input data may be known only to limited precision, with errors introduced by inaccurate measurements or by a lack of exact information about the input parameters. Clearly, an optimization algorithm that uses such distorted data to optimize a given objective function will not yield reliable results unless this uncertainty is explicitly taken into account. One basic approach to dealing with data uncertainty is robust optimization, where a (deterministic) assumption is made on the uncertain parameters, and the objective is to optimize over the worst case these parameters can assume.
In this talk, we consider robust discrete optimization problems with cost uncertainty defined by a convex set. Assuming the existence of an integrality gap verifier with a bounded approximation guarantee for the LP relaxation of the non-robust version of the problem, we derive approximation algorithms for the robust version under different types of uncertainty, including polyhedral and ellipsoidal uncertainty.
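As a small illustration of the worst-case objective (a sketch accompanying the abstract, not part of the talk's results): for an ellipsoidal uncertainty set U = {c0 + A u : ||u||_2 <= Omega}, the worst-case cost of a fixed solution x has the standard closed form c0^T x + Omega * ||A^T x||_2. The function and variable names below are illustrative only.

```python
import numpy as np

def worst_case_cost(x, c0, A, omega):
    """Worst-case linear cost of a fixed solution x over the ellipsoid
    {c0 + A @ u : ||u||_2 <= omega}.

    max_u (c0 + A u)^T x = c0^T x + omega * ||A^T x||_2
    (maximize the inner product u^T (A^T x) over the Euclidean ball)."""
    return c0 @ x + omega * np.linalg.norm(A.T @ x)

# Sanity check: costs sampled from the boundary of the ellipsoid
# never exceed the closed-form worst case.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])        # a fixed (e.g., 0/1) solution
c0 = np.array([2.0, 1.0, 3.0])       # nominal cost vector
A = rng.standard_normal((3, 2))      # shape of the ellipsoid
omega = 1.5

wc = worst_case_cost(x, c0, A, omega)
u = rng.standard_normal((10000, 2))
u = omega * u / np.linalg.norm(u, axis=1, keepdims=True)  # points with ||u|| = omega
costs = (c0 + u @ A.T) @ x
assert costs.max() <= wc + 1e-9
```

This closed form is what lets the robust objective be evaluated efficiently for a candidate solution under ellipsoidal uncertainty, even though the uncertainty set is infinite.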