Max-Planck-Institut für Informatik
What and Who
Title: Some algorithmic results on two-person zero-sum limit average payoff stochastic games
Speaker: Khaled Elbassioni
Coming from: Max-Planck-Institut für Informatik - D1
Speaker's Bio:
Event Type: AG1 Mittagsseminar (own work)
Visibility: D1, D3, D5, SWS, D4, RG1, MMCI
Level: AG Audience
Language: English
Date, Time and Location
Date: Friday, 17 June 2011
Time: 13:00
Duration: 30 Minutes
Location: Saarbrücken
Building: E1 4
Room: 024
Abstract
Stochastic games with limit average payoff were introduced by Gillette in 1957 as an extension of Shapley's discounted stochastic games. Each such game is played by two players on a finite set of states. At each state, a local matrix game is played; according to its outcome, an amount is paid by one player to the other, and a transition is made to a new state. The game continues forever, and the first player's objective is to maximize the expected average payment received, while the second player wants to minimize this average.
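
To make this setup concrete, here is a minimal simulation sketch of a finite prefix of such a play under fixed stationary strategies. The data layout and names (payoff, transition, x, y, empirical_average_payoff) are hypothetical illustrations, not taken from the talk, and a finite run only approximates the limit-average payoff, which is defined as the horizon grows to infinity.

    import numpy as np

    def empirical_average_payoff(payoff, transition, x, y, start, steps=10_000, seed=0):
        """Play a finite prefix of a two-person zero-sum stochastic game under
        stationary strategies and return the empirical average payoff.

        payoff[s][a, b]     -- amount player 2 pays player 1 for actions (a, b) in state s
        transition[s][a][b] -- probability vector over successor states
        x[s], y[s]          -- mixed local strategies (probability vectors) at state s
        """
        rng = np.random.default_rng(seed)
        s, total = start, 0.0
        for _ in range(steps):
            a = rng.choice(len(x[s]), p=x[s])    # player 1 samples a row of the local matrix game
            b = rng.choice(len(y[s]), p=y[s])    # player 2 samples a column
            total += payoff[s][a, b]             # payment made in this round
            s = rng.choice(len(transition[s][a][b]), p=transition[s][a][b])  # move to the next state
        return total / steps                     # approximates the limit-average payoff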

Besides their mathematical beauty, this class of games has found numerous applications in different areas. The existence of $\epsilon$-equilibria, that is, a pair of strategies such that a deviating player cannot improve its payoff by more than $\epsilon$, has been known since the 1980s. Moreover, for several interesting classes there exist optimal strategies that do not depend on the history of the play, the so-called stationary strategies. Nevertheless, the computational complexity of finding such equilibria, even in the simplest cases, remains an outstanding open problem.

In general, locally optimal strategies played independently at each state are not globally optimal. A natural question therefore arises: is it possible to apply potential transformations (which do not change the value of the game) such that, in the transformed game, each player can apply a locally optimal strategy at each state and still guarantee global optimality? It turns out that this is indeed the case in a very general setting. I will give a sketch of these results and of some of their algorithmic consequences, including approximation schemes and smoothed-complexity results for the class of stochastic games with perfect information.
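
For intuition, one standard form of such a potential transformation (the notation here is mine and not necessarily the one used in the talk) attaches a potential $\phi$ to each state and shifts the local payoffs to

  $\tilde{r}(s,a,b) = r(s,a,b) + \phi(s) - \sum_{s'} p(s' \mid s,a,b)\,\phi(s')$.

Along any play $s_0, s_1, s_2, \dots$ the added terms telescope in expectation: over $T$ rounds the expected total payoff changes only by $\phi(s_0) - \mathbb{E}[\phi(s_T)]$, which vanishes after dividing by $T$, so the limit-average value of the game is unchanged.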

Contact
Name(s): Khaled Elbassioni
Video Broadcast
Video Broadcast: No
Tags, Category, Keywords and additional notes
Note:
Attachments, File(s):
  • Khaled Elbassioni, 06/03/2011 04:51 PM -- Created document.