Please use this identifier to cite or link to this item: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17947
Title: Stable and unstable equilibrium points in no-regret learning and noisy models.
Authors: Γιάννου, Αγγελική
Φωτάκης, Δημήτριος
Keywords: game theory
no-regret learning
bandits
online learning
multi-agent learning
Follow the Regularized Leader
Issue Date: 3-Jun-2021
Abstract: In this diploma thesis, we examine the Nash equilibrium convergence properties of no-regret learning in general N-player games. Despite the importance and widespread applications of no-regret algorithms, their long-run behavior in multi-agent environments is still far from understood, and most of the literature has focused by necessity on specific classes of games (typically zero-sum or congestion games). Instead of focusing on a fixed class of games, we take a structural approach and examine different classes of equilibria in generic games. For concreteness, we focus on the archetypal "follow the regularized leader" (FTRL) class of algorithms, and we consider the full spectrum of information uncertainty that the players may encounter – from noisy, oracle-based feedback to bandit, payoff-based information. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is stable and attracting with arbitrarily high probability if and only if it is strict (i.e., each equilibrium strategy has a unique best response). This result extends existing continuous-time versions of the "folk theorem" of evolutionary game theory to a bona fide discrete-time learning setting, and provides an important link between the literature on multi-armed bandits and the equilibrium refinement literature.
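The FTRL dynamics with noisy oracle feedback described in the abstract can be illustrated with a minimal sketch. The example below is not code from the thesis: the 2x2 payoff matrices, step size, and noise level are hypothetical, chosen so the game has a strict Nash equilibrium at (action 0, action 0). It runs FTRL with an entropic regularizer (equivalently, multiplicative weights) for two players receiving noisy payoff vectors, and the play concentrates on the strict equilibrium, consistent with the "strict iff stable and attracting" equivalence the thesis establishes.

```python
import numpy as np

# Hypothetical symmetric 2x2 game (illustrative, not from the thesis).
# Rows = own action, columns = opponent's action.
# (0, 0) is a strict Nash equilibrium: 3 > 0 against an opponent playing 0.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])  # player 1's payoff matrix
B = np.array([[3.0, 1.0],
              [0.0, 2.0]])  # player 2's payoff matrix

def ftrl_entropic(A, B, eta=0.1, noise=0.1, steps=2000, seed=0):
    """FTRL with an entropic regularizer: each player accumulates
    (noisy) payoff vectors and plays the softmax of the running score."""
    rng = np.random.default_rng(seed)
    y1 = np.zeros(2)  # cumulative payoff estimates, player 1
    y2 = np.zeros(2)  # cumulative payoff estimates, player 2
    for _ in range(steps):
        # Entropic FTRL / multiplicative weights: x = softmax(eta * y).
        x1 = np.exp(eta * y1); x1 /= x1.sum()
        x2 = np.exp(eta * y2); x2 /= x2.sum()
        # Noisy oracle feedback: expected payoff of each pure action
        # perturbed by zero-mean Gaussian noise.
        y1 += A @ x2 + rng.normal(0.0, noise, size=2)
        y2 += B @ x1 + rng.normal(0.0, noise, size=2)
    return x1, x2

x1, x2 = ftrl_entropic(A, B)
# Starting from uniform play, both players drift toward the strict
# equilibrium (0, 0) despite the noise, so x1[0] and x2[0] approach 1.
```

Note that (1, 1) is also a strict equilibrium of this game; which one attracts the trajectory depends on the initialization, in line with the local (high-probability) nature of the stability result.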
URI: http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17947
Appears in Collections: Διπλωματικές Εργασίες (Diploma Theses)



Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.