Please use this identifier to cite or link to this item:
http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17947
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Γιάννου, Αγγελική | - |
dc.date.accessioned | 2021-06-08T06:59:54Z | - |
dc.date.available | 2021-06-08T06:59:54Z | - |
dc.date.issued | 2021-06-03 | - |
dc.identifier.uri | http://artemis.cslab.ece.ntua.gr:8080/jspui/handle/123456789/17947 | - |
dc.description.abstract | In this diploma thesis, we examine the Nash equilibrium convergence properties of no-regret learning in general N-player games. Despite the importance and widespread applications of no-regret algorithms, their long-run behavior in multi-agent environments is still far from understood, and most of the literature has, by necessity, focused on specific classes of games (typically zero-sum or congestion games). Instead of focusing on a fixed class of games, we take a structural approach and examine different classes of equilibria in generic games. For concreteness, we focus on the archetypal "follow the regularized leader" (FTRL) class of algorithms, and we consider the full spectrum of information uncertainty that the players may encounter, from noisy, oracle-based feedback to bandit, payoff-based information. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is stable and attracting with arbitrarily high probability if and only if it is strict (i.e., each player's equilibrium strategy is the unique best response to the strategies of the other players). This result extends existing continuous-time versions of the "folk theorem" of evolutionary game theory to a bona fide discrete-time learning setting, and provides an important link between the literature on multi-armed bandits and the equilibrium refinement literature. | en_US |
dc.language | en | en_US |
dc.subject | game theory | en_US |
dc.subject | no-regret learning | en_US |
dc.subject | bandits | en_US |
dc.subject | online learning | en_US |
dc.subject | multi-agent learning | en_US |
dc.subject | Follow the Regularized Leader | en_US |
dc.title | Stable and unstable equilibrium points in no-regret learning and noisy models. | en_US |
dc.description.pages | 63 | en_US |
dc.contributor.supervisor | Φωτάκης Δημήτριος | en_US |
dc.department | Division of Information Technology and Computers | en_US |
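The abstract above centers on the "follow the regularized leader" (FTRL) family. As an illustration only (not taken from the thesis itself), a minimal sketch of FTRL with an entropic regularizer, which reduces to the exponential-weights update, might look like the following; the payoff stream is a made-up toy example:

```python
# Illustrative sketch of FTRL with an entropic regularizer h(x) = sum_i x_i log x_i.
# With this choice, the FTRL update
#     x_{t+1} = argmax_x { eta * <sum_{s<=t} v_s, x> - h(x) }
# over the simplex has the closed form of exponential weights, computed below.
# This is a hedged example: the function name, step size, and payoff data are
# assumptions for illustration, not the thesis's implementation.
import numpy as np

def ftrl_entropic(payoff_vectors, eta=0.1):
    """Run entropic FTRL over a stream of payoff vectors; return the mixed strategies."""
    n = len(payoff_vectors[0])
    score = np.zeros(n)          # cumulative payoffs sum_{s<=t} v_s
    trajectory = []
    for v in payoff_vectors:
        score += np.asarray(v, dtype=float)
        z = np.exp(eta * score - np.max(eta * score))  # numerically stabilized softmax
        trajectory.append(z / z.sum())                 # mixed strategy x_{t+1}
    return trajectory

# Toy 2-strategy example: strategy 0 strictly dominates, so (in line with the
# abstract's stability result for strict equilibria) play concentrates on it.
traj = ftrl_entropic([[1.0, 0.0]] * 50)
```

In the toy run, the final mixed strategy places almost all mass on the dominant strategy, mirroring the thesis's theme that strict equilibria are the stable, attracting ones under FTRL dynamics.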
Appears in Collections: | Diploma Theses |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Ευσταθή και ασταθή σημεία ισορροπίας σε εκμάθηση χωρίς regret και ενθόρυβα μοντέλα.pdf | | 699.82 kB | Adobe PDF | View/Open |
Items in Artemis are protected by copyright, with all rights reserved, unless otherwise indicated.