Learning in Cooperative Multi-Agent Systems

In a distributed system, a number of individually acting agents coexist. To achieve a common goal, coordinated cooperation between the agents is crucial. Many real-world applications are well suited to being formulated in terms of spatially or functionally distributed entities; job-shop scheduling is one such application. Multi-agent reinforcement learning (RL) methods make it possible to acquire cooperative policies automatically, based solely on a specification of the desired joint behavior of the whole system. However, distributing the control and observation of the system among independent agents has a significant impact on problem complexity. The author, Thomas Gabel, addresses the intricacy of learning and acting in multi-agent systems through two complementary approaches. He identifies a subclass of general decentralized decision-making problems that features provably reduced complexity. Moreover, he presents several novel model-free multi-agent RL algorithms that are capable of quickly obtaining approximate solutions in the vicinity of the optimum. All proposed algorithms are evaluated on a range of established scheduling benchmark problems.
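To give a flavor of the setting the book studies, the sketch below shows the simplest form of model-free multi-agent RL with independent learners: two agents each run ordinary Q-learning over their own actions and are coupled only through a shared reward that encodes the desired joint behavior. This is a purely illustrative toy example with invented parameter values, not one of the algorithms presented in the book.

```python
# Illustrative sketch only: two independent Q-learners on a tiny cooperative
# matrix game. The shared reward is paid only when both agents coordinate on
# action 1; each agent learns from its own actions and the common reward.
import random

ACTIONS = [0, 1]                       # each agent picks action 0 or 1
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 5000

def joint_reward(a1, a2):
    # Specification of the desired joint behavior: coordinate on action 1.
    return 1.0 if a1 == 1 and a2 == 1 else 0.0

q1 = {a: 0.0 for a in ACTIONS}         # agent 1's local action values
q2 = {a: 0.0 for a in ACTIONS}         # agent 2's local action values

def pick(q):
    # Epsilon-greedy selection based only on the agent's own Q-values.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

for _ in range(EPISODES):
    a1, a2 = pick(q1), pick(q2)
    r = joint_reward(a1, a2)           # both agents observe the same reward
    q1[a1] += ALPHA * (r - q1[a1])     # each agent updates independently,
    q2[a2] += ALPHA * (r - q2[a2])     # treating the other as part of the environment

print("greedy joint action:",
      max(ACTIONS, key=lambda a: q1[a]),
      max(ACTIONS, key=lambda a: q2[a]))
```

In this simple game the two learners typically settle on the coordinated joint action, but no agent ever observes the other's choice; the difficulty of guaranteeing such coordination under decentralized observation is precisely what drives the complexity results and algorithms discussed in the book.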

Article number 9783838110363
Product type Book
Price 104.00 CHF
Availability Available
Binding Paperback (softcover)
Availability note Ships in approx. 10 working days
Author Gabel, Thomas
Publisher Südwestdeutscher Verlag für Hochschulschriften AG & Co. KG
Weight 0.0
Publication date 2017-05-29
Number of pages 192
Language German