- Title
- Q-Learning for Feedback Nash Strategy of Finite-Horizon Nonzero-Sum Difference Games
- Creator
- Zhang, Zhaorong; Xu, Juanjuan; Fu, Minyue
- Relation
- ARC.DP200103507 http://purl.org/au-research/grants/arc/DP200103507
- Relation
- IEEE Transactions on Cybernetics Vol. 52, Issue 9, p. 9170-9178
- Publisher Link
- http://dx.doi.org/10.1109/TCYB.2021.3052832
- Publisher
- Institute of Electrical and Electronics Engineers (IEEE)
- Resource Type
- journal article
- Date
- 2022
- Description
- In this article, we study the feedback Nash strategy of the model-free nonzero-sum difference game. The main contribution is to present a Q-learning algorithm for the linear quadratic game without prior knowledge of the system model. Notably, the studied game has a finite horizon, which distinguishes it from learning algorithms in the literature that mostly target the infinite-horizon Nash strategy. The key is to characterize the Q-factors in terms of arbitrary control inputs and state information. A numerical example is given to verify the effectiveness of the proposed algorithm.
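- The abstract's core idea, estimating finite-horizon Q-factors from input/state data without a system model, can be illustrated with a minimal sketch. The snippet below is not the paper's two-player algorithm; it is a hypothetical single-player finite-horizon LQ example (system matrices `A`, `B`, costs `Qc`, `Rc`, and horizon `N` are all invented for illustration) showing how a quadratic Q-factor can be fitted by least squares from simulated transitions and then minimized to recover the stage feedback gain, which is checked against the model-based Riccati recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-horizon LQ problem (not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)          # state cost weight
Rc = np.array([[1.0]])  # input cost weight
N = 5                   # horizon length
n, m = 2, 1

def step(x, u):
    """Black-box simulator: the learner queries it but never reads A, B."""
    return A @ x + B @ u

def phi(z):
    """Quadratic features: upper-triangular entries of z z^T."""
    i, j = np.triu_indices(len(z))
    return np.outer(z, z)[i, j]

def unpack(theta, d):
    """Recover the symmetric Q-factor matrix H from fitted coefficients."""
    H = np.zeros((d, d))
    i, j = np.triu_indices(d)
    H[i, j] = theta
    return (H + H.T) / 2  # off-diagonal coefficients absorb the factor 2

# Backward pass: fit the Q-factor at each stage from sampled data.
P = Qc.copy()           # terminal value V_N(x) = x' Qc x
gains = []
for k in range(N - 1, -1, -1):
    Z, q = [], []
    for _ in range(100):
        x = rng.standard_normal(n)
        u = rng.standard_normal(m)      # arbitrary (exploratory) input
        xn = step(x, u)
        # One-step Q-value: stage cost plus cost-to-go of the next state.
        q.append(x @ Qc @ x + u @ Rc @ u + xn @ P @ xn)
        Z.append(np.concatenate([x, u]))
    Phi = np.array([phi(z) for z in Z])
    theta, *_ = np.linalg.lstsq(Phi, np.array(q), rcond=None)
    H = unpack(theta, n + m)
    Hxx, Hxu, Huu = H[:n, :n], H[:n, n:], H[n:, n:]
    K = -np.linalg.solve(Huu, Hxu.T)    # minimizing feedback gain at stage k
    P = Hxx + Hxu @ K                   # V_k(x) = x' P x
    gains.append(K)

# Model-based Riccati recursion for comparison.
P_true = Qc.copy()
K_true = []
for _ in range(N):
    K = -np.linalg.solve(Rc + B.T @ P_true @ B, B.T @ P_true @ A)
    P_true = Qc + A.T @ P_true @ A + A.T @ P_true @ B @ K
    K_true.append(K)

print(np.allclose(gains[0], K_true[0], atol=1e-5))
```

Since the samples are noiseless and the true Q-factor is exactly quadratic, the regression recovers each stage gain up to numerical precision; the paper's actual contribution extends this Q-factor characterization to the coupled two-player nonzero-sum setting.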
- Subject
- feedback Nash strategy; finite-horizon difference game; Q-learning
- Identifier
- http://hdl.handle.net/1959.13/1466516
- Identifier
- uon:47573
- Identifier
- ISSN:2168-2267
- Language
- eng
- Reviewed