2024 GrandmasterLevelChessWithoutSearch

From GM-RKB

Subject Headings: Chess Playing Algorithm, Stockfish, Chess Playing.

Notes

Cited By

Quotes

Abstract

The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale attention-based architectures and datasets of unprecedented scale. This paper investigates the impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.
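
As a rough illustration of the search-free decision rule described in the abstract: at play time the model scores every legal move with its learned action-value predictor and plays the argmax, with no tree search. The minimal Python sketch below (using the python-chess package for board handling) shows this loop; the "model" object and its "predict_action_value" method are hypothetical stand-ins for the paper's 270M-parameter transformer, not an API from the paper's codebase.

    import chess

    def select_move(board: chess.Board, model) -> chess.Move:
        # Score each legal (state, action) pair with one forward pass of the
        # learned value predictor and play the argmax; no explicit search.
        def action_value(move: chess.Move) -> float:
            # Hypothetical interface: maps (FEN, UCI move) to a predicted win
            # probability. In the paper, this value is discretized into bins
            # and the network is trained as a classifier against Stockfish 16
            # action-value annotations.
            return model.predict_action_value(board.fen(), move.uci())
        return max(board.legal_moves, key=action_value)

Under this scheme, playing strength comes entirely from the quality of the learned value estimates, which is why the systematic model- and dataset-size study reported in the abstract is central to the paper's claim.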

References


Anian Ruoss, Sourabh Medapati, Jordi Grau-Moya, Li Kevin Wenliang, Elliot Catt, John Reid, Tim Genewein, and Grégoire Delétang. (2024). "Grandmaster-Level Chess Without Search." doi:10.48550/arXiv.2402.04494