how to create a learning farm
We could create different types of learning farms, but to keep things simple we will create a single farm on a single computer.
Either way, the idea is to train engines on several openings at the same time.
To train an engine on an opening, we generally create a "gauntlet" tourney in which a reference engine meets several opponents.
In this way, the reference engine learns different attacking/defensive lines and how effective they are against different types of opponents.
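The gauntlet idea above can be sketched in a few lines (the engine names are just placeholders):

```python
# Minimal sketch of gauntlet pairings: the reference engine meets every
# opponent; the opponents never play each other.
reference = "eman"
opponents = ["brainlearn", "shashchess"]
pairings = [(reference, opp) for opp in opponents]
print(pairings)  # [('eman', 'brainlearn'), ('eman', 'shashchess')]
```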
Re: how to create a learning farm
Here is a non-exhaustive list of engines with a learning feature suited to this type of training:
- Aurora, a private engine by Ron Dougie
- BrainLearn, a public engine by Andrea Manzo
- Eman, a private engine by Khalid A. Omar
- HypnoS, a private engine by Marco Zerbinati
- JudaS, a private engine by Marco Zerbinati
- ShashChess, a public engine by Andrea Manzo
- SugaR, a private engine by Marco Zerbinati
- StockfishMZ, a private engine by Marco Zerbinati
- etc.
In the following examples, we will take the Eman engine as the reference engine.
Re: how to create a learning farm
To run several tournaments simultaneously on the same computer, we will avoid GUIs, but we still need to be able to follow their progress and results.
So we will use the "command line" version of CuteChess (cutechess-cli.exe).
There are several ways to use it, but to keep things simple we will create an "engines.json" file containing all the engines and their settings.
Here is an example of an engines.json file (in the same folder as cutechess-cli.exe) set up for 2 gauntlets, with eman as the reference engine and brainlearn/shashchess as the opponents:
Code: Select all
[
  {
    "command" : "eman.exe",
    "name" : "eman1",
    "options" : [
      {
        "alias" : "",
        "default" : "eman.exp",
        "name" : "Experience File",
        "type" : "file",
        "value" : "eman1.exp"
      },
      {
        "alias" : "",
        "default" : true,
        "name" : "Experience MultiPV",
        "type" : "check",
        "value" : false
      },
      {
        "alias" : "",
        "default" : "eval.nnue",
        "name" : "NNUE Eval File",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\eman"
  },
  {
    "command" : "eman.exe",
    "name" : "eman2",
    "options" : [
      {
        "alias" : "",
        "default" : "eman.exp",
        "name" : "Experience File",
        "type" : "file",
        "value" : "eman2.exp"
      },
      {
        "alias" : "",
        "default" : true,
        "name" : "Experience MultiPV",
        "type" : "check",
        "value" : false
      },
      {
        "alias" : "",
        "default" : "eval.nnue",
        "name" : "NNUE Eval File",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\eman"
  },
  {
    "command" : "brainlearn.exe",
    "name" : "brainlearn1",
    "options" : [
      {
        "alias" : "",
        "default" : "nn-ad9b42354671.nnue",
        "name" : "EvalFile",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\brainlearn1"
  },
  {
    "command" : "brainlearn.exe",
    "name" : "brainlearn2",
    "options" : [
      {
        "alias" : "",
        "default" : "nn-ad9b42354671.nnue",
        "name" : "EvalFile",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\brainlearn2"
  },
  {
    "command" : "shashchess.exe",
    "name" : "shashchess1",
    "options" : [
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      },
      {
        "alias" : "",
        "default" : "nn-ad9b42354671.nnue",
        "name" : "EvalFile",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "choices" : [
          "Off",
          "Standard",
          "Self"
        ],
        "default" : "Off",
        "name" : "Persisted learning",
        "type" : "combo",
        "value" : "Standard"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\shashchess1"
  },
  {
    "command" : "shashchess.exe",
    "name" : "shashchess2",
    "options" : [
      {
        "alias" : "",
        "default" : "<empty>",
        "name" : "SyzygyPath",
        "type" : "folder",
        "value" : "c:/syzygy"
      },
      {
        "alias" : "",
        "default" : "nn-ad9b42354671.nnue",
        "name" : "EvalFile",
        "type" : "file",
        "value" : "nn-ad9b42354671.nnue"
      },
      {
        "alias" : "",
        "choices" : [
          "Off",
          "Standard",
          "Self"
        ],
        "default" : "Off",
        "name" : "Persisted learning",
        "type" : "combo",
        "value" : "Standard"
      }
    ],
    "protocol" : "uci",
    "stderrFile" : "",
    "whitepov" : true,
    "variants" : [
      "standard",
      "fischerandom"
    ],
    "workingDirectory" : "e:\\shashchess2"
  }
]
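As a sanity check, the engines.json structure can be validated before launching anything. A minimal sketch (the inline two-engine sample below stands in for the real file):

```python
import json

# Validate the shape of an engines.json before handing it to cutechess-cli.
# The inline sample is a stand-in for the real file.
sample = """
[
  {"command": "eman.exe", "name": "eman1", "protocol": "uci", "options": []},
  {"command": "brainlearn.exe", "name": "brainlearn1", "protocol": "uci", "options": []}
]
"""

engines = json.loads(sample)  # raises ValueError if the JSON is malformed
names = [e["name"] for e in engines]
assert len(names) == len(set(names)), "engine names must be unique"
print(names)  # ['eman1', 'brainlearn1']
```

To check the real file, replace the inline sample with `json.load(open("engines.json"))`.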
Re: how to create a learning farm
For each gauntlet, we will create a command file containing the engines, their settings, the tournament conditions, etc.
Don't forget to leave a free thread for each gauntlet: for example, on a processor with 8 threads, we should run 2 gauntlets with 3 threads per engine.
Here is an example of a "gauntlet1.cmd" file (in the same folder as cutechess-cli.exe) that launches a gauntlet with eman1 as the reference engine and brainlearn1/shashchess1 as the opponents.
Here is an example of a "gauntlet2.cmd" file (in the same folder as cutechess-cli.exe) that launches a gauntlet with eman2 as the reference engine and brainlearn2/shashchess2 as the opponents.
Code: Select all
set opening=your_opening
cutechess-cli.exe -tournament gauntlet -engine conf="eman1" -engine conf="brainlearn1" -engine conf="shashchess1" -each option.Hash=1024 option.Threads=3 tc=120+2 -games 500 -openings file="%opening%.pgn" start=1 -pgnout "%opening% - eman1_vs_brainlearn1_shashchess1.pgn" fi -repeat -recover -concurrency 1 -maxmoves 200 -draw movenumber=40 movecount=5 score=10 -tb "c:\syzygy" -tbpieces 6 -event your_event -site your_site -ratinginterval 10
pause
Code: Select all
set opening=your_opening
cutechess-cli.exe -tournament gauntlet -engine conf="eman2" -engine conf="brainlearn2" -engine conf="shashchess2" -each option.Hash=1024 option.Threads=3 tc=120+2 -games 500 -openings file="%opening%.pgn" start=1 -pgnout "%opening% - eman2_vs_brainlearn2_shashchess2.pgn" fi -repeat -recover -concurrency 1 -maxmoves 200 -draw movenumber=40 movecount=5 score=10 -tb "c:\syzygy" -tbpieces 6 -event your_event -site your_site -ratinginterval 10
pause
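The thread budget above can be sketched as simple arithmetic (assuming one free thread per gauntlet and an even split of the rest):

```python
# Thread budget for a learning farm: leave one free thread per gauntlet,
# then split the remaining logical threads evenly among the gauntlets.
logical_threads = 8
gauntlets = 2
threads_per_engine = (logical_threads - gauntlets) // gauntlets
print(threads_per_engine)  # → 3
```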
Re: how to create a learning farm
Why not train an engine against itself?
Unlike engines such as AlphaZero / LeelaZero / Giraffe, here we do not train an AI or a neural network; we seek efficiency through practice.
By playing the same opening hundreds or even thousands of times, the engine learns that:
- certain attacking/defensive lines are more or less effective depending on the opponents' style of play
- the best moves, even at great depths, are not always the most effective over the whole game (the horizon effect)
- some "secondary" moves can win more games (i.e. be more effective)
Re: how to create a learning farm
How many games per opening?
Eman's author advises playing between 500 and 1000 games for the engine to begin to learn an opening.
This estimate corresponds to playing each move 50 times in each key position, for an opening with 5 key positions each containing several moves with very close evaluations.
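One possible reading of this estimate, as a hedged assumption rather than the author's exact model:

```python
# Assumed reading of the 500-1000 figure: 5 key positions, each with
# 2 to 4 near-equal candidate moves, each move played about 50 times.
key_positions = 5
plays_per_move = 50
low = key_positions * 2 * plays_per_move   # 2 candidates per position
high = key_positions * 4 * plays_per_move  # 4 candidates per position
print(low, high)  # 500 1000
```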
Re: how to create a learning farm
What is the best TC for training?
It depends on the future use of the learned data.
- To study openings, obtain stats on the most likely endgame material, or identify the key positions, training at TC 1m+1s is enough.
Here are some stats for a training at TC 1m+1s @ 7 threads:
In these conditions, the learned data already shows very good resistance:
ODD TC
- To use the learned data in tournaments against opening books, training at TC 2m+2s is recommended.
Here are some stats for a training at TC 2m+2s @ 7 threads:
Here are some stats for a tourney at TC 3m+2s @ 40 threads:
Re: how to create a learning farm
Why avoid GUIs?
Even if your screen resolution allows it, or if you have several screens, displaying several games at the same time can stress your system unnecessarily, especially if your GUIs use visual animations for moves and/or use tablebases to adjudicate the games.
The engines often replay the same moves before trying others, so there is nothing spectacular to watch.
Some GUIs read/write their data in the user\appdata folder, so multiple instances may overwrite this data.
Re: how to create a learning farm
Why not set the threads / hash values in the engines.json file?
Because it is easier and safer to configure them on the cutechess-cli command line.
Otherwise, even for a small learning farm of 4 gauntlets with 5 engines each, that already represents 20 values per UCI option!
Re: how to create a learning farm
How to avoid time forfeits?
We can launch the gauntlets with a low priority:
Code: Select all
start /low gauntlet1.cmd
start /low gauntlet2.cmd