Stockfish 3 PA_GTB

Discussion about chess-playing software (engines, hosts, opening books, platforms, etc...)
Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany
Contact:

Re: Stockfish 3 PA_GTB

Post by Jeremy Bernstein » Wed Jun 05, 2013 9:15 pm

User923005 wrote:Thanks for your work on this. It is very useful for me to have a Stockfish that probes the endgame tablebases. I use chess engines primarily for analysis rather than game play. Hence, I need the right answer, and I am willing to wait to get it if needed. Deep analysis (e.g., several hours per position) will often start to hit the tablebases while a surprising number of chessmen are still on the board.

I think that people buy chess engines to win online games, for the most part. But if you want to become a better chess player, analysis is far more important than game play, which does not teach you very much, if we are real about it.

For this reason, tablebase files are important, because the chess engine will make the best possible move if given enough time. We have no such guarantee if the tablebase files are missing.

I think that analysis mode (e.g., when we send the directive to analyze) should probably use a somewhat different search and evaluation than game play. For instance, in game play, generating underpromotions usually hurts speed, especially underpromotions to rook or bishop. But if we definitely want the best answer, and not to 'win a game in a hurry', then underpromotion is required. So, perhaps, for game play an engine could skip underpromotions, while for analysis it would consider them.

Just a thought.

Thanks again for your diligence and service in providing this fine tool. And what is wrong with the Stockfish team that they won't incorporate tablebase probes into their product? For crying out loud.
;-)
Thanks for using it. I absolutely recognize the need for better analysis tools for chess players, which is why I bother -- I need them myself! :-)

As for the Stockfish team, I obviously can't speak for them, but an examination of the code history paints a picture of heavy branch reduction. Runtime flags have been removed in favor of precompiled code paths where possible. Maybe for clarity, but probably for negligible but demonstrable speed gains. Anyway, it's their baby and I'm glad to have such a nice and orderly lab for experimentation. I can look into the promotion stuff at some point if you provide a problem position or two, although a quick examination of the code seems to confirm that non-queen promotions should be generated as part of the move list.

Now if only I could figure out why known drawn lines are bubbling up to the root with non-draw scores, I would go to sleep a happy man.

jb

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: Stockfish 3 PA_GTB

Post by User923005 » Wed Jun 05, 2013 9:37 pm

I think that Stockfish underpromotes correctly, but many other high-end engines do not.
About your permanent hash:
I have a few million analyzed positions I would like to load into permanent hash.
Any chance of a utility to turn analyzed EPD positions into permanent hash entries?

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: Stockfish 3 PA_GTB

Post by User923005 » Wed Jun 05, 2013 9:52 pm

Speaking of underpromotions, I recall getting a big speedup in one of the old GnuChess engines by changing the promotion order to:
Q, N, B, R
I forget what the original order was, but I think it may have been something really dumb like alphabetical ordering:
B, N, Q, R
or something strange like that. Maybe the gain was nothing more than putting the knight second (which clearly matters).
Anyway, QNBR is best (though statistically, I could not see a provable improvement over QNRB).
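
To make the idea concrete, here is a toy sketch of a generator emitting promotions in that order (illustration only -- not GnuChess's or Stockfish's actual move generator):

Code:

#include <cstdint>
#include <vector>

// Toy move representation, for illustration only.
enum PromoPiece : std::uint8_t { QUEEN, KNIGHT, BISHOP, ROOK };

struct Move { int from, to; PromoPiece promo; };

// Emit promotions in Q, N, B, R order so that the pieces most likely to
// produce a cutoff (queen first, then knight) are searched first.
void add_promotions(int from, int to, std::vector<Move>& list) {
    static const PromoPiece order[] = { QUEEN, KNIGHT, BISHOP, ROOK };
    for (PromoPiece p : order)
        list.push_back({ from, to, p });
}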

Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany
Contact:

Re: Stockfish 3 PA_GTB

Post by Jeremy Bernstein » Wed Jun 05, 2013 10:00 pm

User923005 wrote:I think that Stockfish underpromotes correctly, but many other high-end engines do not.
About your permanent hash:
I have a few million analyzed positions I would like to load into permanent hash.
Any chance of a utility to turn analyzed EPD positions into permanent hash entries?
I could do this, but it might take a while (as in probably a couple of weeks before I have time to do something like that). You might be able to do it yourself, though, in your favorite scripting language. The Persistent Hash files are in QDBM format, so there are command line tools available which can manipulate or even create them.

The relevant data structure (t_phash_data) is in qdbm.cpp (and now that I look closely at it, the struct is twice as large as necessary - I'll probably change this for the next version with some sort of backward compatibility). Anyway, the t_phash_data structure (24 bytes at the moment) is added to the database with an 8-byte key (the 64-bit zobrist hash of the position in question, as calculated in Position::compute_key()). So if you're feeling inspired and have some time on your hands, go for it. :-) Otherwise, I'll make a tool in C at some point.
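
To sketch the shape of it (nothing more than a rough outline: the struct below is a placeholder rather than the real t_phash_data from qdbm.cpp, the key is a dummy value rather than a real Position::compute_key() result, and I'm assuming the Depot flavor of the QDBM API here):

Code:

#include <depot.h>    // QDBM Depot API: dpopen/dpput/dpclose
#include <cstdint>
#include <cstdio>

// Placeholder layout -- the real (24-byte) t_phash_data lives in qdbm.cpp.
struct t_phash_data {
    std::int16_t  score;
    std::int16_t  depth;
    std::uint16_t move;
    std::uint8_t  bound;
    std::uint8_t  age;
};

int main() {
    DEPOT* db = dpopen("stockfish.phash", DP_OWRITER | DP_OCREAT, -1);
    if (!db) { std::fprintf(stderr, "dpopen failed\n"); return 1; }

    // A real loader would parse each EPD line, have the engine compute the
    // position's 64-bit zobrist key, and fill the record from the EPD
    // opcodes (score, depth, best move). Dummy values are used here.
    std::uint64_t key  = 0x0123456789abcdefULL;
    t_phash_data  data = { 25, 30, 0, 0, 0 };

    dpput(db, reinterpret_cast<const char*>(&key), sizeof key,
          reinterpret_cast<const char*>(&data), sizeof data, DP_DOVER);

    dpclose(db);
    return 0;
}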

Jeremy

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: Stockfish 3 PA_GTB

Post by User923005 » Thu Jun 06, 2013 7:43 pm

I will probably try an experiment to perform the load.
I am familiar with QDBM (and its more modern cousins, Tokyo and Kyoto).
Interesting to see that you have chosen a key/value store for this approach. I like it.

Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany
Contact:

Re: Stockfish 3 PA_GTB

Post by Jeremy Bernstein » Thu Jun 06, 2013 11:11 pm

User923005 wrote:I will probably try an experiment to perform the load.
I am familiar with QDBM (and its more modern cousins, Tokyo and Kyoto).
Interesting to see that you have chosen a key/value store for this approach. I like it.
Somehow I overlooked Tokyo and Kyoto when I was shopping around for open source fast DB implementations. Maybe I'll try a Kyoto-based implementation, as well, since size and speed are relevant for this application. I already reduced the record size and wrote a converter, so maybe it's best to do the switch ASAP (if it performs better) and eventually phase out the QDBM version. Thanks for the info.
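
If I do try the switch, the conversion itself should be trivial -- something along these lines (just a sketch: it assumes the existing file is a QDBM Depot database, and PolyDB picks the Kyoto format from the file extension):

Code:

#include <depot.h>      // QDBM Depot API (assuming the phash is a Depot file)
#include <kcpolydb.h>   // Kyoto Cabinet polymorphic database
#include <cstdio>
#include <cstdlib>

int main() {
    DEPOT* src = dpopen("stockfish.phash", DP_OREADER, -1);
    if (!src) { std::fprintf(stderr, "cannot open QDBM file\n"); return 1; }

    kyotocabinet::PolyDB dst;
    if (!dst.open("stockfish.kch",
                  kyotocabinet::PolyDB::OWRITER | kyotocabinet::PolyDB::OCREATE)) {
        std::fprintf(stderr, "cannot create Kyoto file\n");
        return 1;
    }

    // Walk every key/value pair in the QDBM file and copy it across unchanged.
    dpiterinit(src);
    int ksiz;
    while (char* key = dpiternext(src, &ksiz)) {
        int vsiz;
        if (char* val = dpget(src, key, ksiz, 0, -1, &vsiz)) {
            dst.set(key, ksiz, val, vsiz);
            std::free(val);
        }
        std::free(key);
    }

    dst.close();
    dpclose(src);
    return 0;
}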

Jeremy

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: Stockfish 3 PA_GTB

Post by User923005 » Thu Jun 06, 2013 11:57 pm

Rather than Tokyo Cabinet or Kyoto Cabinet, may I recommend libmdb (LMDB)? Here are some benchmarks:
http://symas.com/mdb/microbench/

I found that it compiles easily on many operating systems, and it performs well.

Redis is nice, but the Windows ports are not as good as the Linux versions and generally lag behind by several versions.

LevelDB is a worthy competitor. Same caveats as Redis, though.

SQLite has the virtue of manipulation via SQL queries, though it is not quite as fast as a key/value store (and the next-generation SQLite will be based on libmdb).
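
To give a flavor of the libmdb API, storing one fixed-size record against an 8-byte key looks roughly like this (a sketch with a stand-in record layout; error checking is omitted and the environment directory must already exist):

Code:

#include <lmdb.h>
#include <cstdint>

// Stand-in record layout, just for the example.
struct t_phash_data { std::int16_t score, depth; std::uint16_t move; };

int main() {
    MDB_env* env;
    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 1UL << 30);      // reserve 1 GB of address space
    mdb_env_open(env, "./phash.lmdb", 0, 0664);

    MDB_txn* txn;
    mdb_txn_begin(env, nullptr, 0, &txn);
    MDB_dbi dbi;
    mdb_dbi_open(txn, nullptr, 0, &dbi);

    std::uint64_t zobrist = 0x0123456789abcdefULL;  // would come from the engine
    t_phash_data  data    = { 25, 30, 0 };

    MDB_val key { sizeof zobrist, &zobrist };
    MDB_val val { sizeof data, &data };
    mdb_put(txn, dbi, &key, &val, 0);         // 8-byte key -> fixed-size record

    mdb_txn_commit(txn);
    mdb_env_close(env);
    return 0;
}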

I have not examined your code yet, but I do have a recommendation:
Always write every PV node to the persistent hash file, and never write any node that is not a PV node to the file. While the program is running, if PV nodes are improved or added, they get written back to the file.
You will then know this about the nodes in the file:
All of them are exact. All of them are interesting. All of them are worthy of consideration in some context.
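
Expressed as code, the policy I have in mind is roughly this (a toy sketch, with an in-memory map standing in for the persistent hash file and a placeholder record layout):

Code:

#include <cstdint>
#include <unordered_map>

// Placeholder record layout.
struct t_phash_entry { int score; int depth; std::uint16_t best_move; };

using PersistentHash = std::unordered_map<std::uint64_t, t_phash_entry>;

// Store only PV nodes with exact scores; add new entries, and write back
// entries that are searched deeper than what is already on file.
void maybe_store(PersistentHash& phash, std::uint64_t key,
                 bool is_pv_node, bool is_exact_score,
                 int score, int depth, std::uint16_t best_move) {
    if (!is_pv_node || !is_exact_score)
        return;                                // non-PV nodes never reach the file
    auto it = phash.find(key);
    if (it == phash.end() || depth > it->second.depth)
        phash[key] = { score, depth, best_move };
}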

Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany
Contact:

Re: Stockfish 3 PA_GTB

Post by Jeremy Bernstein » Fri Jun 07, 2013 7:27 am

User923005 wrote:Rather than Tokyo Cabinet or Kyoto Cabinet, may I recommend libmdb (LMDB)? Here are some benchmarks:
http://symas.com/mdb/microbench/

I found that it compiles easily on many operating systems, and it performs well.

Redis is nice, but the Windows ports are not as good as the Linux versions and generally lag behind by several versions.

LevelDB is a worthy competitor. Same caveats as Redis, though.

SQLite has the virtue of manipulation via SQL queries, though it is not quite as fast as a key/value store (and the next-generation SQLite will be based on libmdb).

I have not examined your code yet, but I do have a recommendation:
Always write every PV node to the persistent hash file, and never write any node that is not a PV node to the file. While the program is running, if PV nodes are improved or added, they get written back to the file.
You will then know this about the nodes in the file:
All of them are exact. All of them are interesting. All of them are worthy of consideration in some context.
Thanks for this -- I'll look into LMDB.

As for the phash stashing: why do you assume that only PV nodes are interesting? Any node which achieves an exact score (at a certain depth) helps the engine make decisions, so why eliminate that information?

Jeremy

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: Stockfish 3 PA_GTB

Post by User923005 » Fri Jun 07, 2013 9:04 am

If you are going to put millions of things into the list (as I am), then you only want to put the important things in there.
When doing a chess search, the only things we really want to find are the PV nodes. All the other nodes are just in the way.
IMO-YMMV.

I am suggesting that it is worth a test to see how it comes out for you. If you have enough of them, I think it will turn out well for you.

Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany
Contact:

Re: Stockfish 3 PA_GTB

Post by Jeremy Bernstein » Fri Jun 07, 2013 9:16 am

User923005 wrote:If you are going to put millions of things into the list (as I am), then you only want to put the important things in there.
When doing a chess search, the only things we really want to find are the PV nodes. All the other nodes are just in the way.
IMO-YMMV.

I am suggesting that it is worth a test to see how it comes out for you. If you have enough of them, I think it will turn out well for you.
Anyway, the question was rhetorical. The only things currently stored in the phash are PV nodes. :-)

jb
