TT Table replacement strategy

kickstone
Posts: 20
Joined: Fri Feb 26, 2016 7:27 am

TT Table replacement strategy

Post by kickstone » Sat Apr 29, 2017 11:14 pm

I have implemented a new TT replacement strategy, but I fear I may be doing something wrong. Could you please critique this:

I have two tables: one that uses a depth-preferred (DP) replacement strategy and one that is always-replace (AR). Here is the pseudocode.

Probing

First probe the DP table:

If the entry exists and the hash keys match, we check whether the entry's depth is sufficient to return a value based on alpha, beta and the value flag.
If the entry's depth is not sufficient, we check whether it is enough to avoid doing a null-move search.
If no value was returned, we set the hash move = entry->bestMove.

If no value was returned after the DP probe, we probe the AR table in the same manner, except that we only set the hash move to the AR entry's best move if we didn't already get one from the DP probe.
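Roughly, in C (the entry layout and helper names like valueFromEntry are illustrative, and the null-move depth check is left out for brevity):

/* Probe the depth-preferred table first, then the always-replace table. */
HashEntry *dp = &dpTable[key & (DP_SIZE - 1)];
Move hashMove = NO_MOVE;
if (dp->key == key) {
    if (dp->depth >= depth) {
        int v = valueFromEntry(dp, alpha, beta);   /* respects the value flag */
        if (v != VALUE_UNKNOWN) return v;
    }
    hashMove = dp->bestMove;                       /* no cutoff, keep DP move */
}
HashEntry *ar = &arTable[key & (AR_SIZE - 1)];
if (ar->key == key) {
    if (ar->depth >= depth) {
        int v = valueFromEntry(ar, alpha, beta);
        if (v != VALUE_UNKNOWN) return v;
    }
    if (hashMove == NO_MOVE)                       /* DP move has priority */
        hashMove = ar->bestMove;
}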

Hash entries have an age counter that is incremented after every search.

Saving a Hash Entry

If an entry exists in the DP table and (depth >= entry->depth OR entry->age > 1), then copy the old entry to the AR table and overwrite the DP entry with the new values (with age = 0).
If an entry exists in the DP table but meets neither the depth nor the age requirement, then we overwrite the AR table entry with the new values.
If no entry exists in the DP table, then we create a new DP entry with age = 0.
Note: we replace the DP entry if the new depth is the same or better, OR if the entry was not used in the previous search; I assume that an entry with age > 1 is of little value.
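As code, the save rule above would be something like (again with illustrative names; newEntry already has age = 0):

/* Save: try the depth-preferred table first, demote or fall back to AR. */
HashEntry *dp = &dpTable[key & (DP_SIZE - 1)];
if (dp->key == 0) {                            /* empty slot: take it       */
    *dp = newEntry;
} else if (depth >= dp->depth || dp->age > 1) {
    arTable[key & (AR_SIZE - 1)] = *dp;        /* demote old DP entry to AR */
    *dp = newEntry;
} else {
    arTable[key & (AR_SIZE - 1)] = newEntry;   /* DP slot kept; go to AR    */
}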

This is a little faster than a pure always-replace strategy, but I'm confused about one thing. I've read many posts saying that the always-replace table ends up with more entries than the depth-preferred table, yet I'm getting the opposite. I always add an entry to the DP table first if possible, so I have more entries in that table. Am I doing something wrong?

Also, when people refer to hashing qSearch nodes, do they mean doing something like this in alphaBeta:

if (depth <= 0) {
    nodeValue = qSearch(....);
    recordHash(hashKey, nodeValue, ...etc);
    return nodeValue;
}

Or do they actually mean hashing nodes inside qSearch itself? When I tried the above in alphaBeta, hashing values from depth = 0 nodes, I saw a slowdown, as many entries with depth > 0 were overwritten by depth = 0 entries.

H.G.Muller
Posts: 190
Joined: Sun Jul 14, 2013 10:00 am
Real Name: H.G. Muller

Re: TT Table replacement strategy

Post by H.G.Muller » Sun Apr 30, 2017 8:49 pm

Note that hash tables are typically too big to fit in any CPU cache, so accessing them requires a memory access, which is very costly. (E.g. in the same time you could have executed several hundred arithmetic instructions.) Making two probes in different tables doubles that cost. Memory is fetched in cache-line-sized groups, however, so if you access one entry of a hash table, the neighboring entries are likely to land in cache as well, and access to those is nearly free. For this reason most engines use a single table, where, for instance, the even entries are used with a depth-preferred replacement strategy and the odd entries with an always-replace strategy.
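In code the pairing can be as simple as (names are just for illustration):

/* One table: even slot is depth-preferred, its odd neighbor always-replace. */
/* TT_SIZE is a power of two, so TT_SIZE - 2 masks the index to an even slot. */
unsigned idx  = key & (TT_SIZE - 2);
HashEntry *dp = &table[idx];       /* depth-preferred entry                 */
HashEntry *ar = &table[idx + 1];   /* always-replace entry, same cache line */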

Also, they typically store an entry in only one place: if it qualifies for the depth-preferred part, it does not also go into the always-replace part. The probing code then simplifies to checking which of the keys in the slots where the entry could reside matches; at the first match you don't have to check any remaining entries. When you get a match on any of them, you just do whatever you do on a hit; it doesn't matter from which part of the table the hit came.

On storing you would always overwrite an entry with a matching key, irrespective of its depth and the new depth, even if that means replacing a d=9 entry by a d=1 entry. If the d=9 entry had still been useful, you would not have had to redo the search that produced the d=1 result.
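As a sketch, using the dp and ar pointers from above:

/* Always overwrite a matching key, wherever it sits; otherwise depth decides. */
if (ar->key == key)
    *ar = newEntry;                /* same position: overwrite in place      */
else if (dp->key == key || newDepth >= dp->depth)
    *dp = newEntry;                /* key match or deep enough: DP slot      */
else
    *ar = newEntry;                /* everything else: always-replace slot   */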

It is (slightly) better to test the always-replace entry first; in the overwhelming majority of hash hits the entry will be found there. Very few nodes of the search tree are close to the root (and thus produce high-depth hash entries), and nodes close to the leaves usually cannot get any hash hits on those, because irreversible moves have been done in the branch leading to them. So they only get hash hits from very deep search requests, which are very rare in the tree. Almost all action takes place in the always-replace part of the table, where all the low-depth search results go. The depth-preferred entry is just sitting there, waiting for the occasional probe made in a node near the root. (But once that is made, and gets a hit, it saves an enormous amount of work.)
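So the probe order would simply be:

/* Test the always-replace entry first; nearly all hits come from there. */
HashEntry *hit = NULL;
if (ar->key == key)
    hit = ar;
else if (dp->key == key)
    hit = dp;                      /* rare, but can save an enormous search */
if (hit != NULL) {
    /* use hit->depth, hit->flag, hit->score and hit->bestMove as usual */
}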

For my engines QS and full-width search are handled by the same routine, which both probes and stores the hash. So every node in a QS tree probes the hash and stores its result. If you store without probing, it cannot possibly have a positive effect, as the stored results are not deep enough to satisfy any search request other than QS. For me, limiting storing and probing to d >= 1 nodes always slowed down the search (in time to depth).
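Schematically (Position, ttProbe and ttStore are placeholders; the move loop is elided):

/* One routine for full-width search and QS; depth <= 0 means a QS node. */
int search(Position *pos, int alpha, int beta, int depth) {
    int v = ttProbe(pos->key, depth, alpha, beta);       /* probe in QS too */
    if (v != VALUE_UNKNOWN)
        return v;
    int best = (depth <= 0) ? evaluate(pos) : -INFINITE; /* stand pat in QS */
    /* ... generate moves (captures only when depth <= 0), recurse with
       -search(pos, -beta, -alpha, depth - 1), raise best, cut on beta ... */
    ttStore(pos->key, best, depth, alpha, beta);         /* store QS results */
    return best;
}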

kickstone
Posts: 20
Joined: Fri Feb 26, 2016 7:27 am

Re: TT Table replacement strategy

Post by kickstone » Mon May 01, 2017 11:50 pm

Thank you! I've given that a shot, but it's performing worse for me. I must be doing something wrong. Let me recap what I've done, using the scheme where even-numbered entries are depth-preferred and odd entries are always-replace.

Saving to hash table

Compute the hash table indexes:
hashIndex = posHashKey & (TT_TABLE_SIZE - 1)
dpIndex = hashIndex - (hashIndex & 1)
arIndex = dpIndex + 1

Adjust nodeValue by ply for mate scores

If the entry at dpIndex is NULL, then save the new entry there.
If the entry at dpIndex != NULL, then check whether the hashKeys match OR the search depth > entry->depth OR entry->age > 1; if so, overwrite this entry. Otherwise, overwrite the entry at arIndex (or create a new one there if none exists).
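Or, as code (transcribing the rule above; key == 0 marks an empty slot):

/* Save: the DP slot is taken on an empty slot, a key match, more depth, */
/* or a stale entry; everything else overwrites the AR slot.             */
HashEntry *dp = &table[dpIndex];
if (dp->key == 0 || dp->key == posHashKey
        || depth > dp->depth || dp->age > 1)
    *dp = newEntry;
else
    table[arIndex] = newEntry;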

Probing

Compute the hash indexes as above.

If the entry at arIndex != NULL AND the hashKeys match, then check whether entry->depth >= the search depth; in that case we can use the info from this entry.
If the depth was insufficient, we check whether it is enough to avoid doing a null-move search, by updating a doNullMove boolean.
At this point, if no value was returned, we set hashMove = entry->bestMove and return VALUE_UNKNOWN. Since the same hash key should not be in both the always-replace and depth-preferred entries, there is no point checking the dpEntry now.

Next we do the exact same thing for the entry at dpIndex.

Is that what you meant?
