Designing an analysis friendly Stockfish?

Code, algorithms, languages, construction...
Uly
Posts: 838
Joined: Thu Jun 10, 2010 5:33 am

Designing an analysis friendly Stockfish?

Post by Uly » Fri Jan 28, 2011 11:57 am

This thread is a continuation from this branch:

http://rybkaforum.net/cgi-bin/rybkaforu ... pid=303130

There it was discussed how difficult it is to analyze with Stockfish, not just because the score is unstable and jumps around wildly, but because long analysis sessions seem to become useless as Stockfish forgets what it was analyzing. It also seems to be bad at forward and backward propagation of scores. It simply has bad short-term and long-term memory.

The developer said that this is by design, probably because a different implementation would cost Elo points, but he said anyone with elementary programming skills can fix it, as it's something as simple as "allowing probing the transposition table at PV nodes".

That's all that is wanted. If it turns out to be something that can be done in a simple way, one could try to implement a Stockfish Persistent Hash, which would be the same but with results saved to hard disk, so that after unloading and reloading Stockfish the results are the same.

What one wants is the behavior of Zappa Mexico II (note that Rybka 4 and Naum 4.2 allow this behavior via a UCI setting, "Preserve Analysis" or "Preserve Hash"; Zappa does this by default):

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -

Analysis starts normally

8/17 0:00 +0.10 1.d4 d5 2.Bf4 e6 3.e3 Bb4+ 4.c3 Bd6 5.Bxd6 Qxd6 (69.999) 372
8/20 0:00 +0.10 1.d4 d5 2.Bf4 e6 3.e3 Bb4+ 4.c3 (124.151) 377
9/21 0:00 +0.22 1.d4 Nf6 2.e3 d5 3.Nf3 Ne4 4.Bb5+ c6 (284.209) 395
9/21 0:00 +0.22 1.d4 Nf6 2.e3 d5 3.Nf3 Ne4 4.Bb5+ (318.440) 391
10/23 0:01 +0.08 1.d4 Nf6 2.e3 d5 3.Nf3 e6 4.Bb5+ c6 (521.354) 406
10/31 0:02 +0.08 1.d4 Nf6 2.e3 d5 3.Nf3 e6 4.Bb5+ (1.161.967) 404
11/31 0:04 +0.21 1.d4 Nf6 2.e3 d5 3.Nf3 Ne4 4.Bb5+ c6 (1.872.939) 400
11/31 0:05 +0.21 1.d4 Nf6 2.e3 d5 3.Nf3 Ne4 4.Bb5+ c6 (2.250.098) 402
12/33 0:08 +0.20 1.d4 Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 (3.616.944) 410
12/33 0:12 +0.20 1.d4 Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 (4.991.335) 407

If one stops the analysis and restarts it, the engine immediately reaches the current depth with the known score.

12.00 0:00 +0.20 1.d4 Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 Bd6 6.Ne5 (12)

If one forces a move, the engine immediately reaches depth -1 with the known score, and continues from there to the next depth.

rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR b KQkq -

11.00 0:00 +0.20 1...Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 Bd6 6.Ne5 (12)
12.00 0:00 +0.19 1...Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 Bd6 6.Ne5 (13)
12/34 0:02 +0.19 1...Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.c3 Bd6 6.Ne5 (1.047.929) 394

If one goes back a move, the engine immediately reaches depth +1 with the known score and the same line.

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -

13.00 0:00 +0.19 1.d4 Nf6 2.Bf4 d5 3.e3 e6 4.Nf3 Bb4+ 5.Nbd2 O-O 6.c3 (14)
13/35 0:19 +0.29 1.e4 e5 2.Nf3 Nf6 3.Nc3 Bb4 4.Bc4 d6 5.Ng5 O-O (8.067.399) 423
13/35 0:19 +0.29 1.e4 e5 2.Nf3 Nf6 3.Nc3 Bb4 4.Bc4 d6 5.Ng5 O-O (8.241.862) 422

The engine should not do this blindly; an improvement would be found if a better move is known at an earlier depth, especially when one refutes the variation and backtracks to the root: a move scoring better than the refutation would be shown at the earliest depth.

Currently, Stockfish does this:

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -

Analysis starts normally

15/16 0:00 +0.48 1.e4 Nf6 2.e5 Nd5 3.Nc3 e6 4.Nxd5 exd5 5.Nf3 f6 6.d4 Nc6 7.Bb5 fxe5 8.Nxe5 Qf6 (1.438.040) 2045
16/18 0:01 +0.52 1.e4 e5 2.Nf3 Nc6 3.Bb5 Nf6 4.O-O Bc5 5.d3 O-O 6.Nc3 Re8 7.Be3 Bxe3 8.fxe3 d6 9.Bxc6 bxc6 (2.773.018) 2305
17/22 0:01 +0.40 1.e4 e5 2.Nf3 Bd6 3.d4 exd4 4.Nxd4 Nf6 5.Nf5 O-O 6.Nxd6 cxd6 7.Nc3 Qe7 8.Be2 Nxe4 9.Nxe4 Qxe4 10.O-O Nc6 11.Be3 Re8 (4.082.568) 2374
18/16 0:02 +0.24-- 1.e4 e5 2.Nf3 Nc6 3.Bb5 Nf6 4.O-O Bd6 5.Bxc6 dxc6 6.d4 Qe7 7.dxe5 Bxe5 8.Nxe5 Qxe5 (6.276.123) 2449
18/21 0:02 +0.36 1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nf3 Nxe4 5.Nc3 Nxc3 6.dxc3 Be7 7.Bc4 Nc6 8.O-O O-O 9.Re1 Bf5 10.Bf4 Be6 11.Qd3 (7.241.882) 2439

Restarting analysis

15/18 0:00 +0.28 1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nf3 Nxe4 5.Bd3 Nf6 6.Nc3 Be7 7.O-O O-O 8.Nd4 Nc6 9.Nxc6 bxc6 (425.548) 1702
16/20 0:00 +0.32 1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nf3 Nxe4 5.d3 Nf6 6.d4 Be7 7.Bb5+ Bd7 8.Nc3 O-O 9.O-O Nc6 10.Re1 d5 (588.874) 1887
17/25 0:00 +0.32 1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nf3 Nxe4 5.Qe2 Qe7 6.Nc3 Nxc3 7.dxc3 Qxe2+ 8.Bxe2 Nc6 9.Be3 Be7 10.O-O-O O-O 11.Kb1 Bf5 12.Rhe1 Rfe8 13.Bc4 (1.185.425) 2109
18/21 0:00 +0.40++ 1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nf3 Nxe4 5.d4 d5 6.Bd3 Be7 7.Nbd2 f5 8.O-O O-O 9.c4 Nc6 10.cxd5 Qxd5 11.Bc4 (1.769.850) 2266
18/21 0:00 +0.40 1.e4 e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Nd5 6.O-O Nc6 7.Bb5 Nxe5 8.Nxe5 Qxe5 9.Re1 Ne3 10.Bxe3 dxe3 11.Rxe3 (2.013.653) 2301

Forcing move

rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq -

15/20 0:00 +0.40 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Nd5 6.O-O Nc6 7.Bb5 Nxe5 8.Nxe5 Qxe5 9.Re1 Ne3 10.Bxe3 dxe3 11.Rxe3 (423.839) 1593
15/13 0:00 +0.32++ 1...Nc6 2.d4 d5 3.exd5 Qxd5 4.Nf3 Nf6 5.Nc3 Qa5 6.Be2 Be6 7.O-O O-O-O (570.409) 1739
15/15 0:00 +0.24++ 1...Nc6 2.d4 d5 3.exd5 Qxd5 4.Nf3 Nf6 5.Nc3 Qa5 6.Be2 Bf5 7.O-O Nb4 8.Bb5+ c6 (635.090) 1764
16/20 0:00 +0.48-- 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Nd5 6.O-O Nc6 7.Bb5 Nxe5 8.Nxe5 Qxe5 9.Re1 Ne3 10.Bxe3 dxe3 11.Rxe3 (767.966) 1819
16/20 0:00 +0.40 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Nd5 6.O-O Nc6 7.Bb5 Nxe5 8.Nxe5 Qxe5 9.Re1 Ne3 10.Bxe3 dxe3 11.Rxe3 (945.274) 1831
17/19 0:00 +0.32++ 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Ng4 6.Qxd4 h5 7.Nc3 Nc6 8.Qf4 Ncxe5 9.O-O c6 10.Nd4 Nf6 (1.161.124) 1903
17/20 0:00 +0.48-- 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Ng4 6.Qxd4 h5 7.Nc3 Nc6 8.Qf4 Ncxe5 9.O-O c6 10.Nxe5 Qxe5 11.Rd1 (1.261.184) 1922
17/20 0:00 +0.40 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Qe3+ Be7 8.Nc3 Nc6 9.Qg5 Nb4 10.Qxg7 Nxc2+ 11.Kd1 (1.663.451) 2008
18/21 0:00 +0.32++ 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Qe3+ Be7 8.Nc3 Nc6 9.Qg5 Nb4 10.Qxg7 Nxc2+ 11.Kd1 Rg8 (1.950.410) 2079
18/21 0:01 +0.48-- 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Qe3+ Be7 8.Nc3 Nc6 9.Bd3 d5 10.Qg5 O-O 11.O-O Kh8 (2.077.578) 2077
18/21 0:01 +0.24++ 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Qe3+ Be7 8.Nc3 Nc6 9.Bd3 d5 10.Qg5 O-O 11.O-O g6 (3.057.877) 2198
18/23 0:01 +0.28 1...e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Qe3+ Be7 8.Nc3 Nc6 9.Bd3 d5 10.Qg5 O-O 11.O-O d4 12.Bc4+ Kh8 (3.124.383) 2197

Backtracking to root.

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -

15/18 0:00 +0.24-- 1.e4 e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Be2 Nc6 8.Qe3+ Qe7 9.O-O Nd5 (159.682) 1132
15/16 0:00 +0.40 1.e4 e5 2.Nf3 Nf6 3.d4 Nxe4 4.Bd3 d5 5.Nxe5 Nd7 6.O-O Bd6 7.Nc4 O-O 8.Nxd6 Nxd6 (311.501) 1534
16/18 0:00 +0.48 1.e4 e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Qe7 5.Be2 Nd5 6.Qxd4 Qb4+ 7.Qxb4 Nxb4 8.Na3 Bc5 9.c3 Nd5 (402.784) 1721
17/22 0:00 +0.32-- 1.e4 e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Be2 Nc6 8.Qe3+ Be7 9.O-O O-O 10.Nc3 d5 11.Bd3 Bd6 (466.806) 1661
17/18 0:00 +0.52 1.e4 e5 2.Nf3 Bd6 3.d4 exd4 4.Nxd4 Nf6 5.Nf5 O-O 6.Nxd6 cxd6 7.Nc3 Re8 8.Bc4 Qc7 9.Bd3 Na6 (854.258) 2024
18/12 0:00 +0.44-- 1.e4 e5 2.Nf3 Bd6 3.d4 exd4 4.Nxd4 Nc6 5.Nf5 Be5 6.Nc3 Nge7 (1.128.361) 2124
18/16 0:00 +0.36-- 1.e4 e5 2.Nf3 Bd6 3.Nc3 Nf6 4.Bc4 O-O 5.O-O Nc6 6.d4 exd4 7.Nxd4 Qe8 8.Nf5 Be5 (1.404.873) 2141
18/19 0:00 +0.40 1.e4 e5 2.Nf3 Bd6 3.Nc3 Nf6 4.Bc4 Nc6 5.O-O O-O 6.a3 a6 7.d3 b5 8.Ba2 Bb7 9.Be3 Ng4 10.Bg5 (1.908.272) 2221
19/30 0:01 +0.32 1.e4 e5 2.Nf3 Nf6 3.d4 exd4 4.e5 Ne4 5.Qxd4 f5 6.exf6 Nxf6 7.Bd3 Qe7+ 8.Qe3 Nd5 9.Qxe7+ Bxe7 10.O-O Nc6 11.Bg5 Ne5 12.Nxe5 Bxg5 13.Nf3 Bf4 14.Re1+ (3.165.799) 2383

That just seems random!

BFL has experience with this, but he's not a code guy; he suggested I contact BB+. Vempele gave some pointers. OnePostPete offered his help, but we're in different time zones and haven't managed to hold a chat session at all. Perhaps someone else can help with this, and a thread is enough?

Note: This is a simultaneous release with Rybka Forum.

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: Designing an analysis friendly Stockfish?

Post by BB+ » Fri Jan 28, 2011 11:29 pm

There it was discussed how difficult it is to analyze with Stockfish, not just because the score is unstable and jumps around wildly
I've noticed that too. There is also the 0.04 granularity.
but because long analysis sessions seem to become useless as Stockfish forgets what it was analyzing. It also seems to be bad at forward and backward propagation of scores. It simply has bad short-term and long-term memory.

The developer said that this is by design, probably because a different implementation would cost Elo points, but he said anyone with elementary programming skills can fix it, as it's something as simple as "allowing probing the transposition table at PV nodes".
The code you want looks to be in search.cpp:

Code: Select all

    // At PV nodes, we don't use the TT for pruning, but only for move ordering.
    // This is to avoid problems in the following areas:
    //
    // * Repetition draw detection
    // * Fifty move rule detection
    // * Searching for a mate
    // * Printing of full PV line

    if (!PvNode && tte && ok_to_use_TT(tte, depth, beta, ply))
    {
        TT.refresh(tte);
        ss->bestMove = ttMove; // Can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }
Could it be as simple as eliminating the first "!PvNode"? This is in "search", and there is similar code in "qsearch":

Code: Select all

   if (!PvNode && tte && ok_to_use_TT(tte, ttDepth, beta, ply))
    {
        ss->bestMove = ttMove; // Can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }
There could still be some problems with forwards/backward analysis, as TT entries are overwritten based first and foremost on "generation" [and only secondarily on "depth"], and the generation always increases (if you return to root position X a second time, it will have a different "generation").

Persistent Hash seems more difficult to implement, though if you just want a stopgap version (likely unintelligent with regards to slow disk reads) I don't think it would be too hard.

Uly
Posts: 838
Joined: Thu Jun 10, 2010 5:33 am

Re: Designing an analysis friendly Stockfish?

Post by Uly » Sat Jan 29, 2011 5:53 am

Thanks!

Would the code end up looking like this?

search.cpp:

Code: Select all

    if (tte && ok_to_use_TT(tte, depth, beta, ply))
    {
        TT.refresh(tte);
        ss->bestMove = ttMove; // Can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }
In "qsearch":

Code: Select all

   if (tte && ok_to_use_TT(tte, ttDepth, beta, ply))
    {
        ss->bestMove = ttMove; // Can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }
BB+ wrote:There could still be some problems with forwards/backward analysis, as TT entries are overwritten based first and foremost on "generation"
I see, so the implementation isn't trivial. Here's what BFL said on RF:
Banned for Life wrote:Maximizing the probability that high depth moves remain in the TT, even as the operator moves forward or backward in a line. The Rybka method of manually clearing the hash to dump these high depth positions will certainly be easier to implement than the method used by Zappa and Shredder which uses some automated method for preventing these values from saturating the TT when they are no longer relevant.
So the idea here is to make Stockfish protect hash entries that have reached some depth, leaving it to the user to clear the hash manually when visiting a different game.
BB+ wrote:Persistent Hash seems more difficult to implement, though if you just want a stopgap version (likely unintelligent with regards to slow disk reads) I don't think it would be too hard.
I don't think Rybka 3 does it intelligently, and it works relatively well: basically, after some depth is reached, the PVs are written to disk at relative depths, and when starting analysis they are retrieved from the file and put into the hash contents.

Rybka 3 has a lot of problems with such a concept (lots and lots of bugs that need workarounds as one analyzes more positions), engines like Hiarcs or Spike have really poor implementations that seem almost useless, Pro Deo and RomiChess have learning based on game results, and Naum and other engines only have book learning, which is useless for analysis.

Shredder does it right; it seems to have an implementation that works as expected in all scenarios (but one, noted below), so one would like an implementation that behaves like Shredder's.

Only PV moves (and PV nodes of internal main lines) are written to the learning file, so the rest of the analysis needs to be re-searched, but that is fine. Rybka doesn't re-search the rest of the analysis and blindly jumps to the current depth, which is bad: if the user finds a better move at an earlier depth, Rybka won't know about it until she reaches the next depth at the root, which is too time-costly as she needs to resolve the old main move at a higher depth first. One has to work around this by switching to MultiPV=2 and back to single PV (which makes Rybka see the new main move from the start); one would want to avoid this problem.

Another problem is that old learning contents should be overwritten: if e4 was scored as 0.40, and after forcing it e5 is scored as 0.36, then when going back to the root there should be no sign of the 0.40 score. Rybka PH still shows 0.40 until the next depth is reached, a problem also solved by the same workaround.

I only mention these preventively, as they could be problems faced when trying to implement learning.

The last problem is requiring the user to visit all positions; even Shredder has this problem. If the user examines the variations after 1.e4 e5 2.Nf3, and it is the mainline, Shredder's learning will not see any of that when one goes back to the root in one jump; for some reason, the user has to visit the position after e5 for the learning to propagate before going back to the root. The same would be true if 1.e4 e5 2.Nf3 Nc6 3.Bb5 a6 was the mainline and its tail was analyzed: going directly to the root leaves Shredder unaware of the analysis, and one has to visit 3.Bb5, 2...Nc6, 2.Nf3, and 1...e5, in that order, for the scores to be propagated. (But if the best learning out there has this problem, it would be acceptable to keep it. Actually, these jumps are dangerous: I tried them to double-check Shredder, and it's disruptive; it's difficult to reconcile the evals of the root with the evals of the tail when the user jumps like that.)

Rybka and Shredder have a preset size for the learning file that, once filled, starts overwriting older entries. Why is this? Would it be possible to start with a file of size 0 and grow it as new entries are written?

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: Designing an analysis friendly Stockfish?

Post by BB+ » Sat Jan 29, 2011 6:20 am

Would the code end up looking like this?
If my understanding of the Stockfish developer's comment is correct, then yes, that is all that is required.
So the idea here is to make Stockfish protect hash entries that have reached some depth, leaving it to the user to clear the hash manually when visiting a different game
The most banal way to do this is simply to prevent the "generation" from changing. :) I have no idea if this is really that bright an idea. Just delete the "generation++" on the next-to-last line of "tt.cpp"! Then "depth" is the only over-write criterion. A more intelligent way might be to try to hack up the "store" function in that file. Right now, the operative code looks to be:

Code: Select all

      c1 = (replace->generation() == generation ?  2 : 0);
      c2 = (tte->generation() == generation ? -2 : 0);
      c3 = (tte->depth() < replace->depth() ?  1 : 0);

      if (c1 + c2 + c3 > 0)
          replace = tte;
Here tte ranges over a bucket-sized set of entries you are looping over, and replace is the one of them that will eventually be replaced (the first entry is the default). I guess you might want to change c3 somehow; for instance, if "tte->depth()" is at least 30 [half-ply, I think], then you would ignore anything about "generation". Maybe put:

Code: Select all

      if (c1 + c2 + c3 > 0 && (tte->depth() <= 30 || tte->depth() < replace->depth()))
          replace = tte;
If I didn't screw this up, this would prevent entries of depth more than 30 half-ply from being over-written based upon a "generation" criterion.

Uly
Posts: 838
Joined: Thu Jun 10, 2010 5:33 am

Re: Designing an analysis friendly Stockfish?

Post by Uly » Sat Jan 29, 2011 9:48 am

Genius! You're making this seem very simple, I'm going to make the changes and report back :)

SPAMMER
Posts: 2
Joined: Tue Jan 04, 2011 8:17 am
Real Name: Bozo The Clown

Re: Designing an analysis friendly Stockfish?

Post by SPAMMER » Sat Jan 29, 2011 11:06 am

I get all excited when the discussion turns to hashing policies, but it's almost 2 AM, so I'll keep this short.

For analysis purposes, you don't want to be throwing things out of the hash because of generation, but you do need some way to prevent the hash from filling up with high-depth entries that are never read. Basically, you want replacement to be some function of depth and time since a hashed position was last accessed. If you don't mind being very inefficient in the implementation, you can store the depth (already done) and use the order of the positions in the set to track which positions have been accessed most recently.

One special case is a new write. If you don't mind making these the second-lowest-priority entry, you can always write to the lowest-priority hash record. This allows hash writes to be handled as plain memory writes rather than much less efficient read-modify-writes. Of course, if you do this, the read logic needs to handle the potential case where the same position occupies two hash records.

Anyway, what stays in the hash is really important for analysis, so these things should be carefully considered...

mcostalba
Posts: 91
Joined: Thu Jun 10, 2010 11:45 pm
Real Name: Marco Costalba

Re: Designing an analysis friendly Stockfish?

Post by mcostalba » Sat Jan 29, 2011 11:15 am

BB+ wrote:If my understanding of the Stockfish developer's comment is correct, then yes, that is all that is required.
Tried that. Result embarrassingly negative. Not clear why, also because Bob seems to do that in Crafty... perhaps it's one of the reasons why Crafty struggles 300 ELO behind SF :-)
BB+ wrote: I guess you might want to change c3 somehow, for instance if "tte->depth()" is at least 30 [half-ply, I think], then you would ignore anything about "generation". Maybe put

Code: Select all

      if (c1 + c2 + c3 > 0 && (tte->depth() <= 30 || tte->depth() < replace->depth()))
          replace = tte;
If I didn't screw this up, this would prevent entries of depth more than 30 half-ply from being over-written based upon a "generation" criterion.
Or something like this....

Code: Select all

      c1 = (replace->generation() == generation ?  2 : 0);
      c2 = (tte->generation() == generation || tte->depth() > 15 * ONE_PLY ? -2 : 0);
      c3 = (tte->depth() < replace->depth() ?  1 : 0);

      if (c1 + c2 + c3 > 0)
          replace = tte;

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

Re: Designing an analysis friendly Stockfish?

Post by hyatt » Sat Jan 29, 2011 3:05 pm

mcostalba wrote:
BB+ wrote:If my understanding of the Stockfish developer's comment is correct, then yes, that is all that is required.
Tried that. Result embarrassingly negative. Not clear why, also because Bob seems to do that in Crafty... perhaps it's one of the reasons why Crafty struggles 300 ELO behind SF :-)
BB+ wrote: I guess you might want to change c3 somehow, for instance if "tte->depth()" is at least 30 [half-ply, I think], then you would ignore anything about "generation". Maybe put

Code: Select all

      if (c1 + c2 + c3 > 0 && (tte->depth() <= 30 || tte->depth() < replace->depth()))
          replace = tte;
If I didn't screw this up, this would prevent entries of depth more than 30 half-ply from being over-written based upon a "generation" criterion.
Or something like this....

Code: Select all

      c1 = (replace->generation() == generation ?  2 : 0);
      c2 = (tte->generation() == generation || tte->depth() > 15 * ONE_PLY ? -2 : 0);
      c3 = (tte->depth() < replace->depth() ?  1 : 0);

      if (c1 + c2 + c3 > 0)
          replace = tte;

What are you talking about? Hash probing and using EXACT scores along the PV? Not doing so _hurts_. This is trivial analysis of simple tree searching...

mcostalba
Posts: 91
Joined: Thu Jun 10, 2010 11:45 pm
Real Name: Marco Costalba

Re: Designing an analysis friendly Stockfish?

Post by mcostalba » Sat Jan 29, 2011 4:03 pm

hyatt wrote: What are you talking about? Hash probing and using EXACT scores along the PV? Not doing so _hurts_. This is trivial analysis of simple tree searching...
I have tested the patch now suggested by BB+ in the past, and the result is greatly and impressively negative:

After 2674 games Mod- Orig: 230 - 1212 - 1232 ELO -133 !!!!! (+- 5.4)


No more no less.

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

Re: Designing an analysis friendly Stockfish?

Post by hyatt » Sat Jan 29, 2011 5:23 pm

mcostalba wrote:
hyatt wrote: What are you talking about? Hash probing and using EXACT scores along the PV? Not doing so _hurts_. This is trivial analysis of simple tree searching...
I have tested the patch now suggested by BB+ in the past, and the result is greatly and impressively negative:

After 2674 games Mod- Orig: 230 - 1212 - 1232 ELO -133 !!!!! (+- 5.4)


No more no less.

All I can conclude is that the test is somehow flawed. That makes zero sense from a theoretical point of view. And I have tested it as well and found it costs 2-3 Elo to avoid EXACT hashes on the PV. There is simply no way it can hurt that much unless it is exposing another unknown bug...
