Still waiting on Ed

General discussion about computer chess...
hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Still waiting on Ed

Post by hyatt » Thu Jul 07, 2011 6:42 pm

Rebel wrote:
hyatt wrote:I have asked, _repeatedly_, for specifics about such "contaminated ideas that I took." I asked for specific examples, and it would be even better to verify that the supposedly "copied ideas" are not in Crafty versions that pre-date the release of ip/robo*. To date, you have provided nothing, except for these same vague accusations.

Please tell me where the search "behaves exactly as ..." quite contrary to previous versions. Here is a challenge for you, since you want to put yourself on the spot. Compare the search in version 23.4 to the search in version 22.4, which is a couple of years old. You will have to work around the "collapse" where I got rid of SearchRoot(), Search(), etc. and Quiesce(), QuiesceChecks() and ended up with a single search, and a single quiesce. But once you do that, tell me _exactly_ what changes are in search.
Ball is in your court...
The lower branch factor.

Its origin: Rybka. The base and origin of the success and the 3-5 year domination of Rybka.

We are both programmers and have been around too long; let's not fool each other.

WHAT are you talking about? Crafty's effective branching factor has been reduced in two ways. Both were in use _well_ before Rybka existed. (1) reductions. I started working on this idea right after Fruit came out with Fabien's "history pruning". I tested a ton, you can find many threads in CCC about it, and I discovered that the history counters were no good for that. And when I removed them from Fruit, it did not hurt a bit. So LMR pre-dates Rybka. (2) forward pruning. The futility stuff from Heinz was around way before Rybka. And I think Jerimiah Pennery actually implemented futility pruning in Crafty. Then Heinz discussed "extended futility pruning". Before Rybka. And that is where the pruning in the last 2-3-4 plies came from. Not from Rybka.

There are no _other_ ideas in Crafty that post-date Rybka. Null-move? Way before Rybka. I was using "adaptive null-move pruning" in 1995 after a suggestion by John Stanback dealing with null-move blindness in certain types of positions.

So, again, what exactly did I take from Rybka (robo*/ip*)? You said I should remove whatever I took. I said I took nothing. You said I did. One of us is wrong. Can you not prove your statement? Is it my responsibility? Did Vas defend against _our_ statements or did we have to prove them true ourselves? Your statement, prove it is true. Or retract it and learn to look before you leap. Your choice.

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Still waiting on Ed

Post by hyatt » Thu Jul 07, 2011 7:03 pm

Some notes. First, for the futility array, note that it is big, but we do not prune at those depths. They are there for future testing. I rewrote that code for one simple reason. Heinz did futility at the last ply, extended futility one ply back. During testing, I broke the damned test with a <= vs < comparison, and ended up doing futility on the last ply and extended futility on the two previous plies, not just one. And amazingly it played a good bit better. It took me a couple of weeks to figure out that the comparison was wrong. But then, since it worked, I naturally assumed that since Heinz used a different threshold for futility vs extended futility, I probably should add a 3rd threshold for the extra ply I had discovered was added by accident.

And in doing that, one never knows how far to go, so I think I allowed for up to 10 plies. And after a lot of testing, I found 4 futility bounds that were significantly better than the ones Heinz was using, and in addition, I was doing futility pruning for 4 plies, not just 2 or 3. But I never got futility for ply 5 (or beyond) to work. I was not sure whether that was a result of the fairly fast games (1min+1sec) or whether it was just too dangerous to work at all. So the ability to test further is there. The variable "pruning_depth" is currently set to 5, and we only do the futility pruning when this is satisfied:

if (depth < pruning_depth && MaterialSTM(wtm) + pruning_margin[depth] <= alpha) {

For the SearchControl() change, if you notice, all I extend now are checks. When I was doing the cluster testing on search changes (the above pruning, etc.) I also decided to test different values for the extensions we used (in check, one legal escape, mate threat, passed pawn pushes, etc.) and discovered that setting all but in_check to zero produced the strongest engine. So I removed all the other code, and with just one test left, a separate procedure actually made the code harder to read, rather than easier, so I manually inlined the 4 lines of code.

I reported a good while back on the Swap() optimization I had done in several places. Why extend a check if the checking move loses material? Yes, on rare occasions, a sac might be good. But the majority of the time this forced the search to look harder at an irrelevant part of the tree.

The number-of-moves-searched issue was a very serendipitous finding. Crafty avoids reductions until it gets to the "REMAINING_MOVES" phase. This means that we have already searched the hash move, good captures and killer moves (if there were any of those). After that, a move is subject to reduction unless it possesses some characteristic suggesting it should not be reduced (a check would extend, so why reduce it?). But again, by accident, I discovered that if I reduce the _first_ move searched at any ply by a lower amount, there is an advantage to be gained. I had covered this for most moves because every position generally has a hash move, or non-losing captures, or killers that are legal, but it turns out some do not. And by not reducing the first move searched as much, the Elo jumped a bit. I don't recall by how much, although with some work I could figure it out.

We had a long discussion on CCC at one point as there are two ways to implement futility pruning. You can either just lop the move off and forget about it. Or you can just drop directly into the q-search for selected moves, which reduces the effort, but at least makes sure that material is not hanging after you make a move you think is not worth examining.

Somewhere between those versions, we went to the "continue" (really forward-prune the move) as opposed to "drop to quiesce()". Someone pointed out that Heinz had suggested the "hard pruning" approach, while I was doing the safer version. When I tried the "hard pruning" it worked better, but not with Heinz's threshold(s). His seemed to be too pessimistic after looking at cluster-testing results.

Notice that nowhere in there is a mention of Rybka.

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: Still waiting on Ed

Post by BB+ » Fri Jul 08, 2011 4:44 am

Rebel wrote:The lower branch factor.
Can you be more specific? For instance, did you observe this with SMP, or also with 1-cpu? If the latter, then any SMP search changes can be put aside for now. Also, it has recently been noted that tuning evaluation can often (slightly) lower the branch factor by infusing additional stability into the search, so perhaps another test to do would be to plop the 23.1 pruning code directly into 22.1 and compare.

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Still waiting on Ed

Post by hyatt » Fri Jul 08, 2011 5:06 am

Only problem is, I do not believe the pruning code (as it exists in Crafty) exists in ip* and friends in exactly the same way. And the changes were not hundreds of elo for that. Not a hundred. Maybe 20 after all the tuning was done...

I think the LMR stuff is more valuable, Elo-wise, and it, too, is not implemented as it is in ip*. And the implementation is not that different today than it was in 22.2. You can find the first references to using Swap() to avoid extensions and encourage reductions 2-3-4 years ago when I was playing with that stuff.

IP (actually IvanHoe is the one I have source for on my laptop) does LMR significantly differently. I reduce by 2, period, except for the first move searched at any ply, which will only be reduced by 1, and only if it is not a hash move, non-losing capture, or killer move. I don't have the "singular" extension code (an ugly name for this algorithm; maybe the more referenced tt-singular description is better). I don't differentiate between PV and non-PV nodes with respect to any pruning of any kind. I reduce, extend, and prune the same regardless of whether it appears to be a PV node or not.

Too many differences to really count, actually... that's just for starters.

This entire thread is a complete crock.

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: Still waiting on Ed

Post by BB+ » Fri Jul 08, 2011 5:30 am

hyatt wrote:And the changes were not hundreds of elo for that. Not a hundred. Maybe 20 after all the tuning was done...
I was going to ask for a guesstimate of this (as Ed had implied the gains were large), but figured that "plagiarism" was somewhat oblivious to Elo accounting.
BB+ wrote:the pruning_margin array as {0, 120, 120, 310, 310, 400, 400, 500} (note that only the first few values matter, due to pruning_depth).
The low_depth formula in the current IvanHoe is:

Code: Select all

      if (cnt >= depth && NextMove->phase == ORDINARY_MOVES
          && (move & 0xe000) == 0 && (SqSet[fr] & ~MyXRAY)
          && MyOccupied ^ (BitboardMyP | BitboardMyK))
        {
          if ((2 * depth) + MAX_POSITIONAL (move) + POS0->Value <
              VALUE + 40 + 2 * cnt)
            {
              cnt++;
              continue;
            }
        }
The cut_node formula in the current IvanHoe is:

Code: Select all

      if (cnt > 5 && NextMove->phase == ORDINARY_MOVES
          && (move & 0xe000) == 0 && (SqSet[fr] & ~MyXRAY) && depth < 20
          && ((1 << (depth - 6)) + MAX_POSITIONAL (move) + (POS0->Value)
              < VALUE + 35 + 2 * cnt))
        {
          cnt++;
          continue;
        }
And the all_node version has (5 << (depth - 6)) instead. In all cases, there is also another pruning possibility for bad-SEE moves, when the margins are enlarged somewhat -- there is also post-makemove pruning, if the "positional gain" of the move didn't help as much as perhaps expected. Rybka 3 controls pruning margins (to some extent) via "pre-evaluation", which is gamephase-dependent.
BB+ wrote:Note that Crafty compares alpha to Material, while others seem to compare to some sort of evaluation.
In fact, others (Rybka/IPPOLIT for instance) use "positional gain" in addition to evaluation. They both also allow (at least in low_depth search) a pruning after makemove+eval is done, if the "positional gain" of the move was not as much as hoped.

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: Still waiting on Ed

Post by BB+ » Fri Jul 08, 2011 6:00 am

I think these are the tabulated Rybka 3 pruning margins at the game start, at CUT and ALL nodes:

Code: Select all

0 0 0 16 43 96 BIG BIG
0 0 0 16 43 43 106 BIG
Note that the first three entries (depth 0,1,2 ply) will typically be in low_depth search in any case. Rybka divides the half-ply count by 2 before using these tables, and "BIG" means more than 10000.

These are factored in with positional gain, evaluation, and a constant such as 36. The "tempo bonus" [which is 9 at the game-start] also appears in the comparison. Conditions apply, such as those involving move_count, and can be dependent on CUT versus ALL.

Rybka does not have a scoring-based bad-SEE pruning at these types of nodes (it is merely condition-based on the depth and nextmove-phase). Rybka also has pruning after makemove+eval, which uses essentially the table(s) of above.

For low_depth nodes, the margin (at the game start) corresponding to the table above seems to be 0 at depth 1 and 11 at depth 2 (the constant of 36, tempo bonus, and "positional gain" are also involved). The bad-SEE pruning here has fixed numbers like 75 and 150. Again there is possible pruning after makemove+eval.

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Still waiting on Ed

Post by hyatt » Fri Jul 08, 2011 4:38 pm

BB+ wrote:
hyatt wrote:And the changes were not hundreds of elo for that. Not a hundred. Maybe 20 after all the tuning was done...
I was going to ask for a guesstimate of this (as Ed had implied the gains were large), but figured that "plagiarism" was somewhat oblivious to Elo accounting.
It looks like the "best tuning" for the search stuff, after we finished the eval tuning (which was released as 22.2) was worth another 24 Elo:

Code: Select all

 5 Crafty-22.4R01-0  2604  4  4  31128  44%  2646  24%
14 Crafty-22.2-100   2580  5  5  31128  41%  2646  23%

22.4R01-0 was the final extension tuning run where the "mate extension" was set to 0 (the -0 in the version).

22.2 itself represented a really significant jump from 22.1, over +120 in the cluster testing. There were hundreds of individual runs, tweaking eval parameters one by one. That was reported here as it was going on.

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Still waiting on Ed

Post by hyatt » Sun Jul 10, 2011 4:26 pm

Have not seen a single piece of evidence from Ed. Wonder why?

User avatar
Rebel
Posts: 515
Joined: Wed Jun 09, 2010 7:45 pm
Real Name: Ed Schroder

Re: Still waiting on Ed

Post by Rebel » Sun Jul 10, 2011 6:44 pm

hyatt wrote:Have not seen a single piece of evidence from Ed. Wonder why?
Cause I have nothing to add nor to distract.

What's causing the huge branch factor difference between 23.2 and 23.3?

Whatever the answer, it's not your original idea. You heard it from someone. And that person heard it from another one.

And in the end the origin of the idea comes from the hacked Rybka.

You, Robert Hyatt, are using ideas in Crafty that smell of Rybka.

Hacked Rybka.

Ideas never meant to be yours.

Yet you use them.

Peterpan
Posts: 44
Joined: Sat Nov 27, 2010 7:22 pm
Real Name: Izak

Re: Still waiting on Ed

Post by Peterpan » Sun Jul 10, 2011 7:08 pm

I'm sick of all the lies and corruption going on in computer chess.
What used to be an interesting hobby has now turned into one big mess.
I will take a break from computer chess and one day perhaps write an 1800 or 2000 Elo strong chess program and know at least it's my own, with my own ideas.
Being the best and strongest is not always the most important thing in life, when you have to sacrifice integrity and honesty, and resort to stealing, to get there.
I quit.

Post Reply