Category Archives: Computer Chess

Another round of sparring partners for Prophet3

Back in early March I added aspiration windows to Prophet with a window of 1/3 pawn, as well as some futility pruning and delta pruning. My schedule hasn’t allowed me to do a lot of work on the code since then, so I took advantage of the time to find some new sparring partners. The results are in the table below.

These games — almost 200k of them — are played at 3+0.5. That is, the clock starts at 3 seconds and 1/2 second is added after each move. That may seem like a fast game but when you’re going for quantity, it’s actually slower than I’d like. That’s about as fast as I could go on my older test system without getting a lot of time forfeits.

Rank Name Elo games score oppo. draws
1 plisk 98 25092 62% 17 18%
2 tjchess 98 32400 62% 11 21%
3 jazz 74 33286 60% 4 19%
4 myrddin 49 31082 58% -7 18%
5 Horizon 2 31080 51% -3 17%
6 tcb -5 31080 50% -2 18%
7 prophet3-20170319 -11 42068 45% 27 23%
8 jumbo -12 31240 49% -2 19%
9 madeleine -12 31240 49% -2 19%
10 Beowulf -94 24900 39% -17 19%
11 prophet2 -104 21700 39% -30 18%
12 matheus -107 21700 39% -29 18%

I’ve actually dropped Beowulf, Prophet2, and Matheus off the active list. The others should keep Prophet busy for a while.

With summer coming and other commitments starting to wind down for a while, I’ve started working on Prophet again. The latest version seems to be about +10 Elo above this version.

chess4j 3.2 is released and it’s magic!

I just released chess4j 3.2. This release is purely about optimizing for speed. That is, there are no changes to the search or evaluation terms.

chess4j is now a bitboard engine, and uses magic bitboards for move generation. Simply by replacing some loops over all squares with loops over (possibly sparsely populated) bitboards, and, probably most importantly, getting rid of some managed collections stuff, chess4j 3.2 is 4-5x faster than 3.1. In a head-to-head match of almost 700 games against 3.1, 3.2 scored 79%, which translates into a 200-250 Elo gain.
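
To illustrate the “loop over set bits” idiom, here is a minimal sketch (not chess4j’s actual move generator): instead of visiting all 64 squares, the loop repeatedly isolates and clears the lowest set bit of the bitboard.

public class SetBitLoop {
    public static void main(String[] args) {
        // hypothetical bitboard: white knights on b1 (bit 1) and g1 (bit 6)
        long knights = (1L << 1) | (1L << 6);
        while (knights != 0) {
            int sq = Long.numberOfTrailingZeros(knights); // index of the lowest set bit
            System.out.println("generate knight moves from square " + sq);
            knights &= knights - 1;                       // clear that bit and continue
        }
    }
}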

One other important difference is that the opening book is now off by default. If you want to use the opening book included with the release, you must add “-book=book.db” to the command line. (This is done in the included chess4j.bat.)

I think I can still get some more Elo just from speed optimizations and efficiency improvements (such as splitting up capture and non-capture move generation), so there will probably be one more release in the 3.x line before moving on to 4.0. The 4.x line will be focused on parallel search.

As usual you can grab the latest and greatest from the project website. Enjoy!

Prophet’s sparring partners

Several months ago I decided to get a little more rigorous about how I test changes to Prophet. With previous versions of Prophet, I would make a change, run some test suites consisting of a few hundred or a thousand tactical positions, play a few games online to convince myself the change was good, and that was it. That doesn’t really cut it any more, though. Fruit and the other engines that have followed suit have shown the importance of having a solid testing methodology that accurately measures how effective changes are. So, to that end, I found some sparring partners and took my first measurements:

Rank Name Elo games score oppo. draws
1 matheus 101 7360 67% -20 16%
2 prophet2 86 7360 64% -17 14%
3 lime 35 7360 56% -7 12%
4 gerbil -25 7360 46% 5 15%
5 prophet3-20160903 -54 8000 41% 12 17%
6 elephant -138 7360 28% 27 11%

Then Christmas happened, and I found myself with a little spare time to work on Prophet. I implemented bitboards, and then magic bitboards, and a few other speed optimizations. Suddenly it looked like I needed some new sparring partners!

Rank Name Elo games score oppo. draws
1 prophet3-20170118 91 24436 68% -42 17%
2 matheus 65 10648 58% 5 20%
3 prophet2 23 10649 51% 10 16%
4 lime -24 10649 44% 17 13%
5 gerbil -81 10643 35% 24 14%
6 elephant -191 10648 21% 39 10%

Here is what I ended up with. I like this a lot, as Prophet3 is right in the middle, with some engines above it and some below. That seems like a pretty good cross section. As Prophet3 continues to improve, I’ll just add newer, stronger engines to the mix and drop the bottom ones off.

Rank Name Elo games score oppo. draws
1 myrddin 84 4811 63% -7 18%
2 tcb 46 4810 57% -4 18%
3 Horizon 30 4812 55% -2 17%
4 jumbo 19 4972 53% -2 18%
5 prophet3 7 12733 51% -2 22%
6 madeleine -26 4972 46% 4 20%
7 matheus -36 4812 44% 4 20%
8 Beowulf -67 4812 39% 7 20%
9 prophet2 -71 4812 39% 7 18%

chess4j 3.1 is released

I just released version 3.1 of chess4j, my Java based chess engine. This version is about 100 elo stronger than 3.0. The key differences are:

  • The Groovy language is no longer used. Groovy is a fine language for some purposes; chess4j didn’t make heavy use of it, but did use it for some I/O-type classes, which I wrote about in this post. Unfortunately, it was a constant struggle with my IDE of choice (STS), which balked and complained about the Groovy classes. I had hoped that in time IDE support would improve, but in the end I just got tired of constantly messing around with it. On a positive note, the chess4j jar file went from 11.5 MB to 4.5 MB.
  • Some modest search improvements, worth about 20 elo.
  • Added pondering. This means that when it’s the opponent’s turn to move, chess4j is already thinking about its response. If chess4j correctly predicts what the opponent will play, then it is that far ahead in formulating a response (it may even be able to play its move instantly). Seems to be worth about 80 elo. (A rough sketch of the idea follows this list.)
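
Here is a rough sketch of the pondering idea, using hypothetical names rather than chess4j’s actual classes. Real engines let the speculative search keep running and simply re-time it on a ponder hit; in this simplified version the speculative search just runs to completion and its result is either reused or thrown away.

import java.util.concurrent.*;

public class PonderSketch {
    private final ExecutorService exec = Executors.newSingleThreadExecutor();
    private Future<String> speculative;
    private String predicted;

    // stand-ins for the engine's real search and make-move routines
    static String search(String position) { return "bestmove"; }
    static String makeMove(String position, String move) { return position + " " + move; }

    void startPondering(String position, String predictedReply) {
        predicted = predictedReply;
        // speculatively search the position that arises after the predicted reply
        speculative = exec.submit(() -> search(makeMove(position, predictedReply)));
    }

    String onOpponentMoved(String position, String actual) throws Exception {
        if (actual.equals(predicted)) {
            return speculative.get();     // ponder hit: reuse the work already done
        }
        speculative.cancel(true);         // ponder miss: discard it and search normally
        return search(makeMove(position, actual));
    }
}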

In my testing, chess4j 3.1 scores 53% against 3.0 in a 1000 game match without pondering, and 64% in a 1600 game match with pondering.

I have my sights set on a parallel search for the 4.0 release, but I think there will be another minor release or two before that to work on search speed optimizations.

You can grab the latest and greatest from the project website. Enjoy!

Prophet is back

After a VERY long absence from server play, the rewrite of my chess engine Prophet is finally far enough along to play some games online. The engine itself has been capable of play for a little while now, but I’ve been establishing a testing routine and wanted to get an opening book in place first.

Previous versions of Prophet had a custom book format, using fseek() and the like on a binary file. I may go back to that at some point but right now Prophet 3 uses SQLite to query the book built for chess4j.

Anyway, the engine has been online on ICC most of the day:


67: + 1632 W 1469 brittabeast [ br 5 3] D11 Res Nov 30 16 15:29
66: + 1622 B 1479 brittabeast [ br 5 3] B03 Mat Nov 30 16 15:23
65: + 1611 W 1466 ituchesstech [ br 5 3] D08 Mat Nov 30 16 15:13
64: + 1600 B 1533 Knipshun [ br 5 3] A00 Res Nov 30 16 14:56
63: + 1586 W 1547 Knipshun [ br 5 3] C00 Res Nov 30 16 14:53
62: + 1570 B 1563 Knipshun [ br 5 3] A00 Res Nov 30 16 14:49
61: + 1553 W 1092 Darkpower [ br 5 3] A04 Res Nov 30 16 14:42
60: + 1551 B 1094 Darkpower [ br 5 3] B00 Res Nov 30 16 14:37
59: + 1549 W 1096 Darkpower [ br 5 3] D35 Res Nov 30 16 14:33
58: + 1547 B 1098 Darkpower [ br 5 3] B02 Res Nov 30 16 14:29
57: + 1545 W 1100 Darkpower [ br 5 3] C41 Res Nov 30 16 14:23
56: + 1543 B 1102 Darkpower [ br 5 3] B07 Res Nov 30 16 14:16
55: + 1541 W 1104 Darkpower [ br 5 3] D31 Res Nov 30 16 14:14
54: + 1539 B 1106 Darkpower [ br 5 3] B06 Res Nov 30 16 14:04
53: + 1537 W 1108 Darkpower [ br 5 3] D37 Res Nov 30 16 13:59
52: + 1534 B 1111 Darkpower [ br 5 3] B01 Res Nov 30 16 13:53
51: + 1531 B 1520 zalk [ br 5 3] A00 Res Nov 30 16 13:50
50: + 1514 W 1537 zalk [ br 5 3] B06 Res Nov 30 16 13:45
49: + 1495 W 1233 TT [ br 5 3] A04 Fla Nov 30 16 13:33
48: + 1695 B 1431 amr [ Br 2 1] B00 Mat Nov 30 16 13:28

20/20 isn’t too bad! Granted, it helps that the engine was vastly underrated, but still…what fun 🙂

chess4j 3.0 is released!

After a long period of inactivity I’m happy to announce the release of version 3.0 of my Java chess program, chess4j. This version is a good bit stronger than the previous version, though still not as strong as my C/C++ program Prophet (see below). The key areas of improvement are:

  • The addition of an opening book, which I discussed here.
  • Improvements in the positional evaluation function (a small sketch of the pawn terms follows this list)
    • Passed pawns are rewarded
    • Isolated pawns are penalized
    • Doubled pawns are penalized
    • Rooks on open files or half open files are rewarded
    • Rooks and queens on the 7th rank are rewarded, particularly when connected to another rook or queen
    • There is some attempt to keep the king sheltered until the endgame, after which it is encouraged to move towards the center
  • Almost 2x speed increase
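
As promised above, here is a minimal sketch of the doubled and isolated pawn terms. This is not chess4j’s actual evaluator and the penalty values are made up; it just shows the shape of the calculation.

public class PawnTermsSketch {

    // pawnsPerFile[0..7] = number of friendly pawns on files a..h
    static int pawnStructureScore(int[] pawnsPerFile) {
        final int DOUBLED_PENALTY = 10, ISOLATED_PENALTY = 15;  // hypothetical values
        int score = 0;
        for (int file = 0; file < 8; file++) {
            int n = pawnsPerFile[file];
            if (n == 0) continue;
            // doubled (or tripled) pawns: more than one pawn on the same file
            if (n > 1) score -= DOUBLED_PENALTY * (n - 1);
            // isolated pawns: no friendly pawn on an adjacent file
            boolean leftEmpty  = (file == 0) || pawnsPerFile[file - 1] == 0;
            boolean rightEmpty = (file == 7) || pawnsPerFile[file + 1] == 0;
            if (leftEmpty && rightEmpty) score -= ISOLATED_PENALTY * n;
        }
        return score;
    }

    public static void main(String[] args) {
        // doubled pawns on the c-file and an isolated pawn on the h-file: -10 - 15 = -25
        System.out.println(pawnStructureScore(new int[]{1, 1, 2, 0, 1, 1, 0, 1}));
    }
}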

The table below compares the performance of chess4j v2 against v3 on several different test suites. Each suite was run twice and the results averaged. I highlighted changes of +10% or more in green (good), and -10% or more in red (bad). The top 4 suites are tactical suites. There weren’t any significant changes here, despite the 2x speed increase. However, the ‘SBD’ suite and all the ‘STS’ suites below that are positional suites, and there were some differences there. Key areas of improvement (according to these tests) were ‘undermining’, ‘open files and diagonals’, ‘bishop vs knight’, ‘offer of simplification’, and ‘queens and rooks to the 7th rank’. The one suite marked in red measures advancement of the f/g/h pawns. I think the added king safety code has something to do with that: the program is less willing to push those pawns if the king is castled on that side.

Suite/Time v2 raw v2 % v3 raw v3 % Delta raw Delta %
WAC/5 258 86.0% 266 88.7% 8 2.7%
WAC/10 270.5 90.2% 273.5 91.2% 3 1.0%
WCSAC/5 775 77.4% 784.5 78.4% 9.5 0.9%
ECMGCP/10 35.5 19.4% 37 20.2% 1.5 0.8%
SBD/60 45.5 34.0% 96.5 72.0% 51 38.1%
STS1/5 30 30.0% 48.5 48.5% 18.5 18.5%
STS2/5 19.5 19.5% 39.5 39.5% 20 20.0%
STS3/5 45 45.0% 41.5 41.5% -3.5 -3.5%
STS4/5 36.5 36.5% 41.5 41.5% 5 5.0%
STS5/5 41 41.0% 59.5 59.5% 18.5 18.5%
STS6/5 47 47.0% 48 48.0% 1 1.0%
STS7/5 22.5 22.5% 33 33.0% 10.5 10.5%
STS8/5 31 31.0% 20.5 20.5% -10.5 -10.5%
STS9/5 30 30.0% 37 37.0% 7 7.0%
STS10/5 55.5 55.5% 57.5 57.5% 2 2.0%
STS11/5 19 19.0% 28 28.0% 9 9.0%
STS12/5 42 42.0% 48.5 48.5% 6.5 6.5%
STS13/5 45.5 45.5% 48 48.0% 2.5 2.5%
STS14/5 31 31.0% 62.5 62.5% 31.5 31.5%
STS15/5 22 22.0% 29.5 29.5% 7.5 7.5%

The table below shows chess4j’s performance in head-to-head matches versus four other engines (the bottom one being my engine, Prophet). The match conditions were: only one processor could be used, no thinking on the opponent’s time, and no opening book libraries or endgame tablebases. The matches were played using a set of 20 starting positions called the ‘Nunn Test Suite’. The engines play each position twice, once as white and once as black, making the entire match 40 games. Results are given as chess4j’s wins-losses-draws. One thing that may jump out is that the results don’t seem consistent. Why the variability against the same opponent at different time controls? It very well could be that chess4j is better at longer time controls, but it would take thousands of games to prove that theory; this game set is too small to be statistically significant. The main point was to get a feel for how strong the engine is.

Opp 1 min 3 min 10 min
Gerbil 5-31-4 9-23-8 10-23-7
Horizon 4.3 6-31-3 8-28-4 4-31-5
Lime 6.2 5-32-3 4-29-7 6-29-5
Prophet 2.0e 2-32-5 4-35-1 2-35-3

While there is certainly a lot to do with the evaluation, it’s not so far behind Prophet’s that it would explain such a lopsided result. I’m afraid the main disadvantage chess4j has comes down to speed. On my laptop I typically see chess4j search between 180,000 and 200,000 positions per second. That may sound like a lot, but the 32-bit compile of Prophet searches between 800,000 and 1 million positions per second (and there are programs out there that search much faster than that!). To test that theory I ran another set of matches against the same opponents using time odds. In the table below, the 2x, 4x, and 8x columns show how chess4j would fare in a 1 minute match if it were 2x, 4x, or 8x faster than it is now. As you can see, if chess4j were 5-10 times faster than it is, it would be much more competitive against this group.

Opp 1 min 2x 4x 8x
Gerbil 5-31-4 10-22-8 12-21-7 21-8-11
Horizon 4.3 6-31-3 8-26-6 11-21-8 12-21-7
Lime 6.2 5-32-3 6-30-4 14-20-6 16-12-12
Prophet 2.0e 2-32-5 6-29-5 10-27-3 15-9-6

Is it possible to make chess4j that much faster? Maybe. I was able to double the speed of this version by carefully analyzing the data structures being used (mostly Collections stuff, like ‘ArrayList’ vs ‘HashSet’ and such), looking up their runtime complexities for different operations (adding, iterating, and so on), and changing the data structures where appropriate. But I think I’ve gone about as far as I can down that road. This is a Java program, after all; it’s not as “close to the metal” as a C program. I think to squeeze much more out of it I’d have to scale back on how much object instantiation it does. But then it would look less like an object oriented program, and would probably be less readable and possibly harder to test… all of which go against what I want to accomplish with this program.

So, what’s next? My next areas of focus are:

  • It’s time to establish a proper testing routine. This means that, going forward, all changes to the program need to be measured in terms of Elo gain. This is a lot harder than it sounds, but it will ensure changes are good changes.
  • Add a pondering search. Instead of sitting idly as the opponent is on move, try to guess what the opponent is going to do and start formulating a response. If we’re right often enough we can allow more time to think about each move.
  • Use all the machine’s processors! Most computers these days have multiple processors, yet chess4j is only capable of using one. (Using multiple processors is easy; doing it efficiently in the context of computer chess is MUCH harder than you might think.)

If you’d like to try your hand against chess4j, you can download it from the project website. It’s a console program though, not graphical. However, you can download a GUI and set it up to use chess4j… see the readme.txt file.

Enjoy!

chess4j + SQLite

The last time I wrote anything about chess4j, way back in November, I reported that chess4j was using a small in-memory database of opening moves. The program was reading a library of about 500 games from GM Kasparov, which it would do every time the program initialized. Before each move, chess4j would consult this in-memory database to see what moves have been played by Kasparov (or his opponent), and choose between them using a weighted random selection algorithm. The effect of this was that chess4j no longer played the same opening moves, so it added some variability to its online play. It also produced games that were a little more “aesthetic”, or pleasing to look at.

That was a huge step forward, but it had some limitations. Reading all those games takes a little time, so it isn’t really practical to read in more than about 500. Holding them all in memory is also expensive. Finally, it doesn’t leave any room for any learning algorithms, since the program is initialized to the same state each time it starts.

To address those issues I needed a solution that persisted data to the disk. In past programs written in C/C++ I’ve created custom file formats and used fseek() and the like to probe it. This time I decided to use a proper database. I had considered MongoDB, mainly just to play around with it, but in the end I decided on SQLite. SQLite is lightweight, reliable, and fast – a perfect fit. (Side note: the primary author of SQLite, Dr. Richard Hipp, was roommates with my graduate school advisor, Dr. Ronnie Smith when they were both working on their PhDs at Duke University. I’ve met him… very nice guy.)

The database table to hold these opening book moves is very simple:

sqlite> .schema book_moves
CREATE TABLE book_moves (key int not null,fromsq int not null,tosq int not null,frequency int default 1,wins int,losses int,draws int);
CREATE INDEX idx_book_moves_key on book_moves(key);

Really, the only fields that are necessary are the first three: key, fromsq, and tosq. ‘key’ is used as a signature for a position. That is, every possible chess position has its own unique value for ‘key’. (I’m not going to get into the algorithm to do that in this post, just accept that it’s true!) ‘fromsq’ and ‘tosq’ are probably self-explanatory – they represent the square the piece is moving from and the square it’s moving to. So, if we want to probe the opening book to see what moves it contains for any given position, we just compute the value of ‘key’, then perform the query:

select fromsq,tosq,frequency,wins,losses,draws from book_moves where key=?
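
For illustration, here is a hedged sketch of running that query from Java with the sqlite-jdbc driver (assumed to be on the classpath). This is not chess4j’s actual book code; the position key is assumed to be computed elsewhere and simply passed in.

import java.sql.*;
import java.util.*;

public class BookProbeSketch {

    // returns one {fromsq, tosq, frequency} triple per book move found for the given key
    public static List<int[]> probe(Connection conn, long key) throws SQLException {
        String sql = "select fromsq, tosq, frequency from book_moves where key = ?";
        List<int[]> moves = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    moves.add(new int[] { rs.getInt("fromsq"), rs.getInt("tosq"), rs.getInt("frequency") });
                }
            }
        }
        return moves;
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:book.db")) {
            for (int[] m : probe(conn, 0x123456789abcdefL)) {   // hypothetical position key
                System.out.printf("from=%d to=%d freq=%d%n", m[0], m[1], m[2]);
            }
        }
    }
}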

‘frequency’ is a count of how many times the move was encountered as the opening book was constructed. This is what makes the weighted random selection possible. The idea here is that if move A was encountered 10 times as often as move B, then chances are it’s a better move and I should play it roughly 10 times as often. This approach also helps keep a stray “bad move” that may have found its way into the book database from getting on equal footing with good moves.
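
The weighted selection itself is simple. Here is a minimal sketch (not chess4j’s actual code) of picking a move with probability proportional to its frequency; it assumes a non-empty candidate list with positive frequencies.

import java.util.List;
import java.util.Random;

public class WeightedPickSketch {

    record BookMove(String move, int frequency) {}   // stand-in for chess4j's BookMove

    static BookMove pick(List<BookMove> moves, Random rnd) {
        int total = moves.stream().mapToInt(BookMove::frequency).sum();
        int r = rnd.nextInt(total);                  // 0 <= r < total
        for (BookMove m : moves) {
            r -= m.frequency();
            if (r < 0) return m;                     // landed inside this move's "slice"
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        List<BookMove> candidates = List.of(new BookMove("c2c4", 8939), new BookMove("e2e4", 12000));
        System.out.println(pick(candidates, new Random()).move());
    }
}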

Finally, we have ‘wins’, ‘losses’, and ‘draws’. Each time chess4j completes a game, it increments the appropriate counter for the moves it played from the opening book (not moves the opponent played). It doesn’t actually do anything with this information yet, but future versions will probably use the information to avoid any moves it consistently loses with.

So, that’s a brief description of the implementation. To seed the database I found an online collection of about 50,000 games played by Master level players or above. The database I’m using currently has nearly 150,000 moves.

Watching chess4j play with this opening book is really fun. Since it doesn’t have to “think,” it plays almost instantaneously when it has a response in the opening book. Below is a segment from the logfile. Notice how it begins with “c4”, then parses the opponent’s response “g8f6” and responds with “b1c3”. The frequency counts on the moves it encounters in its opening book gradually diminish until it has no book response for “c7c6” and has to start thinking on its own.


# parsing: level 0 5 3
# level: 0, 5, 3
# setting increment to 3000 ms.
# parsing: name Pepo2424
# opponent is: Pepo2424
# parsing: rating 1989 1611
# parsing: time 30000
#time : 300000
# parsing: otim 30000
# parsing: go
# book move: BookMove [move=c2c4, frequency=8939]
move c2c4
# parsing: time 30300
#time : 303000
# parsing: otim 30100
# parsing: usermove g8f6
# book move: BookMove [move=b1c3, frequency=1520]
move b1c3
# parsing: time 30600
#time : 306000
# parsing: otim 30100
# parsing: usermove g7g6
# book move: BookMove [move=g2g3, frequency=182]
move g2g3
# parsing: time 30900
#time : 309000
# parsing: otim 30300
# parsing: usermove f8g7
# book move: BookMove [move=f1g2, frequency=225]
move f1g2
# parsing: time 31200
#time : 312000
# parsing: otim 30400
# parsing: usermove d7d6
# book move: BookMove [move=e2e4, frequency=24]
move e2e4
# parsing: time 31400
#time : 314000
# parsing: otim 30500
# parsing: usermove e7e5
# book move: BookMove [move=g1e2, frequency=5]
move g1e2
# parsing: time 31700
#time : 317000
# parsing: otim 30600
# parsing: usermove e8g8
# book move: BookMove [move=e1g1, frequency=12]
move e1g1
# parsing: time 32000
#time : 320000
# parsing: otim 30200
# parsing: usermove c8e6
# book move: BookMove [move=d2d3, frequency=7]
move d2d3
# parsing: time 32300
#time : 323000
# parsing: otim 30300
# parsing: usermove c7c6
# transposition table initialized with 1048576 entries.
# time remaining: 323000, increment: 3000, allotted time: 15920
1 -183 1 2 c3d5
1 -1 3 4 c3b1
1 4 4 6 c3a4
1 17 4 10 f1e1
1 20 4 12 h2h3
1 22 4 14 h2h4
1 22 6 39 h2h4
2 7 6 78 h2h4 b8d7
2 12 12 230 d1b3 b7b5
2 12 12 246 d1b3 b7b5
3 17 21 368 d1b3 b7b5 h2h4
3 17 28 1500 d1b3 b7b5 h2h4
4 7 45 3541 d1b3 d8c7 h2h4 b8d7
4 7 79 5825 d1b3 d8c7 h2h4 b8d7
5 12 121 9522 d1b3 d8c7 h2h4 b8d7 a2a4
5 12 139 15133 d1b3 d8c7 h2h4 b8d7 a2a4
6 7 190 23677 d1b3 d8c7 h2h4 b8d7 a2a4 d7c5
6 9 293 38345 h2h4 b8d7 a2a4 a7a6 f2f4 h7h5
6 9 339 64469 h2h4 b8d7 a2a4 a7a6 f2f4 h7h5
7 10 440 97805 h2h4 b8d7 a2a4 a7a5 b2b3 d7c5 f2f4
7 10 640 172694 h2h4 b8d7 a2a4 a7a5 b2b3 d7c5 f2f4
8 7 817 243868 h2h4 b8d7 a2a4 a7a5 f2f4 d7c5 f4e5 d6e5
# hash probes: 628514
# hash hits: 58960 (9.38%)
# hash collisions: 31636 (5.03%)
# fail highs: 32795 (5.22%)
# fail lows: 2422 (0.39%)
# exact scores: 37 (0.01%)
move h2h4

-----SNIP-----

# parsing: result 1-0 {Pepo2424 resigns}

This improvement was a lot of fun to implement and it’s a lot of fun to watch. chess4j’s online opponents seem to like it too:

Ara61 says: Thank you for the great game!

I’ve also made some great improvements to the engine’s evaluation routine, but that will be the subject of another post very soon! And, as always, the latest development code is available from the project website. These changes are not included in the last official “release”, but will be in the 3.0 release coming within the next few weeks.

chess4j learns some moves from Kasparov

Ok, “learn” is too strong a word. More accurately, chess4j now has a small opening book database populated with a little over 500 Kasparov games. Before doing any thinking on its own, chess4j will consult this database, and if a move is found it will play it. This has the effect of quickly steering the game into a nice position that a Grandmaster might play, and it also adds some variation to the opening moves.

500 games is not a lot, but at the moment the opening book is entirely contained in memory. That is, when the program starts it reads through those 500+ games and stores them in memory for the duration of the program’s execution. The next time it starts, it does it all again.

I’m really pretty happy with the way this all came together. Here is the method that initializes this in memory opening book:

private static void initBook() throws Exception {
	// the in-memory opening book (a Singleton)
	OpeningBook book = OpeningBookInMemoryImpl.getInstance();
	FileInputStream fis = new FileInputStream(new File("pgn/Kasparov.pgn"));
	PGNIterator it = new PGNIterator(fis);

	// stream through the PGN file, adding each game to the book
	PGNGame pgnGame;
	while ((pgnGame = it.next()) != null) {
		book.addToBook(pgnGame);
	}

	fis.close();
}

As you can see there are a few key classes that make this all work. First, we need an OpeningBook. OpeningBook is an interface, which in this case is implemented by the Singleton OpeningBookInMemoryImpl. I won’t go into the in memory implementation here, because in the future that will be replaced with something with a persistent datastore behind it (maybe MongoDB). But, I will show the interface it implements:

public interface OpeningBook {
 
	public void addToBook(PGNGame game);
 
	public List<BookMove> getMoves(Board board);
 
	public BookMove getMoveWeightedRandomByFrequency(Board board);
}

Pretty simple at the moment, and likely to be expanded. The key points are that you can add a game to the book, get a list of BookMoves for a given position, or get a single BookMove using a weighted random selection algorithm.

Now that we have an OpeningBook, we need something that is capable of reading through a Portable Game Notation (PGN) file, producing a sequence of PGNGame objects.

PGNGame is really just a data object, so I won’t show the code here. It encapsulates a game, which we can think of as a series of PGN tags, a list of moves, and a result (win, loss, draw).

The PGNIterator class is a little more interesting, though. Since some of these PGN files get fairly large (40-50 MB is not unusual), it’s best to take a ‘streaming’ approach to processing them. Hence, if you look back at the initBook() method, you’ll notice the constructor for PGNIterator is given a FileInputStream. (It will accept any type of InputStream, which it uses internally to create a BufferedReader.)

Here is the next() method of PGNIterator :

public PGNGame next() throws IOException, ParseException, IllegalMoveException {
 
	PGNParser parser = new PGNParser();
	String nextGame = getNextPGN();
 
	if (nextGame != null) {
		return parser.parseGame(nextGame);
	}
 
	return null;
}

I’m glossing over a few details here, but hopefully that gets the point across. The call to getNextPGN() looks far enough ahead in the stream to capture the String representation of a game, or returns null if it can’t (probably because it hit the end of the file). The PGNParser then converts that String into a PGNGame. As you might imagine, PGNParser uses some regular expression stuff that frankly made my head hurt a little. Finally, we saw above that the PGNGame is added to the opening book.
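
For the curious, here is a rough sketch of how a getNextPGN()-style method might work. This is not the actual chess4j implementation; it is just one way to chunk a stream into per-game strings, assuming each game’s movetext is followed by a blank line.

import java.io.*;

public class PgnChunkerSketch {
    private final BufferedReader reader;

    public PgnChunkerSketch(InputStream in) {
        this.reader = new BufferedReader(new InputStreamReader(in));
    }

    // Returns the raw text of the next game, or null at end of stream.
    public String nextGameText() throws IOException {
        StringBuilder game = new StringBuilder();
        boolean inMoveText = false;
        String line;
        while ((line = reader.readLine()) != null) {
            boolean blank = line.trim().isEmpty();
            boolean isTag = line.startsWith("[");
            if (!blank && !isTag) inMoveText = true;          // we've reached the movetext
            if (blank && inMoveText) return game.toString();  // blank line after movetext ends the game
            game.append(line).append('\n');
        }
        return inMoveText ? game.toString() : null;           // last game may lack a trailing blank line
    }
}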

All of this is part of the 3.0 development, so it’s not in the latest release, but the source code is available (along with several unit tests). If you’re the developer type, feel free to check it out from the project website on SourceForge.

chess4j 2.0 is released

I’m happy to announce the release of version 2.0 of my chess program, chess4j. This version is a good bit stronger than the previous release, though there is still much to do. The focus of this release has been to improve the efficiency of the search. These improvements include:

  • Improved Move Ordering
    • Previous PV move (this was already in place)
    • Hash move (this was already in place)
    • Winning and even captures, as sorted using a Static Exchange Evaluator
    • Killer move heuristic – two moves, which must be non-captures
    • Remaining non-capturing moves in any order
    • Remaining captures, sorted using the Static Exchange Evaluator
  • Addition of a quiescence (capture only) search after the full width search. Captures are sorted using the MVV/LVA algorithm
  • Extensions
    • Check Extensions
    • Pawn to 7th or 8th (promotion) extension
  • Null Move Pruning, using R=2
  • Principal Variation Search
  • Aspiration Windows of +/- 1/3 pawn (a small sketch of the idea appears after this list)
  • Late Move Reductions
  • Improved Node Rate by 10x (!!)
  • A very, very simple opening book. It’s only two moves deep, added just to give some variety to online play
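
Here is the promised sketch of the aspiration-window idea. None of this is chess4j’s actual code; search() stands in for the engine’s alpha-beta search at a given depth, and the window of 33 centipawns corresponds roughly to 1/3 pawn.

public class AspirationSketch {
    static final int INF = 100000, WINDOW = 33;

    // placeholder for the real alpha-beta search
    static int search(int depth, int alpha, int beta) { return 10; }

    static int iterativeDeepening(int maxDepth) {
        int score = search(1, -INF, INF);                  // first iteration: full window
        for (int depth = 2; depth <= maxDepth; depth++) {
            int alpha = score - WINDOW, beta = score + WINDOW;
            score = search(depth, alpha, beta);            // try a narrow window around the last score
            if (score <= alpha || score >= beta) {         // fail low/high: the window was too narrow
                score = search(depth, -INF, INF);          // re-search with a full window
            }
        }
        return score;
    }
}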

So, how good is it? As I write this, the program has an ICC Blitz rating (playing exclusively 5 3) of around 2050, and a Bullet rating (playing exclusively 2 1) of around 2100. So, not very strong as far as chess programs go, but good enough to beat most club players at rapid speeds. As far as test suites go, I’ve only really used the Win At Chess suite at 10 seconds per problem, where it scored 264/300.

What’s next? There is a lot to choose from, but I’ve decided to make the opening book the focus of the 3.x branch. I think I’ll start by creating a database of high-caliber games, and some tooling to add or remove lines. I’d also like to have some sort of learning mechanism, so the program learns new moves over time as well as learns to avoid disastrous ones.

Further down the road, the evaluator is certainly the weak point: it really just measures material and some static piece placement for knights and pawns.

You can download chess4j from the project website: http://chess4j.sourceforge.net/. As always, the source code is also available.

Enjoy!

chess4j gets a little smarter

Over the holidays I managed to find a little time to do some work on chess4j. It felt really good to dust the codebase off and make some improvements that have had real impact on playing strength.

Up to this point, I have not really focused on playing strength at all. The official release number is 1.2. The focus of the entire initial release (well, the entire 1.x series of releases) has been to create a Winboard compatible program that is rock solid in terms of reliability, and to do so in a test driven manner, meaning a high percentage of the code is covered by unit tests, with some functional tests thrown in for good measure. That goal was accomplished. chess4j has played literally thousands of games on the Internet Chess Club, and it never crashes. It loses a lot! But, it never crashes.

The goal of the 2.x series will be to implement some well known search improvements. This probably will not include parallel or distributed search just yet, just ways to make the single threaded search more efficient. The focus will be on intelligently ordering moves to reduce the branching factor, implementing a capture-only search towards the leaf nodes, extending promising lines, and pruning back less promising lines.

Over the holidays I accomplished the following:

  • added the ability to process test suites, which included a FEN parser, in order to help measure the impact of search improvements
  • implemented a transposition table using an “always replace” strategy.
  • modified the search loop to order moves instead of just playing them in the order they were generated (the only exception being that the previous PV was already played first). Now it plays the previous PV, followed by any move suggested by the transposition table, then promotions, and then all captures, with captures sorted by the well-known MVV/LVA algorithm (a small sketch of that ordering follows this list).
  • made a couple of obvious speed/efficiency improvements
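
Here is the promised sketch of MVV/LVA ordering. It is not chess4j’s actual code, and the piece encoding and values are hypothetical, but the idea is the standard one: prefer capturing the Most Valuable Victim with the Least Valuable Attacker.

import java.util.*;

public class MvvLvaSketch {
    // piece values indexed by a hypothetical piece code: 0=pawn, 1=knight, 2=bishop, 3=rook, 4=queen
    static final int[] VALUE = {100, 300, 300, 500, 900};

    record Capture(int attacker, int victim) {}

    static void sortMvvLva(List<Capture> captures) {
        // higher score = searched earlier: big victim, small attacker
        captures.sort(Comparator.comparingInt((Capture c) -> VALUE[c.victim()] * 10 - VALUE[c.attacker()]).reversed());
    }

    public static void main(String[] args) {
        List<Capture> caps = new ArrayList<>(List.of(
            new Capture(3, 0),    // rook takes pawn
            new Capture(0, 4),    // pawn takes queen  -> should be searched first
            new Capture(1, 4)));  // knight takes queen
        sortMvvLva(caps);
        System.out.println(caps);
    }
}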

I haven’t gotten to the point where I need to be too scientific about measuring the impact of improvements yet. But there’s no doubt those changes have had a dramatic impact. Before, chess4j’s blitz rating on ICC was typically in the 1100-1200 range. As I type this, it has a blitz rating of 1419. That’s still very, very weak, and far, far below Prophet’s former strength, but the point is those changes are having the desired effect!

All of this has been committed to the Sourceforge account, so anyone is free to peek at the code. The official release doesn’t include these changes yet, and probably won’t for several months, but the code is there for my developer friends who’d like a sneak peek at what’s coming in the 2.x line.

The project website: http://chess4j.sourceforge.net/