Archive for the ‘chess4j’ Category

chess4j 3.4 is officially released. The main changes in this release are the hashing related changes that I discussed some time back in this post. Other than that, there are many smaller changes, many of which I’m sure I’ve lost track of, but some that come to mind are: converting to Java8 syntax, incorporating Scala in testing, and improved Winboard compatibility.

Also, the source code has been moved from SourceForge to GitHub, and the project website has moved along with it.

The next major area of work will be to parallelize the search, but I anticipate another minor release before that work begins.

Hashing is an area I’ve meant to revisit in chess4j since it was originally implemented, for a couple of reasons: (1) the hash entries consume too much space, limiting the number of entries the hash table can store, and (2) the design needs rework in preparation for multi-processing capabilities. I’ll address the first issue in this post and follow up on the second at a later date.

If you don’t know what a hash table is used for, I recommend reading an introductory article on transposition tables first.

Up until recently the table entries in chess4j had the following structure:

   long zobristKey (64 bits)
   TranspositionTableEntryType type  (32 bit reference to an Enum)
   int score (32 bits)
   int depth (32 bits)
   Move move (32 bit reference to an Object)

So how much space does a single hash entry take? It’s not very straightforward to figure out. In C you can use the ‘sizeof’ operator, but things get more complicated with object oriented languages. Still, we can make some educated guesses and do some hand waving to arrive at estimates.

The zobristKey, score, and depth members are all primitives: a long is 64 bits, and an int is 32 bits. The ‘type’ member is a reference to an enum. I’ll assume references are 32 bits, which holds as long as we’re using less than 32GB of heap (though that is JVM specific). By the same logic, ‘move’ is also a 32 bit reference. Therefore, the TranspositionTableEntry itself is 64 + 32 + 32 + 32 + 32 bits == 192 bits == 24 bytes. (That likely ignores some object header overhead.)

That is already pretty bad, but it’s not even the complete picture. Note ‘move’ is a reference to an instance of a Move object. The Move object being referenced has a cost associated with it too. That object can’t be deallocated from the heap as long as we’re referencing it. So how big is a Move?

   Square from
   Square to
   Piece piece
   Piece captured (possibly null)
   Piece promotion (possibly null)
   boolean castle
   boolean epCapture

Again, it’s difficult to come up with an exact number, but we have between three and five references (two Square and one to three Piece) to singleton objects, plus two booleans at 8 bits each. So, somewhere between 14 and 22 bytes. Note that not every hash entry will have a Move associated with it (upper bound entries, for example), but most will.

All in all, that makes the cost of a single hash entry somewhere between 24 and 46 bytes! Doing a little more hand waving, let’s assume an average of 38 bytes (which is probably low). That means a fully loaded hash table would have the following maximums:

RAM      Entries (millions)
64 mb    1.7
128 mb   3.5
256 mb   7.0
512 mb   14.1
1 gb     28.3

So how can this be improved? It turns out that with a little bit-twiddling it’s possible to encode the type, score, depth, and move members into a single 64 bit value!

   long zobristKey (64 bits)
   long val (64 bits)
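
The encoding itself is just bit shifting and masking. Here is a minimal sketch of how such a packing might look; the field widths and layout (2 bits for the type, 16 for the score, 8 for the depth, 32 for an encoded move) are my own illustrative assumptions, not necessarily chess4j’s actual encoding:

```java
// Sketch of packing a table entry's type, score, depth, and move into one
// 64-bit value. Field widths and layout are illustrative assumptions.
public final class HashEntryCodec {

    public static long encode(int type, int score, int depth, int move) {
        long val = 0;
        val |= (long) (type & 0x3);                      // bits 0-1: entry type
        val |= ((long) (score + 32768) & 0xFFFF) << 2;   // bits 2-17: score, offset so it stays non-negative
        val |= ((long) depth & 0xFF) << 18;              // bits 18-25: depth
        val |= ((long) move & 0xFFFFFFFFL) << 26;        // bits 26-57: encoded move
        return val;
    }

    public static int type(long val)  { return (int) (val & 0x3); }
    public static int score(long val) { return (int) ((val >>> 2) & 0xFFFF) - 32768; }
    public static int depth(long val) { return (int) ((val >>> 18) & 0xFF); }
    public static int move(long val)  { return (int) ((val >>> 26) & 0xFFFFFFFFL); }
}
```

Decoding simply reverses the shifts, so the computational cost per probe or store is a handful of cheap bitwise operations.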

Not only does that reduce the overhead of an entry from 192 bits to 128 bits (24 bytes to 16 bytes), but it completely eliminates the need to keep those Move objects on the heap! Now a fully loaded hash table has the following maximums:

RAM      Entries (millions)
64 mb    4.2
128 mb   8.4
256 mb   16.8
512 mb   33.6
1 gb     67.1

That’s a fantastic improvement in terms of storage capacity. Essentially we’re trading the computational cost of doing the transformations for space. The theory is that the extra space, which allows us to store far more entries, will more than offset that computational cost. So does it work?

chess4j has a command line parameter “-hash=NNN” that can be used to set the number of hash entries. NNN must be a power of 2. I ran several tests with the old and new hash schemas. Using the Silent but Deadly test suite at 30 seconds per problem, I measured average depth achieved at various combinations of max heap space (the Java -Xmx VM argument) and max # hash entries.
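
The power-of-2 requirement lets the table index be computed with a single bitwise AND rather than a slower modulo. A small sketch of the idea (the class and method names are hypothetical, not chess4j’s actual code):

```java
public final class TableIndex {
    // With a power-of-two table size, the slot for a key can be computed
    // with a single bitwise AND instead of a modulo, and the mask trick
    // works even for negative keys. numEntries must be a power of 2.
    public static int slot(long zobristKey, int numEntries) {
        return (int) (zobristKey & (numEntries - 1));
    }
}
```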

Results for the old hash schema:

Heap (-Xmx)   NNN=1m   NNN=2m   NNN=4m   NNN=8m   NNN=16m   NNN=32m   NNN=64m   NNN=128m
64m           11.31    11.34    11.03    OOM
128m          12.42    12.22    12.12    11.93    11.42     OOM
256m          12.59    12.93    13.04    12.99    12.68     12.00     OOM
512m          12.64    12.99    13.31    13.40    13.49     13.43     12.64    OOM
1g            12.64    13.02    13.21    13.42    13.48     13.61     13.54    13.03

OOM = “Out of Memory”. The highest value in each row is the peak performance setting for that combination of heap space and number of hash entries. As you can see, performance can actually decline when the number of hash entries is too large for the given heap space, even before the JVM throws an Out of Memory error.

Results for the new hash schema:

Heap (-Xmx)   NNN=1m   NNN=2m   NNN=4m   NNN=8m   NNN=16m   NNN=32m   NNN=64m   NNN=128m
64m           12.01    11.83    11.61    10.84    OOM
128m          12.58    12.84    12.84    12.46    11.69     OOM
256m          12.68    12.93    13.26    13.43    13.31     12.36     OOM
512m          12.72    12.97    13.26    13.45    13.62     13.46     12.89    OOM
1g            12.70    13.06    13.30    13.50    13.58     13.63     13.62    13.37

So, it appears the new hash schema indeed performs better, particularly with smaller heap sizes.

It occurred to me during testing that perhaps the JVM should have been initialized with the ‘-Xms’ argument equal to ‘-Xmx’, so as not to “penalize” the program for the overhead of growing the heap, since it will surely grow to capacity. Using a large initial -Xms value would likely make the larger heap sizes look even better relative to the smaller ones, but it doesn’t really matter when the point of the test is to compare one schema to the other, as long as the settings are consistent across runs.

Node rate per second, hash hit rate, and hash collision rate were also measured, but were less interesting. Node rate and hash collision rate predictably decreased with successively larger tables while hash hit rate increased.

It’s worth pointing out that my experience here is similar to Jonatan Pettersson’s with Mediocre Chess.

These changes will be part of the 3.4 release. For the impatient, the changes to the main hash table have already been merged into master on the project GitHub repo.

I just released chess4j 3.2. This release is purely about optimizing for speed. That is, there are no changes to the search or evaluation terms.

chess4j is now a bitboard engine and uses magic bitboards for move generation. Simply by replacing some loops over all 64 squares with loops over (possibly sparsely populated) bitboards, and, probably most importantly, getting rid of some managed collections overhead, chess4j 3.2 is 4-5x faster than 3.1. In a head-to-head match of almost 700 games against 3.1, version 3.2 scored 79%, which translates into a 200-250 elo gain.
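
The bitboard loop idea can be sketched as follows: rather than testing all 64 squares, repeatedly extract and clear the lowest set bit of the bitboard. This is illustrative only, not chess4j’s actual move generation code:

```java
import java.util.ArrayList;
import java.util.List;

public final class BitboardLoop {
    // Visit only the occupied squares of a (possibly sparse) bitboard,
    // instead of looping over all 64 squares and testing each one.
    public static List<Integer> squares(long bitboard) {
        List<Integer> result = new ArrayList<>();
        while (bitboard != 0) {
            int sq = Long.numberOfTrailingZeros(bitboard); // lowest set bit = square index
            result.add(sq);
            bitboard &= bitboard - 1;                      // clear that bit
        }
        return result;
    }
}
```

For a sparsely populated board the loop runs only a handful of iterations, which is where much of the speedup comes from.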

One other important difference is that the opening book is now off by default. If you want to use the opening book included with the release, you must add “-book=book.db” to the command line. (This is done in the included chess4j.bat.)

I think I can still get some more elo just from speed optimizations and efficiency improvements (such as splitting up capture and non-capture move generation), so there will probably be one more release in the 3.x line before moving on to 4.0. The 4.x line will be focused on parallel search.

As usual you can grab the latest and greatest from the project website. Enjoy!

I just released version 3.1 of chess4j, my Java based chess engine. This version is about 100 elo stronger than 3.0. The key differences are:

  • The Groovy language is no longer used. Groovy is a fine language for some purposes. chess4j didn’t make heavy use of it, but it did use it for some I/O type classes, which I wrote about in this post. Unfortunately, it was a constant struggle with my IDE of choice (STS), which balked and complained about the Groovy classes. I had hoped that in time IDE support would improve, but in the end I just got tired of constantly messing around with it. On a positive note, the chess4j jar file went from 11.5 mb to 4.5 mb.
  • Some modest search improvements, worth about 20 elo.
  • Added pondering. This means that when it’s the opponent’s turn to move, chess4j is already thinking about its response. If chess4j correctly predicts what the opponent will play, it is that much further ahead in formulating a response (it may even be able to play its move instantly). Seems to be worth about 80 elo.

In my testing, chess4j 3.1 scores 53% against 3.0 in a 1000 game match without pondering, and 64% in a 1600 game match with pondering.

I have my sights set on a parallel search for the 4.0 release, but I think there will be another minor release or two before that to work on search speed optimizations.

You can grab the latest and greatest from the project website. Enjoy!

After a long period of inactivity I’m happy to announce the release of version 3.0 of my Java chess program, chess4j. This version is a good bit stronger than the previous version, though still not as strong as my C/C++ program Prophet (see below). The key areas of improvement are:

  • The addition of an opening book, which I discussed here.
  • Improvements in the positional evaluation function
    • Passed pawns are rewarded
    • Isolated pawns are penalized
    • Doubled pawns are penalized
    • Rooks on open files or half open files are rewarded
    • Rooks and queens on the 7th rank are rewarded, particularly when connected to another rook or queen
    • There is some attempt to keep the king sheltered until the endgame, when it is instead encouraged to move toward the center
  • Almost 2x speed increase

The table below compares the performance of chess4j v2 against v3 on several different test suites. Each suite was run twice and the results averaged. Changes of +10% or more are notable improvements; changes of -10% or more are regressions. The top four suites are tactical suites. There weren’t any significant changes there, despite the 2x speed increase. However, the ‘SBD’ suite and all the ‘STS’ suites below it are positional suites, and there were some real differences there. Key areas of improvement (according to these tests) were ‘undermining’, ‘open files and diagonals’, ‘bishop vs knight’, ‘offer of simplification’, and ‘queens and rooks to the 7th rank’. The one suite that regressed (STS8) measures advancement of the f/g/h pawns. I think the added king safety code has something to do with that: the program is less willing to push those pawns when the king is castled on that side.

Suite/Time   v2 raw   v2 %    v3 raw   v3 %    Delta raw   Delta %
WAC/5        258      86.0%   266      88.7%   8           2.7%
WAC/10       270.5    90.2%   273.5    91.2%   3           1.0%
WCSAC/5      775      77.4%   784.5    78.4%   9.5         0.9%
ECMGCP/10    35.5     19.4%   37       20.2%   1.5         0.8%
SBD/60       45.5     34.0%   96.5     72.0%   51          38.1%
STS1/5       30       30.0%   48.5     48.5%   18.5        18.5%
STS2/5       19.5     19.5%   39.5     39.5%   20          20.0%
STS3/5       45       45.0%   41.5     41.5%   -3.5        -3.5%
STS4/5       36.5     36.5%   41.5     41.5%   5           5.0%
STS5/5       41       41.0%   59.5     59.5%   18.5        18.5%
STS6/5       47       47.0%   48       48.0%   1           1.0%
STS7/5       22.5     22.5%   33       33.0%   10.5        10.5%
STS8/5       31       31.0%   20.5     20.5%   -10.5       -10.5%
STS9/5       30       30.0%   37       37.0%   7           7.0%
STS10/5      55.5     55.5%   57.5     57.5%   2           2.0%
STS11/5      19       19.0%   28       28.0%   9           9.0%
STS12/5      42       42.0%   48.5     48.5%   6.5         6.5%
STS13/5      45.5     45.5%   48       48.0%   2.5         2.5%
STS14/5      31       31.0%   62.5     62.5%   31.5        31.5%
STS15/5      22       22.0%   29.5     29.5%   7.5         7.5%

The table below shows chess4j’s performance in head-to-head matches versus four other engines (the bottom one being my engine, Prophet). The match conditions were: only one processor could be used, no thinking on the opponent’s time, and no opening book libraries or endgame tablebases. The matches were played from a set of 20 starting positions called the ‘Nunn Test Suite’. The engines play each position twice, once as white and once as black, making each match 40 games. One thing that may jump out is that the results don’t seem consistent. Why the variability against the same opponent at different time controls? It very well could be that chess4j is better at longer time controls, but it would take thousands of games to prove that theory; this game set is too small to be statistically significant. The main point was to get a feel for how strong the engine is.

Opp           1 min    3 min    10 min
Gerbil        5-31-4   9-23-8   10-23-7
Horizon 4.3   6-31-3   8-28-4   4-31-5
Lime 6.2      5-32-3   4-29-7   6-29-5
Prophet 2.0e  2-32-5   4-35-1   2-35-3

While there is certainly a lot to do on the evaluation, it’s not so far behind Prophet’s that it would explain such a lopsided result. I’m afraid the main disadvantage chess4j has comes down to speed. On my laptop I typically see chess4j search between 180,000 and 200,000 positions per second. That may sound like a lot, but a 32 bit compile of Prophet searches between 800,000 and 1 million positions per second (and there are programs out there that search much faster than that!). To test that theory I ran another set of matches against the same opponents using time odds. In the table below, the 2x, 4x, and 8x columns show how chess4j would fare in a 1 minute match if it were 2x, 4x, or 8x faster than it is now. As you can see, if chess4j were 5-10 times faster it would be much more competitive against this group.

Opp           1 min    2x        4x        8x
Gerbil        5-31-4   10-22-8   12-21-7   21-8-11
Horizon 4.3   6-31-3   8-26-6    11-21-8   12-21-7
Lime 6.2      5-32-3   6-30-4    14-20-6   16-12-12
Prophet 2.0e  2-32-5   6-29-5    10-27-3   15-9-6

Is it possible to make chess4j that much faster? Maybe. I was able to double the speed of this version by doing a careful analysis of the data structures being used (mostly Collections stuff, like ‘ArrayList’ vs ‘HashSet’ and such), looking up their runtime complexities for different operations (like adding, or iterating), and changing data structures where appropriate. But I think I’ve gone about as far as I can down that road. This is a Java program, after all; it’s not as “close to the metal” as a C program. I think to squeeze much more out of it I’d have to scale back on how much object instantiation it does. But then it would look less like an object oriented program, and would probably be less readable and possibly harder to test, all of which goes against what I want to accomplish with this program.

So, what’s next? My next areas of focus are:

  • It’s time to establish a proper testing routine, meaning that, going forward, all changes to the program will be measured in terms of elo gain. This is a lot harder than it sounds, but it will ensure changes are good changes.
  • Add a pondering search. Instead of sitting idly as the opponent is on move, try to guess what the opponent is going to do and start formulating a response. If we’re right often enough we can allow more time to think about each move.
  • Use all the machine’s processors! Most computers these days have multiple processors, yet chess4j is only capable of using one. (Using multiple processors is easy; doing it efficiently, in the context of computer chess, is MUCH harder than you might think.)

If you’d like to try your hand against chess4j, you can download it from the project website. It’s a console program, not a graphical one, but you can download a GUI and set it up to use chess4j; see the readme.txt file.


The last time I wrote anything about chess4j, way back in November, I reported that chess4j was using a small in-memory database of opening moves. The program was reading a library of about 500 games from GM Kasparov, which it would do every time the program initialized. Before each move, chess4j would consult this in-memory database to see what moves have been played by Kasparov (or his opponent), and choose between them using a weighted random selection algorithm. The effect of this was that chess4j no longer played the same opening moves, so it added some variability to its online play. It also produced games that were a little more “aesthetic”, or pleasing to look at.

That was a huge step forward, but it had some limitations. Reading all those games takes a little time, so it isn’t really practical to read in more than about 500. Holding them all in memory is also expensive. Finally, it doesn’t leave any room for learning algorithms, since the program is initialized to the same state each time it starts.

To address those issues I needed a solution that persisted data to the disk. In past programs written in C/C++ I’ve created custom file formats and used fseek() and the like to probe it. This time I decided to use a proper database. I had considered MongoDB, mainly just to play around with it, but in the end I decided on SQLite. SQLite is lightweight, reliable, and fast – a perfect fit. (Side note: the primary author of SQLite, Dr. Richard Hipp, was roommates with my graduate school advisor, Dr. Ronnie Smith when they were both working on their PhDs at Duke University. I’ve met him… very nice guy.)

The database table to hold these opening book moves is very simple:

sqlite> .schema book_moves
CREATE TABLE book_moves (
    key int not null,
    fromsq int not null,
    tosq int not null,
    frequency int default 1,
    wins int,
    losses int,
    draws int
);
CREATE INDEX idx_book_moves_key on book_moves(key);

Really, the only fields that are strictly necessary are the first three: key, fromsq, and tosq. ‘key’ is used as a signature for a position; that is, every possible chess position has its own (effectively unique) value for ‘key’. (I’m not going to get into the algorithm to do that in this post, just accept that it’s true!) ‘fromsq’ and ‘tosq’ are probably self explanatory: they represent the square the piece is moving from and the square it’s moving to. So, if we want to probe the opening book for the moves it contains in any given position, we just compute the value of ‘key’ and perform the query:

select fromsq,tosq,frequency,wins,losses,draws from book_moves where key=?

‘frequency’ is a count of how many times the move was encountered as the opening book was constructed. This is what makes the weighted random selection possible. The idea is that if move A was encountered 10 times as often as move B, then chances are it’s a better move and should be played roughly 10 times as often. This approach also keeps a stray “bad move” that may have found its way into the book database from getting on equal footing with good moves.
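
A minimal sketch of that frequency-weighted selection (the class and method names here are hypothetical, not chess4j’s actual code):

```java
import java.util.List;
import java.util.Random;

public final class WeightedPick {
    // Pick an index with probability proportional to its frequency.
    // A move seen 10x as often in the book is chosen roughly 10x as often.
    public static int pick(List<Integer> frequencies, Random rnd) {
        int total = 0;
        for (int f : frequencies) total += f;
        int r = rnd.nextInt(total);              // uniform in [0, total)
        for (int i = 0; i < frequencies.size(); i++) {
            r -= frequencies.get(i);             // walk the cumulative distribution
            if (r < 0) return i;
        }
        throw new IllegalStateException("unreachable");
    }
}
```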

Finally, we have ‘wins’, ‘losses’, and ‘draws’. Each time chess4j completes a game, it increments the appropriate counter for the moves it played from the opening book (not moves the opponent played). It doesn’t actually do anything with this information yet, but future versions will probably use the information to avoid any moves it consistently loses with.

So, that’s a brief description of the implementation. To seed the database I found an online collection of about 50,000 games played by Master level players or above. The database I’m using currently has nearly 150,000 moves.

Watching chess4j play with this opening book is really fun. Since it doesn’t have to “think,” it plays almost instantaneously when it has a response in the opening book. Below is a segment from the logfile. Notice how it begins with “c4”, then parses the opponent’s response “g8f6” and responds with “b1c3”. The frequency counts on the moves it finds in its opening book gradually diminish until it has no book response for “c7c6” and has to start thinking on its own.

# parsing: level 0 5 3
# level: 0, 5, 3
# setting increment to 3000 ms.
# parsing: name Pepo2424
# opponent is: Pepo2424
# parsing: rating 1989 1611
# parsing: time 30000
#time : 300000
# parsing: otim 30000
# parsing: go
# book move: BookMove [move=c2c4, frequency=8939]
move c2c4
# parsing: time 30300
#time : 303000
# parsing: otim 30100
# parsing: usermove g8f6
# book move: BookMove [move=b1c3, frequency=1520]
move b1c3
# parsing: time 30600
#time : 306000
# parsing: otim 30100
# parsing: usermove g7g6
# book move: BookMove [move=g2g3, frequency=182]
move g2g3
# parsing: time 30900
#time : 309000
# parsing: otim 30300
# parsing: usermove f8g7
# book move: BookMove [move=f1g2, frequency=225]
move f1g2
# parsing: time 31200
#time : 312000
# parsing: otim 30400
# parsing: usermove d7d6
# book move: BookMove [move=e2e4, frequency=24]
move e2e4
# parsing: time 31400
#time : 314000
# parsing: otim 30500
# parsing: usermove e7e5
# book move: BookMove [move=g1e2, frequency=5]
move g1e2
# parsing: time 31700
#time : 317000
# parsing: otim 30600
# parsing: usermove e8g8
# book move: BookMove [move=e1g1, frequency=12]
move e1g1
# parsing: time 32000
#time : 320000
# parsing: otim 30200
# parsing: usermove c8e6
# book move: BookMove [move=d2d3, frequency=7]
move d2d3
# parsing: time 32300
#time : 323000
# parsing: otim 30300
# parsing: usermove c7c6
# transposition table initialized with 1048576 entries.
# time remaining: 323000, increment: 3000, allotted time: 15920
1 -183 1 2 c3d5
1 -1 3 4 c3b1
1 4 4 6 c3a4
1 17 4 10 f1e1
1 20 4 12 h2h3
1 22 4 14 h2h4
1 22 6 39 h2h4
2 7 6 78 h2h4 b8d7
2 12 12 230 d1b3 b7b5
2 12 12 246 d1b3 b7b5
3 17 21 368 d1b3 b7b5 h2h4
3 17 28 1500 d1b3 b7b5 h2h4
4 7 45 3541 d1b3 d8c7 h2h4 b8d7
4 7 79 5825 d1b3 d8c7 h2h4 b8d7
5 12 121 9522 d1b3 d8c7 h2h4 b8d7 a2a4
5 12 139 15133 d1b3 d8c7 h2h4 b8d7 a2a4
6 7 190 23677 d1b3 d8c7 h2h4 b8d7 a2a4 d7c5
6 9 293 38345 h2h4 b8d7 a2a4 a7a6 f2f4 h7h5
6 9 339 64469 h2h4 b8d7 a2a4 a7a6 f2f4 h7h5
7 10 440 97805 h2h4 b8d7 a2a4 a7a5 b2b3 d7c5 f2f4
7 10 640 172694 h2h4 b8d7 a2a4 a7a5 b2b3 d7c5 f2f4
8 7 817 243868 h2h4 b8d7 a2a4 a7a5 f2f4 d7c5 f4e5 d6e5
# hash probes: 628514
# hash hits: 58960 (9.38%)
# hash collisions: 31636 (5.03%)
# fail highs: 32795 (5.22%)
# fail lows: 2422 (0.39%)
# exact scores: 37 (0.01%)
move h2h4


# parsing: result 1-0 {Pepo2424 resigns}

This improvement was a lot of fun to implement and it’s a lot of fun to watch. chess4j’s online opponents seem to like it too:

Ara61 says: Thank you for the great game!

I’ve also made some great improvements to the engine’s evaluation routine, but that will be the subject of another post very soon! And, as always the latest development code is available from the project website. These changes are not included in the last official “release” but will be in the 3.0 release coming within the next few weeks.

Ok, “learn” is too strong a word. More accurately, chess4j now has a small opening book database populated with a little over 500 Kasparov games. Before doing any thinking on its own, chess4j will consult this database, and if a move is found it will play it. This has the effect of steering the game into a nice position that a Grandmaster might play fairly quickly, and it also adds some more variation to the opening moves as well.

500 games is not a lot, but at the moment the opening book is held entirely in memory. That is, when the program starts it reads through those 500+ games, storing them in memory for the duration of the program’s execution. The next time it starts, it does it all again.

I’m really pretty happy with the way this all came together. Here is the method that initializes this in memory opening book:

private static void initBook() throws Exception {
	OpeningBook book = OpeningBookInMemoryImpl.getInstance();
	FileInputStream fis = new FileInputStream(new File("pgn/Kasparov.pgn"));
	PGNIterator it = new PGNIterator(fis);
	PGNGame pgnGame;
	while ((pgnGame = it.next()) != null) {
		book.addToBook(pgnGame);
	}
	fis.close();
}

As you can see, there are a few key classes that make this all work. First, we need an OpeningBook. OpeningBook is an interface, in this case implemented by the singleton OpeningBookInMemoryImpl. I won’t go into the in-memory implementation here, because in the future it will be replaced with something backed by a persistent datastore (maybe MongoDB). But I will show the interface it implements:

public interface OpeningBook {
	public void addToBook(PGNGame game);
	public List<BookMove> getMoves(Board board);
	public BookMove getMoveWeightedRandomByFrequency(Board board);
}

Pretty simple at the moment, and likely to be expanded. The key points are that you can add a game to the book, get a list of BookMoves for a given position, or get a single BookMove using a weighted random selection algorithm.

Now that we have an OpeningBook, we need something that is capable of reading through a Portable Game Notation (PGN) file, producing a sequence of PGNGame objects.

PGNGame is really just a data object so I won’t show the code here. It really just encapsulates a game, which we can think of as a series of PGN tags, a list of moves, and a result (win, loss, draw).

The PGNIterator class is a little more interesting though. Since some of these PGN files get fairly large (40-50 mb is not unusual), it’s best to take a ‘streaming’ approach to processing them. Hence, if you look back at the initBook() method, you’ll notice the constructor for PGNIterator is given a FileInputStream. (It will accept any type of InputStream, which it uses internally to create a BufferedReader.)

Here is the next() method of PGNIterator :

public PGNGame next() throws IOException, ParseException, IllegalMoveException {
	PGNParser parser = new PGNParser();
	String nextGame = getNextPGN();
	if (nextGame != null) {
		return parser.parseGame(nextGame);
	}
	return null;
}

I’m glossing over a few details here but hopefully that gets the point across. The call to getNextPGN() reads far enough ahead in the stream to capture the String representation of a single game, or returns null if it can’t (probably because it hit the end of the file). A PGNParser then converts the String into a PGNGame. As you might imagine, PGNParser uses some regular expression stuff that frankly made my head hurt a little. Finally, as we saw above, the PGNGame is added to the opening book.

All of this is part of the 3.0 development so it’s not in the latest release, but the source code is available (along with several unit tests), so if you’re the developer type feel free to check it out from the project website on SourceForge.

I’m happy to announce the release of version 2.0 of my chess program, chess4j. This version is a good bit stronger than the previous release, though there is still much to do. The focus of this release has been to improve the efficiency of the search. These improvements include:

  • Improved Move Ordering
    • Previous PV move (this was already in place)
    • Hash move (this was already in place)
    • Winning and even captures, as sorted using a Static Exchange Evaluator
    • Killer move heuristic – 2 moves that must be noncaptures
    • Remaining non-capturing moves in any order
    • Remaining captures, sorted using the Static Exchange Evaluator
  • Addition of a quiescence (capture only) search after the full width search. Captures are sorted using the MVV/LVA algorithm
  • Extensions
    • Check Extensions
    • Pawn to 7th or 8th (promotion) extension
  • Null Move Pruning, using R=2
  • Principal Variation Search
  • Aspiration Windows of +/- 1/3 pawn
  • Late Move Reductions
  • Improved Node Rate by 10x (!!)
  • Very, very simple opening book. It’s only two moves deep, added only to give some variety to online play
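
For reference, the MVV/LVA ordering used in the quiescence search can be sketched with a simple scoring function. The piece values (the usual P=1, N=3, B=3, R=5, Q=9) and the weighting factor are illustrative assumptions, not chess4j’s actual numbers:

```java
public final class MvvLva {
    // Most Valuable Victim / Least Valuable Attacker: prefer capturing the
    // biggest piece with the smallest one. Sorting captures by this score,
    // descending, gives the MVV/LVA order.
    public static int score(int victimValue, int attackerValue) {
        // Weight the victim heavily so PxQ outranks QxR, and among captures
        // of the same victim, the cheaper attacker ranks first.
        return victimValue * 10 - attackerValue;
    }
}
```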

So, how good is it? As I write this the program has an ICC Blitz rating (playing exclusively 5 3) of around 2050, and a Bullet rating (playing exclusively 2 1) of around 2100. So, not very strong as far as chess programs go, but good enough to beat most club players at rapid speeds. As far as test suites go, I’ve only really used the Win At Chess Suite at 10 seconds per problem and it’s scored 264/300.

What’s next? There is a lot to choose from, but I’ve decided to make the opening book the focus of the 3.x branch. I think I’ll start by creating a database of high caliber games, and some tooling to add or remove lines. I’d also like to have some sort of learning mechanism so the program learns moves over time, as well as learns to prevent disastrous moves.

Further down the road, the evaluator is certainly the weak point now. It really just measures material and some static piece placement for knights and pawns.

You can download chess4j from the project website. As always, the source code is also available.


Over the holidays I managed to find a little time to do some work on chess4j. It felt really good to dust the codebase off and make some improvements that have had real impact on playing strength.

Up to this point, I have not really focused on playing strength at all. The official release number is 1.2. The focus of the entire initial release (well, the entire 1.x series of releases) has been to create a Winboard compatible program that is rock solid in terms of reliability, and to do so in a test driven manner, meaning a high percentage of the code is covered by unit tests, with some functional tests thrown in for good measure. That goal was accomplished. chess4j has played literally thousands of games on the Internet Chess Club, and it never crashes. It loses a lot! But, it never crashes.

The goal of the 2.x series will be to implement some well known search improvements. This probably will not include parallel or distributed search just yet, just ways to make the single threaded search more efficient. The focus will be on intelligently ordering moves to reduce the branching factor, implementing a capture-only search towards the leaf nodes, extending promising lines, and pruning back less promising lines.

Over the holidays I accomplished the following:

  • added the ability to process test suites, which included a FEN parser, in order to help measure the impact of search improvements
  • implemented a transposition table using an “always replace” strategy.
  • modified the search loop to order moves instead of just playing them in the order they were generated (previously the only exception was playing the previous PV move first). Now it plays the previous PV move, followed by any move suggested by the transposition table, followed by promotions, and then all captures, sorted by the well known MVV/LVA algorithm.
  • made a couple of obvious speed/efficiency improvements
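
An “always replace” table is about the simplest replacement scheme there is: a store unconditionally overwrites whatever occupies the slot, with no depth or age test. A minimal sketch (not chess4j’s actual implementation; the packed entry value is just an opaque long here):

```java
public final class AlwaysReplaceTable {
    private final long[] keys;
    private final long[] values;
    private final int mask;

    public AlwaysReplaceTable(int numEntries) { // numEntries: a power of 2
        keys = new long[numEntries];
        values = new long[numEntries];
        mask = numEntries - 1;
    }

    public void store(long key, long value) {
        int slot = (int) (key & mask);
        keys[slot] = key;     // unconditionally overwrite, no depth/age test
        values[slot] = value;
    }

    public Long probe(long key) {
        int slot = (int) (key & mask);
        return keys[slot] == key ? values[slot] : null; // verify full key to reject collisions
    }
}
```

The appeal of always-replace is that recent positions, which are more likely to recur in the current search, are never evicted in favor of stale ones.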

I haven’t gotten to the point where I need to be too scientific about measuring the impact of improvements yet. But there’s no doubt those changes have had a dramatic impact. Before, chess4j’s blitz rating on ICC was typically in the 1100-1200 range. As I type this, it has a blitz rating of 1419. That’s still very, very weak, and far, far below Prophet’s former strength, but the point is those changes are having the desired effect!

All of this has been committed to the SourceForge account, so anyone is free to peek at the code. The official release doesn’t include these changes yet, and probably won’t for several months, but the code is committed for my developer friends who’d like a sneak peek at the changes coming in the 2.x line.

The project website:

chess4j 1.2 is in the wild. There hasn’t been any new development lately in terms of new features or playing strength, but I have put some effort into improving code quality using the excellent SonarSource tools. Minimal cyclomatic complexity. No circular package dependencies. No critical or major violations. Over 80% test coverage and over 99% rules compliance (the only minor violations being some bogus magic number stuff).

Check it out!

Sonar Stats

Next on the road map is to wrap up just a bit more testing, playing around with some new Java 7 features to create a parallel performance test, and finally to incorporate some Scala. Fun times ahead!

You can visit the chess4j website on SourceForge.