Automatic Jigsaw Puzzle Solving


The goal of this project is to create a system to assemble jigsaw puzzles from images of the pieces, possibly taking advantage of opportunities for parallel processing to speed up the process. Specifically, you will be provided with a number of images, each containing several pieces of the puzzle photographed against black velvet. Example. The goal is to extract the pieces from the images, and figure out how they fit together. The final output will be a single image showing the assembled scene (or as much of it as your program can produce). You may certainly employ non-imagelike intermediate representations to help in the solution of the problem, but you should produce an image that allows your solution to be evaluated visually. Note that at best, your system will only solve the puzzle down to a (4-way) rotation in the image plane, since I do not expect you to be able to determine which way is up (though you are welcome to try).

Currently, for development data, I have taken imagery for two 100-piece puzzles. They are in ~nelson/pics/puzzles/wizard_of_oz2, and ~nelson/pics/puzzles/duck_bunny2. Each directory contains 50 images, each containing 2 puzzle pieces on black velvet.

The directories without the "2" (~nelson/pics/puzzles/wizard_of_oz, and ~nelson/pics/puzzles/duck_bunny) are an earlier data set. They contain the same pieces, but 6 to an image. These pictures have some interlace jitter, the white balance was not locked, and the pieces are smaller. It is likely more difficult to solve the problem from this set, but you are welcome to try. As a bonus, this set contains two versions of each image, numbered consecutively and taken at different exposures. The odd-numbered images are brighter than the even-numbered ones.

As a culmination, we will, on a class day near the end of the semester, hold a demonstration session where the systems will have the opportunity to strut their stuff, both on the development data and on a previously unseen puzzle (of the same general difficulty as the development data). We might even have a race to see which system is the fastest. I will reserve a number of cluster nodes (hopefully representing at least 16 processors) on the designated day on which to run the demonstrations, in case anyone parallelizes their system. Each team will also need to give a presentation on their effort (in a class session after the competition).

As usual, each team is to create a single, coherent, written report on their approach to the project, describing in detail what approaches were used, what did not work, any comparative studies done, what considerations went into the design, where the processing time was spent, how the system would be expected to scale, etc. etc.




There are a number of aspects that need to be addressed. The first is the low-level image-processing aspect. The puzzle pieces need to be extracted from the images. I have tried to take images that will make this as easy as possible. The images are all shot with a very narrow field of view, from (nearly) directly overhead, and at the same scale and exposure. Thus all the colors and boundaries should match within the limits of the camera and image accuracy. (The shots actually turned out to be about 2 degrees from the vertical, but since the cosine of 2 degrees is .9993, the distortion is negligible.)

Because of the black background, simple thresholding will (almost) allow the puzzle pixels to be separated from the background. Most of the minor problems here can be fixed up by some simple local filtering (e.g. a 3x3 median filter). A connected components routine can be used to associate pixels belonging to individual pieces.
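
Purely for illustration, here is a minimal sketch of that extraction pipeline written with OpenCV in Python. You are not required to use OpenCV (my library routines do the same operations); the file name and the threshold value below are placeholders to be tuned on the development images.

    import cv2
    import numpy as np

    img = cv2.imread("piece_image.tiff")            # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Global threshold: the black velvet stays near zero, piece pixels are brighter.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)   # threshold is a guess

    # 3x3 median filter to remove isolated noise pixels.
    mask = cv2.medianBlur(mask, 3)

    # Connected components associate the surviving pixels with individual pieces.
    num_labels, labels = cv2.connectedComponents(mask)
    for label in range(1, num_labels):              # label 0 is the background
        piece_mask = (labels == label).astype(np.uint8) * 255
        # ... crop this piece's bounding box from img using piece_mask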

There are a few complexities you will need to deal with. First, the dynamic range of the camera is not all that might be hoped. If white areas are not saturated, some dark areas look pretty black, so the color information is not as good as is available to a person doing the same puzzle.

Second, thresholded images will not be completely clean. There may be small artifacts in the image (I know there are a few white lines near the lower edge of a few images, where the edge of a piece of paper crept in). These are easily removed by a threshold on the piece size. Boundaries are likely to be ragged. Median filtering (or other smoothing) helps to some extent. Following this with a dilation and an erosion of the thresholded binary image would probably produce pretty clean regions.
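
A cleanup pass along those lines might look like the following sketch (again OpenCV in Python, offered only as an illustration; the minimum area and kernel size are guesses to be tuned):

    import cv2
    import numpy as np

    MIN_PIECE_AREA = 5000                 # assumed minimum pixel count for a real piece
    KERNEL = np.ones((5, 5), np.uint8)    # structuring element size is a guess

    def clean_mask(mask):
        # Dilation followed by erosion (a morphological closing) fills small gaps
        # and smooths ragged boundaries.
        closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, KERNEL)

        # Drop connected components smaller than the area threshold (stray lines,
        # specks of lint, and the like).
        num, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
        cleaned = np.zeros_like(closed)
        for i in range(1, num):           # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= MIN_PIECE_AREA:
                cleaned[labels == i] = 255
        return cleaned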

Third, you will need to rotate the pieces as well as translate them in order to fit them together. There is code in my libraries for rotating images, but there are pixel-level accuracy issues at the boundaries. The aspect ratio of the images is not 1 to 1, and this must be corrected for, or rotated pieces will not match. The directories contain an image of a square object (calibrate_square.tiff) to allow you to determine what the true aspect ratio is.
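
As a sketch of what the normalization might look like (the scale factor below is a placeholder; the real value should be measured from calibrate_square.tiff):

    import cv2

    PIXEL_ASPECT = 1.1    # placeholder: measure the true value from calibrate_square.tiff

    def normalize_and_rotate(piece_img, angle_deg):
        # Stretch one axis so that pixels are square before any rotation is applied.
        h, w = piece_img.shape[:2]
        squared = cv2.resize(piece_img, (int(round(w * PIXEL_ASPECT)), h),
                             interpolation=cv2.INTER_LINEAR)

        # Rotate about the center of the (padded) piece image; interpolation at the
        # boundary introduces roughly pixel-level error, so matching should tolerate it.
        h2, w2 = squared.shape[:2]
        M = cv2.getRotationMatrix2D((w2 / 2.0, h2 / 2.0), angle_deg, 1.0)
        return cv2.warpAffine(squared, M, (w2, h2))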

I have written library code for all the operations mentioned above, including connected components, and many more, so you probably do not need to write or find much low level image processing code. You just need to figure out how to link and use the ipp libraries. There is a program "image_calc" in ~nelson/bin/PCLinux that will let you experiment with all sorts of image processing operations on the images, to see what gives good extraction. You can look at the source in ~nelson/programs/src/integrated/bin to see how to link and use the routines once you have figured out what needs to be done.




The second aspect is the representation and matching problem. You will need to represent the pieces in a way that permits matching operations to be performed efficiently on color, pattern, and/or shape.
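
One possibility, offered as an illustration rather than a requirement, is to give each piece an explicit descriptor for each of its sides, so that candidate matches can be looked up by side type and appearance rather than by exhaustive image comparison. A Python sketch of such a representation:

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Side:
        kind: str              # "flat", "knob", or "hole"
        length: float          # arc length of this side in (aspect-corrected) pixels
        colors: np.ndarray     # colors sampled just inside the boundary along this side

    @dataclass
    class Piece:
        piece_id: int
        image: np.ndarray      # masked color image of the piece
        contour: np.ndarray    # boundary points from the binary mask
        sides: list = field(default_factory=list)   # typically four Side descriptors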

For the most part, each piece interlocks with 4 others (except for the border pieces). In the wizard-of-oz puzzle, the pieces are laid out as a modified rectangular grid with 4 pieces coming together at corner junctions. The duck_bunny puzzle is not quite so simple, but there is still a regular underlying grid. The unseen test puzzle has a few additional distortions.

It might be advantageous to identify border pieces, just as human puzzlers often do, and start by assembling the border. Once this is in place, if the puzzle is built from the corners, each piece added is constrained on two sides, which greatly reduces the chance for error and the need for backtracking. To be efficient, the test for a fit probably needs to be constrained by some sort of normalization and/or indexing process so that you don't have to explicitly test each possible translation and rotation. Measurements of the "knobs" and "holes" might be useful for this, as might color classification (both strategies employed by humans).
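
A very simple indexing scheme along these lines might bucket sides by their type and a coarse color key, so that only a handful of candidates need a detailed fit test. The bucketing below is an illustrative assumption, built on the Side/Piece sketch given earlier:

    from collections import defaultdict

    def side_key(side):
        # Knobs can only mate with holes; flat sides only lie on the puzzle border.
        coarse_color = tuple((side.colors.mean(axis=0) // 32).astype(int))
        return (side.kind, coarse_color)

    def build_index(pieces):
        index = defaultdict(list)
        for piece in pieces:
            for i, side in enumerate(piece.sides):
                index[side_key(side)].append((piece.piece_id, i))
        return index

    def is_border_piece(piece):
        # Border pieces have at least one flat side; corner pieces have two.
        return sum(side.kind == "flat" for side in piece.sides) >= 1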

A strategy that is NOT allowed is to somehow obtain an image of the completed puzzle (e.g., by assembling the puzzle by hand) and then match pieces against this image. The point is to solve the puzzle without the picture on the box.




A third important aspect involves parallel processing and efficient algorithms for the overall assembly. Since the goal is to solve the puzzle as rapidly as possible, any use of parallel resources, and any heuristics that reduce the number of matches that need to be considered, would work towards that goal. Human-like strategies of working on groups of pieces of similar color might be helpful. Of course, some such strategies might not interact well with division over a moderate (16+) number of processors, so there are tradeoffs to be considered. There is also the issue of how much time is consumed in basic image processing versus matching. Image processing, especially of multiple images, is rather trivially parallelizable. If most of the work is in the production of the piece representations, then parallelizing the matching might be a moot point.
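
As one example of the trivially parallel part, the input images could be farmed out to a pool of worker processes. This sketch uses Python's multiprocessing on a single node; the path pattern and the extract_pieces function are placeholders standing in for the extraction code sketched above.

    from multiprocessing import Pool
    import glob

    def extract_pieces(path):
        # Load the image, threshold, clean, and label it, returning a list of
        # Piece objects (see the earlier sketches); details omitted here.
        return []

    if __name__ == "__main__":
        paths = sorted(glob.glob("pieces/*.tiff"))       # assumed location of the images
        with Pool(processes=16) as pool:
            per_image = pool.map(extract_pieces, paths)  # one image per worker task
        all_pieces = [p for group in per_image for p in group]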


Finally, there is the issue of presentation of the processing. Since there is to be a demonstration, it would be nice to have a live display illustrating the progress of the system as it runs. Some graphical output showing sections as they are constructed would be interesting. How to do this effectively when many machines are working on the problem at once is an interesting problem in itself.
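
One simple approach (an assumption, not a requirement) is to paste each piece into a running canvas as it is placed and write out a snapshot that a separate viewer can poll; a distributed version could instead send placements to a single display process.

    import cv2
    import numpy as np

    canvas = np.zeros((3000, 3000, 3), np.uint8)   # assumed size of the assembled image

    def show_placement(piece_img, piece_mask, x, y, step):
        # Copy only the piece's own pixels (mask > 0) into the canvas at (x, y).
        h, w = piece_img.shape[:2]
        roi = canvas[y:y + h, x:x + w]
        roi[piece_mask > 0] = piece_img[piece_mask > 0]
        cv2.imwrite("progress_%04d.png" % step, canvas)   # snapshot for the live display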


Teams


