April 4th, 2010 @ 00:04

Might not have mentioned it until now, but for about a year and a half I’ve been a photography addict. Basically my girlfriend got a DSLR for Christmas, and I got one (a Canon EOS 450D) for my birthday two months later. After having fun with my f/1.8 50mm lens and its nice depth of field effects, and after being seen as a paparazzi with my 250mm telezoom, I thought I’d try something larger: panoramas.

I’m using Hugin (with autopano-sift-C for the keypoint detection and matching; nona for the photometric/geometric remapping; and enblend for the merging). Might sound complicated, but it’s basically just 3 clicks and a lot of processing time (if you don’t get into the details).

My first try was at the Carnegie Museum in Pittsburgh. No tripod allowed, so the camera was handheld. It looks quite beautiful imho, apart from the weird line at about 3/4 of the height. I should probably rerun the stitcher.

Carnegie Museum Panorama

My next try was a panorama of CMU campus showing, among other things, the Fence, the University Center and the Gates building. I took the pictures at about 1pm, with pretty nice sun and a few clouds. I first tried using the whole 120-picture set, but it resulted in a bunch of geometric errors, mostly the leftmost flag and the pathway being broken:

CMU Panorama - ETOOMUCHPICS

I then selected a core of pictures, stitched them, and then added a few others to fix the missing parts, for a total of 21 base pictures. This resulted in the following panorama:

CMU Panorama (without sky)

I cropped the bottom part of the result to drop a broken pathway, a bunch of useless grass, and another bunch of missing (i.e. black) parts. But it’s still far from perfect: huge parts of the sky are missing, and there are some unpleasant bits (like the white line in the sky near one of the trees). Luckily, I took an image restoration course (well, it was a more generic vision course, but it addressed this among other things), so I know there are some pretty efficient inpainting methods — inpainting being the process of synthesizing texture from the surrounding pixels — and nicely, one of those is implemented as a GIMP plugin, the GIMP Resynthesizer. I first tried the manual way, setting the plugin parameters myself, but I found out it wasn’t quite the right way to do it:

WTF Gimp Inpainting ?

Then I discovered an option called “smart remove selection” which sets everything automagically, and it worked (though I still suspect there are some pretty bad memory issues or so, since it was picking textures from outside of the selected radius). Using that option, I generated the missing sky parts and removed the ugly bits, and here we go:

CMU Panorama

Nice, heh ?
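As an aside, the basic idea behind inpainting — filling a hole from its surrounding pixels — can be sketched in a few lines of Python. This is a toy diffusion fill, nothing like the texture synthesis Resynthesizer actually does, but it shows why the surrounding pixels are all you need for smooth regions like sky:

```python
# Toy inpainting by diffusion: unknown pixels are repeatedly replaced by the
# average of their neighbours until the hole blends into its surroundings.
# (Texture-synthesis methods like Resynthesizer copy patches instead, which
# preserves detail rather than smoothing it out.)

def inpaint(image, mask, iterations=200):
    """image: 2D list of floats; mask: 2D list, True where the pixel is missing."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    # Initialize missing pixels with a neutral value.
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                out[y][x] = 0.0
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neigh = [out[ny][nx]
                             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neigh) / len(neigh)
        out = nxt
    return out

# A flat grey "sky" (value 0.5) with a missing pixel in the middle:
img = [[0.5] * 5 for _ in range(5)]
hole = [[False] * 5 for _ in range(5)]
hole[2][2] = True
filled = inpaint(img, hole)
print(round(filled[2][2], 3))  # → 0.5, the hole converges to the surrounding value
```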

I should probably also mention that I tried the pure CLI way (i.e. not running the Hugin GUI) and that it works great: autopano-sift-c takes a bunch of pictures and produces the keypoints and matches, autooptimiser optimizes the resulting homographies, and pto2mk creates a Makefile which produces the final panorama (by running nona and enblend/enfuse). It’s not that well documented (it took me a while to figure out that I could avoid the GUI and run most of the expensive computations remotely), but it works flawlessly.
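For reference, the whole CLI pipeline looks roughly like this (file names and the project prefix are placeholders, and the exact flags may differ between Hugin versions, so check the man pages):

```shell
# 1. Keypoint detection and matching: writes a Hugin project file with control points.
autopano-sift-c project.pto img_01.jpg img_02.jpg img_03.jpg

# 2. Optimize the geometry (camera positions, lens parameters) in the project.
autooptimiser -a -l -s -o project_opt.pto project.pto

# 3. Generate a Makefile that runs nona (remapping) and enblend/enfuse (merging),
#    then run it — this is where the expensive computation happens.
pto2mk -o project.mk -p panorama project_opt.pto
make -f project.mk
```

Since it’s all driven by a Makefile, the heavy steps parallelize with `make -j` and are easy to run on a remote machine.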