Archive for the ‘Photo’ Category

July 30th, 2010 @ 05:51

While sorting my pictures this evening, I was wondering how many pictures I had captured with my EOS 450D camera since I got it. I knew (thanks to a doc about flashes I read a few days ago) that what I was looking for was the shutter actuation count of the camera.
Googling pointed me to a bunch of Windows or OS X related links, such as the nonetheless nice 40D Shutter Count software, or pointers to exiftool, which did not work with the pictures taken on my 450D.
I then remembered the greatness of gphoto2 and gave it a shot.

  • Plugged the camera in through USB and restarted it
  • Checked that it was detected with gphoto2 --auto-detect
  • Checked its abilities with gphoto2 -a (short for --abilities)
  • Looked at the config options with gphoto2 --list-config
  • Found the /main/status/shuttercounter option
  • Ran gphoto2 --get-config=/main/status/shuttercounter
  • And here it is: 27781 shutter actuations
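The steps above boil down to a single query in the end. Here is a minimal sketch (assuming the camera is plugged in and detected), with a bit of awk to pull just the number out of gphoto2's Label/Type/Current output; the snippet degrades gracefully to "unknown" when no camera is attached:

```shell
# gphoto2 prints the config entry as something like:
#   Label: Shutter Counter
#   Type: TEXT
#   Current: 27781
raw=$(gphoto2 --get-config=/main/status/shuttercounter 2>/dev/null)
# keep only the value on the "Current:" line
count=$(printf '%s\n' "$raw" | awk '/^Current:/ {print $2}')
echo "${count:-unknown} shutter actuations"
```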

That number seems quite reliable: I have about 15000 of those pictures on my hard drives, and in my early days I used to drop about 2 pics in 3 (I’m a lot softer on the shutter now, after experiencing the joys of picture sorting :p).

Good to know I’m still far from the 100,000 actuations expected shutter life of the 450D.

Well, thanks gphoto2 for your simplicity!

April 4th, 2010 @ 00:04

I might not have mentioned it until now, but for about a year and a half I’ve been a photography addict. Basically my girlfriend got a DSLR for Christmas, and I got one (a Canon EOS 450D) for my birthday two months later. After having fun with my f/1.8 50mm lens and its nice depth of field effects, and after being seen as a paparazzi with my 250mm telezoom, I thought I’d try something larger: panoramas.

I’m using Hugin (with autopano-sift-C for the keypoint detection and matching, nona for the photometric/geometric remapping, and enblend for the merging). Might sound complicated, but it’s basically just 3 clicks and a lot of processing time (if you don’t get into the details).

My first try was at the Carnegie Museum in Pittsburgh. No tripod allowed, so the camera was handheld. It looks quite beautiful imho, apart from the weird line at about 3/4 of the height. I should probably rerun the stitcher.

Carnegie Museum Panorama

My next try was a panorama of the CMU campus showing, among other things, the Fence, the University Center and the Gates building. I took the pictures at about 1pm, with pretty nice sun and a few clouds. I first tried using the whole 120-picture set, but it resulted in a bunch of geometric errors, mostly the leftmost flag and the pathway being broken:


I then selected a core of pictures, stitched them, and then added a few others to fix the missing parts, for a total of 21 base pictures. This resulted in the following panorama:

CMU Panorama (without sky)

I cropped the bottom part of the result to drop a broken pathway, a bunch of useless grass, and another bunch of missing (i.e. black) parts. But it’s still far from perfect: huge parts of the sky are missing, and there are some unpleasant bits (like the white line in the sky near one of the trees). Luckily, I had taken an image restoration course (well, it was a more generic vision course, but it addressed this among other things), so I knew that there are some pretty efficient inpainting methods (inpainting being the process of creating texture based on the surrounding pixels), and nicely one of those is implemented as a GIMP plugin, the GIMP Resynthesizer. I first tried the manual way, setting the plugin parameters myself, but I found out it wasn’t quite the right way to do it:

WTF Gimp Inpainting ?

Then I discovered an option called “Smart remove selection” which sets everything automagically, and it worked (though I still suspect there are some pretty bad memory issues or so, since it was picking textures from outside of the selected radius). Using that option, I generated the missing sky parts and removed the ugly bits, and here we go:

CMU Panorama

Nice, heh?

I should probably also mention that I tried the pure CLI way (i.e. not running the Hugin GUI) and that it works great: autopano-sift-c takes a bunch of pictures and produces the keypoints and matches, autooptimiser optimizes the resulting homographies, and pto2mk creates a Makefile which produces the final panorama (by running nona and enblend/enfuse). It’s not that well documented (it took me a while to figure out that I could avoid the GUI and run most of the expensive computations remotely), but it works flawlessly.
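For reference, the CLI pipeline described above looks roughly like this. The project name and image globs are placeholders, the flags are the usual ones from the Hugin suite (check each tool’s --help on your version), and the whole thing is guarded so the sketch is a no-op on a machine without the tools installed:

```shell
# placeholder project name; a real run would use your own image set
project=pano
optimized="${project}_opt.pto"

if command -v autopano-sift-c >/dev/null 2>&1; then
    # 1. keypoint detection + matching -> initial .pto project file
    autopano-sift-c "${project}.pto" img_*.jpg
    # 2. optimize the homographies (-a auto-align, -l level horizon,
    #    -s pick the output projection and size)
    autooptimiser -a -l -s -o "$optimized" "${project}.pto"
    # 3. generate a Makefile that drives nona + enblend/enfuse
    pto2mk -o "${project}.mk" -p "${project}" "$optimized"
    # 4. remap and blend into the final panorama
    make -f "${project}.mk"
fi
```

Since the heavy lifting ends up in a Makefile, it is also easy to ship the project to a beefier remote machine and just run make there.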

March 24th, 2010 @ 00:53

Since Sunday I have been testing Picasa Web’s face recognition on my set of pictures. After an hour of initial processing, I was presented with an interface showing a list of clusters of faces, which I had to check, remove false positives from, and name. While it seemed to work great for the very first clusters (which correctly grouped about 50 to 100 faces of the same person), it quickly appeared that the whole thing was not that great.

Here are a few rants:

  • It seems to be heavily influenced by the facial expression and angle (i.e. it will often make two clusters of faces of the same person depending on whether the head is tilted to the left or to the right).
  • It doesn’t reconsider the clustering after the initial processing: I’m pretty sure that after I tag a bunch of clusters of the same person, it could easily merge the remaining clusters into a single one.
  • It keeps giving me “communication errors”. I’m used to the “click and it’s immediately there” scenario with Google services, and I have to say that this service is definitely not a good example: about two thirds of my actions result in such an error, which takes about 10 seconds to show up. Successful actions also take several seconds (5 to 10) to complete, which is not really efficient when you have pictures of about 800 different persons, giving lots of clusters of 1 or 2 faces that you just want to ignore, at about 20 seconds per cluster to actually get one ignored.

I know this is still in development, so the actual recognition problems are OK, but meh, the communication errors are really, really annoying…

Update (03/25/10): I don’t know if they fixed the problem or if it was just pure luck, but I was able to tag 1500 faces without a single problem in about half an hour. Yippee!

March 22nd, 2010 @ 00:13

A few days ago, I discovered I had a mini-USB (or something along those lines) plug on my beloved EOS 450D camera. Fun stuff: I could read pictures directly from the camera without having to swap my SDHC card back and forth, at a pretty decent transfer rate (though quite a bit slower than through the card reader of my laptop).

I then discovered libgphoto2, which is the library behind the Linux support for this feature. After a little bit of googling, I stumbled upon the gphoto2 website, where they mentioned that it was possible to trigger shooting from the computer and directly get the resulting image on the computer. I immediately tried and… it works great! (Though I haven’t yet figured out how to get the picture onto the SD card instead of directly on the computer.)
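A minimal remote-shooting session looks like this (assuming a supported camera is plugged in and switched on; guarded so the sketch does nothing on a machine without gphoto2). The filename template uses gphoto2’s own placeholders, %n for the shot number and %C for the original file extension:

```shell
# gphoto2 filename placeholders: %n = shot number, %C = original suffix
template="capture_%n.%C"

if command -v gphoto2 >/dev/null 2>&1; then
    gphoto2 --auto-detect                       # the camera should be listed here
    # trigger the shutter and pull the image straight onto the computer
    gphoto2 --capture-image-and-download --filename "$template"
else
    echo "gphoto2 not installed; skipping" >&2
fi
```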

Not sure what I’m going to do with this, but I’m confident there are a bunch of hacking possibilities here. Maybe I should write a simple GUI on top of it someday to begin with. Or I could try building a motorized base and remote-control the whole thing from my N900 (and attempt things like Paris 26 Gigapixels). Hmmmmm…