July 30th, 2010 @ 05:51

While sorting my pictures this evening, I was wondering how many pictures I had captured with my EOS 450D camera since I got it. I knew (thanks to a doc about flashes I read a few days ago) that what I was looking for was the shutter actuation count of the camera.
Googling pointed me to a bunch of Windows- or OSX-related links, such as the nonetheless nice 40D Shutter Count software, or pointers to exiftool, which did not work with the pictures taken on my 450D.
I then remembered the greatness of gphoto2 and gave it a shot.

  • Plugged the camera through USB, restarted it
  • Checked that it was detected with gphoto2 --auto-detect
  • Checked the capabilities with gphoto2 -a
  • Looked at the config options with gphoto2 --list-options
  • Found the /main/status/shuttercounter option
  • Ran gphoto2 --get-config=/main/status/shuttercounter
  • And here it is: 27781 shutter actuations
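If you ever want to script that last step, the interesting value can be parsed out of the gphoto2 output. A minimal sketch (the Label/Type/Current output format below is what my gphoto2 printed for --get-config and may vary across versions):

```python
import re

def parse_shutter_count(output):
    """Extract the 'Current' value from gphoto2 --get-config output.

    Assumes the usual Label:/Type:/Current: line format; returns None
    if no Current line with a number is found.
    """
    match = re.search(r"^Current:\s*(\d+)", output, re.MULTILINE)
    return int(match.group(1)) if match else None

sample = """Label: Shutter Counter
Type: TEXT
Current: 27781"""
print(parse_shutter_count(sample))  # 27781
```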

That number seems quite reliable: I have about 15,000 of those pictures on my hard drives, and I used to drop about 2 pictures out of 3 in my early days (I’m a lot softer on the shutter now, after experiencing the joys of picture sorting :p).

Good to know I’m still far from the 100,000-actuation expected shutter life of the 450D.

Well, thanks gphoto2 for your simplicity!

July 22nd, 2010 @ 01:18

A school friend, namely p4bl0, mentioned the idea of maintaining blog posts with git and a set of hooks which would produce the blog HTML from the contents of the repo. I loved the idea, but thought I could push it a little further: a blog engine which would use no storage other than git, with the post subject and contents being the commit message subject and contents. A post-commit or post-receive hook then produces the HTML. As simple as that!

You can find the source in the BloGit git repo, and see an example at BloGit example. To use the source, you first have to pack it (using the pack script), which merges raw_post and raw_produce into a single post script (also included at the end of this post). Put that script in an empty directory and run it: it will unpack the other script (produce), initialize the git repo, and set up the hooks. It’ll then prompt you for your post title and open an editor for you to write your post contents. Save the file, and you’re done with your first post: check the index.html file which has been produced in the same directory. You can write your own stylesheet in the blogit-style.css file. Further posts can be made with the same post script.

Still, the best way is probably just to use the usual git workflow. To initialize the repo and all, run post --unpack; to post, run post --raw or git commit --allow-empty (when using git commit, leave a blank line between the subject line and the rest of the post). You can also amend existing commits (using git commit --amend), use the GIT_AUTHOR_* environment variables to change the author, and so on. Since merge commits are skipped by the HTML generator, it should work just great for multi-author blogging!
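The generator itself really is just a pretty-printer over git log. A rough Python sketch of the idea (this is not the actual produce script; the function name and the %x1f/%x1e log format separators are my own choice for the example):

```python
import html

# Log produced with: git log --format='%H%x1f%s%x1f%b%x1e'
RECORD_SEP = "\x1e"
FIELD_SEP = "\x1f"

def log_to_html(raw_log):
    """Turn each commit (hash, subject, body) into a blog post <div>.

    In the real engine, merge commits would be filtered out before
    this step so multi-author merges don't show up as posts.
    """
    posts = []
    for record in raw_log.split(RECORD_SEP):
        record = record.strip()
        if not record:
            continue
        commit, subject, body = record.split(FIELD_SEP)
        posts.append(
            "<div class='post'><h2>%s</h2><p>%s</p></div>"
            % (html.escape(subject), html.escape(body.strip()))
        )
    return "\n".join(posts)

sample = "abc123\x1fFirst post\x1fHello, world!\x1e"
print(log_to_html(sample))
```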

PS: I know this is JUST a git log pretty-printer, and that the whole thing is pretty much trivial. I also know that using versioned files to store the posts would allow a lot of extra bonuses (such as automatically adding “Updated on …” mentions based on the commit log of each single file). I just thought the idea was fun :p There are probably a lot of things to improve, or a lot of smart git features to use there that I overlooked. Feel free to leave a line :)

July 16th, 2010 @ 05:46

Since Thunderbird 3 (or maybe before that), when you encounter an invalid SSL certificate, the GUI doesn’t even offer to add an exception as Firefox does. Counter-intuitively, the same GUI as in Firefox is actually available; you just have to dig it out of the menus. Here is a step-by-step tutorial with screenshots (I had to make them for a doc, so I thought I’d share).

Go to Account Settings

In Security menu select View Certificates (alternatively, you can get there through Edit, Preferences, Advanced, Certificates, View Certificates if you have no Security menu here)

In Servers tab, hit Add Exception

Enter the address:port of the service you are using (without any protocol://) and hit Get Certificate

Hit Confirm Security Exception, and you're done!

July 14th, 2010 @ 23:29

TL;DR: skip directly to the last paragraph

For a few weeks now, a couple of friends and I have been working on a media player for Maemo 5 with some extra bonuses (mostly track scoring and track prediction). Now that we have something pretty much functional, I wanted to produce some .deb packages for easy installation. I knew it wouldn’t be an easy task — and it definitely wasn’t.

Since I had no Scratchbox setup in the first place (64-bit distro, etc.), I first tried the py2deb/PyPackager approach, which lets you create .deb packages for Python apps directly on the tablet. Sadly, this approach requires you to lay out your source tree in a very strict manner, exactly as it will be on the device. Since I plan to rebuild my package often and don’t want to work in a badly organized environment, this was not the solution.

I thus set up a Debian VirtualBox VM, installed Scratchbox on it, and tried a very simple cdbs build system using python-distutils.mk (I already had a distutils build working), then built the .deb with dpkg-buildpackage. Sadly, for a reason unknown to me, this put the python libs into /var/lib/pyshared/, which is definitely not in the n900 PYTHON_PATH. On top of that, the package did not include the empty __init__.py files of the packages (it seems they were removed by python-support, which python-distutils.mk uses, and were meant to be reconstructed upon install, though they weren’t), which led to a completely broken package.

The next attempt I made was to use sbdmock, which is used by the official maemo.org autobuilder. Sadly, while it successfully built other packages, it never succeeded at building mine because “pyversions”, which is required by python-distutils.mk, was missing (this seems to be a known problem — though it might actually be fixed now and I might just be lacking the right builddep).

Instead of trying to do it right, I should probably have done it fast. Since I’m using a Debian-based distribution on my laptop, all I had to do was:

  • Install python2.5 in the right prefix (/usr), restore the /usr/bin/python symlink to python2.6
  • Write a simple homemade debian/rules file which runs distutils and then sorts everything into the right package prefix (this is actually even easier if you only have one package, since there is no sorting involved; I included a sample one at the end of the post)
  • Specify Architecture: all in debian/control (since this is a pure Python package, it should run on any platform)
  • Run dpkg-buildpackage. Enjoy!

Let’s note that there is something called stdeb which might be easier than this whole process. Too bad I only found it now :sad:

July 9th, 2010 @ 20:14

For months I had been having (temporary) freezes upon X/gdm login, sometimes even before that, lasting anywhere between 1 and 5 minutes (that’s on my Dell Latitude E4200). I could still move the cursor, and at times a panel widget would refresh, but nothing else. Logging in on a tty and running top would show X.org eating 100% CPU (pretty much expected, actually). Investigation of /var/log/Xorg.0.log revealed that X was continuously doing EDID probing because of an unknown TV1 monitor, while dmesg confirmed that something was going on with that mysterious TV1 output:
[ 12.924] (II) intel(0): Printing probed modes for output TV1
[ 15.065] (II) intel(0): Printing probed modes for output TV1
[ 15.477] (II) intel(0): Printing probed modes for output TV1
[ 15.892] (II) intel(0): Printing probed modes for output TV1
...

[drm] TV-1: set mode NTSC 480i 0
[drm] TV-1: set mode NTSC 480i 0
[drm] TV-1: set mode NTSC 480i 0
...

After spending quite a bit of time trying to tell X to just ignore that output, I took the easy way and… switched to Ubuntu. But yesterday, I asked drago01 (maintainer of the Compiz packages in Fedora) about the issue, and he immediately recognised it, calling it the “Ghost TV problem”. According to him and Adam Jackson (ajax), this is a screw-up from hardware vendors: the graphics chipset thinks a TV is plugged in while there is definitely none.

Anyway, he also pointed me to the fix. Four lines in xorg.conf and you’re done (this is actually my whole xorg.conf :p):

Section "Monitor"
    Identifier "TV1"
    Option     "Ignore" "1"
EndSection

April 23rd, 2010 @ 08:08

While surfing and jumping from link to link, I stumbled upon The Real Costs of “Free” Search Site Services. While the general idea of the article makes a point, I think the emphasis is definitely not put on the right parts.

The bold parts are all about the money that gets wasted by ads on a free web service. Let’s review the initial assertions and the conclusion:

For a company employing 10,000 people with an average loaded cost of $50/hour

Users would thus waste 1 hour and 34 minutes

In total, the cost of having employees look at and occasionally click on distractions costs the company $1.2 million per year.

What’s wrong here? The main problem is that the stunning effect of the article rests on that $1.2 million per year, which looks huge at first sight. But look a little closer: this $1.2 million a year is for a 10,000-employee company, so it’s actually $120 per employee per year, which is, as specified, about two and a half hours of that employee’s work over the whole year.

Now let’s bring in the fact that employees sometimes have to go pee. I’d say that’s easily 5 minutes a day lost on non-productive hygiene stuff, at least, which amounts to about 20 hours a year lost per employee, which gives an awful $1012 per year per employee, or for our test company $10.1 million a year. Urgh! Guess it’s about time to move the WC outside of the work area and have your employees swipe out and back in when they go, to claw back this illegitimate grand you are giving each employee each year for nothing but pee!

If we also account for the fact that employees won’t be 100% productive all the time anyway, which wastes even more money, that $1.2 million a year starts looking a little ridiculous. That’s the whole problem with the article: they used that 10,000-fold factor to have some stunning figures to show, while completely omitting to mention what they were scaled from.

Based on some rough estimates (about 3 weeks of holidays, working 5 days a week, 8 hours a day), our company is spending about $100,000 a year on each employee, or a total of $1 billion a year on human resources (just for those working in front of a computer screen). Two conclusions: that “wasted” time amounts to just 0.12% of the company’s HR expenses in that field, and that company is huge. 10,000 employees working on a computer means either a large IT company or a really large non-IT company, and in both cases, even if they probably don’t rely on free services (that would be silly), $1.2 million is probably just pocket money for them.
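The arithmetic above is easy to check. A quick sketch of the figures (the inputs are the article’s numbers plus this post’s rough estimates; my computed pee cost lands near the rounded figures quoted above):

```python
employees = 10_000
hourly_cost = 50  # $/hour, the article's loaded cost

# The article's headline figure, brought back to a per-employee scale.
ad_waste_total = 1_200_000                            # $/year, whole company
ad_waste_per_employee = ad_waste_total / employees    # $120/year
ad_hours_per_employee = ad_waste_per_employee / hourly_cost  # 2.4 hours/year

# The pee-break comparison: 5 minutes a day over ~49 working weeks.
work_days = 49 * 5
pee_hours = work_days * 5 / 60            # ~20.4 hours/year
pee_cost_per_employee = pee_hours * hourly_cost  # ~$1021/year

# Total HR spend: 49 weeks * 5 days * 8 hours at $50/hour.
hr_per_employee = 49 * 5 * 8 * hourly_cost       # ~$98,000/year
waste_fraction = ad_waste_total / (hr_per_employee * employees)

print(ad_waste_per_employee, ad_hours_per_employee,
      round(pee_cost_per_employee), round(waste_fraction * 100, 2))
```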

Don’t get me wrong, stupid waste should be avoided as much as possible, but please, pretty please, don’t try to make your point using huge scale factors. Numbers lie, but when the lies are unveiled, the liars just look like fools. It doesn’t help and could be avoided, especially when there are other, much more robust arguments (here, against ad-powered free services) around.

April 21st, 2010 @ 20:54

After sorting out my clutter issues and finally producing a video of a clutter animation, I thought I’d use it for the initial goal, that animation I had written ages ago. What I had sadly forecast occurred: the video dumping slowed the animation down horribly.

The whole problem is that, for now at least, I don’t think it’s possible to run the animation frame by frame rather than time-based. So I thought “let’s just defer the whole video generation to after the end of the animation, and buffer the frames meanwhile”. Well, this worked… until the oomkiller jumped in and killed my process. Urgh.

So, I can’t buffer the whole video, but I can’t push the frames to gstreamer in real time directly from the animation either. Well, all I need is parallelism then! Push frames to a queue which is consumed by another execution unit (which pushes the frames to gstreamer). And since threading pretty much sucks in Python (and it definitely would here, since we need real parallelism), let’s use the new multiprocessing framework from Python 2.6. Using it is pretty straightforward: create some parallel structures (queues, pipes), spawn a new process with its own main function, push to the structures from one process, read from the other, and you’re done. The only thing I’m still wondering is why there is a close() function on Queues when there is no obvious way to detect from the other end that the queue has been closed (which I worked around by pushing a None message).
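A stripped-down sketch of that producer/consumer setup, with the None sentinel standing in for the missing close() notification (the function names are mine, not the actual StageRecorder code):

```python
from multiprocessing import Process, Queue

def drain(queue):
    """Consume items from the queue until the None sentinel arrives.

    Returns the number of real items seen. In the actual recorder,
    each item would be a frame pushed into the gstreamer pipeline.
    """
    count = 0
    while True:
        item = queue.get()
        if item is None:  # close() gives the reader no signal, hence the sentinel
            break
        count += 1
    return count

def consumer(frames, results):
    results.put(drain(frames))

if __name__ == "__main__":
    frames, results = Queue(), Queue()
    worker = Process(target=consumer, args=(frames, results))
    worker.start()
    for _ in range(10):       # the animation loop plays the producer here
        frames.put(b"fake frame")
    frames.put(None)          # end-of-stream marker
    print("consumed %d frames" % results.get())  # prints "consumed 10 frames"
    worker.join()
```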

Well, now I have a smooth animation and a smooth video dump, with my two cores nicely fully used :)

The code is available below, with the interesting parts being StageRecorder.create_pipeline, StageRecorder.dump_frame, StageRecorder.stop_recording and StageRecorder.process_runner.

April 20th, 2010 @ 19:15

A while back, I used clutter (a very nice and simple animation toolkit that basically lets you work easily in a 3D environment with 2D objects) to make a little photo slideshow with a lot of customisations, but I never even showed it to the person it was aimed at, because the whole thing was not satisfying enough (it either took ages to start or was not smooth, and it was not easy to add a decent soundtrack when you can’t synchronize video and audio).

A simple solution would have been to do the rendering once and then just do the post-production. I had quickly looked for a way to output the animation directly to gstreamer (since there is gstreamer input support in clutter, this pretty much made sense), but there was none. Another option would have been to use capture software like Xvidcap, but that stuff is too heavy for my poor laptop. Consequently, I just gave up back then.

What I had completely overlooked is that clutter uses OpenGL for the rendering, so all I had to do was dump each frame myself using glReadPixels, or use something like Yukon to do the dirty work. After some quick googling, I found this clutter mailing list thread about capturing the clutter output to a video file, which mentions the clutter_stage_read_pixels function; it does all the glReadPixels magic and even puts the result in a more standard format. It also points to gnome-shell’s recorder, which does the glReadPixels work and outputs it to a gstreamer pipeline, plus some extra fancy things (since they are doing screencasts of gnome-shell features, they draw the mouse cursor on top of each frame). So all I have to do now is put things together :)

One of the bad things I discovered is that clutter_stage_read_pixels calls clutter_stage_paint, so mixing the gnome-shell recorder approach with clutter_stage_read_pixels results in a bad infinite loop if you don’t protect against it. Even though this means painting things twice, I guess this is a much easier approach than having to use python-opengl or something along those lines.

Another bad thing I encountered was that the Python bindings for clutter_stage_read_pixels are broken at the moment (pyclutter 1.0.2). The first problem is that the argument parsing part seems broken. Changing the PyArg_ParseTupleAndKeywords to a simple PyArg_ParseTuple gets things “working”, and gdb indicates a segfault in a PyDict_Check of the keywords argument:

Program received signal SIGSEGV, Segmentation fault.
0x00000032d34ecd9c in _PyArg_ParseTupleAndKeywords_SizeT (args=(0, 0, 500, 200), keywords=, format=
0x7ffff000d9ac "dddd:ClutterStage.read_pixels", kwlist=0x7ffff022f6c0) at Python/getargs.c:1409
1409 (keywords != NULL && !PyDict_Check(keywords)) ||
(gdb) bt
#0 0x00000032d34ecd9c in _PyArg_ParseTupleAndKeywords_SizeT (args=(0, 0, 500, 200), keywords=
, format=
0x7ffff000d9ac "iiii:ClutterStage.read_pixels", kwlist=0x7ffff022f6c0) at Python/getargs.c:1409

After asking on #clutter, ebassi immediately caught the problem: a missing “kwargs” bit in the python binding override definition, so the kwargs were never actually passed to the C wrapper, which was getting garbage instead.

The other problem was that the returned data was empty. This was simply due to the buffer returned by the C function being interpreted as a NULL-terminated string, which is wrong for binary data. The fix was simply to specify the length of the string to read.

Both issues are now fixed in pyclutter git, and should be available on the next stable release.

The remainder of the port was pretty straightforward. The only problem was that I had no experience with gstreamer, which cost me quite a lot of time. Here are a few things I discovered:

  • The --gst-debug-level command line argument is really really useful, especially on levels 3 and 4, it outputs a lot of valuable information on what’s going on and what’s not working.
  • The whole caps story is really important. After spending an hour trying to figure out why my pads wouldn’t negotiate their caps, I found out that I had one cap wrong (the endianness one); after a few more hours, I figured out that I had to set the caps on each buffer, and in fact that setting caps on the buffers was all that was needed.
  • Timestamps are not magically inferred (at least not without extra gstreamer elements) and should be set by hand using the buffer.timestamp python property (this is not documented very well in the Python bindings documentation, imho).
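For that last point, the timestamp of frame n at a constant framerate is just n * SECOND / fps, since gstreamer timestamps are in nanoseconds (gst.SECOND is simply 10**9). A pure-Python sketch of the computation (the function names are mine):

```python
SECOND = 10**9  # nanoseconds per second, i.e. gst.SECOND

def frame_timestamp(frame_number, fps):
    """Value to assign to buffer.timestamp for a constant-framerate stream."""
    return frame_number * SECOND // fps

def frame_duration(fps):
    """Value to assign to buffer.duration at a constant framerate."""
    return SECOND // fps

print(frame_timestamp(25, 25))  # 1000000000, i.e. one second into the stream
```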

Well, that’s pretty much it. I used a clutter Python demo from fedora-tour, and here is the result: Clutter Stage Recorder demo. The whole source is available below :)

April 4th, 2010 @ 00:04

I might not have mentioned it until now, but for about a year and a half I’ve been a photography addict. Basically, my girlfriend got a DSLR for Christmas, and I got one (a Canon EOS 450D) for my birthday two months later. After having fun with my f/1.8 50mm lens and its nice depth-of-field effects, and after being taken for a paparazzi with my 250mm telezoom, I thought I’d try something larger: panoramas.

I’m using Hugin (with autopano-sift-C for the keypoint detection and matching, nona for the photometric/geometric remapping, and enblend for the merging). It might sound complicated, but it’s basically just 3 clicks and a lot of processing time (if you don’t get into the details).

My first try was at the Carnegie Museum in Pittsburgh. No tripods allowed, so the camera was handheld. It looks quite beautiful imho, apart from the weird line at about 3/4 of the height. I should probably rerun the stitcher.

Carnegie Museum Panorama

My next try was a panorama of the CMU campus showing, among other things, the Fence, the University Center and the Gates building. I took the pictures at about 1pm, with pretty nice sun and a few clouds. I first tried using the whole 120-picture set, but it resulted in a bunch of geometric errors, mostly the leftmost flag and the pathway being broken:

CMU Panorama - ETOOMUCHPICS

I then selected a core set of pictures, stitched them, and then added a few others to fix the missing parts, for a total of 21 base pictures. This resulted in the following panorama:

CMU Panorama (without sky)

I cropped the bottom part of the result to drop a broken pathway and a bunch of useless grass, plus another bunch of missing (i.e. black) parts. Still, it’s far from perfect: huge parts of the sky are missing, and there are some unpleasant bits (like the white line in the sky near one of the trees). Luckily, I once took an image restoration course (well, it was a more generic vision course, but it covered this among other things), so I know there are some pretty efficient inpainting methods (inpainting being the process of creating texture based on the surrounding pixels), and nicely enough, one of them is implemented as a GIMP plugin: the GIMP Resynthesizer. I first tried the manual way, setting the plugin parameters myself, but I found out it wasn’t quite the right way to do it:

WTF Gimp Inpainting?

Then I discovered an option called “smart remove selection” which sets everything automagically, and it worked (though I still suspect there are some pretty bad memory issues or such, since it was picking textures from outside the selected radius). Using that option, I generated the missing sky parts and removed the ugly bits, and here we go:

CMU Panorama

Nice, heh?

I should probably mention that I also tried the pure CLI way (i.e. not running the hugin GUI) and that it works great: autopano-sift-c takes a bunch of pictures and produces the keypoints and matches, autooptimiser optimizes the resulting homographies, and pto2mk creates a Makefile which produces the final panorama (by running nona and enblend/enfuse). It’s not well documented (it took me a while to figure out that I could avoid the GUI and run most of the expensive computations remotely), but it works flawlessly.

March 31st, 2010 @ 20:01

Keyboard shortcuts are always a great matter of debate, and the whole problem is that they are most often chosen based on assumptions about the end user’s layout.

For instance, take this metacity commit: Change default cycle_group keybinding to Alt-grave. This change looks perfectly harmless, right? Well, not quite. It’s most likely based on the assumption that the end user has a qwerty keyboard layout (and there it makes perfect sense). But take an azerty layout: grave is on the é/7 key, which is even farther from Alt or Tab than F6 is (well, not by much, I agree, but it might be even worse on other layouts). Is such a change really worth it, then?

Let’s also note that this triggers a bad bug which makes alt+7 and alt+shift+7 trigger the binding as well, while alt+grave is actually alt+altgr+7. This has been keeping me from switching nicely to my window no. 7 in irssi for months (good thing that window holds a really low-traffic channel…).

All in all, I guess the real problem is not that this change was made, but rather that we might need a system for layout-dependent keybindings, or maybe hardware-location-based keybindings (i.e. the key above Tab would trigger this keybinding independently of the layout).

Initially published on Mar 24, 2010 @ 8:22

Update: this change has been reverted for the GNOME 2.30 release. Even though I’m happy that the problem is “fixed”, it’s sad that the underlying problem (Alt+Shift+7 triggering Alt+`) is still there.