Archive for the ‘Python’ Category

July 22nd, 2010 @ 01:18

A school friend, namely p4bl0, mentioned the idea of maintaining blog posts with git and a set of hooks which would produce the blog HTML from the contents of the repo. I loved the idea, but thought I could push it a little further: a blog engine which would use no storage other than git, with the post subject and contents being the commit message subject and body. A post-commit or post-receive hook then produces the HTML. As simple as that!

You can find the source in the BloGit git repo, and see an example at BloGit example. To use the source, you first have to pack it (using the pack script), which merges raw_post and raw_produce into a single post script (which I also included at the end of this post). Put that script in an empty directory and run it: it will unpack the other script (produce), initialize the git repo, and set up the hooks. It will then prompt you for your post title and open an editor for you to write your post contents. Save the file, and you're done with your first post: check the index.html file which has been produced in the same directory. You can write your own stylesheet in the blogit-style.css file. Further posts can be made with the same post script.

Yet, the best way is probably just to use the usual git workflow. To initialize the repo and all, run post --unpack; to post, run post --raw or git commit --allow-empty (when using git commit, leave a blank line between the subject line and the rest of the post). You can also amend existing commits (using git commit --amend), use the GIT_AUTHOR_* environment variables to change the author, and so on. Since merge commits are skipped by the HTML generator, it should work just great for multi-author blogging!
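
As the PS below concedes, the generator is essentially a git log pretty-printer. Here is a rough sketch of the idea (this is not the actual produce script; the format string and HTML markup are my own assumptions):

import subprocess
from cgi import escape

FIELD, RECORD = "\x00", "\x01"
# %an = author, %ad = date, %s = subject, %b = body; %xNN emits hex bytes
fmt = "%an%x00%ad%x00%s%x00%b%x01"
log = subprocess.Popen(
    ["git", "log", "--no-merges", "--pretty=format:" + fmt],
    stdout=subprocess.PIPE).communicate()[0]

posts = []
for record in log.split(RECORD):
    record = record.strip()
    if not record:
        continue
    author, date, subject, body = record.split(FIELD)
    posts.append("<h2>%s</h2>\n<p><em>%s, %s</em></p>\n<div>%s</div>"
                 % (escape(subject), escape(date), escape(author),
                    escape(body)))

open("index.html", "w").write("\n".join(posts))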

PS: I know this is JUST a git log pretty-printer, and that the whole thing is pretty much trivial. I also know that using versioned files to store the posts would allow a lot of extra bonuses (such as automatically adding "Updated on…" mentions based on the commit log of each single file). I just thought the idea was fun :p There are probably a lot of things to improve, or a lot of smart git features to use there that I overlooked. Feel free to leave a line :)


July 14th, 2010 @ 23:29

TL;DR: skip straight to the last paragraph.

For a few weeks now, a couple of friends and I have been working on a media player for Maemo 5 with some extra bonuses (mostly track scoring and track prediction). Now that we have something pretty much functional, I wanted to produce some .deb packages for easy installation. I knew it wouldn't be an easy task, and it definitely wasn't.

Since I had no Scratchbox setup in the first place (64-bit distro, etc.), I first tried the py2deb/PyPackager approach, which lets you create .deb packages for Python apps directly on the tablet. Sadly, this approach requires you to lay out your source tree exactly as it will be installed on the device. Since I plan to rebuild my package often and don't want to work in a badly organized tree, this was not the solution.

I thus set up a Debian VirtualBox VM, installed Scratchbox on it, and tried a very simple cdbs build system using python-distutils.mk (I already had a distutils build working), then built the .deb with dpkg-buildpackage. Sadly, for a reason unknown to me, this put the Python libs into /var/lib/pyshared/, which is definitely not on the N900's PYTHONPATH. Worse, the package did not include the packages' empty __init__.py files (it seems they were removed by python-support, which python-distutils.mk uses, and were meant to be recreated upon install, though they weren't), which led to a completely broken package.

The next attempt I made was sbdmock, which the official maemo.org autobuilder uses. Sadly, while it successfully built other packages, it never succeeded at building mine because "pyversions", which python-distutils.mk requires, was missing (this seems to be a known problem, though it might actually be fixed by now and I might just be lacking the right build dependency).

Instead of trying to do it right, I should probably have done it fast. Since I'm using a Debian-based distribution on my laptop, all I had to do was:

  • Install python2.5 under the right prefix (/usr), then restore the /usr/bin/python symlink to python2.6
  • Write a simple homemade debian/rules file which runs distutils and then sorts everything into the right package prefix (this is even easier if you only have one package, since there is no sorting involved; I included a sample one at the end of the post, and sketched the matching setup.py right after this list)
  • Specify Architecture: all in debian/control (since this is a pure Python package, it should run on any platform)
  • Run dpkg-buildpackage. Enjoy!
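
That homemade debian/rules only works if the distutils build itself is sound; for reference, a minimal setup.py of the kind involved might look like this (package and script names are made up, not our actual tree):

from distutils.core import setup

setup(
    name="mediaplayer",           # hypothetical package name
    version="0.1",
    packages=["mediaplayer"],     # pure-Python packages to install
    scripts=["bin/mediaplayer"],  # launcher that ends up in /usr/bin
)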

Note that there is something called stdeb which might make this whole process easier. Too bad I only found it now :(


April 21st, 2010 @ 20:54

After sorting out my Clutter issues and finally producing a video of a Clutter animation, I thought I'd apply it to the initial goal, that animation I had written ages ago. What I had sadly foreseen occurred: the video dumping awfully slowed down the animation.

The whole problem is that, for now at least, I don't think it's possible to run the animation frame by frame rather than time-based. So I thought "let's just defer the whole video generation until after the end of the animation, and buffer the frames meanwhile". Well, this worked… until the OOM killer jumped in and killed my process. Urgh.

So, I can't buffer the whole video, but I can't push the frames to GStreamer in real time directly from the animation either. Well, all I need is parallelism then! Push frames to a queue which is consumed by another execution unit (by pushing the frames to GStreamer). And since threading pretty much sucks in Python (well, it definitely does here, since we need real parallelism), let's use the new multiprocessing framework from Python 2.6. Using it is pretty straightforward: create some parallel structures (queues, pipes), spawn a new process with its own main function, push to the structures from one process, read from the other, and you're done. The only thing I'm still wondering is why there is a close() function on Queues when there is no obvious way to detect from the other end that the queue has been closed (which I worked around by pushing a None message).
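
To make that concrete, here is a minimal sketch of the pattern (the names are illustrative, not the actual StageRecorder code), including the None workaround:

from multiprocessing import Process, Queue

def consumer(queue):
    # Runs in a separate process: drain the queue until the sentinel shows up.
    while True:
        frame = queue.get()
        if frame is None:   # end-of-stream sentinel pushed by the producer
            break
        # ... this is where the frame would be pushed to gstreamer ...
        print "consumed frame", frame

if __name__ == "__main__":
    queue = Queue()
    worker = Process(target=consumer, args=(queue,))
    worker.start()
    for frame in range(10):   # stand-in for the animation frames
        queue.put(frame)
    queue.put(None)           # tell the consumer we are done
    worker.join()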

Well, now I have a smooth animation and a smooth video dump, with my two cores nicely maxed out :)

The code is available below, with the interesting parts being StageRecorder.create_pipeline, StageRecorder.dump_frame, StageRecorder.stop_recording and StageRecorder.process_runner.


March 25th, 2010 @ 01:46

Today I've been looking at opencv-python for a quick project (I'd like to practice OpenCV a little bit). I installed the opencv-python package on Fedora 13, headed to the samples directory (/usr/share/opencv/samples/python/), started running one of them and… boom, segfault. Tried another one (the inpainting one), and it worked. A third one… segfault. Most of the samples in there segfaulted, mostly with SWIG errors about wrong parameters, always mentioning int64 (I'm using an x86_64 kernel & distribution).

After half an hour of failing to get opencv.i686 to work alongside my x86_64 Python, I went back to the OpenCV website to check whether there were any known major problems with x86_64 systems and… I discovered this:

Starting with release 2.0, OpenCV has a new Python interface. (The previous Python interface is described in SwigPythonInterface.)

Wait wait wait, you mean that during all this time I was running the OLD, pre-2.0 Python interface? Why the hell does the opencv-python 2.0 package provide both the new and the old interfaces? (Well, I know the answer: backwards compatibility.) Meh :( Anyway, I wish the old samples would get ported to the new interface… At the moment there's no sample using it at all :/
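
For the record, the difference is visible right from the import. If I recall the 2.0-era bindings correctly, the new interface is a single cv module, while the old SWIG one lived in the opencv package; a hedged sketch with the new interface:

import cv  # new (2.0+) interface; the old one was `from opencv import cv, highgui`

img = cv.LoadImage("sample.jpg")  # any test image lying around
cv.NamedWindow("demo")
cv.ShowImage("demo", img)
cv.WaitKey(0)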

March 22nd, 2010 @ 01:56

For a project midterm presentation, we were asked to produce a bunch of slides explaining our project architecture and implementation choices. Apart from the obvious things (libraries in use, network protocol…), I had no real clue what I could put in, so I thought I'd just throw in some UML-like diagrams and that would be fine. The only detail was: how to produce these diagrams?

Since the project code was written in Python, all the inheritance relations were already held by the code and could be introspected, so it was theoretically possible to produce the inheritance diagram automatically. And it actually is possible: it is implemented by things like Epydoc (a documentation generator for Python code), which handles both the parsing and the diagram generation. The only thing is that I wasn't satisfied with the Epydoc diagrams, since they are limited to inheritance relationships, while I also wanted to include usage relationships and display only the main methods and variables of my objects.
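
As a toy illustration of that introspection (this is not Umlpy, just the underlying idea), one can walk a module's classes and emit the inheritance edges as a Graphviz DOT graph:

import inspect

def inheritance_dot(module):
    # Collect one DOT edge per (base class, subclass) pair found in the module.
    lines = ["digraph inheritance {"]
    for name, cls in inspect.getmembers(module, inspect.isclass):
        for base in cls.__bases__:
            lines.append('    "%s" -> "%s";' % (base.__name__, name))
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    import xml.dom.minidom as example   # any module with classes will do
    print inheritance_dot(example)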

I thus wrote Umlpy, a UML-like class diagram generator for Python code, which depends on Epydoc (for the parser) and python-graphviz (for the graph generation; it produces nicely spaced graphs and can output jpg, png or pdf files, and probably more). It handles the aforementioned requirements through docstring parsing and introspection. Check the Umlpy README file for more documentation on how to use it. It took me about 10 minutes to get the result I was expecting (it's basically a matter of adding a little docstring for each usage relationship, and copy-pasting a docstring onto the methods or variables you want to see on the diagram).

This wouldn't be complete without the mandatory screenshot: this example code results in the following diagram:
[Umlpy result example]

April 12th, 2008 @ 08:24

Ever heard of Gogh? It is a really nice drawing tool for graphics tablet owners (such as those by Wacom). However, it stopped working on my Hardy system after some recent update. After some investigation, it appeared that this was due to python-xml being moved out of the Python path because it interfered with stuff in Python core, the whole python-xml package being scheduled for deprecation as soon as its reverse dependencies were cleared. I figured it out and dropped the reference to the python-xml package in gogh/settingsmanager.py, using a method provided by Python's core xml package instead. It seems that quite a few people are currently using xml.dom.ext.PrettyPrint from PyXML to write an XML document to a file, so I figured that posting this little tip might help someone :)

So, let's say you currently have something like this to output your XML document "doc" to a file at path "path", doc being an xml.dom.minidom.Document object (or similar):

        f = open(path, "w")
        xml.dom.ext.PrettyPrint(doc, f)
        f.close()

All you need to change is the PrettyPrint call, making use of the .toxml() method of your document object instead:

        f = open(path, "w")
        f.write(doc.toxml())
        f.close()
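
Note that .toxml() writes everything out on one line; if you miss the indentation PrettyPrint gave you, minidom also provides a .toprettyxml() method:

        f = open(path, "w")
        f.write(doc.toprettyxml(indent="  "))  # indented, PrettyPrint-style output
        f.close()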

Here you go, hope this might save someone’s day someday :)

April 6th, 2008 @ 22:03

Building dynamic forms with Django's newforms module is quite undocumented, though it's quite easy to do. All you need to do is hook the form's __init__ function, call __init__ on the parent forms.Form class, and then add your dynamically generated fields to the self.fields dict.

Here is a quick snippet demonstrating this; it creates a form with n integer fields named from 0 to (n – 1), but you can easily extend it much further.

from django import newforms as forms

class MyForm(forms.Form):

    def __init__(self, n, *args, **kwargs):
        # Let forms.Form set up self.fields from the statically declared fields.
        forms.Form.__init__(self, *args, **kwargs)
        # Then add the dynamically generated ones.
        for i in range(n):
            field = forms.IntegerField(label="%d" % i, required=True,
                                       min_value=0, max_value=200)
            self.fields["%d" % i] = field