April 20th, 2010 @ 19:15

A while back, I used clutter (a very nice and simple animation toolkit that basically lets you easily work with 2D objects in a 3D environment) to do a little photo slideshow with a lot of customisations, but I never even showed it to the person it was aimed at because the whole thing was not satisfying enough: it either took ages to start or was not smooth, and it was not easy to put a decent soundtrack on it when you can’t synchronize video and audio.

A simple solution would have been to do the rendering once and then handle the post-production separately. I had quickly looked for a way to send the output of the animation directly to gstreamer (since there is gstreamer input support in clutter, this pretty much made sense), but there was none. Another option would have been to use capture software like Xvidcap, but that is too heavy for my poor laptop. Consequently, I just gave up back then.

What I had completely overlooked is that clutter uses OpenGL for the rendering, so all I had to do was dump each frame myself using glReadPixels, or use something like Yukon to do the dirty work. After some quick googling, I found this clutter mailing list thread about capturing the clutter output to a video file, which mentions the clutter_stage_read_pixels function: it does all the glReadPixels magic and even puts the result in a more standard format. It also points to gnome-shell’s recorder, which does the glReadPixels part and pushes the frames into a gstreamer pipeline, plus some extra fancy things (since they are doing screencasts of gnome-shell features, they draw the mouse cursor on top of each frame). So all I had to do was put the two together :)

One of the bad things I discovered is that clutter_stage_read_pixels calls clutter_stage_paint, so mixing the gnome-shell recorder approach (dumping a frame from the “paint” signal) with clutter_stage_read_pixels results in a nasty infinite loop if you don’t guard against it. Even though this means painting everything twice, I guess this is a much easier approach than having to use python-opengl or something along those lines.
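
To make that concrete, the recording hook boils down to something like this (a stripped-down sketch of what the full source at the end of the post does, with all the gstreamer plumbing left out; the class and method names here are just placeholders):

class Recorder (object):
    def __init__ (self, stage):
        self.stage = stage
        self.dumping = False
        # connect_after so the frame is grabbed once the stage has been painted
        stage.connect_after ("paint", self.on_paint)

    def on_paint (self, stage):
        # read_pixels triggers another paint, so guard against re-entering
        if self.dumping:
            return
        self.dumping = True
        width, height = [int (s) for s in stage.get_size ()]
        data = stage.read_pixels (0, 0, width, height)
        # ... hand `data` over to the gstreamer pipeline ...
        self.dumping = False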

Another bad thing I encountered was that the Python bindings for clutter_stage_read_pixels are broken at the moment (pyclutter 1.0.2). The first problem is that the argument parsing part is broken: changing the PyArg_ParseTupleAndKeywords call to a simple PyArg_ParseTuple gets things “working”, and gdb points to a segfault in the PyDict_Check of the keywords argument:

Program received signal SIGSEGV, Segmentation fault.
0x00000032d34ecd9c in _PyArg_ParseTupleAndKeywords_SizeT (args=(0, 0, 500, 200), keywords=, format=
0x7ffff000d9ac "dddd:ClutterStage.read_pixels", kwlist=0x7ffff022f6c0) at Python/getargs.c:1409
1409 (keywords != NULL && !PyDict_Check(keywords)) ||
(gdb) bt
#0 0x00000032d34ecd9c in _PyArg_ParseTupleAndKeywords_SizeT (args=(0, 0, 500, 200), keywords=
, format=
0x7ffff000d9ac "iiii:ClutterStage.read_pixels", kwlist=0x7ffff022f6c0) at Python/getargs.c:1409

After asking on #clutter, ebassi immediately caught the problem: a “kwargs” bit was missing in the Python binding override definition, so the kwargs were never actually passed to the C wrapper, which was getting garbage instead.

The other problem was that the returned data was empty. This was simply because the buffer returned by the C function was interpreted as a NULL-terminated string, which is wrong for this kind of binary data. The fix was simply to pass the buffer length explicitly when building the Python string.
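
With the fixed binding, a quick sanity check is to look at the length of the data read_pixels returns; a full frame is 4 bytes per pixel (width and height being the stage size):

data = stage.read_pixels (0, 0, width, height)
assert len (data) == width * height * 4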

Both issues are now fixed in pyclutter git, and should be available in the next stable release.

The remainder of the port was pretty straightforward. The only problem was that I had no experience with gstreamer, which cost me quite a lot of time. Here are a few things I discovered:

  • The --gst-debug-level command line argument is really, really useful, especially at levels 3 and 4: it outputs a lot of valuable information about what’s going on and what’s not working.
  • The whole caps story is really important. After spending an hour trying to figure out why my pads wouldn’t negotiate their caps, I found out that one of my caps was wrong (the endianness one), and after a few more hours I figured out that I had to set the caps on each buffer, and that setting them on the buffers was actually all that was needed.
  • Timestamps are not magically inferred (at least not without extra gstreamer elements) and have to be set by hand using the buffer.timestamp Python property (this is not quite well documented in the Python bindings documentation imho); see the sketch right after this list.
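
To make the last two points concrete, here is roughly what has to happen for every frame pushed into the pipeline (condensed from the dump_frame and set_caps_on methods of the full source below; data, width, height, clock_start and src_pad all come from the surrounding recorder object):

buf = gst.Buffer (data)
# caps have to be set on each buffer that gets pushed
buf.set_caps (gst.caps_from_string (
    "video/x-raw-rgb,bpp=32,depth=24,"
    "red_mask=0xff000000,green_mask=0x00ff0000,blue_mask=0x0000ff00,"
    "endianness=4321,framerate=15/1,width=%d,height=%d" % (width, height)))
# clutter timestamps are in microseconds, gstreamer wants nanoseconds
buf.timestamp = (clutter.get_timestamp () - clock_start) * 1000
src_pad.push (buf)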

Well, that’s pretty much it. I used a clutter Python demo from fedora-tour, and here is the result: Clutter Stage Recorder demo. The whole source is available below :)

#!/usr/bin/env python
# -*- coding: utf-8 -*-
 
import clutter
import gst
import gobject
 
DEFAULT_OUTPUT = "stage.ogg"
DEFAULT_PIPELINE_DESC = "videorate ! theoraenc ! oggmux"
 
class StageRecorderSrc (gst.Element):
    """Gstreamer element used to push our buffers into the pipeline"""
 
    __gstdetails__ = (
        "Clutter Stage Recorder Source plugin",
        "stagerecorder.py",
        "",
        "")
 
    _src_template = gst.PadTemplate ("src",
                                     gst.PAD_SRC,
                                     gst.PAD_ALWAYS,
                                     gst.caps_new_any ())
 
    __gsttemplates__ = (_src_template,)
 
    def __init__ (self, *args, **kwargs):
        gst.Element.__init__ (self, *args, **kwargs)
        self.src_pad = gst.Pad (self._src_template)
        self.src_pad.use_fixed_caps ()
        self.add_pad (self.src_pad)
 
gobject.type_register (StageRecorderSrc)
 
class StageRecorder (object):
    """Clutter stage recorder which dumps frames into a gstreamer pipeline"""
 
    stage = None
    output_filename = None
    pipeline_desc = None
 
    pipeline = None
    src = None
 
    stage_width = 0
    stage_height = 0
 
    clock_start = -1
 
    def __init__ (self,
                  stage,
                  pipeline_desc = DEFAULT_PIPELINE_DESC,
                  output_filename = DEFAULT_OUTPUT):
        self.stage = stage
        stage.connect ("destroy", self.stop_recording)
        stage.connect_after ("paint", self.dump_frame)
        stage.connect ("notify::width", self.update_size)
        stage.connect ("notify::height", self.update_size)
        self.pipeline_desc = pipeline_desc
        self.output_filename = output_filename
        self.dumping = False
        self.pipeline = None
        self.src = None
        self.clock_start = -1
 
    def create_pipeline (self):
        """Create the gstreamer pipeline and run it"""
        self.pipeline = gst.parse_launch (self.pipeline_desc)
        if not self.pipeline:
            raise RuntimeError ("Couldn't create pipeline")
        self.add_source ()
        self.add_sink ()
        self.pipeline.set_state (gst.STATE_PLAYING)
 
    def add_source (self):
        """Add our data source and its filters"""
        sink_pad = self.pipeline.find_unlinked_pad (gst.PAD_SINK)
        if not sink_pad:
            raise RuntimeError ("Pipeline has no unlinked sink pad")
 
        src_element = StageRecorderSrc ()
        self.pipeline.add (src_element)
        self.src = src_element.get_static_pad ("src")
 
        # The ffmpegcolorspace element is a generic converter; it will convert
        # our supplied fixed format data into whatever the encoder wants
        ffmpegcolorspace = gst.element_factory_make ("ffmpegcolorspace")
        if not ffmpegcolorspace:
            raise RuntimeError ("Can't create ffmpegcolorspace element")
        self.pipeline.add (ffmpegcolorspace)
 
        # No need to flip the image vertically here since
        # clutter_stage_read_pixels already did it for us
 
        src_element.link (ffmpegcolorspace)
 
        src_pad = ffmpegcolorspace.get_static_pad ("src")
        if not src_pad:
            raise RuntimeError ("Can't get src pad to link into pipeline")

        if src_pad.link (sink_pad) != gst.PAD_LINK_OK:
            raise RuntimeError ("Can't link to sink pad")
 
    def add_sink (self):
        """Add the final filesink"""
        src_pad = self.pipeline.find_unlinked_pad (gst.PAD_SRC)
        if not src_pad:
            raise RuntimeError ("Pipeline has no unlinked src pad")
        filesink = gst.parse_launch ("filesink location=%s" \
                                        % self.output_filename)
        if not filesink:
            raise RuntimeError ("Can't create filesink element")
        self.pipeline.add (filesink)
 
        sink_pad = filesink.get_static_pad ("sink")
        if not sink_pad:
            raise RuntimeError ("Can't get sink pad to link pipeline output")

        if src_pad.link (sink_pad) != gst.PAD_LINK_OK:
            raise RuntimeError ("Can't link to sink pad")
 
    def dump_frame (self, stage):
        """Dump a frame to the gstreamer pipeline"""
        # Prevent an infinite loop (clutter.Stage.read_pixels
        # performs another paint)
        if not self.dumping:
            self.dumping = True
            if not self.pipeline:
                self.create_pipeline ()
            buffer = gst.Buffer (self.stage.read_pixels (0, 0,
                                                         self.stage_width,
                                                         self.stage_height))
            # Use the clutter clock (microseconds; gstreamer wants nanoseconds)
            if self.clock_start == -1:
                self.clock_start = clutter.get_timestamp ()
            buffer.timestamp = clutter.get_timestamp () - self.clock_start
            buffer.timestamp *= 1000
            # Don't forget to set the right caps on the buffer
            self.set_caps_on (buffer)
            status = self.src.push (buffer)
            if status != gst.FLOW_OK:
                raise RuntimeError ("Error while pushing buffer: %s" % status)
            self.dumping = False
 
    def update_size (self, stage = None, param = None):
        """Update the size of the gstreamer frames based on the stage size"""
        x1, y1, x2, y2 = self.stage.get_allocation_box ()
        self.stage_width = int (0.5 + x2 - x1)
        self.stage_height = int (0.5 + y2 - y1)
 
    def set_caps_on (self, dest):
        """Set the current frame caps on the specified object"""
        # The data is always native-endian xRGB; ffmpegcolorspace
        # doesn't support little-endian xRGB, but does support
        # big-endian BGRx.
        caps = gst.caps_from_string ("video/x-raw-rgb,bpp=32,depth=24,\
                                      red_mask=0xff000000,\
                                      green_mask=0x00ff0000,\
                                      blue_mask=0x0000ff00,\
                                      endianness=4321,\
                                      framerate=15/1,\
                                      width=%d,height=%d" \
                                        % (self.stage_width,
                                           self.stage_height))
        if dest:
            dest.set_caps (caps)
 
    def stop_recording (self, stage):
        self.pipeline.set_state (gst.STATE_NULL)
 
class clutterTest:
    def __init__(self):
        #create a clutter stage
        self.stage = clutter.Stage()
        recorder = StageRecorder (self.stage)
        self.stage.connect('destroy', clutter.main_quit)
        #set the stage size in x,y pixels
        self.stage.set_size(500,200)
        #define some clutter colors in rgbo (red,green,blue,opacity)
        color_black = clutter.Color(0,0,0,255) 
        color_green = clutter.Color(0,255,0,255)
        color_blue = clutter.Color(0,0,255,255)
        #set the clutter stages bg color to our black
        self.stage.set_color(color_black)
        #create a clutter text actor for the first label
        self.label = clutter.Text()
        #set the labels font
        self.label.set_font_name('Mono 32')
        #add some text to the label
        self.label.set_text("Hello")
        #make the label green
        self.label.set_color(color_green )
        #put the label in the center of the stage
        (label_width, label_height) = self.label.get_size()
        label_x = (self.stage.get_width()/2) - label_width/2
        label_y = (self.stage.get_height()/2) - label_height/2
        self.label.set_position(label_x, label_y)
        #make a second label similar to the first label
        self.label2 = clutter.Text()
        self.label2.set_font_name('Mono 32')
        self.label2.set_text("World!")
        self.label2.set_color(color_blue )
        (label2_width, label2_height) = self.label2.get_size()
        label2_x = (self.stage.get_width()/2) - label2_width/2
        label2_y = (self.stage.get_height()/2) - label2_height/2
        self.label2.set_position(label2_x, label2_y)
        #hide the label2 
        self.label2.set_opacity(0)
        #create a timeline for the animations that are going to happen
        self.timeline = clutter.Timeline()
        self.timeline.set_duration(2000)
        self.timeline.connect('completed', self.quit)
        #how will the animation flow? ease in? ease out? or steady?
        #clutter.LINEAR keeps the animation steady
        labelalpha = clutter.Alpha(self.timeline,clutter.LINEAR)
        #make some opacity behaviours that we will apply to the labels
        self.hideBehaviour = clutter.BehaviourOpacity(255,0x00,labelalpha)
        self.showBehaviour = clutter.BehaviourOpacity(0x00,255,labelalpha)
        #add the items to the stage
        self.stage.add(self.label2)
        self.stage.add(self.label)
        #show all stage items and enter the clutter main loop
        self.stage.show_all()
        self.swapLabels()
        clutter.main()
 
    def quit (self, *args):
        self.stage.destroy ()
        clutter.main_quit ()
 
    def swapLabels(self):   
        #which label is at full opacity?, like the highlander, there can be only one
        if(self.label.get_opacity()>1 ):
            showing = self.label
            hidden = self.label2
        else:
            showing = self.label2
            hidden = self.label
        #detach all objects from the behaviors
        self.hideBehaviour.remove_all()
        self.showBehaviour.remove_all()
        #apply the behaviors to the labels
        self.hideBehaviour.apply(showing)
        self.showBehaviour.apply(hidden)
        #behaviours do nothing if their timelines are not running
        self.timeline.start()
 
 
if __name__=="__main__":
    test = clutterTest()