So I’m at the Boston Summit this weekend. Mo showed off her portable usability lab; she posted about the lab before here, and Jason Clinton posted a summary of the summit session here.

The DVR hardware in the setup outputs four AVI files, one for each camera. The first file has the audio encoded in it. Having four files is cumbersome, though: it’s much better to see the camera focused on the user’s face at the same time as the video focused on the user’s hands, and at the same time as the view that shows the user’s screen.

That’s where gstreamer comes in. It’s possible to write a pipeline that takes the four videos and composites them into one 4-way split screen.

In Mo’s post she showed an earlier pipeline I came up with, but it was very slow and lacked audio.

I’ve been reading up on gstreamer, searching the internets for example pipelines, and so on, and now have a better pipeline. Someone here at the summit asked me to check it into git, so I did that today in the usability-lab module.
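For the curious, here’s a sketch of what such a pipeline can look like. The file names, frame sizes, and quadrant positions below are made up for illustration (the real pipeline lives in the usability-lab module), it assumes GStreamer 0.10, and it leaves the audio track out:

```shell
# Hypothetical 4-way split: scale each camera's video to 320x240 and
# place it in one quadrant of a 640x480 videomixer canvas.
gst-launch-0.10 \
    videomixer name=mix \
        sink_0::xpos=0   sink_0::ypos=0   \
        sink_1::xpos=320 sink_1::ypos=0   \
        sink_2::xpos=0   sink_2::ypos=240 \
        sink_3::xpos=320 sink_3::ypos=240 \
      ! ffmpegcolorspace ! xvimagesink \
    filesrc location=face.avi   ! decodebin ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=320,height=240 ! mix.sink_0 \
    filesrc location=hands.avi  ! decodebin ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=320,height=240 ! mix.sink_1 \
    filesrc location=screen.avi ! decodebin ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=320,height=240 ! mix.sink_2 \
    filesrc location=room.avi   ! decodebin ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=320,height=240 ! mix.sink_3
```

To write the result out instead of watching it, replace xvimagesink with an encoder and a filesink.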

8 Responses to “Video 4-way split screen gstreamer pipeline”


  1. [...] original post here: Ray Strode: Video 4-way split screen gstreamer pipeline Share and [...]

  2. nicu Says:

    I am still scared about that pipeline and the time you need to research to come up with it. GStreamer needs a friendlier way to build such things.

  3. halfline Says:

    I don’t necessarily disagree with you, but on the other hand I’m sure people already familiar with gstreamer could have done this with a lot less effort than it took me.

    I think there used to be an app that let you draw pipelines; I’m not sure what happened to it. Something like that, which guided users and helped prevent them from assembling broken pipelines, would help.

    Other things that would help:

    1) if the gst-inspect-0.10 -a output listed source and sink pad properties. (The reason my original pipeline had four videoboxes carefully sized and overlapped was that I didn’t know videomixer had xpos and ypos sink properties, or in fact that sinks could have properties at all.)

    2) if the man page explained in a little more detail why the various elements are in its example pipelines

    3) if the man page listed some example pipelines with explicit sources and sinks named. Gstreamer lets you just write "name." instead of "name.sink_0" or whatever. All the examples show only "name.", so it wasn’t clear to me what to do if I wanted to make a specific source go to a specific sink.
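    For instance (a made-up gst-launch-0.10 invocation, not taken from the man page), these two forms differ only in how the mixer pad is referenced:

    ```shell
    # Implicit: "mix." lets gst-launch pick the next free request pad on mix.
    gst-launch-0.10 videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
        videotestsrc ! mix.

    # Explicit: "mix.sink_0" sends this branch to a specific named sink pad,
    # which matters once per-pad properties like xpos/ypos are in play.
    gst-launch-0.10 videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
        videotestsrc ! mix.sink_0
    ```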

    On the other side of the coin:

    1) I don’t think gst-launch-0.10 is really being pushed as a user tool. It probably would have been easier for me to use PiTiVi. I didn’t, because I wanted to learn about gstreamer and I thought a gst-launch command would be easier to batch-process with.

    2) Many people who develop solutions probably don’t use gst-launch either. The python bindings are probably in some ways more straightforward than the pipeline syntax (they’re something I want to learn more about, too, at some point).

  4. liberforce Says:

    I think something similar was done a while back with the multifilesrc source by Danielle Madeley:
    http://dannipenguin.livejournal.com/210041.html?thread=763769

    The script:
    http://bulk-www.ucc.asn.au/~davyd/ucc-test/time-lapse.shtxt

    But I’m not sure multifilesrc is well maintained.

  5. halfline Says:

    Interesting. That pipeline is actually very similar to the first one I did (mentioned in Mo’s post linked above).


  6. [...] Ray Strode helped me write a gstreamer pipeline to construct the videos into a quad-screen video. (Ray has sinced worked out a much more efficient pipeline, and created a git repository on GNOME.org to make it [...]

  7. alex Says:

    Would that work for v4l2 devices (two of them side by side)?
    This works, but the output of one of the devices is choppy:

    gst-launch v4l2src device=/dev/video1 ! videobox left=-1 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink mix. v4l2src device=/dev/video0 ! videoscale ! videobox left=-640 ! mix.

    However, I am unable to get something based on your filter to work. Any hint?

    gst-launch -vv \
    v4l2src device=/dev/video0 name=upper_right_video \
    ! videoscale \
    ! ffmpegcolorspace \
    ! videomixer.mymix.sink_0. \
    v4l2src device=/dev/video1 name=upper_left_video \
    ! videoscale \
    ! ffmpegcolorspace \
    ! videomixer.mymix.sink_1. \
    videomixer name=mymix \
    sink_0::xpos=1 sink_0::ypos=0 sink_0::zorder=0 \
    sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1 \
    ! ffmpegcolorspace \
    ! videoscale \
    ! xvimagesink mymix.
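A likely culprit in the command above, sketched here untested: in gst-launch syntax, "videomixer.mymix.sink_0." reads as a pad reference on an element named "videomixer" rather than on "mymix", and the trailing "mymix." after xvimagesink tries to link the mixer a second time. A variant that sticks to plain mymix.sink_N references (GStreamer 0.10 assumed; the 640x480 caps are illustrative) would be:

```shell
# Two v4l2 cameras side by side on one 1280x480 canvas (untested sketch).
gst-launch-0.10 -vv \
    videomixer name=mymix \
        sink_0::xpos=0   sink_0::ypos=0 sink_0::zorder=0 \
        sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1 \
      ! ffmpegcolorspace ! xvimagesink \
    v4l2src device=/dev/video0 ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=640,height=480 ! mymix.sink_0 \
    v4l2src device=/dev/video1 ! videoscale ! ffmpegcolorspace \
      ! video/x-raw-yuv,width=640,height=480 ! mymix.sink_1
```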


Comments are closed.