This is inspired by the ‘Last week in Krita’ and ‘Last week in KOffice’ series. Boud, the author of these series, mentioned that this is a nice way to “keep everyone who is interested, developers, contributors, users — everyone — up to date on what is happening in the […] community.” I do not promise I will be able to keep up with weekly updates AND still work on development :), but I will do my best to summarize what is happening on a frequent basis.
I guess starting the series with a recap is a good idea. I have worked on this project for a little more than 5 weeks now, so this is going to be the "Week 1-6" edition. Two weeks ago I posted an introduction to the project, and the reaction so far has been mostly positive. I know of at least one KDE project that is already trying QtGStreamer, which is good because it lets us tune the API against real-world applications: no official release has been made yet, so there is still flexibility to do so. And because it is important for us to get more people on board, we usually hang out in the #gstreamer IRC channel on Freenode: please join us there if you have a question, want to help with development or just need a hand trying QtGStreamer in your application.
Since that last update we have added some of the basic blocks from the GStreamer API that were still missing, including support for Events, Buffers and TagLists. Some of these are already in the master repository and some are still in review, but we are generally in good shape and ready for prototyping. The plan for the next two weeks is to stabilize what has already been added and to refactor some of the code that bridges GLib/GObject and Qt. George and I have some interesting ideas on how to deal with the challenges that face every GStreamer-binding project. There are a lot of special cases and design choices in the framework that require binding authors to be creative and decide where to draw the line: we can give users full access to the engine at the expense of making the API complex, or we can hand-hold the developer and maybe limit the inherent flexibility of the engine. It is a fine line, but by now we *think* we have a pretty good overview of the main use cases, and we are in a good position to try some things.
I realized after reading the paragraph above that it might not make much sense to people unfamiliar with how GStreamer is designed, so I will write a "5 minute overview" of the technology. If you have more time, I recommend reading the first chapter of the online docs instead. In short, the GStreamer framework is pipeline-based. A pipeline is composed of different elements, which are GStreamer's building blocks. See the image below for a pipeline that could be assembled for a simple Ogg player:
In this example each block is a GStreamer element. An element in its simplest form is a box that generates, processes or outputs data: it can have one or more "pads", which are the ports that make the connections so data can flow from one element to another. Pads can be source pads (produce data) or sink pads (consume data). There are over 1000 elements in a typical GStreamer installation, dealing with everything from getting data from cameras (a v4l2 element with a single source pad) to decoding VP8 streams (a vp8 decoder element) to displaying data on the screen (several specialized elements that have only one sink pad, including for example xvimagesink).
The pipeline concept is what makes GStreamer so flexible. It is not that different from Unix pipelines in design: the idea is that you build complex solutions starting from simple, discrete elements. As this is supposed to be a 5-minute introduction I will not go into much more detail, but it is important to mention that in GStreamer events between elements can flow both upstream and downstream, and there are facilities that let elements communicate back to the application and vice versa using messages and queries, as well as facilities for assembling pipelines dynamically.
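To make the message mechanism a bit more concrete, here is a rough sketch of watching a pipeline's bus with QtGStreamer. Treat the names as an approximation of the bindings' API at this stage (QGst::Bus, addSignalWatch and QGlib::connect are assumptions based on the current master branch), not a definitive reference:

```cpp
// Sketch: forward bus messages from the pipeline to a member function.
#include <QGst/Pipeline>
#include <QGst/Bus>
#include <QGst/Message>
#include <QGlib/Connect>

class Player
{
public:
    void watchBus(const QGst::PipelinePtr & pipeline)
    {
        QGst::BusPtr bus = pipeline->bus();
        bus->addSignalWatch(); // emit a "message" signal for each bus message
        QGlib::connect(bus, "message", this, &Player::onBusMessage);
    }

private:
    void onBusMessage(const QGst::MessagePtr & message)
    {
        if (message->type() == QGst::MessageEos) {
            // end of stream: stop the pipeline, update the UI, etc.
        } else if (message->type() == QGst::MessageError) {
            // an element reported a fatal error
        }
    }
};
```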
So in QtGStreamer we give you all the power to link these GStreamer elements as you wish. A real-world example might help: let us say you want to display the image of your webcam in a Qt application window. On Linux you can do this by linking v4l2src to a qwidgetvideosink, which, as the name implies, is a QtGStreamer element that paints video onto a QWidget. Actual code for this looks something like:
//Create the pipeline
m_pipeline = QGst::Pipeline::create();
//Construct the Video4Linux source element
QGst::ElementPtr src = QGst::ElementFactory::make("v4l2src");
//Construct the QWidget sink element
m_sink = QGst::ElementFactory::make("qwidgetvideosink");
//Add both to the pipeline
m_pipeline->add(src, m_sink);
//Link the source to the sink
src->link(m_sink);
//Tell the sink to use one widget defined in our UI file
m_sink->setProperty("widget", ui->videoWidget); // "videoWidget" is a placeholder for your own widget
//Start the pipeline
m_pipeline->setState(QGst::StatePlaying);
That’s it: video should now flow from your v4l2 driver directly to your screen. The example above can be expanded: it is typical to include an element called "ffmpegcolorspace" between the v4l2src and the qwidgetvideosink, to handle color space conversion if necessary. You could also add an element that duplicates the data before it reaches the widget sink, followed by a VP8 encoder element, a Matroska muxer and an element that writes data to a file, and your application would already compress and record WebM video. But this is a quick tutorial, so we will not expand on that right now. The idea is to show that you have a lot of flexibility, and QtGStreamer already lets you construct any pipeline you wish using the make and link functions.
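For the curious, that recording branch could be sketched roughly like this, reusing the src, m_sink and m_pipeline from the snippet above. The element names are the standard GStreamer ones; the add/link calls approximate the QtGStreamer API and are an assumption, not final:

```cpp
// Sketch: split the webcam stream with a tee, show one branch on
// screen and encode the other to a WebM (VP8 + Matroska) file.
QGst::ElementPtr tee    = QGst::ElementFactory::make("tee");
QGst::ElementPtr queue1 = QGst::ElementFactory::make("queue");
QGst::ElementPtr queue2 = QGst::ElementFactory::make("queue");
QGst::ElementPtr enc    = QGst::ElementFactory::make("vp8enc");
QGst::ElementPtr mux    = QGst::ElementFactory::make("matroskamux");
QGst::ElementPtr file   = QGst::ElementFactory::make("filesink");
file->setProperty("location", "capture.webm");

m_pipeline->add(src, tee, queue1, m_sink, queue2, enc, mux, file);
src->link(tee);
tee->link(queue1); queue1->link(m_sink); // display branch
tee->link(queue2); queue2->link(enc);    // recording branch
enc->link(mux);    mux->link(file);
```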
But constructing pipelines manually can be complex. This is where higher-level objects come in handy: next week I plan to continue this series with an update on the progress of the project, and at the same time write about how GStreamer's pre-made bins (complex elements that internally contain their own collection of elements) can be used to easily play/decode/capture almost any kind of media on several platforms, as they auto-discover the elements that need to be used and connect them automatically for you.
Disclaimer: the above description of GStreamer’s design is extremely simplified of course, so I recommend checking the documentation if you are really interested in finding out more about its internals.
8 responses to “Last week in QtGStreamer – a recap (and an overview of GStreamer)”
Very interesting. Just as I was about to develop an application requiring video input, and thought my only choice for a somewhat platform independent solution would be to use GStreamer.
Having Qt bindings for that would certainly ease my pain 🙂
I’ll check them out this weekend. Good luck!
Thanks for the update, I like the scope of your project. The flexibility to build your own pipeline within Qt allows customization for applications with specific needs, and support for the playbin concept lets developers get some typical use cases working with less code and complexity. It follows the GStreamer line of thinking. Is there any way that developers could take advantage of specific pipelines that others have already implemented? Maybe some sort of plugin mechanism whereby I could write a specific pipeline for my needs using QtGStreamer and then make it available to other developers in such a way that they can just use it the same way they might use a more generic playbin?
It is simpler than that, no plugins needed. You can easily pass a complete pipeline to another user as a simple string. QtGStreamer has methods to parse pipelines from strings, exactly like the gst-launch tool in GStreamer. So you can have code like:
QString desc("v4l2src ! video/x-raw-yuv, framerate=15/1 ! gamma name=gamma ! videobalance name=videoBalance ! tee name=duplicate ! queue ! xvimagesink name=videosink duplicate. ! queue name=linkQueue ! ffmpegcolorspace ! xvimagesink name=videosink2");
m_bin = QGst::Bin::fromDescription(desc);
This creates the bin exactly as described, with no need to instantiate each element and link it manually. So we can reuse common pipelines found in the GStreamer docs or in the source code of other apps, and maybe also exchange them via a wiki or some other kind of online documentation.
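Putting such a bin to use could look roughly like this (a sketch only; the pipeline handling around it is assumed, and videotestsrc is just a stand-in source for illustration):

```cpp
// Sketch: parse a pipeline description, wrap it in a bin and play it.
QGst::BinPtr bin = QGst::Bin::fromDescription(
        "videotestsrc ! ffmpegcolorspace ! xvimagesink");
QGst::PipelinePtr pipeline = QGst::Pipeline::create();
pipeline->add(bin); // a bin is itself an element, so it can be added
pipeline->setState(QGst::StatePlaying);
```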
I just want to say that this project is really GREAT!
I’m developing an instant messaging program with webcam support with Qt and this makes it a lot easier!!
BTW, I’m developing with Qt Creator and the application has to work under Linux and Windows; will there be any problem? Anyway, I hope to test it tomorrow…
Oh! And I didn’t know about the “fromDescription()” function! That’s also great! I’m going to try it right now 🙂
Thanks for making this library
Well, I am also developing in Qt Creator, so that is not a problem. We have not tested it on Windows so far, but as GStreamer and Qt are available there, it should be possible to use it as well. Windows is not our primary target, though, so we will probably only look at it after the Linux/Unix version is a little more complete.
I have been trying to compile it under Windows. I installed all the dependencies (Qt, GStreamer, CMake, Boost, automoc, GLib, Bison, Flex, Doxygen and maybe more), but when I run cmake-gui from the Qt command prompt (otherwise it doesn’t find MinGW), and after defining a lot of parameters that CMake does not find on its own, I’m stuck with this:
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
used as include directory in directory C:/Qt/Projects/qtgstreamer/elements
anyway I will keep trying 🙂
George was also doing some investigation into compilation on Windows today. He committed some changes that let you build statically, skip the examples or tests, and skip building the sink if the video include directory is not found. That last one seems to apply to you, so update your git checkout and let us know how it goes. This blog is really not the best channel for communication: you can find us in the #qtgstreamer or #gstreamer channels on Freenode (IRC) while the mailing list is not ready.