Tuesday, May 11, 2010

gst-opencv design choices

While wrapping more OpenCV functions into GstElements yesterday, I faced an interesting design choice: how to map an OpenCV function's parameters to a GstElement's properties.

Take a look at the cvSmooth docs. You can see that it has a type parameter, followed by param1, param2, param3 and param4, which have different semantics depending on which type is used. The question is: how should these be exposed in the 'cvsmooth' GstElement?

I could think of 3 different choices here:

1) Go straightforward and use the same API as OpenCV
As a result, we would have an element with properties named after the OpenCV parameters:
"cvsmooth type=blur param1=5 param2=3 param3=0.0 param4=0.0"

This results in a rather unintuitive API, but it stays aligned with OpenCV's, making it easy for people who already know one API to use the other. The element docs would mostly point to OpenCV's docs, and the resulting code is simple and easy to maintain.

2) Have multiple elements: cvsmoothblur, cvsmoothgaussian, cvsmooth...
We could put each smooth algorithm (type) into a separate element whose properties reflect the semantics of that type. For example, we would have cvsmoothblur, cvsmoothmedian, and so on, one for each type. The properties of each would be named according to their semantics, instead of some paramX.

This provides a nice API, but it multiplies the number of elements for every function that has this kind of type parameter. I don't know how common that is; this might be a good solution if there are only a few of them. A downside is that switching the type at runtime requires hot-swapping elements, but I don't think that is a common use case.

3) Expose a property for each semantic and use it only when its type is selected.
We still keep it to one element, but we add one property for each semantic a parameter can assume. Each property would only be used if its corresponding type is selected.

For example: param3 might be the "gaussian standard deviation" or the "color sigma", if type is gaussian or bilateral respectively. We would add those 2 properties (standard-deviation and color-sigma), which would only be used when their types are selected.

This makes those lines possible:
"cvsmooth type=gaussian standard-deviation=5.0" or
"cvsmooth type=bilateral color-sigma=1.0"

The code is a little messier than in the options above.
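To make the "messier" part concrete, here is a rough sketch of what option 3's dispatch could look like. This is plain C with invented names (SmoothType, SmoothProps, smooth_param3); real code would use GObject properties on the element, but the shape of the switch is the point:

```c
#include <assert.h>

/* Hypothetical sketch of option 3: one element, one property per
 * semantic. Only the property matching the selected type is consulted
 * when building the actual cvSmooth call. */

typedef enum { SMOOTH_BLUR, SMOOTH_GAUSSIAN, SMOOTH_BILATERAL } SmoothType;

typedef struct {
  SmoothType type;
  int width;                 /* param1 for blur/gaussian */
  double standard_deviation; /* becomes param3 when type == gaussian */
  double color_sigma;        /* becomes param3 when type == bilateral */
} SmoothProps;

/* Pick the value cvSmooth's param3 should receive for this type. */
double
smooth_param3 (const SmoothProps *p)
{
  switch (p->type) {
    case SMOOTH_GAUSSIAN:  return p->standard_deviation;
    case SMOOTH_BILATERAL: return p->color_sigma;
    default:               return 0.0; /* param3 is unused for this type */
  }
}
```

Every parameter with variable semantics needs a dispatcher like this, and properties that are silently ignored for the current type can confuse users, which is why this option reads worse in code than the other two.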


Given those options, I really don't like option 3; I'm considering 1 or 2. From a quick look at some pages of OpenCV's transformations API, I could see that this pattern is not very common, and when it does happen, only one parameter has a 'variable semantic'. It looks like I picked the trickiest one as my example.

So, which option would you choose?

Thursday, May 6, 2010

Hacking in gst-opencv

It has been years since I last used OpenCV. We (some friends and I, working in a lab at the university) used it to process images in batches or to process frames live from a webcam. Things would have been much easier if I had known GStreamer back then. That said, I decided to take a look at gst-opencv to see what we can already do with it.

There are a few features wrapped as elements at the moment and they work quite well, but it could have a much larger feature set, and it seems no one has been working on it recently. Given that, and having a little spare time these days, I decided to start hacking on gst-opencv and try to put it together with the other modules. I'd prefer to have a gst-opencv module, but adding it as a new plugin in gst-plugins-bad is also an option. What do you think?


Current features

[Edited: It seems the videos can only be seen directly on the post at blogspot]

Some nice stuff can already be done with the current elements. Let me show a few examples.

I recorded this video outside a few minutes ago:

[video]

We can use edgedetect on it and see its edges:
Command: gst-launch uridecodebin uri=youruri ! queue ! ffmpegcolorspace ! edgedetect ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=result.ogg

[video]


Or we can segment it with pyramidsegment and get a nice effect (some people might enjoy this in PiTiVi?) or use it in machine vision applications.
Command: gst-launch uridecodebin uri=youruri ! queue ! ffmpegcolorspace ! pyramidsegment ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=result.ogg


[video]


OpenCV already ships some face detection profiles (on Ubuntu, they go into /usr/share/opencv/haarcascades/), so you can use them with the facedetect element, or train your own classifiers to use with it. I stuck with the default and tried it on some pictures; here are 2 of them:




I think it works pretty well :)
You can disable the circles and just get messages with the faces' positions, then do whatever you want with them.

Other than those, there's also 'textwrite', 'templatematch' and 'faceblur' elements.


Current work

I've been working on a simple base class that will make it easier to map simple one-to-one OpenCV functions into elements, providing some common properties (like ROI and COI) and GstBuffer-IplImage conversion. This will help cover more functions and should be enough to get me reacquainted with the API; after that I can go for the fancier stuff.

For example, take the cvSmooth function: we should only have to write code to map its parameters into properties, plus a simplified chain function that works directly on IplImages instead of GstBuffers.
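The division of labor can be sketched in plain C. Everything here (MockImage, base_chain, the invert transform) is invented for illustration and carries no GStreamer or OpenCV dependency; the idea is just that the base class does the buffer-to-image wrapping once, and a subclass only supplies a function that works on images:

```c
#include <assert.h>

/* Mock of the base-class idea: the "base" owns the GstBuffer <->
 * IplImage conversion, subclasses only provide a transform that
 * operates on the image. All names here are hypothetical. */

typedef struct {
  int width, height;
  unsigned char *data;
} MockImage;

typedef void (*transform_func) (MockImage *img);

/* "Base class" chain function: wrap the raw buffer as an image
 * (no copy, analogous to sharing the GstBuffer's data), call the
 * element-specific transform, and hand the bytes back. */
unsigned char *
base_chain (unsigned char *buf, int width, int height, transform_func f)
{
  MockImage img = { width, height, buf };
  f (&img);
  return buf;
}

/* A "subclass" only has to write something like this (here a
 * trivial single-channel invert standing in for, say, cvSmooth). */
void
invert (MockImage *img)
{
  for (int i = 0; i < img->width * img->height; i++)
    img->data[i] = 255 - img->data[i];
}
```

With a base class shaped like this, wrapping a new one-to-one OpenCV function reduces to writing the transform and registering its parameters as properties.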


Repositories

gst-opencv's main repository is at github, and I have my personal branches here. From time to time I ping Elleo to update the github repository, but I hope we can get this upstream in the next few weeks.