Hi,

thanks for the explanation.


Does it sound good to try to create a data storage where the converted data is 
kept per real frame (created by the first filter)?

Following filters would then use QVideoFrames with a custom 
QAbstractVideoBuffer::UserHandle and an id into that storage.
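
Roughly something like this is what I mean; just a sketch, the MatStorage and 
CvMatVideoBuffer names are made up, only QAbstractVideoBuffer, UserHandle and 
handle() are existing Qt API:

#include <QAbstractVideoBuffer>
#include <QGlobalStatic>
#include <QMap>
#include <QMutex>
#include <QVariant>
#include <opencv2/core.hpp>

// Hypothetical process-wide storage: frame id -> converted mats keyed by CV type.
struct MatStorage {
    QMutex mutex;
    QMap<int, QMap<int, cv::Mat>> matsPerFrame;
    int nextId = 0;
};
Q_GLOBAL_STATIC(MatStorage, matStorage)

// Frame buffer that only carries an id into the storage; no CPU pixel access.
class CvMatVideoBuffer : public QAbstractVideoBuffer
{
public:
    explicit CvMatVideoBuffer(int id) : QAbstractVideoBuffer(UserHandle), m_id(id) {}

    MapMode mapMode() const override { return NotMapped; }
    uchar *map(MapMode, int *numBytes, int *bytesPerLine) override
    {
        if (numBytes) *numBytes = 0;
        if (bytesPerLine) *bytesPerLine = 0;
        return nullptr; // the data lives in the storage, not in mappable memory
    }
    void unmap() override {}

    QVariant handle() const override { return m_id; } // the id into the storage

private:
    int m_id;
};

The first filter would convert once, put the mats into the storage under a new 
id, and return QVideoFrame(new CvMatVideoBuffer(id), size, 
QVideoFrame::Format_User); followed filters would read frame.handle().toInt() 
and look the mats up.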


________________________________
From: Jason H <jh...@gmx.com>
Sent: Wednesday, January 9, 2019 4:28:01 PM
To: Val Doroshchuk
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


>> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
>> cast to my custom type.

> Sorry, but just a note.
> QAbstractVideoBuffer is an abstraction that was only supposed to be an 
> implementation detail for accessing the data, nothing more.


>> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat

> Why do you need to use cv::Mat there?
> If needed, you could create a QAbstractVideoBuffer with UserHandle and return an id 
> to access cv's data without downloading/uploading during mapping.

Because the video frame needs to be converted to a format that OpenCV 
understands. Various OpenCV functions need particular pixel representations: 
8-bit grayscale, floating point, or float/int arrays for 3- or 4-channel color. 
I basically want to do this conversion once, then pass that on to later stages 
in the pipeline. In general the OpenCV functions don't modify the frame itself 
but produce information about it. Some of my filters emit the results of the 
computation (a list of line segments for houghLines), while others, like sobel, 
create a new image. Depending on the pipeline, I might want multiple OpenCV 
representations. I generally use QPainter, though, to draw on the frame after 
having gotten the information.

>>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
>> and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
>> sobel, houghLines, toVideoFrame]

> Is converting QVideoFrame to cv::Mat and back a performant operation in this 
> case?
> (qt_imageFromVideoFrame(QVideoFrame) can convert to QImage and should be 
> relatively fast.)

Well, I generally don't need to convert back, though it is possible; there are 
OpenCV functions that produce an image. I'm fine with using OpenCV to extract 
info and keeping the drawing in Qt (QPainter), so there's no need for me to 
convert it back. I will say, though, that the OpenCV functions are _very_ fast, 
supporting multiple acceleration methods (it uses Eigen). I wrote my own 
implementations of some of them purely in Qt, and the OpenCV versions put them 
to shame. Image saving is the only thing where Qt is faster (IMHO; empirical 
evidence not yet collected). One other technique I use, when I don't need 
pixel-perfect accuracy, is to scale the image down (say to 50% or 25%), which 
cuts the processing time to roughly a quarter or a sixteenth respectively.
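
The scaling itself is cheap; roughly this (a sketch: fullRes is assumed to be 
the full-size cv::Mat, and INTER_AREA is just my pick for downscaling):

#include <opencv2/imgproc.hpp>

// Analyze a half-resolution copy when pixel-perfect accuracy isn't needed:
// 50% per axis is about 1/4 of the pixels, 25% is about 1/16.
cv::Mat small;
cv::resize(fullRes, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
// run the expensive analysis on 'small', then scale any returned coordinates back up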

I would expect some way to attach cv::Mats to a frame. That way I could look 
through which mats are available and only pay a penalty when I have to. If a 
previous filter operation already produced an 8-bit grayscale mat, I would just 
use that; if it doesn't exist, I'd create it. Later filters could then use it 
or create their own.

if (!frame.containsMat(CV_8U))
    frame.insertMat(CV_8U, frame.toMat(CV_8U)); // convert once, cache it on the frame

cv::Mat mat = frame.mat(CV_8U);
...
frame.insertMat(CV_8U, mat); // if I need to save the result (like for sobel)

WRT the "Big Picture": I'm trying to a point where I can in QML, have filters, 
which are OpenCV functions programmed to a dynamic filter pipeline.  My 
approach is working but the cost of all the conversions is very expensive. 
We're talking 50msec per frame, which gets me down into 1 filter is 15pfs 
territory, 2 filters is 5 fps, etc. My source is 25-29.97FPS. The way I've been 
doing this is copying the QVideoFrame to QImage, then using that for a Mat.If I 
can just pay the conversion penalty once, I think that would go a long way in 
helping.

Maybe what I need to do is wrap the cv::Mat in a QVariant, store it as 
metadata, and use QVideoFrame's availableMetaData()?
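
Something like this is what I have in mind (a sketch: the key name is 
arbitrary, frame is the QVideoFrame* passed to run(), gray8 is the converted 
mat, and it only helps as long as the same QVideoFrame travels down the chain):

#include <QVideoFrame>
#include <QVariant>
#include <opencv2/core.hpp>

Q_DECLARE_METATYPE(cv::Mat) // cv::Mat is default-constructible and copyable, so QVariant can hold it

// In the filter that did the conversion:
frame->setMetaData(QStringLiteral("cvMat8U"), QVariant::fromValue(gray8));

// In a later filter:
cv::Mat gray8;
const QVariant v = frame->metaData(QStringLiteral("cvMat8U"));
if (v.isValid())
    gray8 = v.value<cv::Mat>(); // shallow copy, shares the pixel data
// otherwise convert here and setMetaData() so the next filter doesn't have to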



------------------------------------------------------------

From: Development <development-boun...@qt-project.org> on behalf of Jason H 
<jh...@gmx.com>
Sent: Tuesday, January 8, 2019 6:33:14 PM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


I'm still plugging away at this. My life is being made difficult by not being 
able to get a pointer to the buffer. For OpenCV, some routines want a color 
image, others want an 8-bit grayscale one. It would be really great if I could 
use both of these at the same time.

For example: take the color video frame, make it grayscale, then run houghLines 
on it, and use that information to highlight the lines in the color frame. I 
tried to do this with a simple QMap<int, cv::Mat>, but there's no way I can 
access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I 
might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels 
very hacky (plane 0 = color, plane 1 = B&W), and in addition the type sometimes 
needs to change from quint8 to float.
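
For concreteness, the kind of thing each filter ends up doing looks roughly 
like this (just a sketch: it assumes the frame maps on the CPU as RGB32, and 
the Canny/Hough parameters are made up):

#include <QVideoFilterRunnable>
#include <QVideoSurfaceFormat>
#include <QImage>
#include <QPainter>
#include <opencv2/imgproc.hpp>
#include <vector>

class HoughLinesRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &, RunFlags) override
    {
        if (!input->map(QAbstractVideoBuffer::ReadOnly))
            return *input;

        // Wrap the mapped pixels, then take a copy we own before unmapping.
        QImage color = QImage(input->bits(), input->width(), input->height(),
                              input->bytesPerLine(), QImage::Format_RGB32).copy();
        input->unmap();

        // The conversion I'd like to pay for only once per frame:
        cv::Mat bgra(color.height(), color.width(), CV_8UC4,
                     color.bits(), color.bytesPerLine());
        cv::Mat gray, edges;
        cv::cvtColor(bgra, gray, cv::COLOR_BGRA2GRAY);
        cv::Canny(gray, edges, 50, 150);

        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);

        // Draw the result back onto the color frame with QPainter.
        QPainter p(&color);
        p.setPen(Qt::red);
        for (const cv::Vec4i &l : lines)
            p.drawLine(l[0], l[1], l[2], l[3]);
        p.end();

        return QVideoFrame(color);
    }
};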

I feel like I'm really off in the weeds here and would like someone to chime 
in on whether I'm completely missing something or whether these are 
shortcomings in the Qt API.



Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" <jh...@gmx.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Pierre-Yves Siret" <py.si...@gmail.com>, "Qt development mailing list" 
<development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own 
custom types, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm 
running into a mental block with how this should work. Only map() gives pixel 
data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I could 
then cast to my custom type. Generally, when I'm fighting Qt this way, I'm 
doing something wrong.

I can convert between QImage and cv::Mat with:

// Wraps the QImage's pixels in a cv::Mat header; no copy, the QImage must outlive the Mat.
cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}

// Deep copy; safe to keep after the QImage is gone.
cv::Mat qimage_to_mat_cpy(QImage const &img, int format)
{
    return qimage_to_mat_ref(const_cast<QImage&>(img), format).clone();
}

// Wraps the Mat's pixels in a QImage; no copy, the Mat must outlive the QImage.
QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}


Is there an example of how to "properly" use Qt's video pipeline filters, 
frames, and buffers with OpenCV? I think there should be a class that converts 
a QVideoFrame to a cv::Mat, and one that converts a cv::Mat back to a 
QVideoFrame:
filters: [toMat, blur, sobel, houghLines, toVideoFrame]

Many thanks in advance.



Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" <jh...@gmx.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Pierre-Yves Siret" <py.si...@gmail.com>, "Qt development mailing list" 
<development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

I have been thinking about this more, and I think we also need to convert when 
the pipeline switches between internal formats. This would allow standard 
filter toolkits to be "a thing" for Qt.

For example, if my pipeline filters are written to use QImage (because of 
scanLine() and pixel()) and someone else's use cv::Mat (OpenCV), alternating 
between formats is not possible in the same pipeline. I think the panacea is to 
be able to convert not just at the end, but at any step:
[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, 
cv::Mat->QImage, QImage->QVideoFrame], where each format step is 
(inputFormat -> outputFormat).

Just my 0.02 BTC.



Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" <jh...@gmx.com>
To: "Pierre-Yves Siret" <py.si...@gmail.com>
Cc: "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

Thanks for pointing that out. I guess that could work. It's not as elegant as 
what I want, where everything presents the same way. Now each and every filter 
has to have:

if (flags & QVideoFilterRunnable::LastInChain) {
    // ... generate the frame for the backend per the surfaceFormat
}


As there are many surfaceFormats, that if(){} block is huge, and it gets 
duplicated in each filter. True, I can create a "final filter" that does this, 
to avoid all that boilerplate code that takes the frame and converts it back to 
what it needs to be. But what I suggested was that Qt should provide this 
automatically as part of the filter chain. The difference is this:


VideoOutput {
   filters: [sobel, houghLines]
}


VideoOutput {
   filters: [sobel, houghLines, final]
}
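
That "final" filter could be roughly this (a sketch only; 
renderedImageForHandle() is a made-up helper that fetches whatever the 
intermediate filters produced for that frame):

#include <QVideoFilterRunnable>
#include <QVideoSurfaceFormat>
#include <QImage>

QImage renderedImageForHandle(const QVariant &handle); // hypothetical lookup into shared storage

class FinalFilterRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &, RunFlags flags) override
    {
        // Only the last filter in the chain has to hand Qt a displayable frame.
        if (!(flags & QVideoFilterRunnable::LastInChain))
            return *input;

        // Already in a Qt-compatible format? Pass it through untouched.
        if (input->handleType() != QAbstractVideoBuffer::UserHandle)
            return *input;

        // Otherwise reconstruct a plain frame from the custom-format data.
        return QVideoFrame(renderedImageForHandle(input->handle()));
    }
};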

Ideally that final filter would check whether the frame matches what it 
expects and perform a conversion only if it does not. Maybe there's a way to 
register a conversion from a custom type to a QVideoFrame?
Also, if the VideoOutput is not needed*, the final filter need not be invoked.

By not needed, I mean the video output element is not visible, or its area is 
0. Sometimes we want to provide intel about the frames without affecting them. 
Currently this is inherently synchronous, which negatively impacts frame rate.
I should be able to use two (or more) VideoOutputs, one for real-time video 
display and another for an info-only filter pipeline, and these could be 
distributed across CPU cores. Unfortunately, the VideoOutput takes over the 
video source, forcing source-output mappings to be 1:1. It would be really nice 
if it could be 1:N. I experimented with this, and the first VideoOutput is the 
only one to receive frames from the source, and the only one with an active 
filter pipeline. How could I have 3 VideoOutputs, each with its own filter 
pipeline, and visualize them simultaneously?

Camera { id: camera }


VideoOutput {
   // Only this one works. If I move this after the next one, then that one works.
   filters: [sobel, houghLines]
   source: camera
}


VideoOutput {
   filters: [sobel, houghLines, final]
   source: camera
}

So to sum this up:
- Qt should provide automatic frame reconstruction for the final frame (that 
big if(){} block should be boilerplate)
- There should be a way to register a custom-format-to-QVideoFrame 
reconstruction function
- Allow multiple VideoOutputs (and filter pipelines) from the same source
-- Maybe an element for a pipeline with no video output?

Am I wrong in thinking any of that doesn't already exist, or that it's a good idea?


Sent: Saturday, December 22, 2018 at 5:10 AM
From: "Pierre-Yves Siret" <py.si...@gmail.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


The filter pipeline starts with a file or camera device, and various filters 
are applied sequentially to frames. However, I spend a lot of time converting 
frames to QImages for analysis and painting. I'm hoping there's a faster way to 
do this. Some of the filters alter the frame, some just provide information 
about the frame.

But each time, I have to unpack a QVideoFrame's pixels and make sure the filter 
can process that pixel format, or convert it to a format that it expects. I'm 
getting processing times of 55 msec on my MacBook Pro, which gives me 18 FPS 
from a 25 FPS video, so I'm dropping frames. I am starting to think the ideal 
would be to have some "Box of Pixels" data structure that both QImage and 
QVideoFrame can use. But for now, I convert each frame to a QImage at each 
stage of the pipeline.

I'm not that versed in image manipulation, but isn't that the point of the 
QVideoFilterRunnable::LastInChain flag?
Quoting the doc:
"flags contains additional information about the filter's invocation. For 
example the LastInChain flag indicates that the filter is the last in a 
VideoOutput's associated filter list. This can be very useful in cases where 
multiple filters are chained together and the work is performed on image data 
in some custom format (for example a format specific to some computer vision 
framework). To avoid conversion on every filter in the chain, all intermediate 
filters can return a QVideoFrame hosting data in the custom format. Only the 
last, where the flag is set, returns a QVideoFrame in a format compatible with 
Qt."

You could try using just one pixel format and use that in all your filters 
without reconverting it at each step.