On 08/31/2018 01:37 AM, Lucas Magalhães wrote:
> On Wed, Aug 22, 2018 at 3:49 AM Hans Verkuil <hverk...@xs4all.nl> wrote:
>>
>> My basic idea was that you use a TPG state structure that contains the
>> desired output: the sensor starts with e.g. 720p using some bayer
>> pixelformat, the debayer module replaces the pixelformat with e.g.
>> PIX_FMT_RGB32, a grayscale filter replaces it with PIX_FMT_GREY, and
>> that's what the TPG for the video device eventually will use to generate
>> the video.
>>
>> This assumes of course that all the vimc blocks only do operations that
>> can be handled by the TPG. Depending on what the blocks will do, the TPG
>> might need to be extended if a feature is missing.
>>
> Hi Hans,
> 
> I started working on this task, but I have another question. I understand
> that the final image should have the correct format, as if the frame had
> passed through the whole topology. But the operations themselves don't
> need to be performed by each entity. For example, a scaled image will have
> deformations that will not be present if it is generated at the end of the
> pipeline at the final size. You just need the format, size and properties
> to be correct. Did I get that right?

Yes, although this example is unfortunate, since the TPG can actually scale:
with tpg_reset_source you define the width/height of the 'source', and with
tpg_s_crop_compose you can define the crop and compose rectangles, which in
turn translates to scaling. The TPG has a poor man's scaler, so if you scale
up by a factor of 2, you will in fact see those deformations.

But if you have a complex pipeline with e.g. two scalers with additional
processing in between, then that cannot be modeled accurately with the TPG.
So be it. There is a balance between accuracy and performance, and I think
this is a decent compromise.

Regards,

        Hans
