On Fri, Aug 24, 2018 at 2:33 AM Nicolas Dufresne <nico...@ndufresne.ca> wrote:
>
> Le jeudi 23 août 2018 à 10:05 +0200, Paul Kocialkowski a écrit :
> > Hi,
> >
> > On Wed, 2018-08-22 at 14:33 -0300, Ezequiel Garcia wrote:
> > > On Wed, 2018-08-22 at 16:10 +0200, Paul Kocialkowski wrote:
> > > > Hi,
> > > >
> > > > On Tue, 2018-08-21 at 17:52 +0900, Tomasz Figa wrote:
> > > > > Hi Hans, Paul,
> > > > >
> > > > > On Mon, Aug 6, 2018 at 6:29 PM Paul Kocialkowski
> > > > > <paul.kocialkow...@bootlin.com> wrote:
> > > > > >
> > > > > > On Mon, 2018-08-06 at 11:23 +0200, Hans Verkuil wrote:
> > > > > > > On 08/06/2018 11:13 AM, Paul Kocialkowski wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > On Mon, 2018-08-06 at 10:32 +0200, Hans Verkuil wrote:
> > > > > > > > > On 08/06/2018 10:16 AM, Paul Kocialkowski wrote:
> > > > > > > > > > On Sat, 2018-08-04 at 15:50 +0200, Hans Verkuil wrote:
> > > > > > > > > > > Regarding point 3: I think this should be documented next 
> > > > > > > > > > > to the pixel format. I.e.
> > > > > > > > > > > the MPEG-2 Slice format used by the stateless cedrus 
> > > > > > > > > > > codec requires the request API
> > > > > > > > > > > and that two MPEG-2 controls (slice params and 
> > > > > > > > > > > quantization matrices) must be present
> > > > > > > > > > > in each request.
> > > > > > > > > > >
> > > > > > > > > > > I am not sure a control flag (e.g. 
> > > > > > > > > > > V4L2_CTRL_FLAG_REQUIRED_IN_REQ) is needed here.
> > > > > > > > > > > It's really implied by the fact that you use a stateless 
> > > > > > > > > > > codec. It doesn't help
> > > > > > > > > > > generic applications like v4l2-ctl or qv4l2 either since 
> > > > > > > > > > > in order to support
> > > > > > > > > > > stateless codecs they will have to know about the details 
> > > > > > > > > > > of these controls anyway.
> > > > > > > > > > >
> > > > > > > > > > > So I am inclined to say that it is not necessary to 
> > > > > > > > > > > expose this information in
> > > > > > > > > > > the API, but it has to be documented together with the 
> > > > > > > > > > > pixel format documentation.
> > > > > > > > > >
> > > > > > > > > > I think this is affected by considerations about codec 
> > > > > > > > > > profile/level
> > > > > > > > > > support. More specifically, some controls will only be 
> > > > > > > > > > required for
> > > > > > > > > > supporting advanced codec profiles/levels, so they can only 
> > > > > > > > > > be
> > > > > > > > > > explicitly marked with appropriate flags by the driver when 
> > > > > > > > > > the target
> > > > > > > > > > profile/level is known. And I don't think it would be sane 
> > > > > > > > > > for userspace
> > > > > > > > > > to explicitly set what profile/level it's aiming at. As a 
> > > > > > > > > > result, I
> > > > > > > > > > don't think we can explicitly mark controls as required or 
> > > > > > > > > > optional.
> > > > >
> > > > > I'm not sure this is entirely true. The hardware may need to be
> > > > > explicitly told what profile the video is. It may not even be the
> > > > > hardware, but the driver itself too, given that the profile may imply
> > > > > the CAPTURE pixel format, e.g. for VP9 profiles:
> > > > >
> > > > > profile 0
> > > > > color depth: 8 bit/sample, chroma subsampling: 4:2:0
> > > > > profile 1
> > > > > color depth: 8 bit, chroma subsampling: 4:2:0, 4:2:2, 4:4:4
> > > > > profile 2
> > > > > color depth: 10–12 bit, chroma subsampling: 4:2:0
> > > > > profile 3
> > > > > color depth: 10–12 bit, chroma subsampling: 4:2:0, 4:2:2, 4:4:4
> > > > >
> > > > > (reference: https://en.wikipedia.org/wiki/VP9#Profiles)
> > > >
> > > > I think it would be fair to expect userspace to select the right
> > > > destination format (and maybe have the driver error if there's a
> > > > mismatch with the meta-data) instead of having the driver somewhat
> > > > expose what format should be used.
> > > >
> > > > But maybe this would be an API violation, since all the enumerated
> > > > formats are probably supposed to be selectable?
> > > >
> > > > We could also look at it the other way round and consider that selecting
> > > > an exposed format is always legit, but that it implies passing a
> > > > bitstream that matches it or the driver will error (because of an
> > > > invalid bitstream passed, not because of a "wrong" selected format).
> > > >
> > >
> > > The API requires the user to negotiate via TRY_FMT/S_FMT. The driver
> > > usually does not return an error on an invalid format, and simply
> > > returns a format it can work with. I think the kernel-user contract
> > > has to guarantee that if the driver accepted a given format, it won't
> > > fail to encode or decode.
> >
> > Well, the issue here is that in order to correctly enumerate the
> > formats, the driver needs to be aware of:
> > 1. in what destination format the bitstream data is decoded to;
>
> This is covered by the stateful specification patch if I remember
> correctly. So the driver, if it's multi-format, will first return all
> possible formats, and later on will return the proper subset. So let's
> take an encoder: after setting the capture format, the enumeration of
> the raw formats could then be limited to what is supported for this
> output. For an H264 encoder, what could also affect this list is the
> profile that is being set. For a decoder, this list is reduced after
> sufficient header information has been given. Though for decoders it's
> also a limited case, since it only applies to decoders that do a
> conversion before releasing the output buffer (like CODA does).
>
> What isn't so nice about this approach is that it adds an implicit
> uninitialized state after open(), which isn't usual in the V4L2 API and
> might not be that trivial or nice to handle in drivers.

I don't think we defined it this way and, as you pointed out, it's
against the general V4L2 API semantics. After open(), there is a default
coded format set, and ENUM_FMT/FRAME_SIZES returns what's available for
the default format.

>
> > 2. what format conversions the VPU can do.
>
> Whenever possible, exposing the color conversion as a separate m2m
> driver is a better approach. It makes the driver simpler and avoids
> having to add this double enumeration support.

How about the overhead of having the decoded data written to memory
once, read back, and then written yet again, even though the hardware
supports doing the conversion on the fly?

>
> >
> > Step 1 is known by userspace but is only passed to the driver with the
> > bitstream metadata from controls. This is much too late for trimming the
> > list of supported formats.
> >
> > I'm not sure that passing extra information to the driver early to
> > trim the list would make sense, because it comes down to telling the
> > driver what format to target and then asking the driver what format
> > it supports, which is rather redundant.
> >
> > I think the information we need to expose to userspace is whether the
> > driver supports a destination format that does not match the bitstream
> > format. We could make it part of the spec that, in the absence of this
> > indication, the provided bitstream is expected to match the format that
> > was selected.
>
> I'm not sure why you consider this too late. With a decoder, the OUTPUT
> and CAPTURE streams are asynchronous. So we start streaming on the
> OUTPUT until the driver notifies us (V4L2_EVENT_SOURCE_CHANGE). We then
> enumerate the formats again at that moment, and then configure and
> start the CAPTURE.

Yeah, I think we may eventually need this kind of initialization
sequence, similar to stateful hardware...

Best regards,
Tomasz
