On Tue, 28 Mar 2017 13:27:49 +0100
Mark Thompson <s...@jkqxz.net> wrote:

> On 28/03/17 11:18, wm4 wrote:
> > On Mon, 27 Mar 2017 21:38:12 +0100
> > Mark Thompson <s...@jkqxz.net> wrote:
> [...]
> >> +int avcodec_probe_hw(AVCodecContext *avctx, AVBufferRef *hw_frames_ref);  
> > 
> > This API still makes me a bit uneasy. Regular get_format hwaccel init
> > relies on the fact that the decoder has set some fields from parsing
> > the bit stream, like the current sw_pixfmt or frame size. These aren't
> > necessarily available on a "fresh" AVCodecContext (not even if
> > extradata is available).
> > 
> > So normally, the only time this will work properly is in the get_format
> > callback. Which is ok, but which I think isn't the entire intention? I
> > think I'd be fine with allowing this call only during get_format or so.  
> 
> It's certainly the intent that this is usable outside get_format.  In the 
> most general case, probing with a codec, a profile, and maybe some dimensions 
> can test the hardware support without any stream at all.

The profile isn't necessarily set at this point, though. (Actually
nothing needs to be set for h264 decoding if the input is Annex B
format.)
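
To make the non-get_format case concrete, here is a rough sketch of the
probe-before-open flow as I understand the patch. avcodec_probe_hw() is the
proposed call, nothing released; the exact semantics of its second argument
are part of what's being discussed, so treat this as a sketch only:

```c
/* Sketch only: avcodec_probe_hw() is the API proposed in this patch,
 * not existing lavc.  Error handling omitted. */
AVBufferRef *device_ref = NULL;
AVBufferRef *frames_ref = NULL;

av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);
frames_ref = av_hwframe_ctx_alloc(device_ref);

AVCodecContext *avctx = avcodec_alloc_context3(decoder);

/* Fill in what we know up front.  For Annex B H.264 input this can be
 * nothing at all, which is exactly the problem raised above. */
avctx->codec_id = AV_CODEC_ID_H264;
avctx->profile  = FF_PROFILE_H264_HIGH;   /* may be unknown */
avctx->width    = 1920;                   /* may be unknown */
avctx->height   = 1080;

if (avcodec_probe_hw(avctx, frames_ref) >= 0) {
    /* hardware claims support for this codec/profile/size */
}
```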

> > Or maybe it's supposed to return "approximate" results outside of
> > get_format in the first place.
> > 
> > (For encoding on the other hand it's probably fine - though I'm not
> > sure if this is supposed to be called before or after opening the
> > context.)  
> 
> Before.  I didn't provide a VAAPI implementation for encode because with the 
> current design it requires some nontrivial rearrangement of code to work 
> (stuff done by the current per-codec init functions needs to move to a 
> callback so that it's usable at other times - I'll do this once we have 
> some agreement on the API).
> 
> > As a minor nit, this can't distinguish between required initial pool
> > size and upper bound on pool size. Users who want to preallocate frames
> > with e.g. vdpau would want to know the latter. (Well, I don't want
> > that, and see no reason why anyone would want it, so I officially don't
> > care about this argument.)  
> 
> My intent is that for now the decoder would always allocate the returned 
> initial_pool_size + extra_hw_frames, so a user would do the same.  (They 
> control extra_hw_frames so how many more they have is completely up to them, 
> but it works the same whether they are doing the allocation or lavc is.)

Yeah, this comment of mine was just a distraction anyway, sorry about
that.
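
For the record, the allocation scheme you describe would look something like
this on the caller side (sketch only: probed_pool_size stands in for whatever
initial_pool_size the probe reports, and extra_hw_frames is the field
proposed alongside this patch):

```c
/* Sketch: a caller doing its own allocation sizes the pool the same
 * way the decoder would - probed size plus the caller's extra frames. */
AVHWFramesContext *frames = (AVHWFramesContext *)frames_ref->data;
frames->initial_pool_size = probed_pool_size + avctx->extra_hw_frames;
ret = av_hwframe_ctx_init(frames_ref);
```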

> > I'll make full use of my permission to bikeshed, so here is a different
> > API that impractically changes everything:
> > 
> > Proposal 1:
> > 
> > Add new init_hw_device and init_hw_frames callbacks:
> > 
> >   /*
> >    * Create a hw device and set ctx->hw_device_ctx. This might be used
> >    * for hardware decoding. If ENOSYS is returned, the next hwaccel
> >    * method is tried, and if all fail, software decoding is used.
> >    *
> >    * If hw_device_ctx is already set, this callback is not used, and
> >    * does not need to be set.
> >    *
> >    * If internal reinit is done (like on flush or if the codec
> >    * parameters change), hw_device_ctx is cleared, and this callback
> >    * can be called again.
> >    */
> >   int (*init_hw_device)(AVCodecContext *ctx, AVHWDeviceType type);
> > 
> >   /*
> >    * When this is called, ctx->hw_frames_ctx is already setup in a basic
> >    * way (such as frame dimensions and surface format). This callback
> >    * can be used to refine the frame parameters further.
> >    */
> >   int (*init_hw_frames)(AVCodecContext *ctx);  
> 
> Hmm, that only covers the inline decode case, but it does do it rather well.

init_hw_device, sure.

init_hw_frames could be used by "full" hw decoders if applicable.

Yeah, this doesn't cover encoding, if that's what you mean. (I haven't
thought about that.)
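
To spell out what a caller would do under Proposal 1 (sketch only: the two
callbacks are just my proposal, only av_hwdevice_ctx_create() is existing
lavu API):

```c
/* Sketch of a caller implementing the Proposal 1 callbacks. */
static int my_init_hw_device(AVCodecContext *ctx, enum AVHWDeviceType type)
{
    if (type != AV_HWDEVICE_TYPE_VAAPI)
        return AVERROR(ENOSYS);   /* let lavc try the next hwaccel method */
    return av_hwdevice_ctx_create(&ctx->hw_device_ctx, type, NULL, NULL, 0);
}

static int my_init_hw_frames(AVCodecContext *ctx)
{
    /* ctx->hw_frames_ctx already has dimensions/format filled in by the
     * decoder; the callback only refines it, e.g. reserving extra
     * surfaces for the caller's own queue. */
    AVHWFramesContext *frames = (AVHWFramesContext *)ctx->hw_frames_ctx->data;
    frames->initial_pool_size += 4;
    return 0;
}
```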

> > Proposal 2:
> > 
> > Add a mechanism to let the caller setup parameters in advance, and
> > letting the hwaccel code use these parameters without needing callbacks
> > back to user code.
> > 
> > Proposal 2a:
> > 
> > Add these fields to AVCodecContext:
> > 
> >   AVDictionary *hw_frames_params;
> >   void *hw_frames_opaque_params; // AVDictionary can't have pointers
> >   AVHWDeviceType hw_frames_opaque_params_type;
> > 
> > Proposal 2b:
> > 
> > Add this field to AVCodecContext:
> > 
> >   /*
> >    * When allocating a new hw_frames_ctx, use the parameters found in
> >    * this context as defaults (if the device type matches). The pixel
> >    * format and size will be changed, and the number of initial pool
> >    * frames can be extended.
> >    */
> >   AVHWFramesContext *hw_frames_template;  
> 
> I think this proposal is horrible.

I think 2b is actually not that terrible, but point taken. It's
probably also not upwards compatible enough for whatever future things
we might want to do.

> > Proposal 3:
> > 
> > Use your API, and restrict when/where it's allowed to be called.  
> 
> Proposal 13:
> 
> Do both 1 and 3.  1 as above for inline decode, then:
> 
> int avcodec_probe_hw(AVCodecParameters *par, AVSomething *returned_stuff);
> 
> for the non-inline cases (both encode and decode).
> 
> Unclear what type the something should be - AVHWFramesContext is less useful 
> here, though it would be neat for decode when you already have the dimensions.
> 

Hm, that sounds good to me too. I'm not sure what AVSomething would be -
a new separate struct just for that? Or would it in this case be
enough to return a simple boolean "supported"/"unsupported"? Also, I
think this needs a hw device ctx?
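
If it really is just a boolean plus a device, the non-inline probe might
reduce to something like this (entirely hypothetical: neither this
avcodec_probe_hw() signature nor AVSomething exists, and the device-ref
argument is my guess at answering my own question above):

```c
/* Hypothetical sketch of the Proposal 13 non-inline probe. */
AVCodecParameters *par = avcodec_parameters_alloc();
par->codec_id = AV_CODEC_ID_H264;
par->profile  = FF_PROFILE_H264_MAIN;

int supported = avcodec_probe_hw(par, device_ref);  /* hypothetical */
if (supported > 0) {
    /* safe to set up a hw decode path for this stream */
}
```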

Let's hear what elenril says (hopefully sooner than 3 weeks).
_______________________________________________
libav-devel mailing list
libav-devel@libav.org
https://lists.libav.org/mailman/listinfo/libav-devel
