On Sat, May 9, 2020 at 5:09 PM Mark Thompson <s...@jkqxz.net> wrote:
>
> On 08/05/2020 21:26, Hendrik Leppkes wrote:
> > On Fri, May 8, 2020 at 5:51 PM <artem.ga...@gmail.com> wrote:
> >>
> >> From: Artem Galin <artem.ga...@intel.com>
> >>
> >> Added an AVD3D11FrameDescriptor array to store an array of single textures
> >> in case there is no way to allocate an array texture with
> >> BindFlags = D3D11_BIND_RENDER_TARGET.
> >>
> >> Signed-off-by: Artem Galin <artem.ga...@intel.com>
> >> ---
> >>  libavutil/hwcontext_d3d11va.c | 26 ++++++++++++++++++++------
> >>  libavutil/hwcontext_d3d11va.h |  9 +++++++++
> >>  2 files changed, 29 insertions(+), 6 deletions(-)
> >>
> >> ...
> >> diff --git a/libavutil/hwcontext_d3d11va.h b/libavutil/hwcontext_d3d11va.h
> >> index 9f91e9b1b6..295bdcd90d 100644
> >> --- a/libavutil/hwcontext_d3d11va.h
> >> +++ b/libavutil/hwcontext_d3d11va.h
> >> @@ -164,6 +164,15 @@ typedef struct AVD3D11VAFramesContext {
> >>       * This field is ignored/invalid if a user-allocated texture is provided.
> >>       */
> >>      UINT MiscFlags;
> >> +
> >> +    /**
> >> +     * If the texture member above is non-NULL, every element holds the same
> >> +     * texture pointer and a distinct index into that array texture.
> >> +     * If the texture member above is NULL, every element holds a pointer to
> >> +     * a separate non-array texture and an index of 0.
> >> +     * This field is ignored/invalid if a user-allocated texture is provided.
> >> +    AVD3D11FrameDescriptor *texture_infos;
> >>  } AVD3D11VAFramesContext;
> >>
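For readers skimming the patch: a minimal sketch of how a consumer would interpret the proposed texture_infos field, assuming the pool allocator has already filled it. pool_texture() is a placeholder name; the field names come from the patch and from the existing AVD3D11FrameDescriptor.

#include <libavutil/hwcontext_d3d11va.h>

/* Sketch only: resolve pool entry i to a (texture, slice index) pair using
 * the proposed texture_infos array. Not the actual hwcontext code. */
static ID3D11Texture2D *pool_texture(AVD3D11VAFramesContext *hwctx, int i,
                                     intptr_t *index)
{
    AVD3D11FrameDescriptor *desc = &hwctx->texture_infos[i];

    if (hwctx->texture) {
        /* Array texture: every element carries the same texture pointer and
         * desc->index selects the slice within that array texture. */
        *index = desc->index;
        return hwctx->texture;
    }

    /* No array texture: each element is a separate non-array texture and the
     * index is always 0. */
    *index = 0;
    return desc->texture;
}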
> >
> >
> > I'm not really a fan of this. Only supporting array textures was an
> > intentional design decision back when D3D11VA was defined, because it
> > greatly simplified the entire design - and as far as I know the
> > d3d11va decoder, for example, doesn't even support decoding into
> > anything else.
>
> For a decoder, yes, because the set of things to render to can easily be
> constrained.
>
> For an encoder, you want to support more cases than just textures generated
> by a decoder, and ideally that would allow arbitrary textures with the right 
> properties so that the encoder is not weirdly gimped (compare NVENC, which 
> does accept any texture).  The barrier to making that work is this horrible 
> texture preregistration requirement where we need to be able to find all of 
> the textures which might be used up front, not the single/array texture 
> difference.  While changing the API here is not fun, following the method 
> used for the same problem with D3D9 surfaces seems like the simplest way to 
> make it all work nicely.
>
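For reference, the D3D9 method Mark mentions: AVDXVA2FramesContext in hwcontext_dxva2.h already exposes the full surface pool via surfaces/nb_surfaces, so an encoder can walk it up front. A rough sketch; encoder_register_surface() stands in for whatever preregistration call the encoder API actually requires.

#include <libavutil/hwcontext_dxva2.h>

/* Placeholder for the encoder's preregistration call. */
int encoder_register_surface(IDirect3DSurface9 *surface);

/* Sketch: pre-register every surface in the D3D9 pool up front. The
 * surfaces/nb_surfaces fields are real members of AVDXVA2FramesContext. */
static int register_all_surfaces(AVDXVA2FramesContext *frames_hwctx)
{
    for (int i = 0; i < frames_hwctx->nb_surfaces; i++) {
        int ret = encoder_register_surface(frames_hwctx->surfaces[i]);
        if (ret < 0)
            return ret;
    }
    return 0;
}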

If that is the goal, wouldn't it be ideal for an encoder to work just
with a device context, and not require a frame context?

- Hendrik