On 18/05/15 15:46, Tom Butterworth wrote:
Just to be clear, as one respondent seemed to misunderstand, these
compressed texture formats are passed to the graphics layer (OpenGL or
DirectX) as a final buffer for display. Unlike hardware decoding of e.g.
H.264, they don't come back as buffers in a traditional pixel format (so they
won't be involved with HWAccel). The formats are not limited to the S3
types: others exist, some only for use on particular desktop or
mobile platforms (BC7, PVRTC, etc.).

The advantage they offer over traditional pixel formats is the reduced
bandwidth requirement from main memory to the GPU, and the correspondingly
reduced graphics memory usage. Decoding at draw time is a fairly cheap
operation for the GPU. By using these formats, systems can play
more streams of higher-resolution video than they could with RGB or Y'CbCr
formats.

Those formats are, IMHO, on the codec side of the boundary between raw pixels and encoded data.

 From the point of view of simplicity within libavcodec and simplicity for
API users, this would be my preferred choice:


At least two API users have expressed extreme dislike of adding more pixel formats.

2. Introduce a single opaque pixel format: this has the advantage of
not having to update the API at every new format, but leaves users in the
dark as to what the texture actually is, and requires the user to know
what he or she is actually doing.


This is not helpful, because knowing "what he or she is actually doing" for
formats such as TXD (which can contain several compressed texture types)
would require parsing encoded frames, which defeats the purpose of using
libavcodec in the first place.

For other codecs it would require that API clients maintain a hard-coded mapping
between codecs and compressed texture formats, so they would not
automatically support new codecs of these types added to libavcodec.

The same mapping would be needed for the pixel formats. It doesn't matter where the information is presented; you have to read it either way.
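To illustrate the mapping under discussion: with a single opaque pixel format, every client would carry a table like the following, and each new texture codec added to libavcodec would need a matching entry in every application. The codec and format identifiers here are made up for illustration, not real libav enums.

```c
/* Hypothetical identifiers -- not actual libav enums. */
enum codec_id   { CODEC_HAP, CODEC_DXV, CODEC_TXD, CODEC_UNKNOWN };
enum tex_format { TEX_BC1, TEX_BC3, TEX_UNKNOWN };

/* The hard-coded mapping every API client would have to maintain. */
static enum tex_format texture_format_for(enum codec_id id)
{
    switch (id) {
    case CODEC_HAP: return TEX_BC1;   /* basic Hap variant stores DXT1 */
    case CODEC_DXV: return TEX_BC3;   /* illustrative entry only */
    /* CODEC_TXD can hold several texture types per file, so no static
     * entry is even possible without parsing the encoded frames. */
    default:        return TEX_UNKNOWN;
    }
}
```

Whether this table lives in the client (opaque format) or in the pixel format list (one format per texture type), the information has to exist somewhere.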

Keep in mind that we have a good chunk of users who do not want to use swscale in any form and hate having to cope with additional pixel formats, both playback- and transcoding-oriented people.

From what I can see, you'd like to have a means of playback that renders directly to video memory, and that means you want to do the decoding on the GPU.

Here is my proposal to consider compressed textures as codecs.

A) HAP as codec

- Each texture format gets a separate codec.
- The HAP codec would chain the texture codec, as is done for codecs that use mjpeg internally.
- The texture codec can leverage hwaccel to output an opaque GLBuffer AVFrame.
- The usual boilerplate functions are provided as per the hwaccel 1.2 strategy.

This way our usual suspects can simply ignore the whole deal and get a normal pixel AVFrame, while the advanced users get a means to plug in their own decoding.
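A rough sketch of the negotiation this implies, modeled on the get_format-style callback hwaccel already uses. All types here are mocked stand-ins, not the real libavcodec structures: the decoder offers both an opaque texture format and a plain pixel format, and the client's callback picks one.

```c
#include <stddef.h>

/* Mock stand-ins for the real pixel-format negotiation. */
enum pix_fmt { PIX_FMT_OPAQUE_TEXTURE, PIX_FMT_RGBA, PIX_FMT_NONE };

typedef enum pix_fmt (*get_format_cb)(const enum pix_fmt *choices);

/* A "usual suspect" ignores the opaque format and takes plain pixels. */
static enum pix_fmt pick_software(const enum pix_fmt *choices)
{
    for (; *choices != PIX_FMT_NONE; choices++)
        if (*choices == PIX_FMT_RGBA)
            return *choices;
    return PIX_FMT_NONE;
}

/* An advanced user takes the opaque texture and uploads it directly. */
static enum pix_fmt pick_texture(const enum pix_fmt *choices)
{
    return choices[0]; /* decoder lists its preferred format first */
}

/* The decoder offers both, most preferred first, as hwaccel does. */
static enum pix_fmt negotiate(get_format_cb cb)
{
    static const enum pix_fmt offered[] = {
        PIX_FMT_OPAQUE_TEXTURE, PIX_FMT_RGBA, PIX_FMT_NONE
    };
    return cb(offered);
}
```

The point is that the default path requires no changes to existing clients, while opting into the texture path is a one-callback decision.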

B) HAP as demuxer

- The HAP demuxer can be chained, as we already do for formats such as hls.
- The output AVPacket will contain the raw texture.
- The user can opt to wire it to their own decoder or to the Libav-provided ones.
- HWAccel strategies can be adopted as well.


(A) is somewhat easier if HAP can be stored in many different container formats, but it requires going the hwaccel route.
(B) makes it really easy to provide the texture for any use.


That said, both can be implemented and used to provide all the flexibility without people complaining that they are forced to use anything but avformat and avcodec.

lu
_______________________________________________
libav-devel mailing list
libav-devel@libav.org
https://lists.libav.org/mailman/listinfo/libav-devel
