On 2019-04-24 7:34 a.m., Ахмед Анам via Libav-user wrote:
Hello.
I studied the hw_decode.c example. There, the decoder is chosen based on
the input video file by calling av_find_best_stream(input_ctx,
AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0), where decoder is a local
variable of type AVCodec*.
I am trying to decode an H.264 video stream with hardware acceleration.
To find a suitable decoder I use the avcodec_find_decoder(codec_id)
function, passing AV_CODEC_ID_H264 as the argument. Is that correct? I
don't see any difference in CPU or GPU utilization when
hardware-accelerated decoding is used, even with multiple streams.
AV_CODEC_ID_H264 is correct.
To set up a hardware decoder, you need to set
AVCodecContext.hw_device_ctx. In hw_decode.c, that happens in the
hw_decoder_init function with av_hwdevice_ctx_create.
FFmpeg takes care of setting up the per-frame hardware context as you
decode.
Which hardware device type is used is up to you (see the AVHWDeviceType
enum: https://ffmpeg.org/doxygen/trunk/hwcontext_8h.html). In the
example, av_hwdevice_find_type_by_name is called to get the type from
the first command-line argument.
--
Best regards, Akhmed Anam.
_______________________________________________
Libav-user mailing list
Libav-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/libav-user
To unsubscribe, visit link above, or email
libav-user-requ...@ffmpeg.org with subject "unsubscribe".
--
Philippe Gorley
Free Software Consultant | Montréal, Qc
Savoir-faire Linux