Thanks, patch looks fine to me now.
While testing, I came across some very weird behavior:

The following commandline:
./ffmpeg -hwaccel cuvid -c:v h264_cuvid -i test.mkv -an -sn -c:v h264_nvenc -preset slow -qp 22 -bf 0 -f null -

Sometimes it runs into the following error:
[h264_nvenc @ 0x3f7d8c0] Failed locking bitstream buffer: invalid param (8)

Omitting "-hwaccel cuvid" prevents it from happening.
And, that's the weird part, omitting "-bf 0" also prevents it from happening.
That is strange because 0 is the default, and some quickly added debug
printfs confirm that the affected values indeed do not change:
avctx->max_b_frames is 0 in both cases.
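
For reference, the debug output came from a one-liner along these lines (the
exact spot in nvenc.c is from memory, so take the placement as illustrative):

/* quick check in the nvenc setup code: print the value ffmpeg actually sees */
av_log(avctx, AV_LOG_INFO, "max_b_frames = %d\n", avctx->max_b_frames);

It prints 0 with and without "-bf 0" on the command line.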

Reverting this patch also prevents it from happening, but I somehow doubt it's
the fault of this patch, especially as just running that command over and over
again makes it work in ~50% of the cases.


Also, the same command line, but with -bf 1 or any higher value, causes:
[h264_nvenc @ 0x2c788c0] EncodePicture failed!: no encode device (1)

This is not a new issue; it has been happening all along. It only happens when
-hwaccel cuvid is specified.
It's not an issue with the CUDA frame input code in nvenc either, as passing
CUDA frames via -vf hwupload_cuda works flawlessly.
It only happens with direct CUDA frame input from cuvid to nvenc.
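
For comparison, the working upload path looks roughly like this (software
decode, then hwupload_cuda; the other options as in the failing command):

./ffmpeg -i test.mkv -an -sn -vf hwupload_cuda -c:v h264_nvenc -preset slow -qp 22 -bf 1 -f null -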
Like I said, this is not a regression from this patch; I just wanted to bring
it to attention, as it somehow feels like a driver issue to me.

With this new -bf-related issue, I'm not so sure about that anymore though, and
I'm wondering if something in ffmpeg corrupts memory somewhere, somehow, when
-bf is set.
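
A quick way to check for that would be an ASan build, something like this
(the toolchain name is from memory, see ./configure --help for the exact
spelling):

./configure --toolchain=clang-asan
make
./ffmpeg -hwaccel cuvid -c:v h264_cuvid -i test.mkv -an -sn -c:v h264_nvenc -preset slow -qp 22 -bf 0 -f null -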