Re: [FFmpeg-user] MPEG 'broken_link' flag -- How do I set it?

2024-03-25 Thread Mark Thompson

On 24/03/2024 15:48, Mark Filipak wrote:

I cut at the end of an open GOP. When I did that, FFmpeg did not set the MPEG 
'broken_link' flag to '1'.

The frame following the cut has to be flagged:
'closed_gop' = '0', because it is still an open GOP, and
'broken_link' = '1'.


How do I do that? I searched but did not find.


The ffmpeg utility does not edit the internals of the bitstream when cutting 
(packets are effectively opaque to it); you would need a BSF to make this 
change.

Making such a BSF should be straightforward: use the CBS BSF framework with a 
single update_fragment function which edits the GOP header of any I-frame with a 
discontinuity before it.
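
As a rough sketch of that update_fragment step (not complete or tested; the 
callback signature and the MPEG-2 CBS names MPEG2_START_GROUP and 
MPEG2RawGroupOfPicturesHeader are assumptions to check against 
libavcodec/cbs_bsf.h and cbs_mpeg2.h, and a real BSF would only touch the first 
GOP after a cut):

/* Inside libavcodec, next to the existing *_metadata BSFs. */
#include "bsf.h"
#include "cbs.h"
#include "cbs_mpeg2.h"

/* Flag every GOP header in this fragment as an open GOP with a broken link. */
static int set_broken_link(AVBSFContext *bsf, CodedBitstreamFragment *frag)
{
    int i;
    for (i = 0; i < frag->nb_units; i++) {
        CodedBitstreamUnit *unit = &frag->units[i];
        if (unit->type == MPEG2_START_GROUP) {
            MPEG2RawGroupOfPicturesHeader *gop = unit->content;
            gop->closed_gop  = 0;  /* still an open GOP */
            gop->broken_link = 1;  /* its reference frames were cut away */
        }
    }
    return 0;
}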

- Mark


Re: [FFmpeg-user] vaapi crop and video size filters

2021-01-20 Thread Mark Thompson

On 20/01/2021 03:06, owen s wrote:

With software libx264 using the crop filter
crop=width:height:x:y, I could crop around a specific point. Using
h264_vaapi and -vf crop=width:height
doesn't control the crop location.

Also, with libx264 -video_size=widthxheight worked, but with h264_vaapi the
video resolution comes out as the max size of the video framebuffer,
65535x65535 in my case.

Can I set the h264_vaapi crop x,y position, and how do I control the
resolution when using h264_vaapi?

log file: http://0x0.st/-ifF.txt


This crops your 2560x1440 input to 2560x1440 so it is unsurprising that nothing 
happens.

More generally, cropping is supported by VAAPI filters but not by the encoder 
(due to internal API constraints which probably derive from actual hardware 
constraints).

Therefore, you should crop either before upload (if you are uploading from 
software frames) or before a VAAPI scale (if you are staying in VAAPI surfaces).  
If you aren't filtering at all (pure decode->encode), then you need to insert a 
null scale before the encode (which costs an extra copy, but can't be avoided 
when the encoder doesn't support cropped input).

So:

ffmpeg -i ... -vf crop=...,format=nv12,hwupload -c:v h264_vaapi ...

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i ... -vf 
crop=...,scale_vaapi=... -c:v h264_vaapi ...

- Mark

Re: [FFmpeg-user] vaapi_h264 encoding very low bitrate

2021-01-20 Thread Mark Thompson

On 19/01/2021 16:07, owen s wrote:

I am running this command with
ffmpeg -y -loglevel debug \
-vaapi_device /dev/dri/renderD128 \
-loop 1 -r 1 -i ./image.jpg -pix_fmt vaapi_vld \
-b:v 18000k -minrate 18000k \
-vf 'format=nv12|vaapi,hwupload,scale_vaapi=w=1280:h=720' \
-rc_mode 3 \
-r 30 -g 60 \
-c:v h264_vaapi -f flv /dev/null

I am getting a high encoding speed of greater than 3x, but the bitrate is
terrible: 350 kbit/s.

Why is that, when I can get better encoding speeds using the CPU? Log file:
http://0x0.st/-i2j.txt


Given that you're encoding the same image repeatedly, most of the frames will be a 
trivial "this frame is the same as the previous one" which can be signalled 
with very few bits.

If you want to push the bitrate up then either don't use the same frame 
repeatedly as input or decrease the gop size (-g option) so that it is forced 
to intra-code more frames (I would guess that at -g 1 it will probably be able 
to hit the 18M target).
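
For example, a sketch based on your command (some options trimmed) with -g 1 so 
that every frame is intra-coded:

ffmpeg -vaapi_device /dev/dri/renderD128 \
  -loop 1 -r 1 -i ./image.jpg \
  -vf 'format=nv12|vaapi,hwupload,scale_vaapi=w=1280:h=720' \
  -rc_mode 3 -b:v 18000k -minrate 18000k -r 30 -g 1 \
  -c:v h264_vaapi -f flv /dev/null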

- Mark

Re: [FFmpeg-user] Converting H.264 High L5.0 to H.264 High L5.1

2021-01-20 Thread Mark Thompson

On 19/01/2021 14:47, alfonso santimone wrote:

Hi all,
how can I convert an H.264 High L5.0 .mov video to an H.264 High L5.1 or L4.0
.mov video without losing too much quality?
I've tried...

ffmpeg -i input.mov -c:v libx264 -profile:v high -level:v 4.0 -c:a copy
output.mov

but from a 02:46 (mm:ss) movie that is 1.8 GB, I get a 90 MB movie.
So I guess a lot of quality is lost.
How can I improve it?


What is the end-goal here?

The level of an H.264 stream is an indication set by the encoder of the 
resources which will be required to decode it.  For example, level 4 at high 
profile indicates that it should be usable by a decoder which supports frames 
of at least 2097152 pixels, can process 62914560 pixels per second and can 
accept 25000000 coded input bits per second - that's enough for 1920x1080 at 
30fps (1920x1080 = 2073600 pixels, and 2073600 x 30 = 62208000 pixels per 
second), but not larger or at higher framerate.

If your input stream does conform to the level you want to set (whether higher 
or lower than the current level) then you can rewrite just the level field 
while copying everything else by doing:

ffmpeg -i input.mov -c:v copy -bsf:v h264_metadata=level=4 -c:a copy output.mov

This will still work if your stream doesn't actually conform to the lower 
level, but it might confuse some players so you should be careful with testing 
in that case.

If you actually do need to conform to a lower level than your input stream 
currently does then you will need to reencode as you've already suggested - in 
that case, you will want to look at the bitrate or crf options to libx264 to 
improve the quality.
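
For example (a sketch - adjust the crf value to taste, lower meaning higher 
quality and larger files):

ffmpeg -i input.mov -c:v libx264 -profile:v high -level:v 4.0 -crf 18 -c:a copy output.mov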

- Mark

Re: [FFmpeg-user] Archlinux update breaks kmsgrab: Failed to get plane resources: Permission denied

2020-02-09 Thread Mark Thompson
On 06/02/2020 06:11, Kai Hendry wrote:
> Hi there,
> 
> For sometime I've been happily using kmsgrab to make screencasts on my 
> Archlinux Xorg desktop.
> 
> https://github.com/kaihendry/recordmydesktop2.0/blob/9825a44d886318d78463c0a602681c0c7931cf83/x11capture#L71
> 
> But then it broke after a reboot as described here:
> https://www.reddit.com/r/ffmpeg/comments/ez4u5z/failed_to_get_plane_resources_permission_denied/
> 
> Downgrading ffmpeg or linux didn't appear to solve the problem. Perhaps it 
> was some Intel driver update? I'm not sure which package to try downgrading, or 
> how to bisect this breaking dist-upgrade and reboot, since there have been so many 
> updates...

That sounds like it could be some sort of security feature.  kmsgrab is 
definitely kind of naughty and low-level in the way it captures the screen 
(which is how it works independently of the window system), so I wouldn't be 
surprised if something tries to stop it.

> After some frustration I revisited the ffmpeg wiki to find the "Capture the 
> screen from the first active KMS plane" (using -device /dev/dri/card0 instead 
> of -vaapi_device /dev/dri/renderD128) which does work... for a few *minutes* 
> and then bombs out:
> 
> [AVHWFramesContext @ 0x559c5137c400] Failed to create surface from DRM 
> object: 2 (resource allocation failed).
> [Parsed_hwmap_0 @ 0x559c513653c0] Failed to map frame: -5.

This looks like some sort of memory problem.  Two ideas:

* Memory leak.  Does any memory use increase while running the program?  (Might 
be some sort of GPU memory rather than program memory; not sure where you'd 
look for that but hopefully it gets noted somewhere...)

* The buffer no longer existing on the expected device.  Do you have multiple 
GPUs with PRIME offload?  If so, I guess the behaviour around moving the 
buffers between devices might have changed in a way which breaks somehow (and 
I've no clue how this might be fixed if so, but it would be interesting to know 
about).

> Error while filtering: Input/output error
> Failed to inject frame into filter network: Input/output error
> Error while processing the decoded data for stream #1:0
> [AVIOContext @ 0x559c51361500] Statistics: 0 seeks, 15 writeouts
> [h264_vaapi @ 0x559c513492c0] Freed output buffer 0
> [aac @ 0x559c51360580] Qavg: 227.062
> [aac @ 0x559c51360580] 2 frames left in the queue on closing
> Conversion failed!
> 
> Full log is here:
> https://s.natalian.org/2020-02-06/1580968072.mp4.log
> 
> The output file appears unrecoverable: "moov atom not found" :( 

Orthogonal to the real problem, but if you write to a streamable container 
(MPEG-TS, say) then you should at least have a usable stream up to that point.
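
For example (a sketch - keep the rest of your command and only change the 
output), something like:

ffmpeg ... -c:v h264_vaapi -f mpegts output.ts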

> So any tips to get kmsgrab working on my system again? 
> Btw I found kmsgrab *much better* than x11grab since with VAAPI it doesn't 
> appear to make my T480s overheat. So I really want to restore the gloriously 
> efficient kmsgrab UX I was using until some package update broke it.

You're using the iHD driver for the VAAPI part there - assuming you aren't 
using an Ice Lake, is it any better with i965 (the older VAAPI driver) instead?

- Mark

Re: [FFmpeg-user] kmsgrab on non primary card, libva/hwmap advice?

2020-02-01 Thread Mark Thompson
On 30/01/2020 11:00, test wrote:
> Hi All. Thanks in advance for any assistance provided.
> 
> I've been trying to get kmsgrab to work on my second rx570.
> Aim is to record multiple framebuffers on both cards. Both cards are
> maxed in terms of CRTC's, so switching all to one card isn't an
> option. x11grab has not been performant enough.
> 
> I have three "cards":
> 
> /dev/dri/card0 -> intel iGPU
> /dev/dri/card1 -> rx 570
> /dev/dri/card2 -> rx 570
> 
> I have card1 working fine, it's the primary card. I haven't tried card0.
> 
> /home/test/.local/bin/ffmpeg  -framerate 60 -device /dev/dri/card1  -f
> kmsgrab  -i - -vf 'hwmap=derive_device=vaapi,hwdownload,format=bgr0'
> /home/test/card1.mp4
> 
> Whilst trying for card2 and a similar command I get errors.
> I've tried advice under https://trac.ffmpeg.org/wiki/Hardware/VAAPI
> regarding device selection.
> The error doesn't change?

libva has some weird issues with KMS devices which aren't master, and the 
behaviour might well change between cards in one session depending on which one 
it decided was the main one.

I think you're on the right track with this:

> ./ffmpeg -init_hw_device
> vaapi=foo:/dev/dri/by-path/pci-\:02\:00.0-render
> -filter_hw_device foo -framerate 60 -device /dev/dri/card2  -f kmsgrab
>  -i - -vf 'hwmap=
> derive_device=vaapi,hwdownload,format=bgr0' /home/test/card2.mp4
but it isn't quite there.  The derive_device option asks it to make a new VAAPI 
device from the KMS/DRM device you have on input rather than using the one on 
the render device which you created first.

So, try removing the derive_device option from hwmap:

./ffmpeg -init_hw_device vaapi=foo:/dev/dri/by-path/pci-\:02\:00.0-render 
-filter_hw_device foo -framerate 60 -device /dev/dri/card2 -f kmsgrab -i - -vf 
'hwmap,hwdownload,format=bgr0' /home/test/card2.mp4

which will then map to your already-created VAAPI render device.

(I assume you've checked carefully that that PCI path matches the right card; 
it certainly looks plausible to me.)

- Mark

Re: [FFmpeg-user] Stereoscopic hwaccel

2019-10-02 Thread Mark Thompson
On 17/09/2019 17:33, JackDesBwa wrote:
> Hi,
> 
> I do stereoscopic (3d) photography for a few years and I just start to
> experiment on stereoscopic videos.
> The filter_complex argument of ffmpeg allows me to do the editing I have in
> mind more precisely and less painfully than I was able to do with regular
> non-linear editors before.
> Now that the basic editing tools seem to be right for my video projects, I
> want to accelerate rendering with hardware, and here starts my question.
> 
> *How to have frame-packing information in h264 streams when generated with
> hardware acceleration?*
> 
> ...
> Accelerating with vaapi:
> ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi
> -hwaccel_output_format vaapi -i aligned/l_01.mp4 -i aligned/r_01.mp4
> -filter_complex '[0][1]framepack=sbs,format=nv12|vaapi,hwupload' -c:v
> h264_vaapi -qp 22 -profile:v high out_vaapi.mp4

Try this with 
.

- Mark

Re: [FFmpeg-user] Ffmpeg compilation issues

2018-10-27 Thread Mark Thompson
On 27/10/18 17:24, Ronak wrote:
> Hi all,
> 
> I'm trying to build the latest HEAD version of Ffmpeg on Linux on the 
> following platform.
> 
> Linux 4.9.124-0.1.ac.198.71.329.metal1.x86_64 #1 SMP Thu Aug 30 20:39:05 UTC 
> 2018 x86_64 x86_64 x86_64 GNU/Linux
> 
> But, I'm getting compilation problems:
> 
> libavcodec/v4l2_m2m_enc.c: In function ‘v4l2_set_ext_ctrl’:
> libavcodec/v4l2_m2m_enc.c:51: warning: braces around scalar initializer
> libavcodec/v4l2_m2m_enc.c:51: warning: (near initialization for ‘ctrls.count’)
> libavcodec/v4l2_m2m_enc.c:55: error: ‘struct v4l2_ext_controls’ has no member 
> named ‘ctrl_class’
> libavcodec/v4l2_m2m_enc.c:60: error: ‘struct v4l2_ext_control’ has no member 
> named ‘value’
> libavcodec/v4l2_m2m_enc.c: In function ‘v4l2_get_ext_ctrl’:
> libavcodec/v4l2_m2m_enc.c:71: warning: braces around scalar initializer
> libavcodec/v4l2_m2m_enc.c:71: warning: (near initialization for ‘ctrls.count’)
> libavcodec/v4l2_m2m_enc.c:76: error: ‘struct v4l2_ext_controls’ has no member 
> named ‘ctrl_class’
> libavcodec/v4l2_m2m_enc.c:89: error: ‘struct v4l2_ext_control’ has no member 
> named ‘value’
> make: *** [libavcodec/v4l2_m2m_enc.o] Error 1
> 
> It looks like Ffmpeg is not finding the correct unions in the file: 
> /usr/include/linux/videodev2.h.
> 
> What do I have to set to make compilation succeed?

This was broken by a change which suppressed an invalid warning.  Fix here: 
.

Thanks for the report!

- Mark

Re: [FFmpeg-user] kmsgrab on Intel 8th generation aka h/w accelerated screen capture

2018-06-25 Thread Mark Thompson
On 25/06/18 10:42, Moritz Barsnick wrote:
> On Mon, Jun 25, 2018 at 09:10:29 +0800, Kai Hendry wrote:
>> However since kmsgrab requires CAP_SYS_ADMIN as it points out to its
>> friendly documentation
>> https://www.ffmpeg.org/ffmpeg-devices.html#kmsgrab I believe I need to
>> run it with `sudo`.
> 
> Actually, the documentation says: "If you don’t understand what all of
> that means, you probably don’t want this." ;-)
> 
> My understanding (<- that's a disclaimer ;-)):
> 
> CAP_SYS_ADMIN is a Linux capability. Such a capability is something you
> (or the system administrator) grants a program. In effect, this
> particular capability is somewhat like root permissions, but sudo just
> won't suffice anymore, a program *must* have this capability in order
> to access DRM (which kmsgrab uses). I consider this unfortunate, that a
> program has to be granted general CAP_SYS_ADMIN, but *sigh*.

The test you need to pass for kmsgrab to work is here: 
.

When capturing inside X, the X server is already the DRM master (which has the 
Highlander nature) and therefore you need to have CAP_SYS_ADMIN instead to pass 
the test.  (It isn't needed in all cases - for example, you can capture the 
text console and some framebuffer programs without CAP_SYS_ADMIN because they 
do not take DRM master.)  The root user always has CAP_SYS_ADMIN, so sudo is 
always sufficient to allow kmsgrab to work.

For the pulse side of this, I believe the problem is that the pulseaudio daemon 
enforces that you can't record output owned by a different user.  Recording 
under sudo therefore doesn't work, because root is not the same user as you.  
You might be able to overcome this with some configuration setting for 
pulseaudio, but I'm not sure exactly what that will be.

With that in mind:

> Here you find an explanation of how to grant such a capability:
> https://stackoverflow.com/q/26504457/3974309
> 
> Basically:
> $ sudo setcap cap_sys_admin+ep /path/to/ffmpeg
> 
> Afterwards, run ffmpeg *without* sudo.

I expect this answer will work, because you get CAP_SYS_ADMIN for kmsgrab but 
are still your own user for pulseaudio.

Thanks,

- Mark

Re: [FFmpeg-user] VAAPI encoding error

2018-06-18 Thread Mark Thompson
On 18/06/18 21:32, Victor Helmholtz wrote:
> Hi,
> 
> I am trying to encode a raw yuv file using VAAPI hardware acceleration on a 
> machine with an i7-6700 CPU running Debian 9 “Stretch”, but I am getting the error “A 
> hardware frames reference is required to associate the encoding device.”.  I 
> have searched the internet but couldn’t find anything related to this error 
> message. I have attached a log with the error. Could anyone suggest a solution 
> for this problem?
> 
> Thanks
> Victor
> 
> 
> $ ffmpeg -loglevel debug -c:v rawvideo -pix_fmt yuv420p -video_size 1920x1080 
> -i test.yuv -vaapi_device /dev/dri/renderD128 -vf 'format=yuv420p,hwupload' 
> -c:v h264_vaapi test.h264

This command doesn't work for me on any driver I know of, because hardware H.264 
encoders generally use NV12 rather than YUV420P - changing "format=yuv420p" to 
"format=nv12" does work on both Intel / i965 and AMD / Mesa.  However, it 
doesn't fail in the way you show below, so there is another problem.
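
That is, something like (a sketch of your command with only the format filter 
changed):

ffmpeg -c:v rawvideo -pix_fmt yuv420p -video_size 1920x1080 -i test.yuv \
  -vaapi_device /dev/dri/renderD128 -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi test.h264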

> ffmpeg version 3.2.10-1~deb9u1 Copyright (c) 2000-2018 the FFmpeg developers
>   built with gcc 6.3.0 (Debian 6.3.0-18) 20170516
>   configuration: --prefix=/usr --extra-version='1~deb9u1' 
> --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu 
> --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping 
> --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa 
> --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca 
> --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig 
> --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
> --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus 
> --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy 
> --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora 
> --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack 
> --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq 
> --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 
> --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r 
> --enable-libopencv --enable-libx264 --enable-shared
>   WARNING: library configuration mismatch

This - you have an old dynamically-linked ffmpeg binary which doesn't match the 
shared libraries it's being used with.  I'm not entirely sure why it gives the 
error it does, but that is effectively not a supported configuration so please 
fix it before continuing.

>   avutil  configuration: --prefix=/usr --extra-version=2+b2 
> --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu 
> --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping 
> --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa 
> --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca 
> --enable-libcdio --enable-libflite --enable-libfontconfig 
> --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
> --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg 
> --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband 
> --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr 
> --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame 
> --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp 
> --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq 
> --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 
> --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint 
> --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
> ...
>   libavutil  55. 34.101 / 55. 78.100
>   libavcodec 57. 64.101 / 57.107.100
>   libavformat57. 56.101 / 57. 83.100
>   libavdevice57.  1.100 / 57. 10.100
>   libavfilter 6. 65.100 /  6.107.100
>   libavresample   3.  1.  0 /  3.  7.  0
>   libswscale  4.  2.100 /  4.  8.100
>   libswresample   2.  3.100 /  2.  9.100
>   libpostproc54.  1.100 / 54.  7.100
> Splitting the commandline.
> Reading option '-loglevel' ... matched as option 'loglevel' (set logging 
> level) with argument 'debug'.
> Reading option '-c:v' ... matched as option 'c' (codec name) with argument 
> 'rawvideo'.
> Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) 
> with argument 'yuv420p'.
> Reading option '-video_size' ... matched as AVOption 'video_size' with 
> argument '1920x1080'.
> Reading option '-i' ... matched as input url with argument 'test.yuv'.
> Reading option '-vaapi_device' ... matched as option 'vaapi_device' (set 
> VAAPI hardware device (DRM path or X11 display name)) with argument 
> '/dev/dri/renderD128'.
> Reading option '-vf' ... matched as option 'vf' (set video filters) with 
> argument 'format=yuv420p,hwupload'.
> Reading option '-c:v' ... matched as option 'c' (codec name) with argument 
> 'h264_vaapi'.
> Reading option 'test.h264' ... matched as 

Re: [FFmpeg-user] vaapi: Impossible to convert between the formats

2018-06-17 Thread Mark Thompson
On 17/06/18 21:30, Carl Eugen Hoyos wrote:
> 2018-06-17 22:00 GMT+02:00, Mark Thompson :
> 
>> Intel devices do not support MPEG-4 part 2 at all.  If you are using
>> the Mesa driver on an AMD device, some MPEG-4 part 2 streams
>> may be supported if you set the environment variable
>> VAAPI_MPEG4_ENABLED to 1, but it's not enabled by default
>> because the implementation is incomplete due to API constraints.
> 
> I know this mail isn't helpful but reading above and remembering how
> nice vdpau worked (already) many years ago and remembering how
> people said at the time that vaapi is much superior over vdpau I still
> wonder if we (FFmpeg) shouldn't have fought vaapi much stronger...

I doubt it would be particularly hard to make it work.  Since Intel didn't 
implement MPEG-4 part 2 on their supported devices they just cobbled together 
some set of stream/frame properties which looked sufficient for decoding and  
made that the API.  When AMD / Mesa tried to implement it they made some of it 
work but found that the API was incomplete, and then they gave up.  If you (or 
anyone else, including AMD) wants it to work then it would be straightforward 
(if somewhat tedious) to add the necessary properties to VAAPI / libva to get 
to the point that it matches the working VDPAU behaviour.

- Mark

Re: [FFmpeg-user] Green line with vaapi scaling

2018-06-17 Thread Mark Thompson
On 13/06/18 21:00, André Hänsel wrote:
>> Can you retest with git head?
>>
>> Build FFmpeg from source and retest.
> 
> I don't think I can do that easily, but I found a static build on the FFmpeg 
> website. It's supposed to support VAAPI:
> 
> # ./ffmpeg -hwaccels
> 
> ffmpeg version N-46272-g3a56ade1f-static https://johnvansickle.com/ffmpeg/  
> Copyright (c) 2000-2018 the FFmpeg developers
>   built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
>   configuration: --enable-gpl --enable-version3 --enable-static 
> --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio 
> --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray 
> --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf 
> --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb 
> --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband 
> --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus 
> --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc 
> --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 
> --enable-libxml2 --enable-libxvid --enable-libzimg
>   libavutil  56. 18.102 / 56. 18.102
>   libavcodec 58. 20.100 / 58. 20.100
>   libavformat58. 17.100 / 58. 17.100
>   libavdevice58.  4.100 / 58.  4.100
>   libavfilter 7. 25.100 /  7. 25.100
>   libswscale  5.  2.100 /  5.  2.100
>   libswresample   3.  2.100 /  3.  2.100
>   libpostproc55.  2.100 / 55.  2.100
> Hardware acceleration methods:
> vdpau
> vaapi
> 
> 
> However, when I try this command:
> 
> # ./ffmpeg -loglevel trace -vaapi_device /dev/dri/renderD128 -hwaccel vaapi 
> -i bbb.mp4 out.mp4
> 
> ffmpeg version N-46272-g3a56ade1f-static https://johnvansickle.com/ffmpeg/  
> Copyright (c) 2000-2018 the FFmpeg developers
>   built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
>   configuration: --enable-gpl --enable-version3 --enable-static 
> --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio 
> --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray 
> --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf 
> --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb 
> --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband 
> --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus 
> --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc 
> --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 
> --enable-libxml2 --enable-libxvid --enable-libzimg
>   libavutil  56. 18.102 / 56. 18.102
>   libavcodec 58. 20.100 / 58. 20.100
>   libavformat58. 17.100 / 58. 17.100
>   libavdevice58.  4.100 / 58.  4.100
>   libavfilter 7. 25.100 /  7. 25.100
>   libswscale  5.  2.100 /  5.  2.100
>   libswresample   3.  2.100 /  3.  2.100
>   libpostproc55.  2.100 / 55.  2.100
> Splitting the commandline.
> Reading option '-loglevel' ... matched as option 'loglevel' (set logging 
> level) with argument 'trace'.
> Reading option '-vaapi_device' ... matched as option 'vaapi_device' (set 
> VAAPI hardware device (DRM path or X11 display name)) with argument 
> '/dev/dri/renderD128'.
> Reading option '-hwaccel' ... matched as option 'hwaccel' (use HW accelerated 
> decoding) with argument 'vaapi'.
> Reading option '-i' ... matched as input url with argument 'bbb.mp4'.
> Reading option 'out.mp4' ... matched as output url.
> Finished splitting the commandline.
> Parsing a group of options: global .
> Applying option loglevel (set logging level) with argument trace.
> Applying option vaapi_device (set VAAPI hardware device (DRM path or X11 
> display name)) with argument /dev/dri/renderD128.
> [AVHWDeviceContext @ 0x6189cc0] No VA display found for device: 
> /dev/dri/renderD128.
> Device creation failed: -22.
> Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': Invalid 
> argument
> Error parsing global options: Invalid argument
> 
> 
> According to strace it doesn't even try to open the device.

libva generally depends on the dynamic linker to work - it loads the driver at 
runtime.

It looks like this is built with CONFIG_VAAPI but not HAVE_VAAPI_X11 or 
HAVE_VAAPI_DRM, and then statically linked with libva.  That's not going to be 
able to get a VAAPI device or load a driver, so it won't work - see 
.
  (A library built in this way could work with external support for the 
loading, but given a static executable there is no way to do this.)

- Mark

Re: [FFmpeg-user] Green line with vaapi scaling

2018-06-17 Thread Mark Thompson
On 13/06/18 19:22, André Hänsel wrote:
> When I scale with scale_vaapi, it results in a green line at the bottom of
> the image, see attachment.
> 
> Command line:
> ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128
> -hwaccel_output_format vaapi -i bbb.mp4 -vf
> "scale_vaapi=w=240:h=135:format=yuv420p,hwdownload,format=yuv420p" -frames 1
> out.png
> 
> It probably matters that I'm scaling to a height that is not divisible by 2
> or 16. The input video here is Big Buck Bunny 1080p.

Yeah, I don't think scaling a chroma-subsampled format to an odd dimension is 
going to act consistently.  This probably isn't an ffmpeg issue - it will 
depend on the VAAPI driver and hardware (I can reproduce the green line you 
have with the latest Intel i965 driver on gen9 hardware, it goes away if I 
scale to height 134 or 136 instead).
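
For example, the command above with an even target height (a sketch; only the 
height changed):

ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
  -i bbb.mp4 -vf "scale_vaapi=w=240:h=136:format=yuv420p,hwdownload,format=yuv420p" \
  -frames 1 out.png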

> Extra question: Why do I need the second "format=yuv420p" after the
> hwdownload filter? If I omit it, FFmpeg just gets stuck.

The filter negotiation for the output doesn't have information to realise that 
the output format needs to be the hardware format of the input frame to 
hwdownload, because at the point that formats are negotiated that isn't known 
and so hwdownload declares that it might output any software format.  As a 
result, the avfilter setup picks something appropriate to the following 
component (in this case the PNG encoder), and that isn't necessarily something 
which hwdownload can actually use so it throws an error in that case.

- Mark

Re: [FFmpeg-user] vaapi: Impossible to convert between the formats

2018-06-17 Thread Mark Thompson
On 13/06/18 11:29, André Hänsel wrote:
> I'm trying to transcode an MPEG4 avi to MP4 with VAAPI, but I'm getting an
> error:
> 
> Impossible to convert between the formats supported by the filter
> 'Parsed_null_0' and the filter 'auto_scaler_0'
> Error reinitializing filters!
> Failed to inject frame into filter network: Function not implemented
> 
> The failing command is:
> ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128
> -hwaccel_output_format vaapi -i test.avi -c:v h264_vaapi out.mp4
> 
> Other combinations work just fine, like hardware decoding that same video:
> ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i test.avi -c:v
> libx264 out.mp4
> 
> Hardware transcoding an MPEG2 input video works fine as well:
> ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128
> -hwaccel_output_format vaapi -i test.mpg -c:v h264_vaapi out.mp4
> 
> Hardware transcoding an H264 video works, too:
> ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128
> -hwaccel_output_format vaapi -i test.mp4 -c:v h264_vaapi out.mp4
> 
> How should I go about debugging this further?

Intel devices do not support MPEG-4 part 2 at all.  If you are using the Mesa 
driver on an AMD device, some MPEG-4 part 2 streams may be supported if you set 
the environment variable VAAPI_MPEG4_ENABLED to 1, but it's not enabled by 
default because the implementation is incomplete due to API constraints.

For encoding of streams which may be decoded in hardware or software (but you 
don't know in advance), see 
 - in particular, 
something like the third example command in that section is probably what you 
want.

- Mark

Re: [FFmpeg-user] h264 pic_init_qp_minus26 out of range

2018-05-26 Thread Mark Thompson
On 24/05/18 12:52, Phillipe Laterrade wrote:
> Setting the qpmin parameter to 12 changes (and reduce) the 
> pic_init_qp_minus26...
> Is it the right way to fix this issue?

Well, what is going wrong with the existing values?  They are allowed by the 
standard so all decoders supporting >8-bit profiles have to handle them - if 
you have something which doesn't then you might want to report a bug against it 
so it can be fixed.

As a workaround for a broken device which doesn't handle the full range of QP, 
then yes setting qmin to constrain the encoder making the stream is probably a 
reasonable strategy.
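
For example, if the encode is done with libx264 (a sketch; keep whatever other 
options you already use):

ffmpeg -i input.mov -c:v libx264 -qmin 12 -c:a copy output.mp4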

- Mark


> -Original Message-
> From: ffmpeg-user [mailto:ffmpeg-user-boun...@ffmpeg.org] On behalf of Mark 
> Thompson
> Sent: Thursday, 24 May 2018 12:22
> To: ffmpeg-user@ffmpeg.org
> Subject: Re: [FFmpeg-user] h264 pic_init_qp_minus26 out of range
> 
> On 24/05/18 10:46, Phillipe Laterrade wrote:
>> Hello,
>>
>>  
>>
>> I'm experiencing a problem when transcoding to h264 :
>>
>> The pic_init_qp_minus26 parameter is out of range (-28,-30,-31..) on 
>> some files but not on the whole transcoded files.
> 
> The range of pic_init_qp_minus26 is [-26 - 6 * bit_depth_luma_minus8, +25].  
> Does your file have a bit depth greater than 8?

Re: [FFmpeg-user] h264 pic_init_qp_minus26 out of range

2018-05-24 Thread Mark Thompson
On 24/05/18 10:46, Phillipe Laterrade wrote:
> Hello,
> 
>  
> 
> I'm experiencing a problem when transcoding to h264 :
> 
> The pic_init_qp_minus26 parameter is out of range (-28,-30,-31..) on some
> files but not on the whole transcoded files.

The range of pic_init_qp_minus26 is [-26 - 6 * bit_depth_luma_minus8, +25], so 
for 10-bit content (bit_depth_luma_minus8 = 2) values down to -38 are valid.  
Does your file have a bit depth greater than 8?

- Mark

Re: [FFmpeg-user] Unable to build ffmpeg-4.0 on fedora 20 release

2018-04-27 Thread Mark Thompson
On 27/04/18 02:28, JD wrote:
> On 04/26/2018 06:56 PM, Kieran O Leary wrote:
>> On Fri, 27 Apr 2018, 01:29 JD,  wrote:
>>
>>> I would like to send the make output file to someone
>>> who might want to look at all the warnings (deprecated calls), and errors.
>>> Would someone please tell me where to upload the file or who to send it to.
>>
>> Can you not paste it into an email,or perhaps attach it if it is huge and
>> maybe copy paste the last few relevant lines into the body of the email?
> OK, here are the errors:
> 
> libavutil/hwcontext_vaapi.c:1169:5: error: unknown type name ‘VABufferInfo’
>  VABufferInfo buffer_info;
>  ^
> libavutil/hwcontext_vaapi.c: In function ‘vaapi_unmap_to_drm_abh’:
> libavutil/hwcontext_vaapi.c:1189:5: error: implicit declaration of function 
> ‘vaReleaseBufferHandle’ [-Werror=implicit-function-declaration]
>  vas = vaReleaseBufferHandle(hwctx->display, mapping->image.buf);
>  ^
> libavutil/hwcontext_vaapi.c: In function ‘vaapi_map_to_drm_abh’:
> libavutil/hwcontext_vaapi.c:1246:25: error: request for member ‘mem_type’ in 
> something not a structure or union
>  mapping->buffer_info.mem_type =
>  ^
> libavutil/hwcontext_vaapi.c:1284:5: error: implicit declaration of function 
> ‘vaAcquireBufferHandle’ [-Werror=implicit-function-declaration]
>  vas = vaAcquireBufferHandle(hwctx->display, mapping->image.buf,
>  ^
> libavutil/hwcontext_vaapi.c:1296:32: error: request for member ‘handle’ in 
> something not a structure or union
>     mapping->buffer_info.handle);
>     ^
> libavutil/hwcontext_vaapi.c:1300:37: error: request for member ‘handle’ in 
> something not a structure or union
>  .fd   = mapping->buffer_info.handle,
>  ^
> cc1: some warnings being treated as errors
> make: *** [libavutil/hwcontext_vaapi.o] Error 1
> 
> I know fc20 is unsupported, I do not want to upgrade it.

Fixed for 4.0.1: 
.

If you don't need VAAPI then you can work around it using --disable-vaapi (or 
--disable-autodetect).

Thanks,

- Mark

Re: [FFmpeg-user] nvenc on GT1030

2018-03-24 Thread Mark Thompson
On 23/03/18 21:37, Piergiorgio Sartor wrote:
> Hi all,
> 
> I've recently got a Nvidia GT1030 graphic card,
> which should be Pascal generation, supporting
> HEVC encoding.
> 
> The system is Fedora 27, up to date, with
> official Nvidia driver and CUDA libraries form
> rpmfusion repository.
> Also ffmpeg is from there.
> 
> Now, I'm trying to encode an h264 video into HEVC,
> but ffmpeg complains the device is not nvenc capable.

The message is accurate.  The very lowest end Pascal chip (GP108, used in the 
GT 1030 and MX150) does not support NVENC at all.

- Mark

Re: [FFmpeg-user] h264_vaapi missing frames

2018-03-23 Thread Mark Thompson
On 23/03/18 13:15, Carl Eugen Hoyos wrote:
> 2018-03-23 14:12 GMT+01:00, Mark Thompson <s...@jkqxz.net>:
>> On 23/03/18 01:28, Kai Hendry wrote:
>>> Thank you Moritz! Damn, I feel like a fool. ;)
>>>
>>> Unfortunately Mark's suggestion doesn't seem to have an impact.
>>>
>>> As you hopefully can see here:
>>> https://s.natalian.org/2018-03-23/1521768226.mp4
>>>
>>> The mouse still doesn't move smoothy across the screen. Hence I feel
>>> it's dropping frames!
>>>
>>> https://s.natalian.org/2018-03-23/1521768226.mp4.log
>>> ...
>>> *** 38 dup!
>>> ...
>>> *** 33 dup!
>>> ...
>>> *** 40 dup!
>>> ...
>>> *** 39 dup!
>>> ...
>>> *** 40 dup!
>>> ...
>>> *** 38 dup!
>>> ...
>>> *** 40 dup!
>>> ...
>>> *** 39 dup!
>>> ...
>>> *** 39 dup!
>>> ...
>>> *** 39 dup!
>>> frame=  439 fps= 30 q=-0.0 Lsize=1870kB time=00:00:14.58
>>> bitrate=1050.4kbits/s dup=381 drop=34 speed=0.985x
>>
>> This is still duplicating a very large number of frames for video sync.  As
>> suggested previously, please try with video sync disabled ("-vsync 0") or
>> without audio.
> 
> To the best of my knowledge, this would produce an invalid output file:
> The input is not cfr, FFmpeg's mov output does not support vfr.

Indeed, hence "try".  It may produce an output with bad properties, but if it 
contains all of the desired frames then it's clear that the problem lies 
somewhere in timestamps and video sync, and therefore that efforts should be 
focussed there to get a correct and complete output.

- Mark

Re: [FFmpeg-user] h264_vaapi missing frames

2018-03-23 Thread Mark Thompson
On 23/03/18 01:28, Kai Hendry wrote:
> Thank you Moritz! Damn, I feel like a fool. ;)
> 
> Unfortunately Mark's suggestion doesn't seem to have an impact.
> 
> As you hopefully can see here:
> https://s.natalian.org/2018-03-23/1521768226.mp4
> 
> The mouse still doesn't move smoothy across the screen. Hence I feel
> it's dropping frames!
> 
> https://s.natalian.org/2018-03-23/1521768226.mp4.log
> ...
> *** 38 dup!
> ...
> *** 33 dup!
> ...
> *** 40 dup!
> ...
> *** 39 dup!
> ...
> *** 40 dup!
> ...
> *** 38 dup!
> ...
> *** 40 dup!
> ...
> *** 39 dup!
> ...
> *** 39 dup!
> ...
> *** 39 dup!
> frame=  439 fps= 30 q=-0.0 Lsize=1870kB time=00:00:14.58 
> bitrate=1050.4kbits/s dup=381 drop=34 speed=0.985x

This is still duplicating a very large number of frames for video sync.  As 
suggested previously, please try with video sync disabled ("-vsync 0") or 
without audio.

- Mark

Re: [FFmpeg-user] h264_vaapi missing frames

2018-03-21 Thread Mark Thompson
On 20/03/18 10:24, Kai Hendry wrote:
> Hi Mark,
> 
> On 20 March 2018 at 17:58, Mark Thompson <s...@jkqxz.net> wrote:
>> Show your command line?  The pts values in that file are quite uniform, 
>> suggesting that you've forced the output to be treated as if it is 30/1001 
>> fps even if it isn't.  The status line also says "frame=  529 ... dup=430", 
>> which I think means it has captured 100 frames and made 430 duplicates to 
>> keep the framerate.
> 
> Sorry, I've just discovered -report.
> 
> https://s.natalian.org/2018-03-20/1521541273.mp4.log
> https://s.natalian.org/2018-03-20/1521541273.mp4
> 
> https://github.com/kaihendry/recordmydesktop2.0/blob/vaapi/x11capture
> is the script in question.
> 
> Does that better tell you what I'm doing wrong?

The -re option might be causing problems there - it is meant to be used for 
pretending that a file is a real-time source, not for actual real-time sources 
(see the documentation at <http://ffmpeg.org/ffmpeg.html#Advanced-options>, 
which warns against doing exactly this).  With that option removed I would 
expect it to use the real timestamps from x11grab which hopefully gives better 
results.

If that doesn't work then I suggest reducing the test case to find which parts 
are causing the problem - e.g. try without audio, using libx264 instead of 
VAAPI, without extra video sync ("-vsync 0").
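
For instance, a reduced capture test might look something like (a sketch - 
adjust the display, framerate and region to match your script):

ffmpeg -f x11grab -framerate 30 -i :0.0 -an -vsync 0 -c:v libx264 test.mkv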

- Mark


Aside:  the command you have there does colour-conversion on the CPU (x11grab 
makes bgr0, the format filter there makes it convert to nv12 before upload).  
It is possible to do the conversion on the GPU instead by putting a conversion 
after hwupload, which is often faster: "-vf 'hwupload,scale_vaapi=format=nv12'" 
(though note that this has less control over colourspaces, so the colours can 
be a bit wrong in some cases).

Re: [FFmpeg-user] h264_vaapi missing frames

2018-03-20 Thread Mark Thompson
On 20/03/18 02:30, Kai Hendry wrote:
> Hi there,
> 
> With my brand new Intel 8th gen laptop (Intel Corporation UHD Graphics
> 620 (rev 07)) with an 8-core Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz on
> Archlinux, I thought I'd experiment with hardware accelerated capture
> as opposed to my normal 2-step approach of
> 1) recording to mkv
> https://github.com/kaihendry/recordmydesktop2.0/blob/master/x11capture
> 2) converting mkv to mp4
> https://github.com/kaihendry/recordmydesktop2.0/blob/master/htmlvideo
> 
> 
> However with -vaapi_device /dev/dri/renderD128 -movflags +faststart
> -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi it seems to skip
> frames.
> 
> https://s.natalian.org/2018-03-20/1521512464.mp4.log
> https://s.natalian.org/2018-03-20/1521512464.mp4
> 
> 
> Ffprobe seems to say it's 30fps:
>  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p,
> 2560x1440, 722 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
> (default)
> 
> But during playback, it would appear not!
> 
> Is there a good way to test or know I've dropped frames in my screen capture?
> 
> How can I prevent frame dropping?

Show your command line?  The pts values in that file are quite uniform, 
suggesting that you've forced the output to be treated as if it is 30/1001 fps 
even if it isn't.  The status line also says "frame=  529 ... dup=430", which I 
think means it has captured 100 frames and made 430 duplicates to keep the 
framerate.

- Mark

Re: [FFmpeg-user] Ffmpeg Webm vp8/vp9 encoding supporting Gpus

2018-03-01 Thread Mark Thompson
On 01/03/18 11:57, oktay eşgül wrote:
> Hi,
> 
> I am trying to encode WebRTC vidyo call records. Currently we have an Nvidia
> 1080 Ti GPU and are using CUDA.
> 
> Yet, the current GPU does not support WebM encoding. I need to figure out
> applicable hardware to be able to use WebM encoding.
> 
> As far as I understand, VAAPI with Intel Kaby Lake (VP9) and Cherryview (VP8)
> seems usable for this purpose.
> 
> The first question: is there any GPU which supports both VP8 and VP9 WebM
> encoding?
> 
> The second one: is there someone who has hands-on experience with
> VAAPI and Intel GPUs?

See  - you want a Kaby Lake or 
later to have support for both VP8 and VP9.

For details on how to use it with VAAPI, see 
.
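
As a rough sketch of what VP9 encoding with VAAPI can look like on such hardware 
(assuming a standard render node and an ffmpeg build that includes the vp9_vaapi 
encoder):

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
  -vf 'format=nv12,hwupload' -c:v vp9_vaapi output.webm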

- Mark

Re: [FFmpeg-user] ffmpeg3.4 regression

2018-02-16 Thread Mark Thompson
On 16/02/18 15:03, Piotr Oniszczuk wrote:
> 
> 
>>
>> VDPAU is fully supported via the hwaccel API and is usable in the ffmpeg 
>> program via the -hwaccel option.  The (deprecated for years) standalone 
>> VDPAU decoder which you are trying to use here was removed from master with 
>> the major version bump leading to ffmpeg 4.  It is plausible that it was 
>> broken before that point (because it never really fit as a decoder and I 
>> doubt it ever got much testing once the hwaccel was introduced), but you 
>> will likely have a hard time getting anyone to care about such a 
>> long-deprecated feature which doesn’t exist any more on master.
>>
> 
> Mark,
> 
> Probably we have a bit misunderstanding here…
> 
> My issue with h_t_t_p://warped.inet2.org/sample3.mkv is in fact a mythtv playback 
> issue caused by an ffmpeg 3.4.1 regression (as mythtv recently upgraded its built-in 
> ffmpeg from 3.2.1 to 3.4.1).
> IIRC mythtv uses the hwaccel API for HW-decoded vdpau playback. 
> 
> In discussion here I’m using ffplay to decouple ffmpeg from mythtv and 
> demonstrate that root cause of failed playback is 3.4.1 regression - not 
> something else (i.e. mythtv).

Well, I suggest trying to replicate it in a setup which uses the same 
components as the case you are interested in rather than completely different 
ones.  If it uses the AVHWAccel API then "ffmpeg -hwaccel vdpau" should be able 
to reproduce the same behaviour.

> If you have a better idea to address this regression - I'm open to any 
> suggestion.
> 
> Currently we have proof that the most recent ffmpeg release (3.4.2) has a 
> regression (as 3.2.1 plays perfectly).
> 
> Is this enough to look into the problem? 

Your sample works for me?

Master:

$ ./ffmpeg_g -y -v 0 -i sample3.mkv -an -frames:v 100 out_sw.yuv
$ ./ffmpeg_g -y -v 0 -hwaccel vaapi -hwaccel_output_format yuv420p -i 
sample3.mkv -an -frames:v 100 out_vaapi.yuv
$ DISPLAY=:0 ./ffmpeg_g -y -v 0 -hwaccel vdpau -hwaccel_output_format yuv420p 
-i sample3.mkv -an -frames:v 100 out_vdpau.yuv
$ cmp out_sw.yuv out_vaapi.yuv 
$ cmp out_sw.yuv out_vdpau.yuv 
$ ./ffmpeg_g 
ffmpeg version N-90065-g8a8d0b319a Copyright (c) 2000-2018 the FFmpeg developers
...

3.4 branch:

$ ./ffmpeg_g -y -v 0 -i sample3.mkv -an -frames:v 100 out_sw.yuv
$ ./ffmpeg_g -y -v 0 -hwaccel vaapi -hwaccel_output_format yuv420p -i 
sample3.mkv -an -frames:v 100 out_vaapi.yuv
$ DISPLAY=:0 ./ffmpeg_g -y -v 0 -hwaccel vdpau -hwaccel_output_format yuv420p 
-i sample3.mkv -an -frames:v 100 out_vdpau.yuv
$ cmp out_sw.yuv out_vaapi.yuv 
$ cmp out_sw.yuv out_vdpau.yuv 
$ ./ffmpeg_g 
ffmpeg version n3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
...

(Both VAAPI and VDPAU running on an AMD Polaris 11 / RX 460.)


- Mark

Re: [FFmpeg-user] ffmpeg3.4 regression

2018-02-16 Thread Mark Thompson
On 16/02/18 12:55, Piotr Oniszczuk wrote:
> Carl,
> Thx for the prompt reply.
> Pls see in-line
> 
>> Message written by Carl Eugen Hoyos on 
>> 16.02.2018, at 11:22:
>>
>> 2018-02-16 11:00 GMT+01:00 Piotr Oniszczuk :
>>
>>> -I have some mkv videos which are playing perfectly well with
>>> ffmpeg 3.2.1 - but fails with ffmpeg 3.4
>>
>> Please test current FFmpeg git head, nothing else is supported here.
> 
> It looks like I can't test the latest git due to lack of support for vdpau.
> I just compiled current git and it looks like it is not supporting vdpau, as having 
> --enable-vdpau in the config gives me a list of supported codecs with h264_nvenc 
> but no h264_vdpau.
> Is it true that vdpau isn't supported after 3.4.2?

VDPAU is fully supported via the hwaccel API and is usable in the ffmpeg 
program via the -hwaccel option.  The (deprecated for years) standalone VDPAU 
decoder which you are trying to use here was removed from master with the major 
version bump leading to ffmpeg 4.  It is plausible that it was broken before 
that point (because it never really fit as a decoder and I doubt it ever got 
much testing once the hwaccel was introduced), but you will likely have a hard 
time getting anyone to care about such a long-deprecated feature which doesn't 
exist any more on master.
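
A minimal way to exercise that path with your sample (a sketch - it just decodes 
and discards the output):

ffmpeg -hwaccel vdpau -i sample3.mkv -f null -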

Thanks,

- Mark

Re: [FFmpeg-user] [Parsed_overlay_opencl_5 @ 0x4f22e80] Failed to finish command queue:

2018-01-10 Thread Mark Thompson
at is yuv420p.
> [graph 0 input from stream 0:0 @ 0x5467340] TB:0.11 FRAME_RATE:25.00 
> SAMPLE_RATE:nan
> [Parsed_scale_3 @ 0x5414180] w:1920 h:1080 fmt:yuv420p sar:1/1 -> w:426 h:240 
> fmt:yuv420p sar:640/639 flags:0x2
> [hwupload @ 0x54102c0] Surface format is yuv420p.
> [Parsed_overlay_opencl_5 @ 0x5415580] [framesync @ 0x54156d0] Selected 
> 1/9 time base
> [Parsed_overlay_opencl_5 @ 0x5415580] [framesync @ 0x54156d0] Sync level 2
> [Parsed_overlay_opencl_5 @ 0x5415580] Using kernel overlay_no_alpha.
> [Parsed_overlay_opencl_5 @ 0x5415580] Failed to finish command queue: -36.
> Error while filtering: Input/output error
> Failed to inject frame into filter network: Input/output error
> Error while processing the decoded data for stream #0:0
> [AVIOContext @ 0x537f680] Statistics: 0 seeks, 0 writeouts
> [AVIOContext @ 0x5324100] Statistics: 6377616 bytes read, 2 seeks
> Conversion failed!
> 
> 
> 
> 
> 
> 
> 
> 
> MARK**
> 
> 
> 
> 
> 
> 
> At 2018-01-09 21:02:43, "Mark Thompson" <s...@jkqxz.net> wrote:
>> On 09/01/18 06:01, 郭浩 wrote:
>>> I want to use overlay_opencl, and I built ffmpeg with OpenCL successfully! 
>>> But when I run the command, it always reports "[Parsed_overlay_opencl_5 @ 
>>> 0x4f22e80] Failed to finish command queue:". I am not sure my command is 
>>> correct - can someone tell me how this should be corrected? Thanks
>>>
>>>
>>> this is my cmd:
>>> cmd:
>>> -
>>> ./ffmpeg -init_hw_device opencl=cl:0.0 -filter_hw_device cl -i 
>>> "udp://230.0.0.100:53007?overrun_nonfatal=1" -filter_complex 
>>> "color=c=black@1:s=1280x720,hwupload[bg];[0:v]setpts=PTS-STARTPTS,scale=w=426:h=240,hwupload[v0];[bg][v0]overlay_opencl=x=0:y=0
>>>  ,hwdownload " -c:v hevc_nvenc -b:v 4M -r 25 -g 25 -f mpegts 
>>> "udp://230.0.0.240:53001?pkt_size=1316=192.168.172.115_nonfatal=1_size=286720=64=4096000"
>>> -
>>
>> Does it work if you remove all of the other external stuff (replace the udp 
>> with normal files and nvenc with a software encoder)?
>>
>> I don't have that other stuff available for testing, but the following 
>> command without them does work for me (using Beignet):
>>
>> ./ffmpeg_g -init_hw_device opencl=cl:0.0 -filter_hw_device cl -i 
>> input_1080p.mp4 -filter_complex 
>> "color=c=black@1:s=1280x720,hwupload[bg];[0:v]setpts=PTS-STARTPTS,scale=w=426:h=240,hwupload[v0];[bg][v0]overlay_opencl=x=0:y=0
>>  ,hwdownload " -c:v libx264 -b:v 4M -r 25 -g 25 -f mpegts out.ts
>>
>> A few other thoughts:
>> * overlay_opencl in isolation isn't necessarily faster or even using less 
>> CPU than normal overlay when you take into account the extra upload/download 
>> steps needed (it's more intended for cases when you have interop and can 
>> keep things on the GPU).
>> * Format selection is kindof tricky, because the negotiation can't see 
>> through properly to the underlying hardware formats.  You could try adding 
>> "format=yuv420p" (or another choice, maybe nv12) before the hwupload 
>> instances, though I'm not sure why your setup would give a different answer 
>> to mine there.
>> * A log with "-v debug" will have a bit more information about the OpenCL 
>> device and setup - please post that if you still can't get it to work.
>>
>> - Mark
>>
>>> report:
>>> ---
>>> --enable-doc --enable-postproc --enable-bzlib --enable-zlib 
>>> --enable-parsers --enable-libx264 --enable-libx265 --enable-libmp3lame 
>>> --enable-libfdk-aac --enable-libspeex --enable-pthreads 
>>> --extra-libs=-lpthread --enable-decoders --enable-encoders 
>>> --enable-avfilter --enable-muxers --enable-demuxers --enable-nvenc 
>>> --enable-cuvid --enable-cuda --enable-libnpp --enable-opencl 
>>> --enable-decoders
>>>   libavutil  56.  7.100 / 56.  7.100
>>>   libavcodec 58.  9.100 / 58.  9.100
>>>   libavformat58.  3.100 / 58.  3.100
>>>   libavdevice58.  0.100 / 58.  0.100
>>>   libavfilter 7. 11.101 /  7. 11.101
>>>   libswscale  5.  0.101 /  5.  0.101
>>>   libswresample   3.  0.101 /  3.  0.101
>>>   libpostproc55.  0.100 / 55.  0.100
>>> Input #0, mpegts, from 'udp://230.0.0.100:53007?overrun_nonfatal=1':
>>>   Duration: N/A, start: 2197.197333, bitrate: N/A
>>>   Program 1 
>>> Metadata:
>>>   service_name: ffmpeg
>>>   service_provider: ffmpeg
>>> Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), 
>>> yuv420p(progressive), 428x240 [SAR 320:321 DAR 16:9], 25 fps, 25 tbr, 90k 
>>> tbn, 50 tbc
>>> Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 
>>> stereo, fltp, 58 kb/s
>>> Stream mapping:
>>>   Stream #0:0 (h264) -> setpts (graph 0)
>>>   hwdownload (graph 0) -> Stream #0:0 (hevc_nvenc)
>>>   Stream #0:1 -> #0:1 (aac (native) -> mp2 (native))
>>> Press [q] to stop, [?] for help
>>> [Parsed_overlay_opencl_5 @ 0x4f22e80] Failed to finish command queue: -36.
>>> Error while filtering: Input/output error
>>> Failed to inject frame into filter network: Input/output error
>>> Error while processing the decoded data for stream #0:0
>>> Conversion failed!
>>> --

Re: [FFmpeg-user] [Parsed_overlay_opencl_5 @ 0x4f22e80] Failed to finish command queue:

2018-01-09 Thread Mark Thompson
On 09/01/18 06:01, 郭浩 wrote:
> I want to use overlay_opencl, and I built ffmpeg with OpenCL successfully! But 
> when I run the command, it always reports "[Parsed_overlay_opencl_5 @ 0x4f22e80] 
> Failed to finish command queue:". I am not sure my command is correct - can 
> someone tell me how this should be corrected? Thanks
> 
> 
> this is my cmd:
> cmd:
> -
> ./ffmpeg -init_hw_device opencl=cl:0.0 -filter_hw_device cl -i 
> "udp://230.0.0.100:53007?overrun_nonfatal=1" -filter_complex 
> "color=c=black@1:s=1280x720,hwupload[bg];[0:v]setpts=PTS-STARTPTS,scale=w=426:h=240,hwupload[v0];[bg][v0]overlay_opencl=x=0:y=0
>  ,hwdownload " -c:v hevc_nvenc -b:v 4M -r 25 -g 25 -f mpegts 
> "udp://230.0.0.240:53001?pkt_size=1316=192.168.172.115_nonfatal=1_size=286720=64=4096000"
> -

Does it work if you remove all of the other external stuff (replace the udp 
with normal files and nvenc with a software encoder)?

I don't have that other stuff available for testing, but the following command 
without them does work for me (using Beignet):

./ffmpeg_g -init_hw_device opencl=cl:0.0 -filter_hw_device cl -i 
input_1080p.mp4 -filter_complex 
"color=c=black@1:s=1280x720,hwupload[bg];[0:v]setpts=PTS-STARTPTS,scale=w=426:h=240,hwupload[v0];[bg][v0]overlay_opencl=x=0:y=0
 ,hwdownload " -c:v libx264 -b:v 4M -r 25 -g 25 -f mpegts out.ts

A few other thoughts:
* overlay_opencl in isolation isn't necessarily faster or even using less CPU 
than normal overlay when you take into account the extra upload/download steps 
needed (it's more intended for cases when you have interop and can keep things 
on the GPU).
* Format selection is kindof tricky, because the negotiation can't see through 
properly to the underlying hardware formats.  You could try adding 
"format=yuv420p" (or another choice, maybe nv12) before the hwupload instances, 
though I'm not sure why your setup would give a different answer to mine there.
* A log with "-v debug" will have a bit more information about the OpenCL 
device and setup - please post that if you still can't get it to work.

- Mark

> report:
> ---
> --enable-doc --enable-postproc --enable-bzlib --enable-zlib --enable-parsers 
> --enable-libx264 --enable-libx265 --enable-libmp3lame --enable-libfdk-aac 
> --enable-libspeex --enable-pthreads --extra-libs=-lpthread --enable-decoders 
> --enable-encoders --enable-avfilter --enable-muxers --enable-demuxers 
> --enable-nvenc --enable-cuvid --enable-cuda --enable-libnpp --enable-opencl 
> --enable-decoders
>   libavutil  56.  7.100 / 56.  7.100
>   libavcodec 58.  9.100 / 58.  9.100
>   libavformat58.  3.100 / 58.  3.100
>   libavdevice58.  0.100 / 58.  0.100
>   libavfilter 7. 11.101 /  7. 11.101
>   libswscale  5.  0.101 /  5.  0.101
>   libswresample   3.  0.101 /  3.  0.101
>   libpostproc55.  0.100 / 55.  0.100
> Input #0, mpegts, from 'udp://230.0.0.100:53007?overrun_nonfatal=1':
>   Duration: N/A, start: 2197.197333, bitrate: N/A
>   Program 1 
> Metadata:
>   service_name: ffmpeg
>   service_provider: ffmpeg
> Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), 
> yuv420p(progressive), 428x240 [SAR 320:321 DAR 16:9], 25 fps, 25 tbr, 90k 
> tbn, 50 tbc
> Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 
> stereo, fltp, 58 kb/s
> Stream mapping:
>   Stream #0:0 (h264) -> setpts (graph 0)
>   hwdownload (graph 0) -> Stream #0:0 (hevc_nvenc)
>   Stream #0:1 -> #0:1 (aac (native) -> mp2 (native))
> Press [q] to stop, [?] for help
> [Parsed_overlay_opencl_5 @ 0x4f22e80] Failed to finish command queue: -36.
> Error while filtering: Input/output error
> Failed to inject frame into filter network: Input/output error
> Error while processing the decoded data for stream #0:0
> Conversion failed!
> --
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] HEVC Conformance Test Failed

2017-11-29 Thread Mark Thompson
On 29/11/17 12:58, Carl Eugen Hoyos wrote:
> 2017-11-29 11:19 GMT+01:00 Zhenan Lin :
> 
>> CONFWIN_A_Sony_1/CONFWIN_A_Sony_1/CONFWIN_A_Sony_1.bit
>> HM 16.9: a3ce3f936ff69ff1ec2621a622dd37ac
>> FFmpeg: c0a13e81b3a68c4263f240eb99a281b0
> 
> Needs "-vf crop=412:236:2:0" for bit-identical output.
> Not sure if we want to support this, in any case, no
> warning is shown, so there is a bug.

"-flags unaligned" does this in the decoder (and is used in the FATE tests 
which include this file).

I think unaligned top/left cropping is also considered insane unless/until some 
real encoder actually uses it - this single conformance file is essentially the 
only instance of it seen in the wild.

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] RTSP stream fps 29.97 instead of 30

2017-10-05 Thread Mark Thompson
On 05/10/17 07:48, Carl Eugen Hoyos wrote:
> 2017-10-04 23:20 GMT+02:00 Mark Thompson <s...@jkqxz.net>:
>> On 03/10/17 21:22, Jonathan Viney wrote:
>>> On Tue, Oct 3, 2017 at 1:56 PM, Mark Thompson <s...@jkqxz.net> wrote:
>>>
>>>> On 03/10/17 00:56, Jonathan Viney wrote:
> 
>>>> parameter-sets=Z2QAM6w0yAPABD/8BbgICAoAAAfSAAHUwdDAAGP/
>>>> gAAMf+Nd5caGAAMf/AAAY/8a7y4b04A=,aO48MA==
>>>>
>>>> This SPS in the SDP decodes with:
>>>>
>>>> @126   VUI: timing_info_present_flag                                1 (1)
>>>> @127   VUI: num_units_in_tick          00000000000000000000001111101001 (1001)
>>>> @159   VUI: time_scale                 00000000000000001110101001100000 (60000)
>>>> @191   VUI: fixed_frame_rate_flag                                   1 (1)
>>>>
>>>> That is, the H.264 stream indeed has a fixed framerate of 60000/1001/2 ~
>>>> 29.97fps, and that is what ffmpeg is reading - the RTSP layer is likely
>>>> lying about it.  (You may be able to count frames to check that.)
>>>>
>>>> If you want an exactly 30fps output then you should tell the camera to
>>>> generate a 30fps stream rather than the 30000/1001fps it currently is.
> 
>>> Thanks Mark. That's interesting, I'll see if there is a way to adjust the
>>> stream from the camera.
>>>
>>> How did you decode the data from the sprop-parameter-sets value?
>>
>> It's just the SPS and PPS base64 coded: decode the base64,
>> add start codes, feed it to any H.264 stream parser.
> 
> My question would have been:
> Where can I find any H.264 stream parser?

<http://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/216249.html> - the 
trace_headers filter will generate traces like this.

Unfortunately that doesn't actually work for the stream fragment here because 
outer layers in ffmpeg don't like a stream containing no slices - I instead 
used the JM reference decoder in this case, which is happy to parse an isolated 
NAL unit.
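
Driving the BSF directly from the API should side-step those outer layers, since 
you hand it the packet yourself (untested for this particular fragment).  A rough 
sketch, assuming a build whose BSF declarations live in <libavcodec/bsf.h> (older 
releases put them in avcodec.h):

#include <libavcodec/bsf.h>

// Rough sketch (untested): run one packet through trace_headers so the parsed
// headers end up in the av_log() output.  trace_headers passes the data
// through unchanged.
static int trace_one_packet(const AVCodecParameters *par, AVPacket *pkt)
{
    const AVBitStreamFilter *f = av_bsf_get_by_name("trace_headers");
    AVBSFContext *bsf = NULL;
    int ret;

    if (!f)
        return AVERROR_BSF_NOT_FOUND;
    if ((ret = av_bsf_alloc(f, &bsf)) < 0)
        return ret;
    avcodec_parameters_copy(bsf->par_in, par);   // at least codec_id is needed
    if ((ret = av_bsf_init(bsf)) >= 0 &&
        (ret = av_bsf_send_packet(bsf, pkt)) >= 0)
        ret = av_bsf_receive_packet(bsf, pkt);
    av_bsf_free(&bsf);
    return ret < 0 ? ret : 0;
}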

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] RTSP stream fps 29.97 instead of 30

2017-10-04 Thread Mark Thompson
On 03/10/17 21:22, Jonathan Viney wrote:
> On Tue, Oct 3, 2017 at 1:56 PM, Mark Thompson <s...@jkqxz.net> wrote:
> 
>> On 03/10/17 00:56, Jonathan Viney wrote:
>>> Hi,
>>>
>>> I am pulling an RTSP stream from an Axis 4K IP camera. The stream from
>> the
>>> camera is 30fps, but the resulting stream from ffmpeg is 29.97fps. Here
>> is
>>> the command:
>>>
>>> ffmpeg -rtsp_transport tcp -i rtsp://10.9.9.1:554/axis-media/media.amp
>>> -loglevel debug
>>>
>>> The rtsp log shows a framerate of 30:
>>>
>>> a=framerate:30.00
>>>
>>> The full log output is below. Is there a way to force the framerate to 30
>>> fps?
>>>
>>> This video undergoes a second pass where it gets re-encoded, so we could
>>> adjust the frame rate there if necessary. But it would be preferable for
>> it
>>> to be right at this step.
>>>
>>> ...
>>> [rtsp @ 0x40d0d80] SDP:
>>> v=0
>>> o=- 8374289283112756277 1 IN IP4 10.9.9.1
>>> s=Session streamed with GStreamer
>>> i=rtsp-server
>>> t=0 0
>>> a=tool:GStreamer
>>> a=type:broadcast
>>> a=range:npt=now-
>>> a=control:rtsp://10.9.9.1:554/axis-media/media.amp
>>> m=video 0 RTP/AVP 96
>>> c=IN IP4 0.0.0.0
>>> b=AS:24
>>> a=rtpmap:96 H264/90000
>>> a=fmtp:96
>>> packetization-mode=1;profile-level-id=640033;sprop-
>> parameter-sets=Z2QAM6w0yAPABD/8BbgICAoAAAfSAAHUwdDAAGP/
>> gAAMf+Nd5caGAAMf/AAAY/8a7y4b04A=,aO48MA==
>>
>> This SPS in the SDP decodes with:
>>
>> @126   VUI: timing_info_present_flag                                1 (1)
>> @127   VUI: num_units_in_tick          00000000000000000000001111101001 (1001)
>> @159   VUI: time_scale                 00000000000000001110101001100000 (60000)
>> @191   VUI: fixed_frame_rate_flag                                   1 (1)
>>
>> That is, the H.264 stream indeed has a fixed framerate of 60000/1001/2 ~
>> 29.97fps, and that is what ffmpeg is reading - the RTSP layer is likely
>> lying about it.  (You may be able to count frames to check that.)
>>
>> If you want an exactly 30fps output then you should tell the camera to
>> generate a 30fps stream rather than the 30000/1001fps it currently is.
>>
>> - Mark
>>
> 
> Thanks Mark. That's interesting, I'll see if there is a way to adjust the
> stream from the camera.
> 
> How did you decode the data from the sprop-parameter-sets value?

It's just the SPS and PPS base64 coded: decode the base64, add start codes, 
feed it to any H.264 stream parser.
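
For example, a quick untested sketch of that using libavutil's base64 helper 
(call it once for each comma-separated sprop-parameter-sets entry, then feed the 
resulting file to whatever parser you like):

#include <stdint.h>
#include <stdio.h>
#include <libavutil/base64.h>

// Rough sketch (untested): decode one base64 parameter-set entry and write it
// out as an Annex-B NAL unit (start code followed by the raw SPS/PPS payload).
static int write_annexb_nal(FILE *out, const char *b64)
{
    uint8_t buf[512];
    static const uint8_t start_code[4] = { 0, 0, 0, 1 };
    int size = av_base64_decode(buf, b64, sizeof(buf));

    if (size < 0)
        return size;                        // not valid base64
    fwrite(start_code, 1, sizeof(start_code), out);
    fwrite(buf, 1, size, out);
    return 0;
}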

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] RTSP stream fps 29.97 instead of 30

2017-10-02 Thread Mark Thompson
On 03/10/17 00:56, Jonathan Viney wrote:
> Hi,
> 
> I am pulling an RTSP stream from an Axis 4K IP camera. The stream from the
> camera is 30fps, but the resulting stream from ffmpeg is 29.97fps. Here is
> the command:
> 
> ffmpeg -rtsp_transport tcp -i rtsp://10.9.9.1:554/axis-media/media.amp
> -loglevel debug
> 
> The rtsp log shows a framerate of 30:
> 
> a=framerate:30.00
> 
> The full log output is below. Is there a way to force the framerate to 30
> fps?
> 
> This video undergoes a second pass where it gets re-encoded, so we could
> adjust the frame rate there if necessary. But it would be preferable for it
> to be right at this step.
> 
> ...
> [rtsp @ 0x40d0d80] SDP:
> v=0
> o=- 8374289283112756277 1 IN IP4 10.9.9.1
> s=Session streamed with GStreamer
> i=rtsp-server
> t=0 0
> a=tool:GStreamer
> a=type:broadcast
> a=range:npt=now-
> a=control:rtsp://10.9.9.1:554/axis-media/media.amp
> m=video 0 RTP/AVP 96
> c=IN IP4 0.0.0.0
> b=AS:24
> a=rtpmap:96 H264/90000
> a=fmtp:96
> packetization-mode=1;profile-level-id=640033;sprop-parameter-sets=Z2QAM6w0yAPABD/8BbgICAoAAAfSAAHUwdDAAGP/gAAMf+Nd5caGAAMf/AAAY/8a7y4b04A=,aO48MA==

This SPS in the SDP decodes with:

@126   VUI: timing_info_present_flag                                1 (1)
@127   VUI: num_units_in_tick          00000000000000000000001111101001 (1001)
@159   VUI: time_scale                 00000000000000001110101001100000 (60000)
@191   VUI: fixed_frame_rate_flag                                   1 (1)

That is, the H.264 stream indeed has a fixed framerate of 60000/1001/2 ~ 
29.97fps (the frame rate is time_scale / (2 * num_units_in_tick) = 60000/2002), 
and that is what ffmpeg is reading - the RTSP layer is likely lying about it.  
(You may be able to count frames to check that.)

If you want an exactly 30fps output then you should tell the camera to generate 
a 30fps stream rather than the 30000/1001fps it currently is.

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] memory leak with vaapi decode since ffmpeg_vaapi: Convert to use hw_frames_ctx only

2017-05-28 Thread Mark Thompson
On 28/05/17 12:22, Andy Furniss wrote:
> ==13089== HEAP SUMMARY:
> ==13089== in use at exit: 1,641,516,251 bytes in 201,131 blocks
> ==13089==   total heap usage: 2,075,668 allocs, 1,874,537 frees, 
> 68,823,339,225 bytes allocated
> ==13089==
> ==13089== 368 bytes in 1 blocks are possibly lost in loss record 908 of 932
> ==13089==at 0x4C28B06: malloc 
> (/home/andy/Src2016/valgrind-3.12.0/coregrind/m_replacemalloc/vg_replace_malloc.c:299)
> ==13089==by 0xFBED2DE: vlVaCreateBuffer 
> (/mesa/src/gallium/state_trackers/va/buffer.c:56)
> ==13089==by 0x603BB82: vaCreateBuffer (/libva/va/va.c:1116)
> ==13089==by 0xED2C21: ff_vaapi_decode_make_param_buffer 
> (/ffmpeg/libavcodec/vaapi_decode.c:40)
> ==13089==by 0xAFDA07: vaapi_h264_start_frame 
> (/ffmpeg/libavcodec/vaapi_h264.c:286)
> ==13089==by 0x853E0B: decode_nal_units (/ffmpeg/libavcodec/h264dec.c:690)
> ==13089==by 0x853E0B: h264_decode_frame 
> (/ffmpeg/libavcodec/h264dec.c:1008)
> ==13089==by 0xA0A54F: frame_worker_thread 
> (/ffmpeg/libavcodec/pthread_frame.c:199)
> ==13089==by 0x8D12433: start_thread (in /lib/libpthread-2.22.so)
> ==13089==by 0x901106C: clone (in /lib/libc-2.22.so)
> ==13089==
> ==13089== 22,968 bytes in 11 blocks are possibly lost in loss record 922 of 
> 932
> ==13089==at 0x4C28B06: malloc 
> (/home/andy/Src2016/valgrind-3.12.0/coregrind/m_replacemalloc/vg_replace_malloc.c:299)
> ==13089==by 0xFBED2DE: vlVaCreateBuffer 
> (/mesa/src/gallium/state_trackers/va/buffer.c:56)
> ==13089==by 0x603BB82: vaCreateBuffer (/libva/va/va.c:1116)
> ==13089==by 0xED2D46: ff_vaapi_decode_make_slice_buffer 
> (/ffmpeg/libavcodec/vaapi_decode.c:86)
> ==13089==by 0xAFCCBB: vaapi_h264_decode_slice 
> (/ffmpeg/libavcodec/vaapi_h264.c:380)
> ==13089==by 0x853CCD: decode_nal_units (/ffmpeg/libavcodec/h264dec.c:703)
> ==13089==by 0x853CCD: h264_decode_frame 
> (/ffmpeg/libavcodec/h264dec.c:1008)
> ==13089==by 0xA0A54F: frame_worker_thread 
> (/ffmpeg/libavcodec/pthread_frame.c:199)
> ==13089==by 0x8D12433: start_thread (in /lib/libpthread-2.22.so)
> ==13089==by 0x901106C: clone (in /lib/libc-2.22.so)
> ==13089==
> ==13089== 3,042,432 (800,640 direct, 2,241,792 indirect) bytes in 10,008 
> blocks are definitely lost in loss record 925 of 932
> ==13089==at 0x4C2A898: calloc 
> (/home/andy/Src2016/valgrind-3.12.0/coregrind/m_replacemalloc/vg_replace_malloc.c:711)
> ==13089==by 0xFBED2C0: vlVaCreateBuffer 
> (/mesa/src/gallium/state_trackers/va/buffer.c:49)
> ==13089==by 0x603BB82: vaCreateBuffer (/libva/va/va.c:1116)
> ==13089==by 0xED2C21: ff_vaapi_decode_make_param_buffer 
> (/ffmpeg/libavcodec/vaapi_decode.c:40)
> ==13089==by 0xAFDBA9: vaapi_h264_start_frame 
> (/ffmpeg/libavcodec/vaapi_h264.c:299)
> ==13089==by 0x853E0B: decode_nal_units (/ffmpeg/libavcodec/h264dec.c:690)
> ==13089==by 0x853E0B: h264_decode_frame 
> (/ffmpeg/libavcodec/h264dec.c:1008)
> ==13089==by 0xA0A54F: frame_worker_thread 
> (/ffmpeg/libavcodec/pthread_frame.c:199)
> ==13089==by 0x8D12433: start_thread (in /lib/libpthread-2.22.so)
> ==13089==by 0x901106C: clone (in /lib/libc-2.22.so)
> ==13089==
> ==13089== 4,483,216 (800,640 direct, 3,682,576 indirect) bytes in 10,008 
> blocks are definitely lost in loss record 927 of 932
> ==13089==at 0x4C2A898: calloc 
> (/home/andy/Src2016/valgrind-3.12.0/coregrind/m_replacemalloc/vg_replace_malloc.c:711)
> ==13089==by 0xFBED2C0: vlVaCreateBuffer 
> (/mesa/src/gallium/state_trackers/va/buffer.c:49)
> ==13089==by 0x603BB82: vaCreateBuffer (/libva/va/va.c:1116)
> ==13089==by 0xED2C21: ff_vaapi_decode_make_param_buffer 
> (/ffmpeg/libavcodec/vaapi_decode.c:40)
> ==13089==by 0xAFDA07: vaapi_h264_start_frame 
> (/ffmpeg/libavcodec/vaapi_h264.c:286)
> ==13089==by 0x853E0B: decode_nal_units (/ffmpeg/libavcodec/h264dec.c:690)
> ==13089==by 0x853E0B: h264_decode_frame 
> (/ffmpeg/libavcodec/h264dec.c:1008)
> ==13089==by 0xA0A54F: frame_worker_thread 
> (/ffmpeg/libavcodec/pthread_frame.c:199)
> ==13089==by 0x8D12433: start_thread (in /lib/libpthread-2.22.so)
> ==13089==by 0x901106C: clone (in /lib/libc-2.22.so)
> ==13089==
> ==13089== 6,872,207 bytes in 150 blocks are possibly lost in loss record 928 
> of 932
> ==13089==at 0x4C28B06: malloc 
> (/home/andy/Src2016/valgrind-3.12.0/coregrind/m_replacemalloc/vg_replace_malloc.c:299)
> ==13089==by 0xFBED2DE: vlVaCreateBuffer 
> (/mesa/src/gallium/state_trackers/va/buffer.c:56)
> ==13089==by 0x603BB82: vaCreateBuffer (/libva/va/va.c:1116)
> ==13089==by 0xED2DA5: ff_vaapi_decode_make_slice_buffer 
> (/ffmpeg/libavcodec/vaapi_decode.c:100)
> ==13089==by 0xAFCCBB: vaapi_h264_decode_slice 
> (/ffmpeg/libavcodec/vaapi_h264.c:380)
> ==13089==by 0x853CCD: decode_nal_units (/ffmpeg/libavcodec/h264dec.c:703)
> ==13089==by 0x853CCD: h264_decode_frame 
> (/ffmpeg/libavcodec/h264dec.c:1008)
> ==13089==by 0xA0A54F: 

Re: [FFmpeg-user] CodecPrivateData is empty when h264_vaapi encoder used

2017-02-28 Thread Mark Thompson
On 28/02/17 18:52, w_boba wrote:
> Mark,
> 
> As I have mentioned in my first post, I am working with ffmpeg version
> n3.2.4-4-g36fff6c, which is near-current, and commit you have mentioned is
> dated Dec 2016. I believe that commit is already included in my branch, last
> commit I see there is dated Feb 2017. 
> 
> W.

You believe incorrectly: the branchpoint was before the commit in question.

$ git log HEAD | grep 51020ad
commit 51020adcecf4004c1586a708d96acc6cbddd050a
$ git log 36fff6c | grep 51020ad
$ 
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] CodecPrivateData is empty when h264_vaapi encoder used

2017-02-28 Thread Mark Thompson
On 28/02/17 17:25, w_boba wrote:
> I forgot to add: "CodecPrivateData" attribute is "required" for SmoothStream
> according to MS document:
> 
> https://msdn.microsoft.com/en-us/library/ff728116%28v=vs.95%29.aspx 

I'm not familiar with this format at all, but it looks like it will be 
generating that field from the stream extradata?  Try applying 
,
 or try with any version after that change.

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] VAAPI decoding/encoding of several video streams in parallel

2016-12-20 Thread Mark Thompson
On 20/12/16 18:21, Anton Sviridenko wrote:
> I want to use hardware acceleration for processing multiple realtime
> videostreams (coming over RTSP).
> I've read this thread -
> http://ffmpeg.org/pipermail/ffmpeg-user/2016-December/034530.html
> 
> and have some questions related to scaling this everything to several
> video streams:
> 
> 1) Is it possible at all to use VAAPI acceleration for several
> independent video streams simultaneously?

Yes, the kernel drivers deal with all of the detail here - it's exactly the 
same as using your GPU to run multiple OpenGL programs at the same time.

> 2) How should I initialize VAAPI related stuff? Do I have to create
> separate hwframe context for each stream?

Not necessarily, it depends on what you want to do.

Some things to consider:
* A hwframe pool needs to be fixed-size to use as output from a decoder or 
filter (note the render_targets argument to vaCreateContext(), which you supply 
the surfaces member of AVVAAPIFramesContext to), so can be exhausted.  Decoders 
and encoders may both hold on to frames for some length of time (to use as 
reference frames, to wait for the stream delay), so a pool used by multiple of 
them needs to be large enough to not run out even when they sit on some of the 
surfaces for a while.
* All surfaces in a single hwframe context are the same size and format.  While 
it's perfectly valid to decode a frame onto a surface which is larger than the 
frame, it does waste memory so you may want to make the surfaces of the correct 
size when that is known.
* A filter or encoder should only be given input which matches the hwframe 
context you declared as its input when you created it.  This is primarily an 
API restriction and some other cases do work some of the time, but keeping to 
it will avoid any surprises.

The easiest way to do it is probably to follow what ffmpeg itself does: make a 
single hwframe context for the output of each decoder or other operation, and 
then give that to whatever the next thing is which will consume those frames.  
This won't necessarily be sufficient in all cases - if you have something more 
complex with output from multiple decoders being combined somehow then you'll 
need to think about it more carefully keeping the restrictions above in mind.

> 3) Can I use single hwdevice and vaapi_context instances for all
> streams or there should be own instance for each decoded/encoded
> stream?

Typically you will want to make one device and then use it everywhere.  
Multiple devices should also work, but note that different devices can't 
interoperate at all (so a decoder, scaler and encoder working with the same 
surfaces and hwframe context need to be using the same device, say).

You need to make exactly one struct vaapi_context for each decoder (with 
configuration appropriate to that decoder).
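
To put the above together, a minimal untested sketch (the device path and pool 
size are only examples, and the per-decoder struct vaapi_context setup is left 
out):

#include <libavutil/hwcontext.h>
#include <libavutil/pixfmt.h>

// One device for the whole process...
static AVBufferRef *hw_device;

static int init_hw_device(void)
{
    return av_hwdevice_ctx_create(&hw_device, AV_HWDEVICE_TYPE_VAAPI,
                                  "/dev/dri/renderD128", NULL, 0);
}

// ...and one fixed-size surface pool per decoder, attached to it via
// AVCodecContext.hw_frames_ctx.  pool_size needs headroom for the surfaces
// the codecs sit on (reference frames, reordering delay).
static AVBufferRef *make_stream_pool(int width, int height, int pool_size)
{
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(hw_device);
    AVHWFramesContext *frames;

    if (!frames_ref)
        return NULL;
    frames = (AVHWFramesContext *)frames_ref->data;
    frames->format            = AV_PIX_FMT_VAAPI;
    frames->sw_format         = AV_PIX_FMT_NV12;   // 8-bit 4:2:0 decode output
    frames->width             = width;
    frames->height            = height;
    frames->initial_pool_size = pool_size;
    if (av_hwframe_ctx_init(frames_ref) < 0)
        av_buffer_unref(&frames_ref);
    return frames_ref;
}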


- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] The transcoded file can't see the video played by the JW player which used the VAAPI to transcode file to the FLV format.

2016-12-05 Thread Mark Thompson
On 05/12/16 12:51, Archer Chang wrote:
> I use VAAPI hardware acceleration to transcode a file to the FLV format; the
> command is as follows.
> 
> ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi
> -hwaccel_output_format vaapi -hwaccel_lax_profile_check -i infile.mkv  -vf
> "format=nv12|vaapi,hwupload" -c:v h264_vaapi -c:a aac -ac 2 -ar 44100
> *outfile1.flv*
> 
> When the transcoded file (outfile.flv) is played by the JW player, only the
> audio can be heard and there is no video.  But when I transcode the file to MKV
> and then stream-copy that to FLV, the resulting file plays normally in the JW
> player. The commands are as follows.
> 
> ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi
> -hwaccel_output_format vaapi -hwaccel_lax_profile_check -i infile.mkv  -vf
> "format=nv12|vaapi,hwupload" -c:v h264_vaapi -c:a aac -ac 2 -ar 44100
>  outfile.mkv
> ffmpeg -i outfile.mkv  -c:v copy -c:a copy  *outfile2.flv*
> 
> I try to use the flv parser utility to find out the difference between the
> outfile1.flv and the outfile2.flv.
> Seems the Video Tag1 were different. the content as follows.
> 
> outfile1.flv (transcode using the ffmpeg directly with VAAPI)
> 09 00 00 05 00 00 00 00 00 00 00 17 00 00 00 00
> 
> outfile2.flv(mke the stream copy from transcoded file)
> 09 00 00 30 00 00 00 00 00 00 00 17 00 00 00 00
> 01 6E 00 28 FF E1 00 1A 67 6E 00 28 A6 CD 94 07
> 80 22 7E 5C 04 40 00 00 FA 40 00 2E E0 03 C6 0C
> 65 80 01 00 06 68 EB E3 CB 22 C0
> 
> I am not familiar with video transcoding. Tracing flvenc.c, it seems that
> par->extradata_size is 0 when using VAAPI to transcode the file to FLV
> directly, but it has some value when stream copying or when using a software
> transcode.
> 
> ffmpeg  -i infile.mkv   -c:v libx264 -c:a aac -ac 2 -ar 44100  *outfile.flv
>  (can be played  normally by JW Player)*

Is this JW player something which can only play streams with global headers, 
then?

If so, try applying 

 and see if it works.

(This is on the assumption that you are using a packed-header VAAPI driver like 
i965, if you are using a whole-stream driver like mesa/gallium then it isn't 
really fixable.)

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Mark Thompson
On 02/12/16 14:05, Victor dMdB wrote:
> Thanks for the response Mark!
> 
> I might be misreading the source code, but for decoding, doesn't
> vaapi_decode_init
> do all the heavy lifting? 
> It seems to already call the functions and sets the variables you
> mentioned.
> 
> So might the code be just:
> 
> avcodec_open2(codecCtx, codec,NULL);
> vaapi_decode_init(codecCtx); 

vaapi_decode_init() is a function inside the ffmpeg utility, not any of the 
libraries.  You need to implement what it does in your own application.

You can copy/adapt the relevant code directly from the ffmpeg utility into your 
application if you want (assuming you comply with the licence terms).

- Mark


> On Fri, 2 Dec 2016, at 09:03 PM, Mark Thompson wrote:
>> On 02/12/16 10:57, Victor dMdB wrote:
>>> I was wondering if there were any examples of implementations with
>>> avformatcontext?
>>>
>>> I've looked at the source of ffmpeg vaapi implementation:
>>> https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
>>>
>>> and there is a reference to the cli values here
>>> https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
>>>
>>> But I'm not really sure how one actually implements it within either the
>>> decoding or encoding pipeline?
>>
>> Start by making an hwdevice.  This can be done with
>> av_hwdevice_ctx_create(), or if you already have a VADisplay (for
>> example, to do stuff in X via DRI[23]) you can use
>> av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().
>>
>> For a decoder:
>>
>> Make the decoder as you normally would for software.  You must set an
>> AVCodecContext.get_format callback.
>> Start feeding data to the decoder.
>> Once there is enough data to determine the output format, the
>> get_format callback will be called (this will always happen before any
>> output is generated).
>> The callback has a set of possible formats to use, this will contain
>> AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams
>> are supported for a given decoder - for H.264 the hwaccel only supports
>> YUV 4:2:0 in 8-bit depth).
>> Make an hwframe context for the output frames and a struct vaapi_context
>> containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and
>> its callees for this part.
>> Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode
>> context (AVCodecContext.hwaccel_context) to the decoder.
>> Once you return from the callback, decoding continues and will give you
>> AV_PIX_FMT_VAAPI frames.
>> If you need the output frames in normal memory rather than GPU memory,
>> you can copy them back with av_hwframe_transfer_data().
>>
>> For an encoder:
>>
>> Find an hwframe context to use as the encoder input.  For a transcode
>> case this can be the one from the decoder above, or it could be output
>> from a filter like scale_vaapi.  If only have frames in normal memory,
>> you need to make a new one here.
>> Make the encoder as you normally would (you'll need to get the codec by
>> name (like "h264_vaapi"), because it will not choose it by default with
>> just the ID), and set AVCodecContext.hw_frames_ctx with your hwframe
>> context.
>> Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe
>> context.
>> If you only have input frames in normal memory, you will need to upload
>> them to GPU memory in the hwframe context with av_hwframe_transfer_data()
>> before giving them to the encoder.
>>
>>
>> - Mark
>>
>>
>> * It is intended that struct vaapi_context will be deprecated completely
>> soon, and this part will not be required (lavc will handle that context
>> creation).
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] VAAPI Decoding/Encoding with C code

2016-12-02 Thread Mark Thompson
On 02/12/16 10:57, Victor dMdB wrote:
> I was wondering if there were any examples of implementations with
> avformatcontext?
> 
> I've looked at the source of ffmpeg vaapi implementation:
> https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
> 
> and there is a reference to the cli values here
> https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
> 
> But I'm not really sure how one actually implements it within either the
> decoding or encoding pipeline?

Start by making an hwdevice.  This can be done with av_hwdevice_ctx_create(), 
or if you already have a VADisplay (for example, to do stuff in X via DRI[23]) 
you can use av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().

For a decoder:

Make the decoder as you normally would for software.  You must set an 
AVCodecContext.get_format callback.
Start feeding data to the decoder.
Once there is enough data to determine the output format, the get_format 
callback will be called (this will always happen before any output is 
generated).
The callback has a set of possible formats to use, this will contain 
AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams are 
supported for a given decoder - for H.264 the hwaccel only supports YUV 4:2:0 
in 8-bit depth).
Make an hwframe context for the output frames and a struct vaapi_context 
containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and its 
callees for this part.
Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode 
context (AVCodecContext.hwaccel_context) to the decoder.
Once you return from the callback, decoding continues and will give you 
AV_PIX_FMT_VAAPI frames.
If you need the output frames in normal memory rather than GPU memory, you can 
copy them back with av_hwframe_transfer_data().
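
As an untested sketch of the shape of this, using the simpler route that the 
footnote at the end of this mail is pointing towards (attach a device and let 
libavcodec create the hwframe context itself, so no struct vaapi_context; the 
device path is just an example):

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

static enum AVPixelFormat get_vaapi_format(AVCodecContext *avctx,
                                           const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_VAAPI)
            return *p;                // hardware decoding works for this stream
    return fmts[0];                   // otherwise fall back to a software format
}

static int open_vaapi_decoder(AVCodecContext *avctx, const AVCodec *codec)
{
    AVBufferRef *device = NULL;
    int ret = av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                                     "/dev/dri/renderD128", NULL, 0);
    if (ret < 0)
        return ret;
    avctx->get_format    = get_vaapi_format;
    avctx->hw_device_ctx = device;    // the codec context owns this reference now
    return avcodec_open2(avctx, codec, NULL);
}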

For an encoder:

Find an hwframe context to use as the encoder input.  For a transcode case this 
can be the one from the decoder above, or it could be output from a filter like 
scale_vaapi.  If you only have frames in normal memory, you need to make a new one 
here.
Make the encoder as you normally would (you'll need to get the codec by name 
(like "h264_vaapi"), because it will not choose it by default with just the 
ID), and set AVCodecContext.hw_frames_ctx with your hwframe context.
Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe context.
If you only have input frames in normal memory, you will need to upload them to 
GPU memory in the hwframe context with av_hwframe_transfer_data() before giving 
them to the encoder.
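
Again as a rough, untested sketch for the transcode case, taking the hwframe 
context straight from an opened decoder (error handling mostly omitted; the 
timebase is just an example, and dec->hw_frames_ctx only exists once the decoder 
has actually started producing VAAPI frames):

#include <libavcodec/avcodec.h>

static AVCodecContext *open_vaapi_encoder(AVCodecContext *dec)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_vaapi");
    AVCodecContext *enc  = avcodec_alloc_context3(codec);

    enc->width               = dec->width;
    enc->height              = dec->height;
    enc->time_base           = (AVRational){ 1, 25 };   // assumed 25fps input
    enc->framerate           = (AVRational){ 25, 1 };
    enc->sample_aspect_ratio = dec->sample_aspect_ratio;
    enc->pix_fmt             = AV_PIX_FMT_VAAPI;
    enc->hw_frames_ctx       = av_buffer_ref(dec->hw_frames_ctx);

    if (avcodec_open2(enc, codec, NULL) < 0) {
        avcodec_free_context(&enc);
        return NULL;
    }
    return enc;   // feed it AV_PIX_FMT_VAAPI frames as usual
}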


- Mark


* It is intended that struct vaapi_context will be deprecated completely soon, 
and this part will not be required (lavc will handle that context creation).

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] hwaccel vaapi and "No VA display found for device"

2016-11-23 Thread Mark Thompson
On 23/11/16 11:48, desktop ready wrote:
> On Wed, 23 Nov 2016 10:00:25 +0000, Mark Thompson <s...@jkqxz.net> wrote :
> 
>> On 23/11/16 03:09, desktop ready wrote:
>>> Hello,
>>>
>>> I would like to confirm a bug/problem before submitting a bug
>>> report. I am working on Debian Jessie/Stable amd64 on an Intel
>>> Skylake i3-6320 and would like to decode an HEVC 8-bit encoded UHD
>>> movie.
>>>
>>> With a fresh ffmpeg github checkout I am able to use the following
>>> command without errors:
>>> ffmpeg -hwaccel vaapi -i castle.mp4 -f null -
>>>
>>> However with ffmpeg release 3.2, I have the following errors:
>>> [AVHWDeviceContext @ 0x7fdcf83adba0] No VA display found for
>>> device: . [vaapi @ 0x30a61e0] Failed to create a VAAPI device
>>> vaapi hwaccel requested for input stream #0:0, but cannot be
>>> initialized.
>>> [hevc @ 0x52bad20] Error parsing NAL unit #0.
>>
>> The device selection for VAAPI works as follows, stopping at the
>> first usable device it finds:
>>
>> 1) If the user passed a device name, try to open that name as an X11
>> display. 2) If they didn't, try to open the default X11 display (i.e.
>> $DISPLAY). 3) If the user passed a device name, try to open that name
>> as the path to a DRM device. 4) If they didn't, try to
>> open /dev/dri/renderD128 as a DRM device.
>>
>> Step 4 was added after the release of ffmpeg 3.2, so you only get
>> steps 1-3 there.  Since you didn't pass a device name and
>> (presumably) aren't running X, it doesn't manage to open anything.
>>
>> In general, you always want to give it a device name; the implicit
>> selection may be right in some cases but it's better not to rely on
>> it.
>>
>> ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i
>> castle.mp4 -f null -
>>
> 
> Indeed it works with ffmpeg 3.2 (and the decoding is very fast),
> thanks !
> 
> Now the mystery I have to solve is why Debian ffmpeg 3.2.2 is decoding
> this file without error, but very slowly (with high CPU load).
> 
> Here is the output first with ffmpeg 3.2 from github and after that the
> output from Debian ffmpeg 3.2.
> 
> I would be grateful if someone can point the meaningful differences.

I think the Debian backport version will not include the support for H.265 with 
VAAPI, because it is built against a version of libva which is too old.

H.265 decoding was added in libva 1.5.0, while Debian stable contains 1.4.1.  
Therefore, a backport ffmpeg would be built with the libva 1.4.1 headers and 
hence not include H.265 VAAPI support at all, even though at runtime you have a 
newer version than that.

Building it yourself uses the headers from the newer version which you have 
installed, and therefore does include H.265 support.

The reason you don't get an error is that the -hwaccel option fails silently 
when it cannot support the given codec - this exists to allow transparent 
support for falling back to software decode, which is exactly what has then 
happened in this case.

- Mark

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] hwaccel vaapi and "No VA display found for device"

2016-11-23 Thread Mark Thompson
On 23/11/16 03:09, desktop ready wrote:
> Hello,
> 
> I would like to confirm a bug/problem before submitting a bug report.
> I am working on Debian Jessie/Stable amd64 on an Intel Skylake i3-6320
> and would like to decode an HEVC 8-bit encoded UHD movie.
> 
> With a fresh ffmpeg github checkout I am able to use the following
> command without errors:
> ffmpeg -hwaccel vaapi -i castle.mp4 -f null -
> 
> However with ffmpeg release 3.2, I have the following errors:
> [AVHWDeviceContext @ 0x7fdcf83adba0] No VA display found for device: .
> [vaapi @ 0x30a61e0] Failed to create a VAAPI device
> vaapi hwaccel requested for input stream #0:0, but cannot be
> initialized.
> [hevc @ 0x52bad20] Error parsing NAL unit #0.

The device selection for VAAPI works as follows, stopping at the first usable 
device it finds:

1) If the user passed a device name, try to open that name as an X11 display.
2) If they didn't, try to open the default X11 display (i.e. $DISPLAY).
3) If the user passed a device name, try to open that name as the path to a DRM 
device.
4) If they didn't, try to open /dev/dri/renderD128 as a DRM device.

Step 4 was added after the release of ffmpeg 3.2, so you only get steps 1-3 
there.  Since you didn't pass a device name and (presumably) aren't running X, 
it doesn't manage to open anything.

In general, you always want to give it a device name; the implicit selection 
may be right in some cases but it's better not to rely on it.

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i castle.mp4 -f null 
-

> However what is strange is that the ffmpeg version 3.2.2 on Debian
> Jessie Backports also works without error (but with a high CPU
> workload, which may indicate that hwaccel is not working as expected).
> 
> Ouput of ls -al /dev/dri:
> ls -al /dev/dri
> total 0
> drwxr-xr-x   2 root root   100 nov.  23 00:40 .
> drwxr-xr-x  19 root root  3520 nov.  23 03:59 ..
> crw-rw+  1 root video 226,   0 nov.  23 03:19 card0
> crw-rw   1 root video 226,  64 nov.  23 00:40 controlD64
> crw-rw+  1 root video 226, 128 nov.  23 00:40 renderD128
> 
> Is hwaccel vaapi working or not working on ffmpeg 3.2 to decode HEVC
> 8-bit encoded media ?

Yes.

- Mark

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] unable to capture desktop session using x11grab on debian jessie with ffmpeg 3.1.4

2016-11-08 Thread Mark Thompson
On 08/11/16 21:59, Mark Thompson wrote:
> On 08/11/16 12:17, 肖文良 wrote:
>> Built from the latest git source seems not help.
>>
>> following command runs about 15+ seconds. nothing was captured. If I add 
>> -loglevel debug,  this log keep being printed:
>>
>> cur_dts is invalid (this is harmless if it occurs once at the start per 
>> stream)
>> [rawvideo @ 0x3234840] PACKET SIZE: 4196352, STRIDE: 5464
>>
>> Here is the whole command line and output:
>>
>> ➜  ffmpeg git:(master) ✗ ./ffmpeg  -s 1366x768 -f x11grab -i :0.0 -c libx264 
>> -crf 28 -preset ultrafast /tmp/output.mp4
>> ffmpeg version N-82294-g6f0a171 Copyright (c) 2000-2016 the FFmpeg developers
>>   built with gcc 4.9.2 (Debian 4.9.2-10)
>>   configuration: --disable-yasm --enable-x11grab --enable-gpl 
>> --enable-libx264
>>   libavutil  55. 35.100 / 55. 35.100
>>   libavcodec 57. 66.101 / 57. 66.101
>>   libavformat57. 57.100 / 57. 57.100
>>   libavdevice57.  2.100 / 57.  2.100
>>   libavfilter 6. 66.100 /  6. 66.100
>>   libswscale  4.  3.100 /  4.  3.100
>>   libswresample   2.  4.100 /  2.  4.100
>>   libpostproc54.  2.100 / 54.  2.100
> 
> There is something strange going on here.
> 
> I can reproduce the problem precisely by using the ffmpeg package in debian 
> testing, which is close to a vanilla build of 3.2:
> 
> $ /usr/bin/ffmpeg -y -f x11grab -i :0 out.mp4
> ffmpeg version 3.2-2 Copyright (c) 2000-2016 the FFmpeg developers
>   built with gcc 6.2.0 (Debian 6.2.0-10) 20161027
>   configuration: --prefix=/usr --extra-version=2 --toolchain=hardened 
> --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu 
> --enable-gpl --disable-libtesseract --disable-stripping 
> --disable-decoder=libschroedinger --enable-avresample
> --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass 
> --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio 
> --enable-libebur128 --enable-libflite --enable-libfontconfig 
> --enable-libfreetype --enable-libfribidi --enable-libgme
> --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg 
> --enable-libopus --enable-libpulse --enable-librubberband 
> --enable-libschroedinger --enable-libshine --enable-libsnappy 
> --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora
> --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack 
> --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq 
> --enable-libzvbi --enable-opengl --enable-sdl2 --enable-x11grab 
> --enable-libdc1394 --enable-libiec61883 --enable-openal
> --enable-frei0r --enable-libopencv --enable-libx264 --enable-chromaprint 
> --enable-shared
>   libavutil  55. 34.100 / 55. 34.100
>   libavcodec 57. 64.100 / 57. 64.100
>   libavformat57. 56.100 / 57. 56.100
>   libavdevice57.  1.100 / 57.  1.100
>   libavfilter 6. 65.100 /  6. 65.100
>   libavresample   3.  1.  0 /  3.  1.  0
>   libswscale  4.  2.100 /  4.  2.100
>   libswresample   2.  3.100 /  2.  3.100
>   libpostproc54.  1.100 / 54.  1.100
> 
> But I can't reproduce it at all if I build myself (either git head or the tip 
> of the 3.2 branch).  The 3.2 here really should pretty much identical to the 
> Debian build:
> 
> $ ./ffmpeg -y -f x11grab -i :0 out.mp4
> ffmpeg version n3.2-3-g7568b0f Copyright (c) 2000-2016 the FFmpeg developers
>   built with gcc 6.2.0 (Debian 6.2.0-10) 20161027
>   configuration: --enable-libx264 --enable-gpl --enable-x11grab
>   libavutil  55. 34.100 / 55. 34.100
>   libavcodec 57. 64.100 / 57. 64.100
>   libavformat57. 56.100 / 57. 56.100
>   libavdevice57.  1.100 / 57.  1.100
>   libavfilter 6. 65.100 /  6. 65.100
>   libswscale  4.  2.100 /  4.  2.100
>   libswresample   2.  3.100 /  2.  3.100
>   libpostproc54.  1.100 / 54.  1.100
> 
> The problem with the Debian version is that the timestamps are messed up 
> somehow - I can work around it there by giving both the -framerate and -r 
> options:
> 
> /usr/bin/ffmpeg -y -framerate 30 -r 30 -f x11grab -i :0 out.mp4
> 
> at which point it produces the same result as the 3.2 I built myself.
> 
> So, I downloaded the source for exactly the debian version from 
> https://packages.debian.org/stretch/ffmpeg, applied their patches and built 
> it myself:
> 
> $ ./ffmpeg -y -f x11grab -i :0 out.mp4
> ffmpeg version 3.2 Copyright (c) 2000-2016 the FFmpeg developers
>   built with gcc 6.2.0 (Debian 6.2.0-10) 20161027
>   configuration: --enable-libx264 --enable-gpl --enable-x11grab
>   libavutil  55. 34.100 / 55. 34.100
>   libavcodec 57. 64.100 / 57. 64.100
>   l

Re: [FFmpeg-user] H264 VAAPI Encoder

2016-06-19 Thread Mark Thompson

On 19/06/16 00:34, Andy Furniss wrote:
> Mark Thompson wrote:
>>
>> So, I also went and built mesa with the encode patches and had a go.  With 
>> the changes above, it does get to actually trying to encode, but that nuked 
>> my GPU to the point of requiring reboot.
> 
> Same here, I need both patches then it does encode but there are issues.
> 
>> With the linked patch you should be able to have a go too: if it works for 
>> you then I'm interested in what your setup is.  If it nukes your GPU then, 
>> well, you had some warning at least.  I think I'll wait until it's a bit 
>> more stable before pursuing this further.
> 
> I am using an R9 285 Tonga GPU. This uses the amdgpu kernel driver and is 
> still not totally stable for uvd/vce.
> 
> The instability revolves around powerplay (which IIRC isn't enabled by 
> default yet). I have test cases that will lock vce, but can avoid by not
> letting powerplay vary clocks = force them high.
> 
> What h/w is yours?

R7 360 (Bonaire, a Sea Island), so yours is one generation newer.  I am 
therefore using radeonsi, though I believe the back-end parts talking to 
UVD/VCE here are actually in common between the two drivers so it shouldn't be 
relevant?

> Below is output of a run  that produces corrupt video - I guess this is due 
> to trying to use b frames
> but may be totally wrong :-)

Yeah.  The attempt to use B frames when they aren't supported means the frame 
referencing is all messed up.  When I tried with "-bf 0", it made a stream 
which decoded with no obvious artifacts (though somewhat weird, as noted 
previously).

> The same input encoded OK with gstreamer vaapi. In theory my card should
> do UHD, but there's a bug so I am limited to smaller sizes - older cards 
> probably wouldn't even do this res.
> 
> The results are the same testing with normal SD input + format=nv12.

I only tried 1080p, and it was happy.  I believe that generation won't be able 
to do a bigger stream like the one you tested anyway.

> Here's what it looks like -
> 
> https://drive.google.com/file/d/0BxP5-S1t9VEEUDR4V01Rblp2dlk/view?usp=sharing
> 
> andy [vce-tests]$ time ffm -loglevel debug  -vaapi_device :0 -f rawvideo 
> -framerate 50 -s 2560x1440 -pix_fmt nv12 -i /mnt/ramdisk/ducks-2560x1440.nv12 
> -vframes 20 -vf 'hwupload' -c:v h264_vaapi -profile:v 66 -b:v 40M  -y 
> /mnt/ramdisk/out.mp4

Looks very similar to the streams I got, modulo the funniness around B frames.  
I guess that's reassuring that it's not wildly different with different GPUs, 
at least.

On ref frames, it has max_ref_frames = 3 but only ever uses 1.  Unsure what to 
read into that.


On 19/06/16 00:43, Andy Furniss wrote:
> Mark Thompson wrote:
>> Ok, I tried a bit more (including a few power cycles), and it does work.  
>> The critical extra step is to disable B-frames (that should probably happen 
>> automatically for baseline profile).  It then works with input uploaded from 
>> the CPU or from a decode on the GPU.
>>
>> For example: "./avconv -v 55 -y -vaapi_device /dev/dri/renderD129 -hwaccel 
>> vaapi -hwaccel_output_format vaapi -i in.mp4 -an -vf 
>> 'format=nv12|vaapi,hwupload' -c:v h264_vaapi -profile 66 -bf 0 -b 10M 
>> out.mp4".
>>
>> The bitstream it outputs does work in ffmpeg, but it's very weird.  It has 
>> CABAC enabled (not allowed in baseline), and the POC seems to only advance 
>> on every second frame (as if they were actually fields of an interlaced 
>> stream, except it isn't).
>>
>> So, yeah.  I think there is a lot more work to do before this is generally 
>> usable, but it is certainly working and I will keep testing it as more 
>> versions turn up.
>
> Ahh, good - I hadn't seen this mail before preparing/sending the other one :-(
>
> The cabac thing is interesting - I have previously noticed that mediainfo
> showed gstreamer vaapi as constrained but yes for cabac - but it also said 2 
> for ref frames when ffprobe called one.
>
> gstreamer omx files said no to cabac - I failed to find a way with 
> ffmpeg/ffprobe to see if cabac was off/on - is there one??

Not that I know of.  I test this sort of thing by feeding streams into the 
reference decoder with trace enabled and reading the bitstream trace :/

> Also windows files get called as high, but I don't see any b frames
> though the only windows files I have were made with a game recording app that 
> comes with the driver (raptr).

It will be the driver (or something further out) writing the headers rather 
than the hardware, so I don't regard it as surprising that the labelling of the 
profile is confused.


- Mark

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] H264 VAAPI Encoder

2016-06-18 Thread Mark Thompson
On 18/06/16 16:37, Mark Thompson wrote:
> On 17/06/16 23:33, Andy Furniss wrote:
>>
>> AMD are working on vaapi encode for mesa, only a few patches about so far 
>> and they got rejected - though not for functionality.
>>
>> They do work with gstreamer, but trying above with ffmpeg fails as below 
>> (render node or X) seems the surface is seen as rgb - but the h/w takes nv12.
>>
>> Maybe mesa/the driver is giving false info?
>>
>> vainfo
>> libva info: VA-API version 0.38.1
>> libva info: va_getDriverName() returns 0
>> libva info: User requested driver 'radeonsi'
>> libva info: Trying to open /usr/lib/dri/radeonsi_drv_video.so
>> libva info: Found init function __vaDriverInit_0_38
>> libva info: va_openDriver() returns 0
>> vainfo: VA-API version: 0.38 (libva 1.6.3.pre1)
>> vainfo: Driver version: mesa gallium vaapi
>> vainfo: Supported profile and entrypoints
>>   VAProfileMPEG2Simple: VAEntrypointVLD
>>   VAProfileMPEG2Main  : VAEntrypointVLD
>>   VAProfileVC1Simple  : VAEntrypointVLD
>>   VAProfileVC1Main: VAEntrypointVLD
>>   VAProfileVC1Advanced: VAEntrypointVLD
>>   VAProfileH264Baseline   : VAEntrypointVLD
>>   VAProfileH264Baseline   : VAEntrypointEncSlice
>>   VAProfileH264Baseline   : VAEntrypointEncPicture
>>   VAProfileH264Main   : VAEntrypointVLD
>>   VAProfileH264High   : VAEntrypointVLD
>>   VAProfileNone   : VAEntrypointVideoProc
>>
>> ffmpeg doesn't seem to care about VAProfileH264Baseline
>>
>> I tried avconv and it does care = bail, but I can use -profile 66 and then 
>> get the same error as ffmpeg (though it converts to rgba rather than rgb0)
> 
> Default for H264 is High, so yeah you need -profile 66 to get it to work.
> 
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x3231564e -> unknown.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x32315659 -> unknown.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x56595559 -> unknown.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x59565955 -> unknown.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x41524742 -> bgra.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x41424752 -> rgba.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x58524742 -> bgr0.
>> [AVHWDeviceContext @ 0x3ca4c40] Format 0x58424752 -> rgb0.
> 
> The detection queries the capabilities of a video processor instance 
> (VAProfileNone / VAEntrypointVideoProc) to try to determine which formats are 
> interesting, then ignores the ones which aren't.  It should probably have 
> said "not supported" here rather than "unknown" for nv12 (and yv12 
> immediately below it).
> 
> I've sent a patch to change the behaviour slightly which will improve this 
> case, see 
> <https://lists.libav.org/pipermail/libav-devel/2016-June/077669.html>.
> 
>> [auto-inserted scaler 0 @ 0x3cf2a60] w:720 h:576 fmt:yuv420p sar:1/1 -> 
>> w:720 h:576 fmt:nv12 sar:1/1 flags:0x4
>> [auto-inserted scaler 1 @ 0x3cf4ec0] w:720 h:576 fmt:nv12 sar:1/1 -> w:720 
>> h:576 fmt:rgb0 sar:1/1 flags:0x4
>> [hwupload @ 0x3cd61c0] Surface format is rgb0.
> 
> hwupload reports its input capabilities based on what is returned above, so 
> it will be saying that it can only take the RGB formats as input.  Hence 
> ffmpeg auto-inserts a swscale instance to change it back from nv12 to rgba.  
> (The filter chain becomes 'format=nv12,format=rgba,hwupload'.)
> 
>> [AVHWFramesContext @ 0x3cf1700] Created surface 0x1.
>> [AVHWFramesContext @ 0x3cf1700] Direct mapping possible.
>> [h264_vaapi @ 0x3ce7dc0] Using fixed QP = 20 / 20 / 24 for IDR / P / B 
>> frames.
>> [h264_vaapi @ 0x3ce7dc0] Using (null) as format of reconstructed frames.
>> [AVHWFramesContext @ 0x3d37fa0] Unsupported format: (null).
> 
> The input to the encode is now in VAAPI rgba surfaces, but that doesn't match 
> any of the capabilities, so it doesn't know what format to use for the 
> reconstructed frames.  It should have fallen back to the one supported format 
> (i.e. nv12), but a typo stopped it from doing that: fixed in 
> <https://lists.libav.org/pipermail/libav-devel/2016-June/077668.html>.
> 
>> [h264_vaapi @ 0x3ce7dc0] Failed to initialise reconstructed frame context: 
>> -22.
>> Output #0, matroska, to 'out.mkv':
>> Stream #0:0, 0, 0/0: Unknown: none, SAR 1:1 DAR 0:0
>> Metadata:
>>   encoder : Lavc57.46.100 h264_vaapi
>> Stream mapping:
>>   Stream #0:0 -> #0:0 (mpeg2video (native) -> h

Re: [FFmpeg-user] H264 VAAPI Encoder

2016-06-18 Thread Mark Thompson
On 17/06/16 23:33, Andy Furniss wrote:
> 
> AMD are working on vaapi encode for mesa, only a few patches about so far and 
> they got rejected - though not for functionality.
> 
> They do work with gstreamer, but trying above with ffmpeg fails as below 
> (render node or X) seems the surface is seen as rgb - but the h/w takes nv12.
> 
> Maybe mesa/the driver is giving false info?
> 
> vainfo
> libva info: VA-API version 0.38.1
> libva info: va_getDriverName() returns 0
> libva info: User requested driver 'radeonsi'
> libva info: Trying to open /usr/lib/dri/radeonsi_drv_video.so
> libva info: Found init function __vaDriverInit_0_38
> libva info: va_openDriver() returns 0
> vainfo: VA-API version: 0.38 (libva 1.6.3.pre1)
> vainfo: Driver version: mesa gallium vaapi
> vainfo: Supported profile and entrypoints
>   VAProfileMPEG2Simple: VAEntrypointVLD
>   VAProfileMPEG2Main  : VAEntrypointVLD
>   VAProfileVC1Simple  : VAEntrypointVLD
>   VAProfileVC1Main: VAEntrypointVLD
>   VAProfileVC1Advanced: VAEntrypointVLD
>   VAProfileH264Baseline   : VAEntrypointVLD
>   VAProfileH264Baseline   : VAEntrypointEncSlice
>   VAProfileH264Baseline   : VAEntrypointEncPicture
>   VAProfileH264Main   : VAEntrypointVLD
>   VAProfileH264High   : VAEntrypointVLD
>   VAProfileNone   : VAEntrypointVideoProc
> 
> ffmpeg doesn't seem to care about VAProfileH264Baseline
> 
> I tried avconv and it does care = bail, but I can use -profile 66 and then 
> get the same error as ffmpeg (though it converts to rgba rather than rgb0)

Default for H264 is High, so yeah you need -profile 66 to get it to work.

> [AVHWDeviceContext @ 0x3ca4c40] Format 0x3231564e -> unknown.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x32315659 -> unknown.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x56595559 -> unknown.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x59565955 -> unknown.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x41524742 -> bgra.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x41424752 -> rgba.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x58524742 -> bgr0.
> [AVHWDeviceContext @ 0x3ca4c40] Format 0x58424752 -> rgb0.

The detection queries the capabilities of a video processor instance 
(VAProfileNone / VAEntrypointVideoProc) to try to determine which formats are 
interesting, then ignores the ones which aren't.  It should probably have said 
"not supported" here rather than "unknown" for nv12 (and yv12 immediately below 
it).

I've sent a patch to change the behaviour slightly which will improve this 
case, see .

> [auto-inserted scaler 0 @ 0x3cf2a60] w:720 h:576 fmt:yuv420p sar:1/1 -> w:720 
> h:576 fmt:nv12 sar:1/1 flags:0x4
> [auto-inserted scaler 1 @ 0x3cf4ec0] w:720 h:576 fmt:nv12 sar:1/1 -> w:720 
> h:576 fmt:rgb0 sar:1/1 flags:0x4
> [hwupload @ 0x3cd61c0] Surface format is rgb0.

hwupload reports its input capabilities based on what is returned above, so it 
will be saying that it can only take the RGB formats as input.  Hence ffmpeg 
auto-inserts a swscale instance to change it back from nv12 to rgba.  (The 
filter chain becomes 'format=nv12,format=rgba,hwupload'.)

> [AVHWFramesContext @ 0x3cf1700] Created surface 0x1.
> [AVHWFramesContext @ 0x3cf1700] Direct mapping possible.
> [h264_vaapi @ 0x3ce7dc0] Using fixed QP = 20 / 20 / 24 for IDR / P / B frames.
> [h264_vaapi @ 0x3ce7dc0] Using (null) as format of reconstructed frames.
> [AVHWFramesContext @ 0x3d37fa0] Unsupported format: (null).

The input to the encode is now in VAAPI rgba surfaces, but that doesn't match 
any of the capabilities, so it doesn't know what format to use for the 
reconstructed frames.  It should have fallen back to the one supported format 
(i.e. nv12), but a typo stopped it from doing that: fixed in 
.

> [h264_vaapi @ 0x3ce7dc0] Failed to initialise reconstructed frame context: 
> -22.
> Output #0, matroska, to 'out.mkv':
> Stream #0:0, 0, 0/0: Unknown: none, SAR 1:1 DAR 0:0
> Metadata:
>   encoder : Lavc57.46.100 h264_vaapi
> Stream mapping:
>   Stream #0:0 -> #0:0 (mpeg2video (native) -> h264 (h264_vaapi))
> Error while opening encoder for output stream #0:0 - maybe incorrect 
> parameters such as bit_rate, rate, width or height
> [AVIOContext @ 0x3ce8760] Statistics: 0 seeks, 0 writeouts
> [AVIOContext @ 0x3bfa540] Statistics: 2019472 bytes read, 2 seeks

So, I also went and built mesa with the encode patches and had a go.  With the 
changes above, it does get to actually trying to encode, but that nuked my GPU 
to the point of requiring reboot.

With the linked patch you should be able to have a go too: if it works for you 
then I'm interested in what your setup is.  If it nukes your GPU then, well, 
you had some warning at least.  I think I'll wait until 

Re: [FFmpeg-user] H264 VAAPI Encoder

2016-05-15 Thread Mark Thompson
On 13/05/16 20:43, Armin K. wrote:
> I noticed that recently a VAAPI based H264 encoder was added to ffmpeg.
> I built ffmpeg from git and now I have h264_vaapi listed in ffmpeg -encoders
> output.
>
> However, when I try to use ffmpeg ... -vcodec h264_vaapi I get the following
> error:
>
> Impossible to convert between the formats supported by the filter 
> 'Parsed_null_0' and the filter 'auto-inserted scaler 0'
>
> The command I use is:
>
> ffmpeg -hwaccel vaapi -i <input> -vcodec h264_vaapi <output>.mkv

Short answer:

ffmpeg -vaapi_device /dev/dri/renderD128 -i <input> -vf 
'format=nv12,hwupload' -c:v h264_vaapi <output>.mkv


Longer answer:

It's somewhat awkward to use because it only accepts input as VAAPI surfaces 
(AV_PIX_FMT_VAAPI), with underlying format depending on the particular hardware.

"-vaapi_device" sets the hardware device to use.  It takes either a DRI device 
(ideally a render node, as above) or an X11 display name (only if you are 
actually in X).

Then we use libavfilter to get the input into the right form:

"format=nv12" forces software conversion to NV12, which is the underlying 
format required by the Intel driver.

"hwupload" uploads that software image into a VAAPI surface, which can then be 
fed into the encoder.
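
In application code the equivalent of that hwupload step looks roughly like the 
untested sketch below (in real code you would create the frames context once up 
front rather than per frame; error checks omitted):

#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>

static AVFrame *upload_to_vaapi(AVBufferRef *vaapi_device, const AVFrame *sw_nv12)
{
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(vaapi_device);
    AVHWFramesContext *fc   = (AVHWFramesContext *)frames_ref->data;
    AVFrame *hw = av_frame_alloc();

    fc->format    = AV_PIX_FMT_VAAPI;
    fc->sw_format = AV_PIX_FMT_NV12;          // what the Intel driver wants
    fc->width     = sw_nv12->width;
    fc->height    = sw_nv12->height;
    fc->initial_pool_size = 4;
    av_hwframe_ctx_init(frames_ref);

    av_hwframe_get_buffer(frames_ref, hw, 0); // take a surface from the pool
    av_hwframe_transfer_data(hw, sw_nv12, 0); // the actual upload
    hw->pts = sw_nv12->pts;
    av_buffer_unref(&frames_ref);             // hw holds its own reference
    return hw;                                // this can go to h264_vaapi
}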


You need a bit more trickiness to do a pure hardware transcode, see 
 for additional explanation.

- Mark
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".