On Thu, Jan 5, 2023 at 7:04 PM Jayson Larose <jay...@interlaced.org> wrote:
> How is the `aresample` filter supposed to work? I'm recording video
> from a v4l2 capture device, audio from jack, like such:
>
>   ffmpeg -vsync cfr -timestamps mono2abs -copyts -framerate 59.73 \
>     -f v4l2 -thread_queue_size 2048 -i /dev/video10 -f jack \
>     -thread_queue_size 2048 -i capture_out -map 0:v:0 -map 1:a:0 \
>     -c:v hevc_nvenc -filter:v setpts=PTS-STARTPTS -c:a:0 aac \
>     -b:a:0 128k -filter:a:0 aresample=async=1000 \
>     -f matroska output.mkv
>
> The end recording has the following stream durations, as reported by
> ffprobe:
>
>   Input #0, matroska,webm, from 'recording__2023-01-04 02_13_52.mkv':
>     Metadata:
>       ENCODER         : Lavf59.25.100
>     Duration: 02:34:51.14, start: 0.000000, bitrate: 1574 kb/s
>     Stream #0:0: Video: hevc (Main), yuv420p(tv, progressive),
>       1120x1008 [SAR 1:1 DAR 10:9], 59.73 fps, 59.73 tbr, 1k tbn
>       Metadata:
>         ENCODER         : Lavc59.33.100 hevc_nvenc
>         DURATION        : 02:34:51.138000000
>     Stream #0:1: Audio: aac (LC), 96000 Hz, stereo, fltp
>       Metadata:
>         ENCODER         : Lavc59.33.100 aac
>         DURATION        : 02:34:50.870000000
>
> That is 0.268 seconds of difference, but more importantly, if I
> actually analyze the content of the recorded file, the beginning of
> the video starts out with audio and video reasonably well
> synchronized (audio is ahead by ~4 frames), but by the end of the
> video there is nearly half a second of desync between the two
> (audio is behind by ~23 frames).
>
> I was led to believe that using `aresample=async=960000` would
> stretch the audio stream up to 960000 samples per second in order to
> keep audio and video in sync, but it doesn't seem to be doing
> anything at all. The documentation is super vague about how it goes
> about doing this, just talking about "timestamps". So I checked to
> make sure that the PTS timestamps coming from my audio and video
> streams were close to in sync:
>
>   % ffprobe -timestamps mono2abs -i /dev/video10 -of compact \
>       -show_packets 2>/dev/null | head -n 1 & \
>     ffprobe -timestamps mono2abs -i capture_out -f jack -of compact \
>       -show_packets 2>/dev/null | head -n 1
>
>   packet|codec_type=video|stream_index=0|pts=1672940948329853|pts_time=1672940948.329853|dts=1672940948329853|dts_time=1672940948.329853|duration=16742|duration_time=0.016742|size=1693440|pos=N/A|flags=K_
>   packet|codec_type=audio|stream_index=0|pts=1672940948367972|pts_time=1672940948.367972|dts=1672940948367972|dts_time=1672940948.367972|duration=10666|duration_time=0.010666|size=8192|pos=N/A|flags=K_
>
> and they look to be pretty much in sync. If I don't supply
> `-timestamps mono2abs`, the video packets come in monotonic time and
> the audio packets come in wallclock time.
>
> This is driving me up the wall, because it's a very manual and
> tedious procedure to fix the audio for every file I record.
>
> --Jays

The async option only makes sense when there are pts gaps in the audio.
A pts gap means that a frame's pts plus that frame's duration is less
than the next frame's pts.
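As a rough, untested sketch of what that means for your command (only
illustrating the idea, assuming the drift really does come from gaps or
an offset in the jack timestamps and not from clock skew): reset the
audio pts the same way you already reset the video pts, and let
aresample insert or drop samples to cover whatever gaps remain, e.g.

  ffmpeg -vsync cfr -timestamps mono2abs -copyts -framerate 59.73 \
    -f v4l2 -thread_queue_size 2048 -i /dev/video10 \
    -f jack -thread_queue_size 2048 -i capture_out \
    -map 0:v:0 -map 1:a:0 \
    -c:v hevc_nvenc -filter:v setpts=PTS-STARTPTS \
    -c:a:0 aac -b:a:0 128k \
    -filter:a:0 asetpts=PTS-STARTPTS,aresample=async=1000:first_pts=0 \
    -f matroska output.mkv

If the audio timestamps are continuous (no gaps) and merely drift
against the video clock, async will not change anything; that is a
different problem from the one async is meant to solve.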