Hi Paul,

Yes, I agree that a filter that processes both video and audio together would be the best approach.
But this does not seem to be the general case in FFmpeg: filters are either for video or for audio. With filters that only apply video transformations without affecting the timeline this is not a major issue, because you can always map the original audio into the output file and the output stays in sync. However, if we remove or add video frames, or apply any transformation that results in a video of a different length, then we lose audio sync, which is why I tried possible workarounds (unsuccessfully).

The best candidate seems to be the -filter_complex option, because it allows us to apply video and audio filters simultaneously and get a combined output (a rough sketch of what I mean is in the postscript at the end of this message, after the quoted thread). However, the fact that expression variables do not escape the scope of the video filter chain makes many synced transformations impossible, including (as far as I know) the use case that I mentioned. The question at this point is whether the dev community considers this worth improving.

I regard FFmpeg as an excellent library, probably the single one that has everything, although I may not be fully aware of what's available in other programmatic libraries. On the other hand, desktop video editing programs do not generally have audio/video sync issues, because of course that would be unacceptable in that kind of software. That said, I am unsure how difficult fixing this in FFmpeg would be, or what it would take.

Regards,

John Lluch

> On 15 May 2023, at 22:15, Paul B Mahol <one...@gmail.com> wrote:
>
> On Mon, May 15, 2023 at 8:45 PM Joan Lluch <joan.ll...@icloud.com> wrote:
>
>> Hi all,
>>
>> I recently made a feature request to one of the community members. I was
>> ready to pay a reasonable amount for it, but as it was a relatively quick
>> fix, he was kind enough to implement it on his own, and I made a donation
>> to the community instead. The feature I refer to is the “keep” parameter
>> on the mpdecimate filter, which was implemented by Thilo Borgmann.
>>
>> I am a retired software developer and although I embraced some really
>> complex projects in the past (including occasional code contributions to
>> the LLVM compiler project) I am not currently in the disposition to do it
>> anymore due to several reasons mostly related to age.
>>
>> So, I just joined this mailing list because I want to suggest a further
>> improvement to FFmpeg, or otherwise ask for workarounds.
>>
>> The problem I want to solve is to trim duplicated frames of a video clip,
>> in the same exact way that mpdecimate does, but preserving the
>> corresponding audio sections of the preserved video frames in correct
>> sync on the output video. Or in other words, trim duplicated frames from
>> a video input, while also trimming the corresponding audio, in sync.
>>
>> To accomplish this, I attempted commands such as this one:
>>
>> ffmpeg -i $INPUT -vf "select='if( gt(scene,0.00001), (st(1,t);st(2,1)),
>> if(lte(t-ld(1),${KEEP}),st(2,1),st(2,0)) )', setpts=N/FRAME_RATE/TB" -af
>> "aselect='ld(2)', asetpts=N/SR/TB" $OUTPUT
>>
>> The trick here is (or would be) to use variable number 2 from the video
>> section to tell the audio section which frames to select (based on what
>> the video section is selecting). However, this won't work because
>> variable number 2 from the first ‘select’ expression is out of scope in
>> the ‘aselect’ expression. Therefore it always evaluates to 0 in the
>> ‘aselect’ expression and the output goes with no audio.
>>
>> I think that having a way to specify 'global' variables, with valid
>> scope through the entire ffmpeg command, featuring 'store global' and
>> 'load global' semantics, would be very helpful and would enormously
>> increase the use cases for this library. So it would be nice if somebody
>> would look at implementing it. I understand this is far from
>> straightforward, but it would be game changing if done, in my opinion.
>>
>> As said, my current use case is trimming the audio stream in sync with
>> the video stream as described above. So alternatively, I would
>> appreciate any hints on how to accomplish this if this is /already/
>> possible.
>>
>
> That approach is prone to many errors and makes it useless if frame
> threading is added to libavfilter.
> Instead, a filter that takes both A and V streams and outputs A and V
> streams is a more solid approach.
>
>>
>> Thanks
>>
>> John LLuch
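P.S. To make the limitation concrete, this is roughly the -filter_complex form of the command quoted above that I would hope to work. It is an untested sketch that reuses the same $INPUT, $OUTPUT and ${KEEP} shell placeholders as the original command:

ffmpeg -i $INPUT \
  -filter_complex "[0:v]select='if( gt(scene,0.00001), (st(1,t);st(2,1)), if(lte(t-ld(1),${KEEP}),st(2,1),st(2,0)) )',setpts=N/FRAME_RATE/TB[v];[0:a]aselect='ld(2)',asetpts=N/SR/TB[a]" \
  -map "[v]" -map "[a]" $OUTPUT

As far as I can tell this form is accepted, but the st()/ld() variables remain private to each individual expression, so ld(2) inside aselect still reads 0 and the output again comes out with no audio. That is exactly the scope barrier I would like to see removed, or worked around by a combined A/V filter as you suggest.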