Re: [FFmpeg-user] Autolaunch FFMPEG when network traffic detected
On Mon, Jul 23, 2018 at 9:34 AM, yannickb wrote:
> I wish I could monitor each of those ports and launch the corresponding
> command automatically as soon as the incoming stream is detected (and stop
> when the stream ends)

One advantage of using a Linux distro like Debian, as you mention, is that you can run each instance of a program under a separate account, i.e. user1, user2, etc. for each ffmpeg instance in this example. That way it's easier to keep track of open UDP ports and running processes by user ID / group ID. You can also use bash lock files per user to ensure only one ffmpeg instance is open per account. UDP has a few quirks in my experience.

You can use this command to see the processes using UDP:

lsof -i udp

For keeping track of open ffmpeg processes and UDP ports, here are a few quick lines of bash code:

#!/bin/bash

uid=$(id -u "$shell_user")
gid=$(id -g "$shell_user")

process_count=$(ps -ef --Group "$gid" --User "$uid" | grep -v grep | grep -c ffmpeg)

if [[ $process_count -gt 0 ]]; then
    printf "\n\nffmpeg is in progress\n\n"
else
    printf "\n\nffmpeg is not in progress\n\n"
fi

port=999
serviceIsRunning=false
openPorts=$(/bin/netstat -tulpn | grep -vE '^Active|Proto' | awk '{ print $4 }' | awk -F: '{ print $NF }' | sed '/^$/d' | sort -u)

for openPort in $openPorts
do
    if [[ "$port" == "$openPort" ]]; then
        serviceIsRunning=true
        echo "service is running."
        break
    else
        printf "\n\nfound open UDP port: $openPort that is not port: $port "
    fi
done

if [[ $serviceIsRunning == false ]]; then
    echo "service is not running."
fi

Best regards,
Robert

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
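The per-user lock-file idea mentioned above can be sketched as follows; this is a minimal example using an atomic mkdir rather than a production-ready lock, and the lock path and messages are illustrative:

```shell
#!/bin/bash
# Sketch of the per-user lock-file idea: mkdir is atomic, so when two
# scripts race, only one can create the lock directory. The path is
# illustrative; a real script would launch ffmpeg inside the locked section.
lockdir="/tmp/ffmpeg-$(id -un).lock.$$"

acquire_lock() {
    mkdir "$1" 2>/dev/null    # succeeds only for the first caller
}

if acquire_lock "$lockdir"; then first=acquired; else first=busy; fi

# A second attempt while the lock is still held must fail.
if acquire_lock "$lockdir"; then second=acquired; else second=busy; fi

# Release the lock; a real script would use: trap 'rmdir "$lockdir"' EXIT
rmdir "$lockdir"

echo "first=$first second=$second"
```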
Re: [FFmpeg-user] issue with monitoring script
On Fri, Jul 20, 2018 at 5:37 PM, Pedro Daniel Costa <portaln...@outlook.com.br> wrote:
> Hi guys
>
> I need some guidance.
>
> I have set up a crontab to check every minute whether a live transcoding
> channel has stopped.
>
> The script gets run via crontab and I can see the instance starting up,
> but then after a couple of seconds it gets shut down.
>
> If I run the script manually, it starts both ffserver and the ffmpeg input
> a couple of seconds later, and it runs for hours/days with no problem.
>
> If I close the terminal where I executed the script and let the crontab
> execute it, it starts and shuts down a couple of seconds after.
>
> I have the correct permissions on the script files and config files.
>
> Here is the output of the script.
>
> Can someone help me with this? I want to make a monitoring script to check
> all the live transmission transcodings being used.
>
> And if one crashes or stops, I want the crontab to start ffserver and
> ffmpeg again shortly after.
>
> Here is the output of the script; can someone please tell me what I am
> doing wrong? I use a similar script in my middleware application to check
> the live multicast transmission and tuners set up from the PCI devices.
>
> ##
> # check channel transcoding
> # ffserver channel1.sh
> ##
>
> #!/bin/bash
>
> if ps x | grep -v grep | grep channel1.cfg ;
> then
>     echo "FFserver running"
> else
>     echo "FFserver down"
>     ffserver -d -f channel1.cfg
> fi
>
> sleep 10
>
> if ps x | grep -v grep | grep channel1.ffm ;
> then
>     echo "Transcoder running"
> else
>     echo "transcoder down"
>     ffmpeg -i udp://@239.106.3.0:4002 http://170.80.123.234:8890/channel1.ffm
> fi
>
> Any hints or tips on how to set up a correct check-status script will be
> appreciated.
Try editing your crontab so stdout/stderr go to a log file, to see the specific error - see the example below. /var/log etc. may also have cron logs. Another thing: I would use an absolute path to ffmpeg; you can find the correct path with the command 'which ffmpeg' in the shell. You may also try putting the process in the background with an '&' at the end of your commands.

sh channel1.sh > /home/username/xxx.log 2>&1 &

You can also get some extra logging via these lines at the top of your file (after '#!/bin/bash'), and an exit on the first failing line. Note that 'set -o errexit' is the long form of 'set -e', so you only need one of the two:

set -x
set -o errexit

Best regards,
Robert
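For completeness, one pattern that works well under cron is to do the redirection inside the script itself, so the crontab entry stays simple. A sketch (the log path is illustrative):

```shell
#!/bin/bash
# Cron-friendly sketch: redirect this script's own stdout and stderr into a
# log file from inside the script, so every run's output is captured even
# when cron provides no terminal. The log path is illustrative.
log="/tmp/channel1.$$.log"
exec >> "$log" 2>&1

echo "starting check at $(date)"
echo "an error message" >&2    # stderr also lands in the log
```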
Re: [FFmpeg-user] kmsgrab on Intel 8th generation aka h/w accelerated screen capture
On Sun, Jun 24, 2018 at 7:10 PM, Kai Hendry wrote:
> Hi ffmpeg users,
>
> Sorry I initially posted my question here:
> https://www.reddit.com/r/ffmpeg/comments/8tgejw/kmsgrab_on_intel_8th_generation_aka_hw/
>
> Via https://trac.ffmpeg.org/wiki/Hardware/VAAPI#ScreenCapture I
> discovered kmsgrab, which astonishingly doesn't seem to drop frames or
> overheat my T480s when recording my Xorg display!
> Unlike when I record with x11grab & VAAPI:
> https://github.com/kaihendry/recordmydesktop2.0/blob/vaapi/x11capture#L55
> Also see: http://lists.ffmpeg.org/pipermail/ffmpeg-user/2018-March/039227.html
> Same issue on ffmpeg 4.x btw:
> https://s.natalian.org/2018-06-25/152978.mp4.log
>
> However, since kmsgrab requires CAP_SYS_ADMIN, as pointed out in its
> friendly documentation https://www.ffmpeg.org/ffmpeg-devices.html#kmsgrab,
> I believe I need to run it with `sudo`. However, when I run it as `sudo` I
> hit an issue accessing my pulseaudio microphone:
> https://stackoverflow.com/questions/51007952/recording-pulse-audio-as-cap-sys-admin-with-ffmpeg
>
> Any tips on how to get this all working?
> https://github.com/kaihendry/recordmydesktop2.0/blob/kms/x11capture#L54
> is my branch of a kmsgrab-based screen capture CLI.
>
> Kind regards,

That command worked for me, though of course without your external dependency.

Do you mean run ffmpeg via sudo? You need the proper entry in /etc/sudoers, and per sudo rules the ffmpeg command must have an absolute path like /bin/ffmpeg.

Regards,
Robert
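For reference, the /etc/sudoers entry described above might look like the following sketch; the username and the ffmpeg path are assumptions, so check the real path with 'which ffmpeg' and always edit with visudo:

```
# /etc/sudoers.d/ffmpeg-kmsgrab  (edit with visudo!)
# Allow this user to run exactly this ffmpeg binary as root, no password.
# Username and binary path are illustrative - substitute your own.
hendry ALL=(root) NOPASSWD: /usr/bin/ffmpeg
```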
Re: [FFmpeg-user] x11grab segfault when display is closed
On Fri, Jun 22, 2018 at 9:03 AM, Matt Garman wrote:
> I'm using a static git build, version N-46353-g40b7e6071-static[1] on
> CentOS 7.3. This server is used as a remote login launch point (or "jump
> box"). What we'd like to do, for security/auditing purposes, is create
> videos of user sessions.

I cannot answer the ffmpeg-specific question, though as a CentOS 7.x user I use Xvfb a lot. I have it doing important high-volume prod work at my day job, and it's been a workhorse that has rarely been a problem. It seems possible to make videos via Xvfb; I just do screenshots myself.

https://github.com/lightsofapollo/x-recorder

Regards,
Robert
Re: [FFmpeg-user] Unable to stream SAP to remote pulseaudio
On Tue, Jun 19, 2018 at 1:58 PM, Antoine Vacher <antoine.vac...@tigre-bleu.net> wrote:
> Any idea someone?
>
> Thanks,
>
> Antoine

I suggest posting your question to the Linux Audio Users (LAU) list. There are daily email threads on that list about PulseAudio, and at least a few of the developers there use FFmpeg in their tools. I am no help myself, as I do my signal processing outside of computers; I just process the captured audio and video via FFmpeg on Linux via bash.

Regards,
Robert
Re: [FFmpeg-user] How to make ProRes files with SMPTE DTV audio for 5.1
On Mon, Jun 18, 2018 at 4:33 PM, Mark Scott wrote:
> Hello,
> I'm very new to ffmpeg. I work at a production company and I need to make
> broadcast ProRes HQ files with 5.1 audio plus stereo on Ch. 7 & 8.
>
> I haven't been able to figure out the command-line inputs to get the
> correct audio configuration. I'm attaching a screen grab of the inspector
> windows for the master file, and for the broadcast file that I have made
> successfully with other software. The resulting audio needs to be as
> follows: 24-bit integer (little endian) SMPTE DTV (L R C LFE LS RS LT RT)
> at 48 kHz. Our master files are exported with 6 mono tracks and 1 stereo
> track.
>
> The following command line creates a ProRes HQ file with stereo audio:
>
> ffmpeg -i TEST.mov -c:v prores -profile:v 3 -acodec pcm_s24le TEST_ProResHQ.mov
>
> How can I change the command line to get the SMPTE DTV audio?
>
> MASTER FILE: [screen grab]
>
> BROADCAST FILE WITH CORRECT AUDIO CONFIG: [screen grab]
>
> Thank you,

On Linux you can read the LTC from a wav file via the ltcdump command; no idea about other platforms. I am doing that right now, actually. Change the output from mov to wav:

ffmpeg -y -v error -nostdin -i my.mov -vn -c:a pcm_s24le -ar 48000 -ac 2 my.wav
ltcdump -f 29.97 my.wav 2> /dev/null > my.ltc.txt

I recently discussed timecode with ffmpeg here:
http://ffmpeg.org/pipermail/ffmpeg-user/2018-June/040060.html
Re: [FFmpeg-user] Audio mapping
On Thu, Jun 7, 2018 at 3:18 AM, Kasper Folman wrote:
>
> But what if I had a file with 4 mono tracks, and I wanted to merge those 4
> tracks into 2 stereo tracks? It does do what I'm asking it to, but again,
> only the first audio track is enabled.
>
> ffmpeg -t 1 -i "/Volumes/Storage/_MoveToBackup/formatSamples/1x16-chAudioTest.mov" -filter_complex "[0:0]copy,fps=25,setfield=prog,setdar=dar=16/9,setsar=sar=1/1[vOut];[0:1]pan=1c|c0=c0[a1];[0:2]pan=1c|c0=c0[a2];[0:3]pan=1c|c0=c0[a3];[0:4]pan=1c|c0=c0[a4];[a1][a2]amerge=inputs=2[aOut0];[a3][a4]amerge=inputs=2[aOut1]" -c:a pcm_s24le -map [aOut0] -map [aOut1] -map [vOut] -c:v prores_ks -r 25 -pix_fmt yuv422p10le -profile:v 1 -field_order progressive -y -strict -2 "/Users/kf/Desktop/testfile.mov"

I am using separate wav and mov files together; however, I am on Linux and do not use QuickTime as a player, and I am on 3.3.4. Anyway, using an alternative approach: a BWF works for me as a way to get 8 tracks into the Ch1/Ch2 audio of a mov file. Joining mono WAV files into a single BWF may or may not work for you, but I thought I'd mention it.

cuts.mov in this example is combined footage from two Zoom Q8 cameras; f8t.wav is a BWF produced by an 8-track Zoom F8. Only showing the first pass of a 2-pass encode:

ffmpeg -v error -i cuts.mov -i f8t.wav -pass 1 -passlogfile export.log -c:a pcm_s16le -c:v libx264 -profile:v main -level 4.0 -refs 1 -x264-params "keyint=8:b-pyramid=0:no-scenecut:nal-hrd=cbr:force-cfr=1" -r 30000/1001 -s 1920x1080 -b:v 24M -bufsize 24M -maxrate 24M -b:a 48k -muxrate 24M -sample_fmt s16 -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.000100:first_pts=0" -crf 18 -metadata comment="Created with parseLTC.sh" -map 0:0 -map 1:0 -y export.mov

Regards,
Robert
Re: [FFmpeg-user] FFMPEG output to append to a text file without overwriting the content
On Mon, Jun 4, 2018 at 6:36 PM, robertlazarski wrote:
>
> On Mon, Jun 4, 2018 at 6:06 PM, Sana Tafleen wrote:
>
> I would not expect ffmpeg to create a new process in that loop but I have
> no experience with mpegts. I would try -stdin since I have seen unexpected
> behavior without it in loops.

Meant to say -nostdin.
Re: [FFmpeg-user] FFMPEG output to append to a text file without overwriting the content
On Mon, Jun 4, 2018 at 6:06 PM, Sana Tafleen wrote:
> Hello,
>
> I am sending a UDP stream to the destination and saving the contents of
> the FFMPEG output to a text file. I run the ffmpeg command in a loop as
> follows:
>
> while :
> do
>     echo `ffmpeg -hide_banner -f v4l2 -i /dev/video0 -c:v libx264 -f mpegts tcp://ip:port -c:v libx264 /path/to/.mp4 2> out.txt -y`
> done
>
> When I disconnect the cable connected to the destination, the above ffmpeg
> process stops and a new one starts, overwriting the content of the output
> that has been saved. And when I reconnect the cable, a new ffmpeg process
> starts and its output is what is displayed in the out.txt file.
>
> I need the output of each ffmpeg process that runs to go to an output
> file. Can anyone please suggest a way to do it?

This part of the command, '2> out.txt', says (a) redirect stderr, not stdout, to out.txt, and (b) overwrite the previous contents. You can append instead of overwrite by using '2>> out.txt'.

I would not expect ffmpeg to create a new process in that loop, but I have no experience with mpegts. I would try -stdin since I have seen unexpected behavior without it in loops.

A long shot would be using nohup, if it's a hangup of some sort. I would try that if I was still stuck, then strace the command to see why it was creating the extra process.

Kind regards,
Robert
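A minimal demonstration of the difference between '>' (truncate) and '>>' (append) inside a loop; the file name is illustrative:

```shell
#!/bin/bash
# Demonstrate '>' vs '>>' inside a loop. The temp file name is illustrative.
log="/tmp/redir-demo.$$.txt"

# '>' truncates on every iteration: only the last line survives.
for i in 1 2 3; do
    echo "run $i" > "$log"
done
overwrite_lines=$(wc -l < "$log")

# '>>' appends: every iteration's line survives.
: > "$log"    # start from an empty file
for i in 1 2 3; do
    echo "run $i" >> "$log"
done
append_lines=$(wc -l < "$log")

echo "overwrite kept $overwrite_lines line(s), append kept $append_lines line(s)"
rm -f "$log"
```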
Re: [FFmpeg-user] Using segment and concat with multiple videos containing LTC audio on CH1
On Fri, May 4, 2018 at 8:31 AM, robertlazarski wrote:
> Hello all, first post.
>
> I am trying to segment mov videos from multiple Zoom Q8 cameras with
> exactly the same specs, by using the Linux command ltcdump, bash and
> ffmpeg.

To answer my own question, many weeks of experimentation later: using segment was the wrong tool for the job. Each LTC audio frame at 30000/1001 fps in this case is about 1600/48000, or around .034 seconds, and the closest I could come to that with segment while still being able to read the LTC was '-force_key_frames "expr:gte(t,n_forced*0.07)"', which usually meant segments of 64ms but often ranged up to 85ms ... that didn't work for me.

I had much better luck calculating the -ss and -to times using the sample position of the LTC. I was still somehow off by about a frame, or .034 seconds, on each video transition between cameras, as the cut accuracy wasn't good enough; the vocals of the resulting video clearly had drift.

See below, using 2-pass and lots of params, because debugging the transcoded videos is easier the closer they match the source. 16-bit depth instead of 24 on the source videos for debugging purposes (long story).
ffmpeg -y -v error -nostdin -i ../q81.mov -pass 1 -passlogfile q81.log -ss 5.506292 -to 25.5264 -c:a pcm_s16le -c:v libx264 -profile:v baseline -level 3.0 -x264-params keyint=8:b-pyramid=0:no-scenecut:nal-hrd=cbr:force-cfr=1 -refs 1 -r 30000/1001 -s 1920x1080 -b:v 24M -bufsize 24M -maxrate 24M -b:a 48k -muxrate 24M -sample_fmt s16 -ac 2 -ar 48000 -af aresample=async=1:min_hard_comp=0.000100:first_pts=0 -crf 18 q81_cut_0.mov

ffmpeg -y -v error -nostdin -i ../q81.mov -pass 2 -passlogfile q81.log -ss 5.506292 -to 25.5264 -c:a pcm_s16le -c:v libx264 -profile:v baseline -level 3.0 -x264-params keyint=8:b-pyramid=0:no-scenecut:nal-hrd=cbr:force-cfr=1 -refs 1 -r 30000/1001 -s 1920x1080 -b:v 24M -bufsize 24M -maxrate 24M -b:a 48k -muxrate 24M -sample_fmt s16 -ac 2 -ar 48000 -af aresample=async=1:min_hard_comp=0.000100:first_pts=0 -crf 18 q81_cut_0.mov

So, 25.5264 - 5.506292 = 20.020108. However:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 q81_cut_0.mov
20.054000

Each video in the script below is off by the same amount. Strangely, using an offset of 0.033892 (from 20.054000 - 20.020108) didn't work, as it cut too much, though splitting the value in half at 0.017 works splendidly. By the end of the video I was able to come under .002 of drift.

See below for the script; magicNumberOffset="0.017" is as described above. It might be generally useful as an example of how to use bash with ffmpeg.

#!/bin/bash

# This script generates a single video from parts of videos from 2 Zoom Q8 cameras
# by using LTC on ch1 of each video. The priority is as little LTC audio drift as possible;
# script performance and extra transcoding are not an issue. Little audio drift in this case
# means the start and end times match the source videos closely, with no dropped video frames
# nor dropped LTC frames. I was able to get within 88/48000 or 0.00183 seconds by the end of the video,
# and while I have no immediate ideas how to do better I will continue to try.
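As a sanity check on the arithmetic above, the same numbers can be reproduced with awk (values copied from the -ss/-to options and the ffprobe output above):

```shell
#!/bin/bash
# Reproduce the cut-duration arithmetic: the requested span is -to minus -ss,
# the observed ffprobe duration overshoots it, and half the overshoot is the
# "magic number" offset of roughly 0.017 seconds.
requested=$(awk 'BEGIN { printf "%.6f", 25.5264 - 5.506292 }')
overshoot=$(awk 'BEGIN { printf "%.6f", 20.054000 - 20.020108 }')
half=$(awk 'BEGIN { printf "%.6f", (20.054000 - 20.020108) / 2 }')
echo "requested=$requested overshoot=$overshoot half=$half"
```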
# The LTC comes from 2 Tentacle Sync units that were 'jammed' to a
# Zoom F8. The main idea is to sync the final export.mov video with a BWF 8-track
# audio file generated on a Zoom F8, via the ffmpeg command that generates the
# export.mov file. This is possible by using the BWF timecode metadata, and LTC
# on ch1 of each video. ch2 contains a scratch track.

# A fixed duration such as 20 seconds is set, indicating a camera
# transition. Videos are mov at 1920x1080 with LTC audio at 48 kHz / 16 bits.

# usage:
# /home/myuser/input> ls
# f8.wav  output  parseLTC.sh  q81.mov  q82.mov
# /home/myuser/input> cd output/
# /home/myuser/input/output> sh ../parseLTC.sh

# clear any files from previous processing
rm -f *.txt
rm -f *cut*
rm -f *.csv

# read LTC from wav files, first camera is q81 and the second camera is q82
ffmpeg -y -v error -nostdin -i ../q81.mov -vn -c:a pcm_s16le -ar 48000 -ac 2 q81.wav
ffmpeg -y -v error -nostdin -i ../q82.mov -vn -c:a pcm_s16le -ar 48000 -ac 2 q82.wav

ltcdump -f 29.97 q81.wav 2> /dev/null > q81.ltc.txt
ltcdump -f 29.97 q82.wav 2> /dev/null > q82.ltc.txt

# some errors I found while playing option bingo with ffmpeg
validate() {
    lastTimecodeInSegment=0
    lastTimecodeInSegment=`tail -n 1 ltc.txt | awk '{ print $2 }' | sed 's/\(.*\):/\1./' `

    # can sometimes receive unparsable dates
    if [[ ${#lastTimecodeInSegment} -ne 11 ]]; then
        lastTimecodeInSegment=0
        printf "\nError: timecode number of chars is not correct: "
        printf " %s" "${#lastTimecodeInSegment}"
        return
    fi

    # can receive invalid time such as: 0680 00:13:61:10 | 22 1609
    if ! date -d "$lastTimecod
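As an aside, the sed expression used in validate() above turns the ltcdump timecode HH:MM:SS:FF into HH:MM:SS.FF by replacing the last ':' with '.'. A self-contained sketch, where the sample line mimics the ltcdump column layout shown in this thread but the values are made up:

```shell
#!/bin/bash
# Extract and reformat the timecode column from an ltcdump-style line.
# The sample line imitates the "user bits, timecode, | , sample positions"
# layout from ltcdump output; the values themselves are made up.
line="00000000   00:26:34:27 |     1221     2821"

# Field 2 is the timecode (field 1 is the user bits).
timecode=$(echo "$line" | awk '{ print $2 }')

# Greedy .* makes sed replace only the LAST ':', giving HH:MM:SS.FF.
formatted=$(echo "$timecode" | sed 's/\(.*\):/\1./')

echo "$formatted"
```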
[FFmpeg-user] Using segment and concat with multiple videos containing LTC audio on CH1
Hello all, first post.

I am trying to segment mov videos from multiple Zoom Q8 cameras with exactly the same specs, by using the Linux command ltcdump, bash and ffmpeg. The LTC is being generated by a Tentacle Sync for each camera, which was "jammed" to a Zoom F8; the F8 is the master audio I use to sync the final video via LTC. I have this working fine on each camera individually using Ardour.

The goal is to concat the segments from these cameras into one video. The main idea is that the video switches cameras at a fixed duration determined at runtime by a simple constant like 20 seconds, determined by reading the output from ltcdump on each video.

I have this very close to working. The last step is fixing an audio drift problem caused during the transition from one camera to another. This drift is because I cannot figure out how to exactly match an LTC audio frame to each segment. I tried lots of numbers for the segment and was unsuccessful.

Here is an example of a transition and the problem I am trying to fix. Q81 is camera 1, Q82 is camera 2. I am missing frames 28 and 29 - the last .xx digits are the frame and not milliseconds. And the LTC frame of somewhere between 1597 and 1601 samples at 48 kHz does not match the segment exactly.

### ltcdump output from file q81_out280.mov.wav , last two .xx digits are frames
#User bits  Timecode   |    Pos. (samples)
#DISCONTINUITY
00:26:34:27 | 1221 2821

### ltcdump output from file q82_out305.mov.wav , last two .xx digits are frames
#User bits  Timecode   |    Pos. (samples)
#DISCONTINUITY
00:26:35:00 | 818 2418

What I believe I need to do is get the segment to match the LTC frame as closely as possible. In these examples the segment I believe I need is 1600 / 48000. However I tried and could not get the segments to match.

Here's my code to show what I am doing. It's about 300 lines.

#!/bin/bash

# This script generates a single video from parts of videos from 2 cameras by using LTC.
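A side note on the "1600 / 48000" figure: at exactly 30000/1001 fps, one LTC frame at 48 kHz spans 48000 * 1001 / 30000 = 1601.6 samples, which is consistent with the 1597 to 1601 sample range observed. A quick check:

```shell
#!/bin/bash
# Samples per LTC frame at 48 kHz for 29.97 (30000/1001) fps video.
# 48000 * 1001 / 30000 = 1601.6, so frame boundaries never land on a whole
# sample, which is one reason segment boundaries cannot match LTC frames exactly.
spf=$(awk 'BEGIN { printf "%.1f", 48000 * 1001 / 30000 }')
echo "samples per LTC frame: $spf"
```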
# Videos are mov at 1280x720 with audio at 48 kHz and 16 bits
#
# usage:
# /home/myuser/input> ls
# output  parseLTC.sh  q81.mov  q82.mov
# /home/myuser/input> cd output/
# /home/myuser/input/output> sh ../parseLTC.sh

# clear any files from previous processing
rm -f *.txt
rm -f *all*
rm -f *out*

# Trim first camera so the LTC frame starts as closely as possible to zero
# 00:26:15:00 |2 1615
ffmpeg -y -v error -ss 00:00:06.067697 -t 00:30:15.365 -i ../q81.mov -c copy q81t.mov
mv q81t.mov q81.mov
mv ../q82.mov q82.mov

# debug
ffmpeg -y -v error -i q81.mov -vn -c:a pcm_s16le -ar 48000 -ac 2 q81.wav
ffmpeg -y -v error -i q82.mov -vn -c:a pcm_s16le -ar 48000 -ac 2 q82.wav

# split file by key frames in a .07 second duration; need a couple entries that
# ltcdump can find in each file - ideally just a single entry.

# first Q8
ffmpeg -y -v error -fflags +genpts -i q81.mov -vsync 1 -c:a pcm_s16le -c:v libx264 -refs 1 -x264opts b-pyramid=0 -r 30000/1001 -threads 0 -s 1280x720 -b:v 1024k -bufsize 1216k -maxrate 1280k -b:a 192k -sample_fmt s16 -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.10:first_pts=0" -segment_time 0.07 -segment_list q81t.ffcat -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*0.07)" -f segment -reset_timestamps 1 -crf 0 q81_out%03d.mov

# second Q8
ffmpeg -y -v error -fflags +genpts -i ../q82.mov -vsync 1 -c:a pcm_s16le -c:v libx264 -refs 1 -x264opts b-pyramid=0 -r 30000/1001 -threads 0 -s 1280x720 -b:v 1024k -bufsize 1216k -maxrate 1280k -b:a 192k -sample_fmt s16 -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.10:first_pts=0" -segment_time 0.07 -segment_list q82t.ffcat -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*0.07)" -f segment -reset_timestamps 1 -crf 0 q82_out%03d.mov

# save the ffcat output to run again if need be while debugging
cp q81t.ffcat sav_q81t.ffcat
cp q82t.ffcat sav_q82t.ffcat

# some errors I found while playing option bingo with ffmpeg
validate() {
    fileToProcess=''
    fileToProcess=$1
    epoch_secs_ltc_to_process=0
    ltc_to_process_formatted=''
    isLTCDumpOutputValid=0
    found_stop_time=0

    # test for shell error
    printf "\nExecuting: ltcdump -f 29.97 %s " "$fileToProcess"

    # adding the frame rate removes this DISCONTINUITY
    # 00:26:53:24 | 7224 8824 #DISCONTINUITY
    ltcdump -f 29.97 "${fileToProcess}" > /dev/null 2>&1
    if [ $? -ne 0 ] ; then
        printf "\nError processing file %s\n" "${fileToProcess}"
        printf "\nshell returned non zero value on validity check"
        return
    fi

    printf "\nExecuting: ltcdump -f 29.97 %s 2>/dev/null | grep -v '#' | grep ':' | wc -w " "${fileToProcess}"

    hasLTC=0
    hasLTC=`ltcdump -f 29.97 "${fileToProcess}" 2>/dev/null | grep -v '#' | grep ':' | wc -w`
    if [[ $hasLTC -eq 0 ]]; then
        printf "\nError processing file %s\n" "${fileToProcess}"
        printf "\ncannot parse timecode\n"
        return
    fi

    rm -f ltc.txt

    # find timecode in f