Re: [FFmpeg-user] Get the file size in advance when encoding from one another format

2022-06-14 Thread Adam Nielsen via ffmpeg-user
> I've created a simple wrapper for FFmpeg using java and use this in a
> spring boot web application, and everything works fine. But the problem is
> I have this requirement that I need to get the file size of any media
> format in advance from encoding to another format because I need to set the
> Header Content-Length.

Yes, this is easy.  Simply transcode to a temporary file, get its file
size, then stream the temporary file over the HTTP connection.

Because the output file size depends on the complete set of input data,
the formula to calculate the file size in advance is the same as the
formula to do the transcode.  So you could transcode it once (there is
your formula to calculate the file size), get the file size, delete it,
then transcode the video a second time, but that would waste CPU time.
So if you save the results of the file-size calculation formula, then
your transcode is already done, and you can stream that over the HTTP
connection.
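The flow above can be sketched in shell (an untested sketch; the file names, codec choices, and GNU mktemp/stat tools are assumptions for illustration, not from the original setup):

```shell
#!/bin/sh
# Transcode once to a temporary file -- this *is* the "file size formula".
TMP=$(mktemp --suffix=.mp4)
ffmpeg -i input.mkv -c:v libx264 -c:a aac -movflags +faststart "$TMP"

# The exact Content-Length is now just the size of the finished file.
SIZE=$(stat -c%s "$TMP")
printf 'Content-Length: %s\r\n' "$SIZE"

# Stream the already-transcoded file over the HTTP connection, then clean up.
cat "$TMP"        # in a real app: copy into the HTTP response body
rm -f "$TMP"
```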

As Reindl and Moritz said last time, HTTP chunked transfer encoding
would be better as you wouldn't need to store a temporary file, but I
understand sometimes you are working with a restrictive set of
requirements and dodgy workarounds are the best you can do.

Cheers,
Adam.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] h264_v4l2m2m not working properly

2022-06-13 Thread Adam Nielsen via ffmpeg-user
> Console output for hardware and software encoding.
> https://0x0.st/ouiE.txt
> 
> Software encoding finished encoding successfully. Playback was also
> possible.
> 
> But I want to do hardware encoding because software encoding only gives me
> about 0.5x.
> 
> Is there anything I can find out from this log?

I can't see what the problem could be.  It looks like it encounters
corrupted video in the input stream then ends the encoding operation
early.  Do you have a longer input file, with no errors, that you can
test with?  Just to see whether hardware encoding works if there are no
errors in the input stream.

What does the output file from the hardware encode look like?  Is it
zero bytes, does it contain some data but it's not playable, etc?

From the log output it looks like the hardware encode should've worked
and produced a playable output file.

Cheers,
Adam.


Re: [FFmpeg-user] h264_v4l2m2m not working properly

2022-06-09 Thread Adam Nielsen via ffmpeg-user
> I am trying to use h264_v4l2m2m on a Raspberry pi 4B to do M2TS to H264
> hardware encoding, but it is not working properly. The log is below.
> http://0x0.st/oMOa.txt
> 
> If I don't use hardware encoding or if the source is not m2ts (tested with
> mpeg-1 to h264(v4l2m2m)) I can encode.

It looks a bit surprising, but it seems it's bailing because the audio
is corrupted.  What happens if you add -an to try the hardware encode
without any audio?  I don't see why the audio would affect hardware vs
software video encoding, though.

Maybe you could post an equivalent log showing the software H264 encode
so we can compare which messages are the same and which are different?

Do you get any errors in dmesg from the Pi H264 hardware encoder?
Sometimes it can fail if there isn't enough GPU memory, so you could
also try increasing gpu_mem in /boot/config.txt to see if it has any
effect.  It should work with 128 and definitely with 256.
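A quick way to check both of those on the Pi before retrying (standard Raspberry Pi OS tools; the grep pattern is just a guess at the relevant driver names):

```shell
# Look for recent encoder/codec errors from the kernel.
dmesg | grep -i -E 'v4l2|codec|bcm2835' | tail -n 20

# Show the current GPU memory split.
vcgencmd get_mem gpu

# To raise it, set e.g. "gpu_mem=256" in /boot/config.txt and reboot.
```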

Cheers,
Adam.


Re: [FFmpeg-user] Why does ffplay lie about vq value? (was: How to get ffplay live stream latency under 15 seconds)

2022-05-04 Thread Adam Nielsen via ffmpeg-user
Hi all,

I've still been working on this but haven't yet worked out how to get
the ffmpeg->network->ffplay latency down to under 15 seconds.

I have however noticed that when playing the video, ffplay includes a
"vq" parameter in the status line, which hovers between 0 KB and 50 KB.
However if I terminate the stream at the source (by exiting ffmpeg),
the "vq" value on the client instantly jumps up to many megabytes, and
gradually counts down to zero as the backlog of ~15 seconds worth of
frames are played (ffplay continues playing the backlog of frames even
though ffmpeg has stopped sending anything).

What would cause the ffplay "vq" value to remain incorrectly near zero
while the stream is active, but then suddenly increase to show its true
value only once new frames have stopped being sent?

All I can think of is that it's something to do with timestamps,
however I have tried to tell ffplay to ignore timestamps and play back
the frames as quickly as possible with "-sync ext" but this doesn't
seem to work.  (See my original message below with more detail.)

Does this vq behaviour give anyone any hints about why ffplay falls so
far behind the live stream?

This is one of the many commands I have used on the Raspberry Pi for
sending the stream, all with the same result:

  ffmpeg -f rawvideo \
    -pix_fmt yuv420p \
    -video_size 1280x960 \
    -use_wallclock_as_timestamps 1 \
    -fflags +nobuffer \
    -i /dev/video0 \
    -vsync cfr \
    -r 15 \
    -c:v h264_v4l2m2m \
    -b:v 5M \
    -f rtp_mpegts \
    udp://239.0.0.1:5004

And on the client x86 PC:

  ffplay -probesize 32 -sync ext udp://239.0.0.1:5004
  # Plus variations of all the options described further below

Many thanks,
Adam.

On Mon, 3 Jan 2022 22:18:55 +1000
Adam Nielsen via ffmpeg-user  wrote:

> Hi all,
> 
> I'm using ffplay to play a video stream from a Raspberry Pi (encoded
> via ffmpeg using the Pi's hardware H264 encoder).
> 
> Unfortunately I cannot work out how to get the stream latency lower
> than about 15 seconds.  I would be happy if the overall delay could be
> reduced to 2 seconds or less.
> 
> I have tried a number of things I have found online but none of them
> seem to make any difference.  I am seeing the following behaviour:
> 
>  * When I start the stream, the video appears within one second (the
>source stream is configured to send key frames once per second),
>however the stream is already 5-6 seconds behind the live image.
> 
>  * If I stop the stream at the source, ffplay continues to play the
>stream (with no new data coming in over the network) up until the
>timestamp where I stopped it.  So this tells me ffplay must be
>buffering the data, if it can keep playing for 15 seconds after the
>stream has stopped transmitting data over the network.
> 
>  * I have told ffplay to play the incoming stream as fast as possible
>(see below) however it still remains between 5 and 15 seconds
>behind.  It starts off 5 seconds behind and gradually increases to
>around 15 seconds behind, but never gets worse than that.  But
>again, when I stop the stream at the source, ffplay suddenly plays
>the video at a much higher framerate and catches all the way up to
>where the live stream was when it stopped being sent over the
>network.
> 
> I have tried various parameters to ffplay:
> 
> -sync ext
>   Supposed to make the video play as quickly as possible.  Without it
>   the playback can lag further and further behind (beyond 15 seconds)
>   however with it it still lags by up to 15 seconds.
> 
> -probesize 32
>   Makes the video start a little quicker but makes no difference to the
>   latency.
> 
> -vf setpts='N/(30*TB)'
>   Supposed to bump the framerate from the source 15 fps to 30 fps.  This
>   doesn't seem to do anything until the source stream is terminated.
>   At that point it does double the frame rate and quickly renders all
>   the frames ffplay has apparently been buffering the whole time.
> 
> -vf setpts='PTS-STARTPTS'
>   Supposed to remove any offset from the PTS value, but this has no
>   effect.
> 
> -avioflags direct
>   Prevents the video from playing with one message saying "[mpegts]
>   Unable to seek back to the start" followed by many messages stating
>   "[udp] Part of datagram lost due to insufficient buffer size".
> 
> -flags low_delay
> -fflags nobuffer
> -fflags discardcorrupt
> -framedrop
> -strict experimental
> -sync audio
> -sync video
>   None of these make any difference one way or another.
> 
> Is there any way to get ffplay to render everything in its buffer so
> you can consistently get under five seconds latency?  Or does it always
> have to buffer a lot of data and run a long way behind t

Re: [FFmpeg-user] FFMPEG RTSP stream problem

2022-04-07 Thread Adam Nielsen via ffmpeg-user
> I have an RTSP stream which framerate is 1/16, which means every 16 sec 
> a frame is transported through RTSP. This framerate is because of a 
> special purpose, cannot and want not to change it.
> 
> What i want is to save every frame of this video as single JPEG image on 
> my server (Linux). Here the full command, which is working in bash script:

This looks like an ONVIF camera.  Most of these offer screenshots in
JPEG format, so you can use curl or wget to download a single frame as
a .jpg when you need it.

Is there a reason why you don't want to use the screenshot URL?
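Something like this is usually all it takes (the URL path here is hypothetical -- the real snapshot path depends on the camera model, so check its manual or ONVIF media service):

```shell
# Download a single frame as JPEG directly from the camera's snapshot URL.
curl -s -o frame.jpg "http://camera.local/cgi-bin/snapshot.cgi"
```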

> its waiting about 2 minutes. But don't know why?
>    libswscale  4.  8.100 /  4.  8.100
>    libswresample   2.  9.100 /  2.  9.100
>    libpostproc    54.  7.100 / 54.  7.100
> 
> And after that its finishing with the following lines:
> 
> Input #0, rtsp, from 

It looks like it might be trying to probe the input to figure out what
format it's in, since you can stream multiple codecs over RTSP.

You could try messing with the -probesize option, but I'd only do this
if there's some reason why the screenshot URL is unusable.
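If you do go down that road, something along these lines might cut the startup delay (untested starting points to experiment with, not known-good settings; the RTSP URL is made up):

```shell
# Shrink the probe/analysis window so ffmpeg commits to a format sooner,
# then save each incoming frame as a numbered JPEG.
ffmpeg -probesize 32k -analyzeduration 0 \
  -rtsp_transport tcp -i rtsp://camera.local/stream \
  -q:v 2 out_%04d.jpg
```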

Cheers,
Adam.


Re: [FFmpeg-user] ffmpeg on Raspberry Pi using h264_v4l2m2m hardware acceleration

2022-03-04 Thread Adam Nielsen via ffmpeg-user
> The resulting .mp4 is flagged as VFR which is not what I expected nor
> wanted.

I am using v4l2_m2m on a Pi and getting constant framerate (although
the rate itself is still not correct), so it is at least possible:

  Frame rate mode  : Constant
  Frame rate   : 13.508 FPS
  Original frame rate  : 15.000 FPS

> I was hoping to use VBR rather than CBR but can't find a way to set
> it.

Looking at the Pi V4L2 controls it appears to support VBR, but ffmpeg
must set the correct values.  A quick Google says to use -q:v instead
of -b:v, e.g. https://slhck.info/video/2017/02/24/vbr-settings.html
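Based on that article, the command would look something like this (a sketch only -- whether h264_v4l2m2m actually honours -q:v rather than silently ignoring it is an assumption):

```shell
# Constant-quality (VBR-style) encode: -q:v instead of a fixed -b:v bitrate.
ffmpeg -i input.m2ts -c:v h264_v4l2m2m -q:v 25 -c:a copy output.mp4
```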

I'm streaming off the Pi camera in variable framerate mode (at max 15
fps), so I'm trying to get frames duplicated as necessary to maintain a
fixed 15 fps on the output.

I was using the `-r` option for this to set the output framerate so
possibly adding this might help, as it doesn't look like you're telling
ffmpeg what output framerate to use.  Possibly if the input file is VFR
it might be carrying that flag across.

I am also using this option to try to fix up the timestamps:

  -bsf "setts=ts=N*(1/${FPS})*10,h264_metadata=tick_rate=$[$FPS * 2]"

However this doesn't seem to be working right and appears to be what is
marking the output as 13.5 fps instead of 15 fps.  So if anyone knows
what is wrong with my calculations please let me know.

Unfortunately this doesn't work too well for streaming as ffplay likes
to keep a 10-15 second buffer so the streamed video has an enormous
delay. I suspect it's something to do with these timestamps as the
instant I stop streaming, ffplay suddenly rushes through the 10 second
buffer at 60 fps.  I just wish I could figure out how to tell ffplay
not to buffer quite so much video!

I have tried various combinations of -sync ext -fflags nobuffer -flags
low_delay -strict experimental -vf setpts=N/40/TB -an -fast
-noframedrop -probesize 32 but none of them make much difference which
is what leads me to think the issue might be on the encoding side.

Cheers,
Adam.


Re: [FFmpeg-user] drawtext reload=N > 1?

2022-02-28 Thread Adam Nielsen via ffmpeg-user
> > Have you tried benchmarking to see how much benefit you'd get from
> > this optimisation?  You could check CPU usage and I/O load with
> > reload=1 and again with reload=0 and see what the difference is.
> 
> Regarding I/O load, I know it’s probably negligible, but it just
> offends my sensibilities to read something 11,999 times for no
> reason. And if this feature gets implemented (thank you, Gyan!!!)
> then I won’t have to worry about where I put the tmp file.

That's probably the best reason for something like this where it's
"just better" even if there is no tangible benefit :)

However what bothers me more than a little I/O is all the CPU
calculations required to load and parse the font file, interpret the
arcs and lines in the font, calculate the shape of each character, draw
it all in and antialias the lines to give smooth characters, and do all
*that* repeatedly on every single frame even if the text hasn't changed.

I guess in some ways if efficiency is what you're after, it would be
better to render the whole thing to a raw image file as needed, and
then just copy those pixels across as-is on every frame.
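The "render once, copy pixels" idea could look like this (file names invented for illustration; the text would be drawn into label.png by some other tool, once, whenever it changes):

```shell
# Blit a pre-rendered PNG onto every frame instead of re-rasterising text.
ffmpeg -i input.mp4 -i label.png \
  -filter_complex "[0:v][1:v]overlay=x=10:y=10" \
  -c:a copy output.mp4
```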

It's actually something I have been meaning to look in to, because I
wouldn't mind being able to overlay an icon or two onto the video as
well as just text.

Cheers,
Adam.


Re: [FFmpeg-user] drawtext reload=N > 1?

2022-02-27 Thread Adam Nielsen via ffmpeg-user
> Would it be worthy feature request to allow drawtext to accept
> integer values N > 1, and then reload the text file every Nth frame?
> It seems like a win for CPU and I/O loading, with the benefit of
> being fully backward compatible with existing scripts that read in
> every 1 frame (e.g. reload=1).
> 
> Or are there downsides to this that I’m not seeing?

I thought about something similar as I'm also overlaying infrequently
changed data (temperature) onto video.

However I ended up putting the file on a tmpfs partition so there's no
actual disk IO, the whole thing sits in memory anyway.

Have you tried benchmarking to see how much benefit you'd get from this
optimisation?  You could check CPU usage and I/O load with reload=1 and
again with reload=0 and see what the difference is.  Let us know what
you find, as I haven't actually tried this myself so it would be
interesting to know what the impact is of reading the file on every
frame.

> * and yes, I’m writing to weather.tmp and cping to weather.txt to
> prevent a file I/O collision.

Do you mean "mv" instead of "cp"?  I don't think "cp" is atomic but
"mv" is.  Using "cp" won't break anything but you might get a frame
here or there with incomplete data.
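The pattern is just this (a minimal demonstration; rename() within one filesystem is atomic, so drawtext never reads a half-written file):

```shell
# Write the new text to a temporary file first...
echo "21.5 C" > weather.tmp
# ...then atomically replace the file drawtext is reading.
mv weather.tmp weather.txt
cat weather.txt
```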

Cheers,
Adam.


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-25 Thread Adam Nielsen via ffmpeg-user
> We are most interested in being able to group videos by a certain time 
> period, e.g., the motion tripped within a particular time period.  The 
> ideal setup would be looking at the concatenating video with two sliders 
> - one for the beginning and one for the ending and downloading to the 
> phone.  While moving the sliders to see images/time stamps of the video 
> where to start and stop.  The time stamps could be on the original 
> videos.   Is the a video player that might be able to do that?

None that I am aware of but personally I would find this kind of method
difficult to use.  Trail cameras, Zoneminder and other programs tend to
save each motion "event" into a separate file with the time and date in
the filename.  The idea is when you sort your list of video files
alphabetically, the oldest video will be at the top and the newest at
the end.  Each video only contains a recording of the motion - if there
is no motion, nothing is recorded.

At this point you can play each video one by one by double-clicking on
it, or add them all into a playlist in your preferred video player.
By skipping to the next video file (usually one button press), you will
jump directly to the next time the motion sensor triggered.  It makes
it very quick to skip through each video, and if you hit one you are
interested in you can let it play through to watch it in full.  The
following video file will be the next event that happened afterwards, so
it will be very similar to playing one long concatenated video, except
each event is already in a separate file in case you need to send it to
someone else or make a copy of it for future reference.

> Or, even having one slider and being able to look at a time period after 
> that.  For example, the 20 minutes of video following a certain 
> identified point of the concatenating video . . .

Normally these programs will only record video for a short time after
the motion sensor has been triggered.  So if, for example, you have a
person enter the camera frame, have a sleep, then get up and leave two
hours later, you'll only end up with two events lasting a minute or two
each - first when the person enters the frame and again when they exit.
Possibly another short video file or two if they roll over in their
sleep.  But the point is the cameras don't record continuously so you
can't get 20 minutes after an event that only lasts for a few seconds.

There are other ways to do this (recording continuously and flagging
times when events happen) but this is a different problem and solution,
and at that point there is nothing to concatenate because you have a
continuous recording.

Cheers,
Adam.


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-22 Thread Adam Nielsen via ffmpeg-user
> Thanks about the USB tip.  I’m trying to  concatenate automatically,
> however.   We have many Arlo cameras where we CAN connect to the
> internet.  Otherwise, you’re right, we could just use a trail cam but
> the time someone would need to be spending going through assembling
> videos would not be worth it.

Concatenating the videos into one would be fairly straightforward, if
somewhat inconvenient (if the video is of leaves blowing you'd have to
sit through it in full instead of just skipping to the next video).
But if you wanted to do this you could just copy the files off the trail
camera and run a short ffmpeg command to join them all together into
one video.
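For reference, the join itself is a two-liner with the concat demuxer (a sketch; it assumes all clips share the same codec and parameters, so no re-encode is needed, and the paths are made up):

```shell
# Build a file list, then concatenate without re-encoding.
printf "file '%s'\n" /path/to/clips/*.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
```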

The hard part of what you ask is using the video player to scroll
through the videos and downloading a segment to your phone.

Also, how remote is this camera?  If you already have
Internet-connected cameras that do what you want, have you considered a
long range wireless link?  Mikrotik is one of the lower priced vendors,
with some of their longer range devices apparently being able to
maintain a line-of-sight link for 40 km (25 mi) on 2.4 GHz:

  https://mikrotik.com/products/group/wireless-systems

I haven't used any of these products so they are just examples of
what's available, not a recommendation.

Cheers,
Adam.


Re: [FFmpeg-user] FFMpeg question on Raspberry Pi

2022-02-22 Thread Adam Nielsen via ffmpeg-user
>      I need help in trying to develop a security camera for a remote 
> area of a farm.  There is no internet in some places there and some of 
> the motion videos may be long, e.g., 20 to 30 minutes.
> 
>      So, I would like to be able to record these longer motion videos on 
> a Raspberry Pi locally, concatenate them and then be able to somehow 
> quickly review the compilation/concatenated video on a video player and 
> then download the snippet(s) of video to a smart phone.

You're going to have to do a fair bit of programming/scripting to get
this I suspect, as I don't think there's anything around that can do
this out of the box on a Pi.

However, since you won't want to use an SD card for this (as writing
all the video will kill the SD card very quickly) you'll probably need
to use a USB external hard drive.  In this case you could just buy two,
and swap them over when you visit the camera.  Then back on another
computer you can flick through the video on the USB hard drive.

>  1.     Recording motion using Motion or MotionEyes to a particular
> directory for the day,
>  2.     Then using FFMpeg to possibly automatically concatenate the
> videos in that directory into one bigger file, and
>  3.     Then using a video player to scroll through the video and
> download a particular segment to my iPhone.

Have you considered using a game camera instead of a Raspberry Pi?  They
have motion sensors built in, they'll capture video of the motion, and
save each event as a different video file.  Then you can visit it, swap
over the memory card, and watch all the videos on any device you can
plug the card into (even a smartphone if you have a card reader for
it).  They run off batteries and include infrared lights to capture
video at night, so they are well suited for remote areas where you
don't need a live video feed.

The only real benefit of using the Pi would be that you get
Ethernet/WiFi on it for remote access/live video, but if you won't be
using that because it's too far away from a WiFi network and you don't
want to use WiFi extenders or dig a cable, using a game camera will
probably save you a huge amount of effort.

Cheers,
Adam.


Re: [FFmpeg-user] watch a folder for new images

2022-01-31 Thread Adam Nielsen via ffmpeg-user
> The best solution would be if something like fileSystemWatcher could be 
> added to FFmpeg.
> Below is a C# example for fileSystemWatcher.

FFmpeg is written in C.  Windows, Mac and Linux each have a different
method for watching the filesystem change.  Do you have any example C++
code that works across each operating system FFmpeg runs on?  If not,
that probably explains why it's not supported.

Instead of trying to repurpose FFmpeg to make it your whole application,
you could try making your application work with FFmpeg.  You could watch
the filesystem in your own code and when a new file appears, launch
ffmpeg as needed.  I don't run Windows so I'm not familiar with the
options, but it looks like there are some .NET interops for FFmpeg
which would let you skip launching the .exe and just pass the frames to
it directly.

Failing that, another option is to use two instances of ffmpeg.  One
(which your program calls when each new file appears) encodes the frame
and sends it over a UDP socket, and the second ffmpeg instance simply
streams video from the UDP socket.  This should work well in your case
because if no UDP traffic is coming in, the second ffmpeg instance will
simply continue to display the last frame.
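As a rough sketch of the two-instance idea (ports, codecs and file names are illustrative only, not tested):

```shell
# Instance 2 runs continuously, displaying whatever arrives on the socket.
ffplay -f mpegts udp://127.0.0.1:5004 &

# Instance 1 is launched by your watcher each time a new file appears,
# pushing that single image into the stream.
ffmpeg -re -i new_frame.png -frames:v 1 -f mpegts udp://127.0.0.1:5004
```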

However I do wonder whether FFmpeg is the best choice here for
displaying the final video.  It almost seems like it would be better to
have ffmpeg launch when a new file appears, write the result of the
transformation out to another new image file, then use some kind of
image viewer to pick up and display that second set of images.  I
wouldn't be surprised if some image viewers already supported watching
the filesystem and reloading the image if the file changed, which would
greatly simplify your problem and wouldn't require any changes to
FFmpeg.

Cheers,
Adam.


Re: [FFmpeg-user] No Audio from ALSA device on Raspberry Pi

2022-01-30 Thread Adam Nielsen via ffmpeg-user
> Using 'arecord' I am able to record audio. I have a .asoundrc file in my
> home dir and reference the device by the name in there ("-D dmic_sv").

Where is the ffmpeg output from where you tried using this "dmic_sv"
device that works with arecord?

> But, when I try to record with ffmpeg, nothing.
> 
> If I use the device "hw:0" it fails

Does this work with arecord?  If it works with arecord it should work
with ffmpeg.

> pi@cam-08:~/scripts $ ffmpeg -hide_banner -f alsa -channels 1 -sample_rate
> 44100 -i hw:0 -t 30 out.wav
> [alsa @ 0x1997210] cannot open audio device hw:0 (No such file or directory)
> hw:0: Input/output error

hw:0 means card #0, but your device list shows you don't have a card at
index 0 that supports recording:

> pi@cam-08:~/scripts $ arecord -l
>  List of CAPTURE Hardware Devices 
> card 1: sndrpii2scard [snd_rpi_i2s_card], device 0: simple-card_codec_link

hw:1,0 looks like the only valid device based on that output (card 1,
device 0).  Since there is no card 0 in the list, anything with hw:0
will return "no such file" (which really means "no such audio device").
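So the command to try is your original one with hw:1,0 substituted in (same options you already used, just the corrected device name):

```shell
# Card 1, device 0 -- the only capture device arecord -l listed.
ffmpeg -hide_banner -f alsa -channels 1 -sample_rate 44100 \
  -i hw:1,0 -t 30 out.wav
```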

> Again, I get a file, but when I play it it seems to have no audio.

It looks like you're passing different device names to arecord and
ffmpeg.  It's probably best to stick to using the same devices for both
otherwise it's difficult to know where the problem is.

Cheers,
Adam.


Re: [FFmpeg-user] Lensfun

2022-01-16 Thread Adam Nielsen via ffmpeg-user
> Interesting...
> --
> ben@dell2in1:~/ffmpeg/ffbuild$ more config.log | grep lensfun
> lensfun_filter
> lensfun_filter
> liblensfun
> liblensfun
> liblensfun
> liblensfun
> lensfun_filter
> lensfun_filter
> lensfun_filter='yes'
> lensfun_filter_deps='liblensfun version3'
> liblensfun='no'
> ---
> This is the config.log. Somehow the liblensfun is set to "no".
> Why, and how to set it to "yes"?

Look in the log file, your grep command is hiding all the error
messages explaining the problem!
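For example, showing a few lines of context around each match instead of just the matching lines (run from the directory containing config.log):

```shell
# -C 5 prints five lines before and after each match, which is where
# configure usually records the actual compile/link error.
grep -i -C 5 lensfun config.log
```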

Cheers,
Adam.


Re: [FFmpeg-user] Lensfun

2022-01-16 Thread Adam Nielsen via ffmpeg-user
> No, I have the latest version from github, and built for myself.
> Interesting because in ffmpeg/libavfilter there is a file, whose name is
>  vf_lensfun.c but only a ".c" extension. All the other filters have a ".c"
>  ".o" and a ".d" extension.

According to Google this is provided by the external library liblensfun,
so I'm guessing if you look in config.log (or the 'configure' output
when you ran it) it will tell you that it couldn't find a working
liblensfun version so it compiled ffmpeg without it?

Cheers,
Adam.


Re: [FFmpeg-user] ERROR: zimg >= 2.7.0 not found using pkg-config

2022-01-14 Thread Adam Nielsen via ffmpeg-user
> Hi, when compiling, is there a good way to fix the
> 
> ERROR: zimg >= 2.7.0 not found using pkg-config
> 
> Was using this guide
> 
> https://zimg.buaa.us/documents/install/

That tells you to configure things to install into /usr/local (which is
a good idea), but have you then configured pkg-config to look in
/usr/local/lib/pkgconfig as well?  Often it won't look in the
/usr/local prefix by default.

I can't remember how to configure it off the top of my head but the
instructions will only be a Google away.
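From memory it's just an environment variable, something like this before running FFmpeg's configure (assuming zimg installed its .pc file under /usr/local/lib/pkgconfig):

```shell
# Tell pkg-config to search the /usr/local tree as well.
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH}"
echo "$PKG_CONFIG_PATH"
```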

Cheers,
Adam.


Re: [FFmpeg-user] Can MJPEG be live streamed without transcoding?

2022-01-09 Thread Adam Nielsen via ffmpeg-user
> >   [mpegts @ 0x6725d0] Stream 0, codec mjpeg, is muxed as a private data
> >   stream and may not be recognized upon reading.  
> 
> You cannot mux random data into mpegts, this is not a limitation of FFmpeg.
> 
> A small change to the FFmpeg source code probably makes possible what
> you want, but since the resulting stream does not conform to any
> specification, the developers will not do it.

It doesn't have to be MPEG-TS, the only thing receiving this stream
will be another FFmpeg instance, so it doesn't worry me whether it
conforms to any standards or not, all I need is something FFmpeg can
send to itself over UDP.

Since FFmpeg can read raw compressed video from a file in a number of
different codecs, I assumed it could do the same via UDP, but perhaps
not.

Thanks,
Adam.


[FFmpeg-user] Can MJPEG be live streamed without transcoding?

2022-01-07 Thread Adam Nielsen via ffmpeg-user
Hi all,

I'm trying to stream video from a USB camera plugged into a Raspberry
Pi[1], however I can't work out how to stream the camera's MJPEG data
over a network without transcoding it first[2].

If I use a command like this:

  $ ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 \
      -i /dev/video1 -c:v copy -f rtp_mpegts udp://[ff01:1::1]:5004

Then it streams the data, but with this warning:

  [mpegts @ 0x6725d0] Stream 0, codec mjpeg, is muxed as a private data
  stream and may not be recognized upon reading.

I am using ffmpeg to receive the data, however I can't work out how to
tell ffmpeg to read the private data stream and use it as MJPEG data:

  $ ffmpeg -c:v mjpeg -i udp://[ff01:1::1]:5004 -map 0:v ...
  Input #0, mpegts, from 'udp://[ff01:1::1]:5004':
Duration: N/A, start: 509.858389, bitrate: N/A
Program 1 
  Metadata:
service_name: Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Data: bin_data ([6][0][0][0] / 0x0006)
  Output #0, rtp_mpegts, to 'udp://239.0.0.1:5004':
  Output file #0 does not contain any stream

If I try to use "-map 0:v" then it tells me "Stream map '0:v' matches
no streams."

I tried using "-f rtp" instead of "-f rtp_mpegts" and that also sends
the data but again, I can't work out how to tell ffmpeg to read it as
MJPEG on the receiving end.

How can I stream the MJPEG data via UDP from one ffmpeg instance to
another?  I don't want to use pipes for this, because I suspect those
will cause the same problem[2].

Many thanks,
Adam.

[1] I am also streaming from the built in camera, but I need a second
video stream which the USB camera is for.

[2] The reason for asking is that I have discovered that the USB camera
sometimes does not return frames, which causes ffmpeg to freeze,
waiting for a frame that never arrives.  Ultimately I am using the
Raspberry Pi's hardware H264 encoder to transcode the MJPEG stream into
H264, but when ffmpeg freezes, it causes all hardware encoding on the
Pi to stop, and sometimes even causes the Pi to completely lock up
requiring a power cycle to recover from.  So to avoid this I am
thinking that if I stream the MJPEG data across the network (on
localhost) then have a second ffmpeg doing the H264 V4L2 M2M encoding,
if the first MJPEG one drops out it hopefully won't cause the second
H264 one to freeze and break the Pi.


Re: [FFmpeg-user] using multiple instances of ffmpeg

2022-01-03 Thread Adam Nielsen via ffmpeg-user
> I wanted to know if any of you have had success using multiple
> instances of ffmpeg to connect to separate multi-cast or uni-cast
> audio/video streams simultaneously?

Yes, I have multiple Raspberry Pi devices each streaming 1-2 multicast
H264 streams.  I then have a machine that uses multiple ffplay
instances to view all the streams at the same time (tiled across two
monitors), and another machine that uses ffmpeg to capture all the
streams and record them to disk.

I originally was using IPv6 multicast, but had to return back to IPv4
because there is an old Cisco switch in the middle of my network that
does not support MLD snooping properly, so I either get the multicast
streams forwarded to every device (even if it does not want the stream)
swamping them with too much traffic, or IPv6 breaks completely because
neighbour-discovery messages are not forwarded.  But going back to IPv4
with IGMP snooping works well.  Once I eventually upgrade that switch
I'll go back to IPv6 again.

Cheers,
Adam.


[FFmpeg-user] How to get ffplay live stream latency under 15 seconds

2022-01-03 Thread Adam Nielsen via ffmpeg-user
Hi all,

I'm using ffplay to play a video stream from a Raspberry Pi (encoded
via ffmpeg using the Pi's hardware H264 encoder).

Unfortunately I cannot work out how to get the stream latency lower
than about 15 seconds.  I would be happy if the overall delay could be
reduced to 2 seconds or less.

I have tried a number of things I have found online but none of them
seem to make any difference.  I am seeing the following behaviour:

 * When I start the stream, the video appears within one second (the
   source stream is configured to send key frames once per second),
   however the stream is already 5-6 seconds behind the live image.

 * If I stop the stream at the source, ffplay continues to play the
   stream (with no new data coming in over the network) up until the
   timestamp where I stopped it.  So this tells me ffplay must be
   buffering the data, if it can keep playing for 15 seconds after the
   stream has stopped transmitting data over the network.

 * I have told ffplay to play the incoming stream as fast as possible
   (see below) however it still remains between 5 and 15 seconds
   behind.  It starts off 5 seconds behind and gradually increases to
   around 15 seconds behind, but never gets worse than that.  But
   again, when I stop the stream at the source, ffplay suddenly plays
   the video at a much higher framerate and catches all the way up to
   where the live stream was when it stopped being sent over the
   network.

I have tried various parameters to ffplay:

-sync ext
  Supposed to make the video play as quickly as possible.  Without it
  the playback can lag further and further behind (beyond 15 seconds)
  however with it it still lags by up to 15 seconds.

-probesize 32
  Makes the video start a little quicker but makes no difference to the
  latency.

-vf setpts='N/(30*TB)'
  Supposed to bump the framerate from the source 15 fps to 30 fps.  This
  doesn't seem to do anything until the source stream is terminated.
  At that point it does double the frame rate and quickly renders all
  the frames ffplay has apparently been buffering the whole time.

-vf setpts='PTS-STARTPTS'
  Supposed to remove any offset from the PTS value, but this has no
  effect.

-avioflags direct
  Prevents the video from playing with one message saying "[mpegts]
  Unable to seek back to the start" followed by many messages stating
  "[udp] Part of datagram lost due to insufficient buffer size".

-flags low_delay
-fflags nobuffer
-fflags discardcorrupt
-framedrop
-strict experimental
-sync audio
-sync video
  None of these make any difference one way or another.

Is there any way to get ffplay to render everything in its buffer so
you can consistently get under five seconds latency?  Or does it always
have to buffer a lot of data and run a long way behind the incoming
data?

Cheers,
Adam.


Re: [FFmpeg-user] Please advise

2021-12-30 Thread Adam Nielsen via ffmpeg-user
> I have received 2 replies to my topic.
> And both are asking for me to provide clarification.
> 
> I don't see any way to reply to them.

You should have gotten the replies via e-mail.  Use 'Reply All' so that
the ffmpeg-user@ffmpeg.org e-mail address is in the To or CC field in
your reply e-mail and everything else will work automatically.  If
you're not accessing the list via e-mail then you should do so as
e-mail is the only method managed by the people here.  There are
alternatives but they are run by third parties so you'd have to direct
your questions to them instead.

> Please advise how I can reply to their replies.

Please also try to use a better subject when you post messages.  The
subject is supposed to be a short summary of your message so that
people on the list can focus in on messages they can help with, as
there are a lot of people here with very different areas of expertise.
"Please advise" isn't a good subject because obviously anyone sending a
message wants advice, so it goes without saying.  In this case a better
subject would've been "How do I respond to a message", which neatly
summarises what you are asking.

If you use a poor choice of subject it means you are less likely to get
people to respond.

Cheers,
Adam.


Re: [FFmpeg-user] Multiple ffmpeg instances failing to subscribe to separate audio streams

2021-12-21 Thread Adam Nielsen via ffmpeg-user
> We have been initially working with the 2110 decode example on the
> github site (https://github.com/cbcrc/FFmpeg) and have been
> successfully streaming media with it. We have evolved our design and
> would like to operate with multiple media streams.

The setup is probably a bit complex for anyone to be able to quickly
reproduce.  Can you simplify it for debugging purposes?  e.g. remove
the SDP file and stream from a URL directly?  Ideally you want the
minimum number of components in play that demonstrate the problem
(which will also tell you which component is at fault, if the problem
goes away once you remove that part).

I am wondering whether it's some issue around multicast and the
subscription/unsubscription messages getting mixed up.  Can you look at
the local system's IGMP subscription list (e.g. /proc/net/igmp) and
confirm that the subscriptions aren't dropping out when you're losing
data?

You could try streaming from two different IP addresses instead of just
the one to see if this makes the problem go away.  If it works from two
different IPs but not two streams from the same IP, that would confirm
that it's probably an issue related to the multicast unsubscription
messages being sent at the wrong time.  But if you get the problem with
only one stream per IP address you can probably rule out IGMP as the
issue.

Sorry I couldn't be more helpful, but it's a bit of an involved setup
to reproduce locally for testing.

Cheers,
Adam.


Re: [FFmpeg-user] onvif use

2021-12-14 Thread Adam Nielsen via ffmpeg-user
> I have tried to reproduce the streaming without success:
> 
> ffplay http://admin:password@192.168.1.59:8000
> 
> and to record it in various protocols and formats:
> 
> ffmpeg -i http://admin:password@192.168.1.59:8000 stream.h264
> or
> ffmpeg -i
> rtsp://admin:password@192.168.1.59:8000//onvif/device_service
> stream.mp4
> 
> Nothing happens and when I cancel with "ctrl + c" appear
> "Exiting normally, received signal 2."

How did you find these URLs?  FFmpeg doesn't have ONVIF support so you
normally have to use an ONVIF program to query the camera and ask it
for its streaming URLs, and then pass those to FFmpeg.

> I have been able to see the camera in software under Windows such as
> AnyCam or IP Camera Viewer ( is detected as PPS IPC based on ONVIF
> MINI 9T). But I am a linux user and I prefer to use ffmpeg

See if you can find a Linux-compatible ONVIF utility (doesn't have to
be a viewer, there are command-line ones around) and use that to
request the available streaming URLs from the camera.  Usually there are
multiple URLs and the one you pick depends on the codec and video
resolution you prefer.

Cheers,
Adam.


Re: [FFmpeg-user] help with pipe command ffmpeg

2021-12-08 Thread Adam Nielsen via ffmpeg-user
> i want to know how to output to a windows named pipe and reference
> that in c++ code as input ...
> 
> kindly share some working command line examples of ffmpeg.exe
> 
> googling is not much help full, tired of it...

It seems that using named pipes is not very common.  If you still want
to use them you will probably have to work it out for yourself,
otherwise using a more common approach will give you more examples to
look at.

I would suggest using sockets instead as there is probably much more
code for this, and you won't be limited to Windows as socket code will
also work on non-Windows platforms.

For example if you output to udp://127.0.0.1:5000 then the traffic
still won't leave the machine (so same as using named pipes) but you
can use standard UDP reception code to receive the data - there are
plenty of examples for how to listen to UDP traffic in C++.  Also, you
could move the transmission and reception programs to different
machines one day if you choose to, which is much harder with named
pipes.
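For illustration, here is a minimal UDP receiver, in Python rather than C++ only for brevity (the socket calls are the same ones you would make from C++). The demo stands in for ffmpeg by sending itself one packet on an ephemeral port; in real use you would bind the port your ffmpeg command sends to:

```python
import socket

def open_udp_receiver(addr=("127.0.0.1", 5000), timeout=5.0):
    """Bind a UDP socket; ffmpeg's udp://127.0.0.1:5000 output would land here."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)
    sock.settimeout(timeout)   # don't block forever if the sender dies
    return sock

# Demo on an ephemeral port, with a second socket standing in for ffmpeg:
rx = open_udp_receiver(("127.0.0.1", 0))   # port 0 = kernel picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"fake mpegts packet", rx.getsockname())
payload, _peer = rx.recvfrom(65536)        # big enough for one TS datagram
tx.close()
rx.close()
print(payload.decode())   # fake mpegts packet
```

The receive loop in your real program would just call recvfrom() repeatedly and feed each datagram to your demuxer.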

Cheers,
Adam.


Re: [FFmpeg-user] Struggling with audio of multipart stream

2021-12-02 Thread Adam Nielsen via ffmpeg-user
> I have a TPLink Kasa IP camera, which can be coaxed into sending a live or
> recorded stream. It looks like this:

That looks like some sort of custom format, perhaps to make it easier
to stream over HTTP to a browser.

I am guessing if you want to use this stream you'll probably have to
write your own code to parse it and strip out the audio and video
streams.

> I can view or transcode the video, but it throws a lot of errors and
> doesn't seem to see the audio stream. Can anybody help?

That's because it's not a normal video format and you're just lucky the
video decoder is able to pick out usable frames amongst all the other
"rubbish" that isn't video.

Unless you want to write some custom code, you're better off figuring
out how to get the camera to send it in a proper format.

Almost all cameras offer some kind of streaming, perhaps via RTP or
RTSP, maybe requiring ONVIF to discover the URL, etc. so this one can
almost certainly do it too, if you can just figure out how.

It will by far be the better solution.

Cheers,
Adam.


Re: [FFmpeg-user] kmsgrab: variable framerate captures (was: Oh man, this could be so great. But it's just... not.)

2021-11-20 Thread Adam Nielsen via ffmpeg-user
> 2.) As far as I can tell, kmsgrab (like x11grab or other screen devices)
> wants a constant frame rate value (-r).
> [...] This is exactly how camera input is
> recorded on mobile devices, as they're all variable frame rate, and it
> works fine. kmsgrab should do the same.

Wouldn't this work the same as camera devices that change frame rate
with exposure rate?  I'm using a Raspberry Pi camera that slows down
the frame rate as it gets darker, and using
-use_wallclock_as_timestamps seems to work for that.  I also combine it
with -vsync 1 to duplicate the frames so I get a constant FPS in the
output video but I think -vf fps would probably be better.  Maybe you
need to mess with the setpts or setts filters to get accurate timing
but I'm pretty sure it's all doable already.

At least I can produce a fixed-framerate video that runs at the correct
speed when taken from a variable-framerate V4L2 camera device, so it
should be similar, assuming of course kmsgrab only provides a frame
upon pageflipping or similar.  If it's just sampling the screen
contents at regular intervals then none of this will work and you'll
probably get bad tearing in the recorded video, so hopefully it's not
done that way.

Cheers,
Adam.


Re: [FFmpeg-user] Can I set a "wait-timeout" for ffmpeg to not abort a stream download?

2021-11-15 Thread Adam Nielsen via ffmpeg-user
> On a new invocation I will have to adjust the -t argument to handle
> the time until the original stop was supposed to be done of course.

If you're using "at" as you have suggested in your other message,
possibly you could just write the FFmpeg PID to a file and have "at"
send a TERM signal at the specified time.  That way you'd only need one
calculation, or possibly none if you can specify the duration as a
start and end time.

Just be careful of the edge case where it's in the process of
restarting right when the end time arrives, so there's nothing to
terminate and then the stream resumes.

> Is there a readymade function that can add/subtract time in hh:mm:ss
> format and return the same format?
> I think ffmpeg can actually handle -t hh:mm:ss in addition to -t sss

"date" can do a limited bit of maths but not sure if it will cover your
use case:

  date -d "now + 30 mins"
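If you would rather do the hh:mm:ss arithmetic yourself, a few lines of Python cover it. This is a sketch; since ffmpeg's -t accepts both plain seconds and hh:mm:ss, either output form works:

```python
def hms_to_seconds(hms):
    """'hh:mm:ss' -> total seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def seconds_to_hms(total):
    """Total seconds -> zero-padded 'hh:mm:ss'."""
    return "%02d:%02d:%02d" % (total // 3600, total % 3600 // 60, total % 60)

# e.g. a -t 01:05:00 recording that was cut off after 00:12:34 still needs:
remaining = hms_to_seconds("01:05:00") - hms_to_seconds("00:12:34")
print(seconds_to_hms(remaining))   # 00:52:26
```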

> Can I somehow specify to ffmpeg that it should *append* the new data
> to the already downloaded first part? Or do I have to get this new
> bit separately and then paste them together?

ffmpeg doesn't append, it takes in whole files and writes whole files.
You'd have to merge them after the fact.  You could possibly work
around the issue but I'm not sure it's worth it.

One workaround would be to have your ffmpeg instance stream the
transcoded data to a localhost UDP socket instead of a file ("-f
rtp_mpegts udp://localhost:1234").  Then you could run a second ffmpeg
instance that reads data from the UDP socket and writes it to your
file.  Since this second instance is listening on UDP, it won't
terminate when the first ffmpeg gets cut off, so when the first one
restarts and resumes sending the UDP data, the second one will keep
going, effectively "appending" the data.

But now you have to deal with two ffmpeg instances per recording, one
of which may randomly restart, and you might have to mess around with
which of the two instances does the actual encoding depending on what
codecs rtp_mpegts is able to send over the UDP socket.

> I have seen while watching live on the source webpage that it
> sometimes towards the top of the hour seems to black out the player
> for a while and then resume. It could take anywhere from 10 seconds
> to a couple of minutes when it happens.
> 
> The aborted recording does not have any of that it just suddenly ends.

You'll probably find the web player just blanks out the video while
it's trying to reconnect.  You might be able to get ffmpeg to do the
same with one of the -vsync options, but then having minute long black
sections in the output video might not be great for playback.

> I don't really understand why I would need to collect the "complete
> output" since I am not asking about *how or why* ffmpeg failed, what
> I asked is if there is an argument to ffmpeg, which will force the -t
> video time to be observed intstead of ffmpeg aborting when the stream
> pauses its streaming.

To understand why this was asked, have a read of "Pounding A Nail: Old
Shoe or Glass Bottle":

  https://weblogs.asp.net/alex_papadimoulis/408925

I know sometimes you just want an answer to your immediate problem, but
so often the issues people have would be better solved by tackling it a
different way.  Not having all the information about the problem means
often time gets wasted trying to solve the wrong issue.

In your case we could've spent ages trying to investigate ways to keep
ffmpeg alive, when it turned out that the lost TCP connection meant
keeping ffmpeg alive would not have helped solve the problem, and any
attempts to try to do so would have been wasted effort.  This is why
everyone is eager to fully understand what you are trying to achieve,
so they can offer the best solution.  Yes, the information might seem
irrelevant to you, but many a time someone will spot something you
weren't even aware of so it's always worth giving the whole story when
asking for help.

And yes, speaking from experience sometimes this results in solutions
you don't want, but that's better than an answer that does exactly what
you asked for but still doesn't fix the problem!

> I cannot dig out such information anyway since it is hidden below
> several levels of at command handling and scripts calling other
> scripts. It all happens in an at job container...

Sounds like you need to improve your logging :-P

Cheers,
Adam.


Re: [FFmpeg-user] Can I set a "wait-timeout" for ffmpeg to not abort a stream download?

2021-11-15 Thread Adam Nielsen via ffmpeg-user
> Here you go (even though I was asking for the existence of an ffmpeg
> argument to tell it to wait out the set -t time rather than aborting
> if there is a pause in the stream).
> Actual URL domains obfuscated.
> 
> ffmpeg -hide_banner -referer "https://w.com/.html" -i
> "https://.org/2fwljiCVp2jdxA63hnS-ng==,1636958338/LS-ATL-54548-10/index.m3u8"

Assuming the issue is that the TCP connection is terminated (due to a
timeout, server disconnect, etc.) there won't be an ffmpeg option for
this.

Having ffmpeg wait until more data arrives is not applicable to a
disconnected TCP connection, because it's like calling someone on the
phone and they hang up.  No matter how long you wait with the phone
against your ear they will never just start speaking again, one of you
has to make another call to get reconnected.

Same thing here - ffmpeg exits when the "call" hangs up, because
waiting would serve no point - the TCP connection has already been
terminated, so no data will ever pass again across that connection, so
there's nothing to wait for.

Your best bet is probably going to be scripting it, so that when it
terminates, the script runs ffmpeg again to re-establish a new
connection to the server.
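A minimal reconnect wrapper might look like the sketch below (Python only so the retry logic is explicit; the ffmpeg command line shown in the comment is a placeholder to swap for your own, and you may also want to shrink -t on each retry so the total recorded time stays right):

```python
import subprocess
import sys
import time

def run_with_restarts(cmd, max_runs=10, backoff=2.0):
    """Re-run cmd each time it exits non-zero; stop on a clean exit."""
    for attempt in range(1, max_runs + 1):
        if subprocess.call(cmd) == 0:
            return attempt               # clean exit: recording finished
        time.sleep(backoff)              # dropped connection: wait, reconnect
    return max_runs

# Placeholder; substitute your real invocation, e.g.
#   run_with_restarts(["ffmpeg", "-hide_banner", "-i", STREAM_URL, "out.ts"])
attempts = run_with_restarts([sys.executable, "-c", "raise SystemExit(0)"],
                             backoff=0)
print(attempts)   # 1
```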

You may also want to investigate why the connections are getting
dropped, however depending on how important the streaming is, even a
perfect system will occasionally drop TCP connections so being able to
handle that happening will make your setup much more robust.

Cheers,
Adam.


Re: [FFmpeg-user] Can I set a "wait-timeout" for ffmpeg to not abort a stream download?

2021-11-14 Thread Adam Nielsen via ffmpeg-user
> Some live streams I download from are sometimes interrupted for a
> short time of 1-2 minutes and when I use ffmpeg for download it quits
> at these occurrences.
> 
> Is there some way to enforce the "-t 3800" argument so it really waits
> all that time if the video stops before aborting the download?
> 
> Or else some other argument that can set an idle-timeout value to say 300 s or
> so, such that the video download is not aborted but instead just paused until
> the stream returns or when the -t argument has been reached?

What protocol/URL are the streams you're using?  I have the opposite
problem, where data stops arriving and ffmpeg will sit there for hours
doing nothing, when I want it to exit so I can detect the interruption
in a script and restart it.

I'm wondering whether you're using a TCP connection and it's the TCP
connection that drops out.  In my case it's both UDP and a local V4L2
device, and both those will sit there forever and then resume
immediately if new data eventually arrives.

Cheers,
Adam.


Re: [FFmpeg-user] Alternative to Dynamic Text

2021-11-08 Thread Adam Nielsen via ffmpeg-user
> Instead of a sleep and retry, we can simply continue with existing text 
> and try again on next frame. We can also extend the semantics of the 
> option (in a backward-compatible way) to specify the frame interval at 
> which the file is reloaded.

I like this idea.  I'm using it on a Raspberry Pi 1 to write the time
and camera name in a status bar at the bottom of the video, and the Pi
is pegged at 100% CPU usage even using hardware encoding.  So anything
that lowers the CPU usage is welcome.

If you will only reload the file every few frames, I wonder whether you
could cache the output image rather than just the text?  This way the
font rendering would only happen every few frames as well (only when
the text actually changes), further lowering CPU usage.

If this option to re-use previously rendered text could also apply to
placeholders (like text=%{localtime}) that would also help lower the
CPU usage.

Cheers,
Adam.


Re: [FFmpeg-user] Alternative to Dynamic Text

2021-11-06 Thread Adam Nielsen via ffmpeg-user
> Btw, I think following approach may help (or may not, I have no windows
> system by hand to test it myself). Let's say I want to atomically
> replace file a.txt with file b.txt
> mklink /h wrk.txt a.txt
> open wrk.txt with ffmpeg
> update b.txt as needed
> mklink /h next.txt b.txt
> move /y next.txt wrk.txt
> now update a.txt as needed or may delete a, b and create new b.
> hardlink again and move again
> and so on in loop

From what I read, the 'move' command under Windows is not atomic.  It
works by deleting the original file then renaming the new one, so if
ffmpeg tries to read it between the delete and the rename, ffmpeg will
abort with an error saying it can't find the file.

The Win32 API function MoveFileTransacted() appears to be the only way
to do it under Windows.

I think probably the best solution is to change ffmpeg so that if the
textfile parameter cannot read the file, it displays a blank or just
uses the previous data.  That way it no longer becomes critical to
update the file atomically.

Cheers,
Adam.


Re: [FFmpeg-user] Alternative to Dynamic Text

2021-11-03 Thread Adam Nielsen via ffmpeg-user
> I have tried to do what you have suggested. Somehow, it still cause
> ffmpeg to crash at the exact moment the file is being renamed.
> 
> By the way, I am running in Windows environment. Not sure, if in
> Linux will this issue occurs.

It works fine for me but I'm on Linux.

If renaming doesn't work, have you tried copying the new file over the
top of the old one?

I'm not familiar enough with Windows to know how to atomically
overwrite a file.  But I searched Google and found this:

  https://stackoverflow.com/questions/167414/is-an-atomic-file-rename-with-overwrite-possible-on-windows

It suggests there is a Windows API function called MoveFileTransacted()
that you could use.

I'm surprised just opening the file in read-write mode, seeking to the
start, then writing your content doesn't work though.  You might get a
frame here or there with incomplete data but it won't cause ffmpeg to
exit with an error.

Cheers,
Adam.


Re: [FFmpeg-user] How to obtain packets as soon as sending a frame?

2021-11-03 Thread Adam Nielsen via ffmpeg-user
> I am experimenting with the encoder
> example.
> According to the FFmpeg encoding and decoding API
> overview,
> the codec might accept multiple input frames without returning a
> packet, until its internal buffers are filled.
> 
> Is there a way to receive a packet as soon as I send a frame?

I am no expert but I imagine it is keeping the old packets around so it
can refer to them from later packets, and it only sends them when it no
longer needs to refer back to them.

> Specifically, I am using libx265. The first frame is always an I
> frame (key frame) which can be encoded without later frames. However,
> I have to send quite a few following frames to obtain the first
> packet, and this leads to severe latency.

I suppose you could configure the codec to send an I frame for every
frame or every second frame?  You'd get almost no compression but the
latency would probably be quite low.

You could also consider increasing the frame rate, e.g. 30 fps in, 120
fps out.  Since you'd be submitting each input frame multiple times,
the encoder would produce an output packet sooner.

There are probably much better ways than this but I imagine you'd have
to delve into the codec source code.

Cheers,
Adam.


Re: [FFmpeg-user] Alternative to Dynamic Text

2021-11-02 Thread Adam Nielsen via ffmpeg-user
> textfile/reload=1 is working.
> 
> However, the textfile is being updated by another process at every
> second. This caused ffmpeg to crash after running sometime as the
> ffmpeg unable to read the textfile when the other process is updating
> it.

In the manpage it says the update has to be atomic, i.e. you should
write your update to a different file, then move the new file over the
top of the old one.  That way ffmpeg will always see either the new
content or the old, and nothing in between.
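That pattern is a few lines in Python: write the update to a temp file in the same directory, then rename it over the original. A sketch, with one hedge: os.replace is guaranteed atomic on POSIX, while on Windows it maps to a single replace-style MoveFileEx, which is close but not formally transactional:

```python
import os
import tempfile

def update_textfile_atomically(path, text):
    """Readers of path only ever see the old content or the new, never a mix."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)),
                               prefix=".drawtext-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)                # write the full new content first
        os.replace(tmp, path)            # then atomically rename over the old file
    except BaseException:
        os.unlink(tmp)                   # don't leave the temp file behind
        raise

update_textfile_atomically("overlay.txt", "CAM1 2021-11-02 12:00:00")
print(open("overlay.txt").read())   # CAM1 2021-11-02 12:00:00
```

The temp file must be on the same filesystem as the target (hence same directory), otherwise the rename degrades to a non-atomic copy.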

Cheers,
Adam.


Re: [FFmpeg-user] Alternative to Dynamic Text

2021-11-02 Thread Adam Nielsen via ffmpeg-user
> For subtitles, what if the content of .srt or .ass file is being
> updated at every second. Will ffmpeg crash because the file is being
> ‘locked’ by another process that is updating it?

If you're going to update it every second, how come textfile/reload=1
won't work?

I put my textfile in /tmp which is a tmpfs filesystem so that it
remains in memory the whole time (fast, no disk access) if you're
worried about disk I/O.

Cheers,
Adam.


[FFmpeg-user] How to set scanline stride value? Corrupted video with RasPi HW encoder

2021-10-22 Thread Adam Nielsen via ffmpeg-user
Hi all,

I'm having a small issue encoding video with the Raspberry Pi's
hardware encoder, available as a V4L2M2M device that ffmpeg can use in
current firmware/kernel versions.

If I try to encode a video where the width is not a multiple of 64
pixels, I get video corruption.

I opened an issue about it on the Raspberry Pi GitHub repo[1] including
screenshots of the corruption, and they got back to me saying they
suspect ffmpeg is not setting the stride value correctly as reported by
the V4L2M2M interface.

Is there any way to find out what stride value ffmpeg is using and/or
override it, to try to work out whether ffmpeg is setting the wrong
value, or whether the RasPi firmware is supplying the wrong value to
begin with?

If anyone has ffmpeg running on a Pi, then the command listed in the
GitHub issue to reproduce the problem is:

  ffmpeg -f lavfi -i testsrc=size=1296x972:rate=30 -pix_fmt yuv420p \
    -c:v h264_v4l2m2m -f avi - | ffplay -

By adjusting the width of the test video, only values that are
multiples of 64 produce clean video.  Other values produce diagonal
lines across the video as shown in the GitHub issue.
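Until the stride handling is sorted out, one possible workaround is to pad the width up to the next multiple of 64 before the encoder sees it (e.g. with ffmpeg's pad filter, something like -vf pad=1344:972 for the 1296x972 case). This assumes the multiple-of-64 observation above generalizes; the rounding itself is one line:

```python
def align_up(width, alignment=64):
    """Smallest multiple of alignment that is >= width."""
    return -(-width // alignment) * alignment   # ceiling division, then scale

# The 1296-pixel-wide test pattern above would be padded to:
print(align_up(1296))   # 1344
```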

Many thanks,
Adam.

1: https://github.com/raspberrypi/firmware/issues/1608


[FFmpeg-user] Terminate on no frames/input buffer overflow

2021-10-01 Thread Adam Nielsen via ffmpeg-user
Hi all,

I'm streaming a video with ffplay on a Raspberry Pi using the hardware
decoder (v4l2_m2m) and the video stream keeps randomly freezing, I am
guessing due to a bug in the hardware decoder implementation.

When this happens, I must run "killall ffplay".  I have configured
systemd to restart ffplay automatically so the streams resume a few
seconds later, until they freeze again anywhere from a few minutes to a
few hours later.

Rather than having to manually SSH in and terminate ffplay, I am
wondering whether there is a way that ffplay can be told to exit when
the decoder freezes, so that I might have a chance of recovering from
this situation automatically?

It looks like when the freeze happens, ffplay's memory usage increases
as if it's buffering the incoming stream, so perhaps being able to get
ffplay to quit in this situation might work.

I have tried to use systemd's configuration to limit ffplay's memory
usage, however it doesn't seem to terminate ffplay when this situation
arises so I am not entirely sure what's going on.

As a side note, the reason I am assuming this is a hardware bug is
because I am streaming two videos, and when one freezes, so does the
other one.  Occasionally I also get corrupted bits of one stream
appearing in the other ffplay window as well, and dmesg reports errors
about the graphics decoder timing out.  If I only restart one video
stream I just get the old frozen frame showing up again.  I have to
restart both ffplay instances before playback returns to normal.

Any ideas how to tell ffplay to exit when it no longer receives frames
from the streaming source?

Many thanks,
Adam.


Re: [FFmpeg-user] How to set V4L2 M2M codec options like bitrate

2021-10-01 Thread Adam Nielsen via ffmpeg-user
Hi Andriy,

> > Unfortunately the most important option I need to change is the
> > "repeat_sequence_header" control, which I need to set to 1 as the
> > default of 0 means if you miss the initial frame of the video, you can
> > never join the stream mid-way through, as ffplay just produces heaps of
> > "non-existing PPS 0 referenced" messages and never opens the video
> > window.
> >
> > Does this option have an equivalent to set on the ffmpeg command line,
> > now v4l2-ctl will no longer work?  
> 
> Not yet.
> Can you test the attached patch, please? It will init the
> repeat_sequence_header option to 1.

My apologies for the delay responding - it took me longer than I
expected to get ffmpeg building on my Pi.

I have now tested your patch and I can confirm that this does indeed fix
the problem and get things working for me!

Is this the sort of thing that can be merged into the official release,
so that I don't have to maintain a custom ffmpeg build?

Many thanks for your help with this!

Cheers,
Adam.


Re: [FFmpeg-user] Problem with console log

2021-09-03 Thread Adam Nielsen via ffmpeg-user
> So there isn't a way to repair the master file? I know it's a 2008 MiniDV
> capture, so it's old.

Possibly there was some issue (e.g. dirty tape heads) when the camera
recorded it so there's probably not much you can do to fix it.

> I need to use the master file with other editing applications. 

Assuming the other applications can't use it as-is, probably your only
option would be to re-encode it with ffmpeg, as it appears to handle
it.  This will cause some quality loss, but if you use an extremely
high bit rate it won't be too noticeable.  Once you've edited it in your
other application you can use a normal bit rate again when you encode
the final output video.
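As a sketch (the filenames are placeholders, and the codec choice
depends on what your editing application accepts), the near-lossless
intermediate re-encode might look like:

```shell
# Re-encode the damaged DV capture near-losslessly; CRF 5 with x264 is
# visually transparent, and the audio is copied untouched.
ffmpeg -i master.mov -c:v libx264 -crf 5 -preset slow -c:a copy intermediate.mov
```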

Cheers,
Adam.


Re: [FFmpeg-user] Problem with console log

2021-09-03 Thread Adam Nielsen via ffmpeg-user
> I have the following problem:
> I recently tested a MOV (codec DV) file that has the following console log
> error
> How can I fix this?

The error means your file has some corruption but ffmpeg is able to
cope with the problem.  You can fix it by playing a file that isn't
corrupted, or by using the -loglevel option to hide the warnings, e.g.
-loglevel fatal

Cheers,
Adam.


Re: [FFmpeg-user] Framerate automatically increased while cropping

2021-08-22 Thread Adam Nielsen via ffmpeg-user
> why is the framerate automatically changed here from 29.98 fps to 120
> fps, and how can I prevent this?
> 
>    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 
> 1280x720, 2508 kb/s, 29.98 fps, 120 tbr, 90k tbn, 2k tbc (default)

From what I can find, "tbr" is ffmpeg's guess at the stream's frame
rate, derived from the timestamps in the file.  So ffmpeg is just
respecting what the file itself reports.

I guess you could use the -r option to override this and force a
specific framerate if you want something different?  Or you could fix
the metadata in the source file to have the correct target framerate to
begin with.
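A sketch of the first suggestion (the crop geometry is a placeholder,
since the original post doesn't show the full command):

```shell
# Crop while forcing the output back to the input's ~29.98 fps instead
# of the 120 tbr ffmpeg picks up from the metadata.
ffmpeg -i input.mp4 -vf "crop=640:360:0:0" -r 29.98 output.mp4
```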

Cheers,
Adam.


Re: [FFmpeg-user] How to set V4L2 M2M codec options like bitrate

2021-08-15 Thread Adam Nielsen via ffmpeg-user
> > I'm using the h264_v4l2m2m codec to do hardware accelerated encoding on
> > a Raspberry Pi.  This is working well, except I am unable to set any
> > encoder options such as bitrate, as I cannot find how to tell ffmpeg to
> > set V4L2 control options.
> 
> You can use -b:v option to set the bitrate, i.e.
> ./ffmpeg -i input -codec:v h264_v4l2m2m -b:v 2M out.mp4
> 
> There is a patch for documentation that I need to update. It may be useful
> to you:
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20200117034211.12142-1-andriy.gel...@gmail.com/

Ah excellent, many thanks for that, that does work for setting the
bitrate.

Unfortunately the most important option I need to change is the
"repeat_sequence_header" control, which I need to set to 1 as the
default of 0 means if you miss the initial frame of the video, you can
never join the stream mid-way through, as ffplay just produces heaps of
"non-existing PPS 0 referenced" messages and never opens the video
window.

Does this option have an equivalent to set on the ffmpeg command line,
now v4l2-ctl will no longer work?

Thanks again,
Adam.