In avfilter it is very easy to override the default threading
implementation by simply assigning to AVFilterGraph->execute.
However, this doesn't seem to be possible in avcodec: you can't simply
assign to AVCodecContext->execute(2), since ffmpeg internally always
initializes its own threading.
I'm unable to get ffplay/ffmpeg to properly wait and reconnect to a http
stream.
1. ffplay -reconnect 1 -i
http://localhost:8000/file/cir7q2zxh00013l5hh0src1fh.mxf
2. Kill file server
3. Start file server
4. ffplay does not recover
I've also tried with an H.264 stream, with similar results. Am I
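For what it's worth, the HTTP protocol has a few more reconnect options that may need to be combined; a sketch, not verified against this exact setup (the URL is the one from the original command):

```shell
# -reconnect alone may not be enough for a live/non-seekable stream; the
# _streamed and _delay_max options control reconnection for streamed input
# and how long ffplay keeps retrying before giving up
ffplay -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 30 \
    -i http://localhost:8000/file/cir7q2zxh00013l5hh0src1fh.mxf
```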
When streaming a non-seekable HTTP response (without Content-Length in
the response) that is cancelled on the server side, ffmpeg doesn't seem
to print any warning, and to the user it looks like everything worked fine.
Is there any container or option that could at least print a warning if
the
I'm not suggesting to throw away the frames.
Keep the frames but set an appropriate start time. That way the "trimmed"
frames would be decoded but not displayed.
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
> Not if there are filters that alter the timestamps, nor if the source file
> duration is not accurate enough or completely unknown.
True, but if that is not the case then it would? In theory it should
be possible to make a conservative guess.
Would be nice to be able to at least specify it
When I set "pipe:1" as the output, the resulting file always ends up without
a duration in its metadata/header (ffprobe doesn't find it).
I'm primarily interested in outputting webm, but I have the same issues
with nut and mkv.
Is there a way around this?
And why is that an issue? It could just pipe out the duration at the start
of the file since it knows the duration from the source file... or is it
possible to manually specify the duration so that it is written?
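A workaround that may apply here (a sketch, file names hypothetical): the matroska/webm muxer can only fill in the duration if it can seek back to the header when muxing finishes, which a pipe doesn't allow. Letting a second ffmpeg remux the piped stream into a regular, seekable file gets the duration written:

```shell
# the first ffmpeg streams webm to stdout; the second remuxes into a
# seekable file, so the muxer can seek back and write the duration
ffmpeg -i in.mp4 -c:v libvpx -c:a libvorbis -f webm pipe:1 \
    | ffmpeg -i pipe:0 -c copy out.webm
```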
I'm using ffmpeg to segment/concat a 120Mbit/s file in 2 second segments
to/from a HTTP server. The problem is that ffmpeg only writes/reads one
segment at a time which doesn't fully use all available bandwidth.
Each request to/from the server has a limit of 10MB/s. However using 5+
concurrent
It would have been easier if the format itself was concatenable.
What formats would you suggest?
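MPEG-TS is the usual suggestion here: it is designed so that segments with identical codec parameters can be joined by plain byte concatenation. A sketch, assuming the source codec can be carried in TS (file names hypothetical):

```shell
# cut into 2-second TS segments without re-encoding
ffmpeg -i input.mxf -c copy -f segment -segment_time 2 seg%05d.ts
# join them back by simple concatenation
cat seg*.ts > joined.ts
```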
You can probably do slightly better using
pipes.
Hm. You mean I would create a named pipe for every segment with the name =
filename?
Please do not top-post on this mailing-list. If you do not
septidi 7 floréal, an CCXXIII, Robert Nagy wrote:
I'm using ffmpeg to segment/concat a 120Mbit/s file in 2 second segments
to/from a HTTP server. The problem is that ffmpeg only writes/reads one
segment at a time which doesn't fully use all available bandwidth.
Each request to/from
I need to frame-accurately trim the start of a long clip in a fast way.
I am aware that for frame-accurate seeking one needs to decode and
re-encode a clip. However, that would be too slow for my usage.
So what I was thinking of was seeking to the first frame before the seek
point and decoding to the
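For what it's worth, recent ffmpeg already does something close to this when -ss is given as an input option and the output is being decoded: it seeks to the keyframe preceding the requested point, then decodes and discards frames up to it. A sketch (timestamps and file names hypothetical; note the output is still re-encoded):

```shell
# input -ss seeks to the keyframe before 00:01:23.456; frames up to that
# point are decoded and dropped, so the cut lands on the exact frame
ffmpeg -ss 00:01:23.456 -i long_clip.mxf -c:v libx264 -c:a copy trimmed.mp4
```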
I haven't tried a two-pass encode. But I can't quite use two-pass encoding
in my usage scenario since I'm capturing live.
On Mon, Jan 19, 2015 at 6:04 PM, Werner Robitza werner.robi...@gmail.com
wrote:
On Mon, Jan 19, 2015 at 7:49 AM, Robert Nagy rona...@gmail.com wrote:
I'm encoding some video using x264 to Main 3.2 for streaming playback on
Apple devices.
The video contains quite a bit of text, which ends up with rather non-smooth
edges. There is probably not much to do about that in the Main profile.
However, at the first 1-2 seconds of the video the text looks
I want to generate some thumbnails for a video library.
Currently I'm using:
-frames:v 1 -filter:v select=gt(scene\,0.5) thumb.png
Which almost works. The problem is that I'd like the thumbnail to be taken
2 seconds after the scene change has been detected.
Is there a way to achieve that?
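One two-pass approach that may work (a sketch; threshold, file names and timestamps hypothetical): first locate the scene change with showinfo, then grab a frame 2 seconds past the reported timestamp with a separate seek:

```shell
# pass 1: print the timestamp of the first frame past the scene threshold
ffmpeg -i video.mp4 -vf "select='gt(scene,0.5)',showinfo" \
    -frames:v 1 -f null - 2>&1 | grep pts_time
# pass 2: assuming pass 1 reported pts_time:12.5, seek 2 s past it
ffmpeg -ss 14.5 -i video.mp4 -frames:v 1 thumb.png
```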