Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread pdr0
Mark Filipak wrote
>> Deinterlacing does not necessarily have to be used in the context of
>> "telecast".  e.g. a consumer camcorder recording home video interlaced
>> content is technically not "telecast".  Telecast implies "broadcast on
>> television"
> 
> You are right of course. I use "telecast" (e.g., i30-telecast) simply to
> distinguish the origin of 
> scans from hard telecines. Can you suggest a better term? Perhaps
> "i30-camera" versus "i30"? Or 
> maybe the better approach would be to distinguish hard telecine: "i30"
> versus "i30-progressive"? Or 
> maybe distinguish both of them: "i30-camera" versus "i30-progressive"?


Some home video cameras can shoot native progressive modes too - 24p,
23.976p. Some DV cameras shoot 24p with advanced or standard pulldown.

So why not use a descriptive term for what it actually is in terms of content,
and how it's arranged or stored? (See below.)



>> The simplest operational definition is double rate deinterlacing
>> separates
>> and resizes each field to a frame +/- other processing. Single rate
>> deinterlacing does the same as double, but discards either even or odd
>> frames (or fields if they are discarded before the resize)
> 
> I think I understand your reference to "resize": line-doubling of
> half-height images to full-height 
> images, right?

"Resizing " a field in this context is any method of taking a field and
enlarging it to a full sized frame. There are dozens of different
algorithms. Line doubling is one method, but that is essentially a "nearest 
neighbor" resize without any interpolation. That's the simplest type. Some
complex deinterlacers use information from other fields to fill in the
missing information with adaptive motion compensation
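As a sketch of "resizing a field" with ffmpeg's own filters (filter names per
the ffmpeg documentation; the file names are hypothetical): separatefields
splits each frame into its two half-height fields, and scale enlarges each
field back to full height. flags=neighbor approximates plain line doubling;
bilinear or bicubic would interpolate instead:

  ffmpeg -i in.mkv -vf "separatefields,scale=iw:2*ih:flags=neighbor" out.mkv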


> But I don't understand how "double rate" fits in. Seems to me that fields
> have to be converted 
> (resized) to frames no matter what the "rate" is. I also don't understand
> why either rate or 
> double-rate would discard anything.

The "rate" describes the output frame rate. 

Double rate deinterlacing keeps all the temporal information. Recall what
"interlaced content" really means. It's 59.94 distinct moments in time
captured per second. In motion you have 59.94 different images.

Single rate deinterlacing drops half the temporal information (only the even
fields, or only the odd fields, are retained)

single rate deinterlace: 29.97i interlaced content => 29.97p output
double rate deinterlace: 29.97i interlaced content => 59.94p output
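
In ffmpeg terms (a sketch - yadif is one deinterlacer among many, and the
file names here are hypothetical):

  ffmpeg -i 2997i.mkv -vf yadif=mode=0 out_2997p.mkv
  ffmpeg -i 2997i.mkv -vf yadif=mode=1 out_5994p.mkv

mode=0 is single rate (one output frame per input frame); mode=1 is double
rate (one output frame per field).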


>> I know you meant telecine up conversion of 23.976p to 29.97i (not "p").
>> But
>> other framerates can be telecined eg. An 8mm 16fps telecine to 29.97i.
> 
> Well, when I've telecined, the result is p30, not i30. Due to the presence
> of ffmpeg police, I 
> hesitate to write that ffmpeg outputs only frames -- that is certainly
> true of HandBrake, though. 
> When I refer to 24fps and 30fps (and 60fps, too) I include 24/1.001 and
> 30/1.001 (and 60/1.001) 
> without explicitly writing it. Most ordinary people (and most BD & DVD
> packaging) don't mention or 
> know about "/1.001".


The result of telecine is progressive content (you started with progressive
content), but the output signal is interlaced. That's the reason for
telecine in the first place - that 29.97i signal is required for equipment
compatibility. So it's commonly denoted as 29.97i. That can be confusing
because interlaced content is also 29.97i. That's why /content/ is used to
describe everything.

When I'm lazy I use 23.976p notation (but it really means 24000/1001),
because 24.0p is something else - for example, there are both 24.0p and
23.976p blu-rays and they are different frame rates. Similarly, I use
"29.97" (but it really means 30000/1001), because "30.0" is something else.
You can have cameras or web video at 30.0p. Both exist, they are different,
and they should be differentiated; otherwise you get timing and sync issues.
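
(A small practical note, as a hedged example: ffmpeg accepts these rates as
exact rationals, so "-r 30000/1001" - or the abbreviation "-r ntsc" -
requests 29.97 exactly, while "-r 30" requests true 30.0; writing "-r 29.97"
gets you the decimal 2997/100, not 30000/1001.)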



>> "Combing" is just a generic, non-specific visual description. There can
>> be
>> other causes for "combing". eg. A warped film scan that causes spatial
>> field
>> misalignment can look like "combing". Interlaced content in motion , when
>> viewed on a progressive display without processing is also described as
>> "combing" - it's the same underlying mechanism of upper and lower field
>> taken at different points in time
> 
> Again, good points. May I suggest that when I use "combing" I mean the
> frame content that results 
> from a 1/24th second temporal difference between the odd lines of a
> progressive image and the even 
> line of the same progressive image that results from telecine? If there's
> a better term, I'll use 
> that better term. Do you know of a better term?

I know what you're trying to say, but the term "combing", its appearance,
and its underlying mechanism are the same. This is how the term "combing" is
currently used by both the general public and industry professionals. If you
specifically mean combing on frames from telecine, then you should say so.

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-18 Thread su.00048--- via ffmpeg-user
on 2020.04.18 07:15 -0400, ted park wrote:

> > videotoolbox decodes to CMSampleBuffer, which is CoreMedia's generic buffer
> > wrapper class.  a couple levels down, it's probably a CVPixelBuffer.  if 
> > it's
> > working for you, i'd be curious to know what hardware and what os version
> > you're running on, and what type of file you're feeding to hw_decode.
> 
> Oh okay, makes sense. The one I tried it on had a W5700, I tried again on a
> dual nehalem Xserve with no gpu (almost, has  a GT120 for console/desktop
> monitor) and got similar errors
> 
> xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-h264.mov 
> hd-bars-h264.raw
> [h264 @ 0x7fb74d001800] Failed setup for format videotoolbox_vld: hwaccel 
> initialisation returned error.
> Failed to get HW surface format.
> [h264 @ 0x7fb74d001800] decode_slice_header error
> [h264 @ 0x7fb74d001800] no frame!
> Error during decoding
> 
> xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-hevc.mp4 
> hd-bars-hevc.raw
> [hevc @ 0x7fd8a8018a00] get_buffer() failed
> [hevc @ 0x7fd8a8018a00] videotoolbox: invalid state
> [hevc @ 0x7fd8a8018a00] hardware accelerator failed to decode picture
> Error during decoding
> Error during decoding
> 
> The difference would be I didn't expect it to work in the first place, I
> guess. Do you know if hardware decoding works in ffplay? It's harder to tell
> for mac frameworks imo; I'd try attaching to ffplay and seeing if you can get
> it to use a hardware decoder. Which gpu does the machine have?

you've probably already seen carl eugen's post referring to ticket 8615.  
looks like this may be a regression ...  to answer your question, though,
i'm on a 2018 mini (i7); gpu is "Intel UHD Graphics 630", which should
support hardware decoding for hevc (and certainly for h264).
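
fwiw, a quick sanity check outside of hw_decode.c (a sketch -- the file name
is hypothetical):

  ffmpeg -hwaccels
  ffmpeg -hwaccel videotoolbox -i in.mov -f null -

the first lists the hwaccels compiled in; the second forces videotoolbox for
decoding and discards the output, so any hwaccel failure shows up in the log.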


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 10:36 PM, Carl Eugen Hoyos wrote:

Am So., 19. Apr. 2020 um 04:12 Uhr schrieb Mark Filipak
:


On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

Am So., 19. Apr. 2020 um 03:43 Uhr schrieb Mark Filipak
:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace"
solely to them as single frames. For that application, 'pp=linblenddeint'
appears to do a better job (visually) than does 'yadif'.


Did you ever test my suggestion?


I've reviewed all your posts to this and related threads, Carl Eugen, and I've not found your 
suggestion. Do you mind suggesting it again?


Thanks.

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-18 Thread su.00048--- via ffmpeg-user
on 2020.04.18 17:03 +0200, carl eugen wrote:

> Am Fr., 17. Apr. 2020 um 14:37 Uhr schrieb su.00048---:
> > 
> > i have ffmpeg building successfully on osx (10.14.6; xcode 11.3.1),
> > and am using the generated libraries in a personal video project i'm
> > working on.  now i'm ready to start thinking about implementing hardware
> > decoding ...  i tried adapting some of the code from hw_decode.c, but
> > ran into runtime errors.  thinking that it was my code that was at
> > fault, i then compiled hw_decode by itself, but i get the same errors
> > even there.  is anyone aware of hw_decode.c not working on osx?
> > 
> > trying to decode known good videos, either h264 or hevc, results in
> > some internal errors (see below).  the videos behave as expected when
> > opened with my locally built versions of ffplay or ffmpeg, however.
> > 
> > h264 example:
> > 
> >% hwdecode videotoolbox h264.mov h264.out
> > 
> >[h264 @ 0x7f8a2b003e00] get_buffer() failed
> >Assertion src->f->buf[0] failed at libavcodec/h264_picture.c:70
> >Abort
> > 
> > hevc fails differently:
> > 
> >% hwdecode videotoolbox hevc.mov hevc.out
> > 
> >[hevc @ 0x7f8464814600] get_buffer() failed
> >[hevc @ 0x7f8464814600] videotoolbox: invalid state
> >[hevc @ 0x7f8464814600] hardware accelerator failed to decode picture
> >Error during decoding
> >Error during decoding
> > 
> > there are no significant warnings, etc., when building the executable,
> > so i'm pretty sure everything is ok there.  videotoolbox shows up as
> > being available in the output of configure.
> > 
> > any suggestions as to what might be going wrong?
> 
> Did you see ticket #8615?

no, i hadn't seen that.  thanks.  

that does look very much like the same issue.  compiler version in the bug
report looks like the same one i'm using, so it could be that the reporter is
also still on mojave.  i don't have a catalina machine to test on, so i can't
check to see if things are any different there.  my builds are based on commit
59e3a9aede5e2f9271c82583d8724c272ce33a0a (2020.04.13; same day), if that's any
help ...  now that i know this is presumably a bug, if i get some time, i will
try to investigate what's going on, but i'm not very familiar with the ffmpeg
codebase ...  anyway, will keep monitoring issue 8615 in the meanwhile.  thanks
again for the info.


Re: [FFmpeg-user] DTS discontinuity in stream

2020-04-18 Thread Carl Eugen Hoyos
Am So., 19. Apr. 2020 um 04:36 Uhr schrieb Mark Filipak
:

> Your advice is very appreciated.

Do not compress anything that you want other people to read.

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
Am So., 19. Apr. 2020 um 04:12 Uhr schrieb Mark Filipak
:
>
> On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:
> > Am So., 19. Apr. 2020 um 03:43 Uhr schrieb Mark Filipak
> > :
> >
> >> My experience is that regarding "decombing" frames 2 7 12 17 ...,
> >> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.
> >
> > (Funny that while I always strongly disagreed some people also
> > said this when yadif was new - this doesn't make it more "true"
> > in any useful sense though.)
>
> I am splitting out solely frames (n+1)%5=3 and applying "deinterlace"
> solely to them as single frames. For that application, 'pp=linblenddeint'
> appears to do a better job (visually) than does 'yadif'.

Did you ever test my suggestion?

Carl Eugen

[FFmpeg-user] DTS discontinuity in stream

2020-04-18 Thread Mark Filipak

Hello,

I've done a 55-telecine and got errors. I don't really know, but they appear to be related to the 
input, not the 55-telecine.


Some samples follow, and the full log is attached as a zip due to its size.

The command line was:

ffmpeg -report -ss 2:58 -analyzeduration 50 -probesize 50 -i "G:\BDMV\STREAM\0.m2ts" -filter_complex "telecine=pattern=5,split[A][B],[A]select='not(eq(mod((n+1)\,5)\,3))'[C],[B]select='eq(mod((n+1)\,5)\,3)',pp=linblenddeint[D],[C][D]interleave" -map 0 -c:v libx264 -crf 20 -codec:a copy -codec:a:1 pcm_s16be -codec:s copy "C:\AVOut\0.3.MKV"
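
Read as four chains (restated here with the conventional semicolon
separators between chains; functionally the same graph):

  telecine=pattern=5,split[A][B];
  [A]select='not(eq(mod((n+1)\,5)\,3))'[C];
  [B]select='eq(mod((n+1)\,5)\,3)',pp=linblenddeint[D];
  [C][D]interleave

That is: telecine to the 55 pattern, duplicate the stream, pass through every
frame except positions where (n+1)%5==3 on one branch, deinterlace only those
frames on the other branch, then interleave the two branches back into one
stream by timestamp.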


The errors manifest in MPV (Ctrl-O) as a total running time that advances (not constant, not 
accurate) so that it just keeps ahead of the actual running time.


Samples of the errors:

[mpegts @ 020e685a0040] DTS discontinuity in stream 23: packet 4 with DTS 23490352, packet 5 
with DTS 23957495


[matroska @ 020eea2fd040] Starting new cluster due to timestamp

Your advice is very appreciated.

Regards,
Mark.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

Oops. "(n+1)%5=3" should have been "(n+1)%5==3".

"On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

Am So., 19. Apr. 2020 um 03:43 Uhr schrieb Mark Filipak
:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace" solely to them as single 
frames. For that application, 'pp=linblenddeint' appears to do a better job (visually) than does 
'yadif'.



"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by
filtering all lines with a (1 2 1) filter."

I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers.


To the fact that no other filter uses as little information to deinterlace.


To me, deinterlace just means weaving the odd & even lines. To me, a frame that is already woven 
doesn't need deinterlacing. I know that the deinterlace filters do additional processing, but none 
of them go into sufficient detail for me to know, in advance, what they do.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 10:02 PM, Carl Eugen Hoyos wrote:

Am So., 19. Apr. 2020 um 03:43 Uhr schrieb Mark Filipak
:


My experience is that regarding "decombing" frames 2 7 12 17 ...,
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)


I am splitting out solely frames (n+1)%5=3 and applying "deinterlace" solely to them as single 
frames. For that application, 'pp=linblenddeint' appears to do a better job (visually) than does 
'yadif'.



"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by
filtering all lines with a (1 2 1) filter."

I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers.


To the fact that no other filter uses as little information to deinterlace.


To me, deinterlace just means weaving the odd & even lines. To me, a frame that is already woven 
doesn't need deinterlacing. I know that the deinterlace filters do additional processing, but none 
of them go into sufficient detail for me to know, in advance, what they do.


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
Am So., 19. Apr. 2020 um 03:43 Uhr schrieb Mark Filipak
:

> My experience is that regarding "decombing" frames 2 7 12 17 ...,
> 'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.

(Funny that while I always strongly disagreed some people also
said this when yadif was new - this doesn't make it more "true"
in any useful sense though.)

> "lb/linblenddeint
> "Linear blend deinterlacing filter that deinterlaces the given block by
> filtering all lines with a (1 2 1) filter."
>
> I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" 
> refers.

To the fact that no other filter uses as little information to deinterlace.
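
(Or, less tersely - a sketch of the convention rather than of the exact
implementation: "(1 2 1)" names the weights of a three-tap vertical filter,
so each output line is computed as

  out[y] = (in[y-1] + 2*in[y] + in[y+1]) / 4

which blends the two fields together instead of discarding one.)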

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 08:44 PM, Carl Eugen Hoyos wrote:

Am Sa., 18. Apr. 2020 um 21:32 Uhr schrieb Mark Filipak
:


Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace.


pp=linblenddeint is a (very simple) deinterlacer, once upon a
time it was the preferred deinterlacer for some users, possibly
because of its low performance requirements.
telecine=pattern=5 produces one interlaced frame out of five
(assuming non-static input).


Well, you see, I don't call frames 2 7 12 17 ... of the 55 telecine "interlaced". I have been 
calling them "combed", but I do so simply because, 1, according to the MPEG spec they are not 
interlaced, and 2, there's no established term for them. Perhaps there is a better term than 
"combed", but I don't know it. I do know that according to the MPEG spec, "interlaced" is not the 
correct term. Applying "interlace" to both 1/framerate & 1/24 second temporal line differences 
confuses novices regarding what interlace actually is. That confusion leads to a cascade of 
confusion regarding many, many other processes (in ffmpeg and elsewhere) in which the terms 
"interlace" and "deinterlace" are used indiscriminately.



Carl Eugen

PS:
Note that you have a different definition of "interlaced" than
FFmpeg due to the fact that you only think of analogue video
transmission which FFmpeg does not support. FFmpeg can
only deal with digital video frames, so "interlace" within
FFmpeg is not a process but a property of (some) frames. I
believe you call this property "combing".
Or in other words: FFmpeg does not offer any explicit
"deinterlacing" capabilities, only different filters for decombing
that we call deinterlacers (like linblenddeint, bwdif and yadif).


Carl Eugen, you hit the nail on the head!

The MPEG spec defines interlace as 1/fieldrate. To the best of my knowledge, the temporal difference 
between odd/even lines in the "combed" frame(s) of a telecine (any telecine) is 1/24 sec (not 
1/fieldrate). I know that most folks call that "interlace", but trust me, applying the same term to 
two quite different phenomena is one of the things that confuses novices.


To clarify: I don't think of analog video transmission. To the best of my knowledge, when a vintage 
TV program is mastered to DVD (or presumably to BD, though I've not encountered one), the analog 
tapes are digitized and packaged as 1/framerate interlaced fields. That's where I enter the picture. 
Though it is true that I date from the time of analog TV, and though it's true that I'm an engineer 
who designed for analog TV, I did so strictly in the digital domain, designing an integrated fsync, 
vsync, dot clock (and, for NTSC, color burst) sequencer in order to make Atari game systems more 
compliant with NTSC & PAL timing standards.



PPS:
I know very well that even inside FFmpeg there are several
definitions of "interlaced frames". But since we discuss filters
in an FFmpeg filter chain, neither decoding field-encoded
mpeg2video or paff streams nor mbaff or ildct encoding are
relevant, only the actual content of single frames is which can
be progressive or interlaced (for you: "not combed" or
"combed") which is - in theory and to a very large degree in
practice - independent of the encoding method.


Thanks for your insight on this. My experience is that regarding "decombing" frames 2 7 12 17 ..., 
'pp=linblenddeint' (whatever it is) does a better job than 'yadif'.


"lb/linblenddeint
"Linear blend deinterlacing filter that deinterlaces the given block by filtering all lines with a 
(1 2 1) filter."


I don't know what a "(1 2 1) filter" is -- I don't know to what "1 2 1" refers. pdr0 recommended it 
and I found that it works better than any of the other deinterlace filters. Without pdr0's help, I 
would never have tried it.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
Am Sa., 18. Apr. 2020 um 21:32 Uhr schrieb Mark Filipak
:

> Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace.

pp=linblenddeint is a (very simple) deinterlacer, once upon a
time it was the preferred deinterlacer for some users, possibly
because of its low performance requirements.
telecine=pattern=5 produces one interlaced frame out of five
(assuming non-static input).
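
(A quick way to see this - a sketch, assuming a progressive input file: the
idet filter prints its interlace-detection statistics at the end of the run:

  ffmpeg -i p24.mkv -vf telecine=pattern=5,idet -f null -

Roughly one frame in five should be reported as interlaced/combed.)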

Carl Eugen

PS:
Note that you have a different definition of "interlaced" than
FFmpeg due to the fact that you only think of analogue video
transmission which FFmpeg does not support. FFmpeg can
only deal with digital video frames, so "interlace" within
FFmpeg is not a process but a property of (some) frames. I
believe you call this property "combing".
Or in other words: FFmpeg does not offer any explicit
"deinterlacing" capabilities, only different filters for decombing
that we call deinterlacers (like linblenddeint, bwdif and yadif).

PPS:
I know very well that even inside FFmpeg there are several
definitions of "interlaced frames". But since we discuss filters
in an FFmpeg filter chain, neither decoding field-encoded
mpeg2video or paff streams nor mbaff or ildct encoding are
relevant, only the actual content of single frames is which can
be progressive or interlaced (for you: "not combed" or
"combed") which is - in theory and to a very large degree in
practice - independent of the encoding method.

Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread Mark Filipak

On 04/18/2020 07:16 PM, pdr0 wrote:

Mark Filipak wrote

Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30
(or p25) and, optionally,
smoothing the resulting p30 (or p25) frames.


That is the description for single rate deinterlacing. But that is not what
a flat panel TV does with interlaced content or "telecast" - it double rate
deinterlaces to 50p (50Hz regions) or 59.94p (60Hz regions). The distinction
is important to mention; one method discards half the temporal information
and motion is not as smooth.


Very good point. My 60Hz TV would best do a p24-to-p60 telecine (via something like 55 pull-down or, 
better, motion compensation), but it apparently does not. It apparently does a 23 pull-down (to p30) 
and then frame-doubles. The reason I say "apparently" is that, on my TV, hard-telecined 30fps 
content and 24fps content have identical judder.



Deinterlacing does not necessarily have to be used in the context of
"telecast".  e.g. a consumer camcorder recording home video interlaced
content is technically not "telecast".  Telecast implies "broadcast on
television"


You are right of course. I use "telecast" (e.g., i30-telecast) simply to distinguish the origin of 
scans from hard telecines. Can you suggest a better term? Perhaps "i30-camera" versus "i30"? Or 
maybe the better approach would be to distinguish hard telecine: "i30" versus "i30-progressive"? Or 
maybe distinguish both of them: "i30-camera" versus "i30-progressive"?



The simplest operational definition is double rate deinterlacing separates
and resizes each field to a frame +/- other processing. Single rate
deinterlacing does the same as double, but discards either even or odd
frames (or fields if they are discarded before the resize)


I think I understand your reference to "resize": line-doubling of half-height images to full-height 
images, right?


But I don't understand how "double rate" fits in. Seems to me that fields have to be converted 
(resized) to frames no matter what the "rate" is. I also don't understand why either rate or 
double-rate would discard anything.



Combing is fields that are temporally offset by 1/24th second (or 1/25th
second) resulting from
telecine up-conversion of p24 to p30 (or p25).


Including "(or 1/25th second)" in the above was a mistake. Sorry.


I know you meant telecine up conversion of 23.976p to 29.97i (not "p"). But
other framerates can be telecined eg. An 8mm 16fps telecine to 29.97i.


Well, when I've telecined, the result is p30, not i30. Due to the presence of ffmpeg police, I 
hesitate to write that ffmpeg outputs only frames -- that is certainly true of HandBrake, though. 
When I refer to 24fps and 30fps (and 60fps, too) I include 24/1.001 and 30/1.001 (and 60/1.001) 
without explicitly writing it. Most ordinary people (and most BD & DVD packaging) don't mention or 
know about "/1.001".



"Combing" is just a generic, non-specific visual description. There can be
other causes for "combing". eg. A warped film scan that causes spatial field
misalignment can look like "combing". Interlaced content in motion , when
viewed on a progressive display without processing is also described as
"combing" - it's the same underlying mechanism of upper and lower field
taken at different points in time


Again, good points. May I suggest that when I use "combing" I mean the frame content that results 
from a 1/24th second temporal difference between the odd lines of a progressive image and the even 
line of the same progressive image that results from telecine? If there's a better term, I'll use 
that better term. Do you know of a better term?



Decombing is smoothing combed frames.


Yes, but this is an ambiguous term. "Decombing" can imply anything from
various methods of deinterlacing to inverse telecine / removing pulldown .


To the best of my knowledge, ffmpeg doesn't use the terms "combing" or "decombing" -- certainly 
there's no decomb filter. I don't have a term that distinguishes smoothing of a 1/24th second comb 
(what I call "decombing") from smoothing of a 1/60th second (or 1/50th second) comb that results 
from deinterlace (which I don't call "decombing"). Can you suggest a term for the latter? Or terms 
for both of them?


Regarding inverse telecine (aka INVT), I've never seen INVT that didn't yield back uncombed, purely 
progressive pictures (as "picture" is defined in the MPEG spec). Can you/will you enlighten me 
because it's simply outside my experience.



It seems to me that some people call combing that results from telecine,
interlace. Though they are
superficially similar, they're different.


Yes, it's more appropriately called "combing".


So, if I properly understand, you favor "combing" to mean the 1/60th (or 1/50th) second temporal 
difference between odd/even lines that result from deinterlace. That's exactly the reverse of how I 
have been using "combing". You see, to me, "combing" doesn't refer to visual appearance, but to 
structural 

Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Carl Eugen Hoyos
Am Sa., 18. Apr. 2020 um 22:55 Uhr schrieb Mark Filipak
:
>
> On 04/18/2020 06:30 AM, Carl Eugen Hoyos wrote:
> > Am Sa., 18. Apr. 2020 um 02:07 Uhr schrieb Mark Filipak
> > :
> >>
> >> Never mind. MPV was able to tell me that the pcm_bluray is big endian.
> >
> > While this is technically true, note that it has absolutely no relevance
> > for users of FFmpeg (including library users).
>
> Here's my logic:
> If a commercial blu-ray's LPCM is big endian, then it's more likely that
> any/all decoding software will handle big endian (as opposed to little 
> endian).

No.

> So, if I have a transcode choice (e.g.,
> pcm_s16be versus pcm_s16le), I choose big endian (e.g., pcm_s16be).

This makes no sense.

> Since the bits/sample is only 16, big endian versus little endian is not much 
> of a
> performance issue. Big endian streams more efficiently, but so what, eh?

No / makes no sense.

Carl Eugen

Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread pdr0
Mark Filipak wrote
> Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30
> (or p25) and, optionally, 
> smoothing the resulting p30 (or p25) frames.

That is the description for single rate deinterlacing. But that is not what
a flat panel TV does with interlaced content or "telecast" - it double rate
deinterlaces to 50p (50Hz regions) or 59.94p (60Hz regions). The distinction
is important to mention; one method discards half the temporal information
and motion is not as smooth.

Deinterlacing does not necessarily have to be used in the context of
"telecast".  e.g. a consumer camcorder recording home video interlaced
content is technically not "telecast".  Telecast implies "broadcast on
television"

The simplest operational definition is double rate deinterlacing separates
and resizes each field to a frame +/- other processing. Single rate
deinterlacing does the same as double, but discards either even or odd
frames (or fields if they are discarded before the resize)


> Combing is fields that are temporally offset by 1/24th second (or 1/25th
> second) resulting from 
> telecine up-conversion of p24 to p30 (or p25).

I know you meant telecine up-conversion of 23.976p to 29.97i (not "p"). But
other framerates can be telecined, e.g. an 8mm 16fps telecine to 29.97i. 

"Combing" is just a generic, non-specific visual description. There can be
other causes for "combing". eg. A warped film scan that causes spatial field
misalignment can look like "combing". Interlaced content in motion , when
viewed on a progressive display without processing is also described as
"combing" - it's the same underlying mechanism of upper and lower field
taken at different points in time


> Decombing is smoothing combed frames. 

Yes, but this is an ambiguous term. "Decombing" can imply anything from
various methods of deinterlacing to inverse telecine / removing pulldown .




> It seems to me that some people call combing that results from telecine,
> interlace. Though they are 
> superficially similar, they're different.

Yes, it's more appropriately called "combing".

When writing your book, I suggest mentioning field matching and decimation
(inverse telecine, removing pulldown) in contrast to deinterlacing. 

I recommend describing the content. That's the key distinguishing factor
that determines what you have, in terms of interlaced content vs. progressive
content that has been telecined.
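
For the field matching / decimation route, the canonical ffmpeg chain (per
the fieldmatch documentation; file names hypothetical) looks like:

  ffmpeg -i hard_telecined.mkv -vf fieldmatch,yadif=deint=interlaced,decimate out.mkv

fieldmatch re-pairs fields into the original progressive frames,
yadif=deint=interlaced deinterlaces only the frames still flagged as combed
(orphaned fields), and decimate drops the duplicate frame left over by the
pulldown.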










Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread Mark Filipak

On 04/18/2020 05:35 PM, Jim DeLaHunt wrote:

On 2020-04-18 13:06, Mark Filipak wrote:
Forgive me if this subject seems pedantic to you. I think it's important and the source of a lot 
of misunderstanding.


As always, correct me if I'm wrong.

According to the MPEG spec, interlace relates to fields that are temporally offset by 1/60th 
second (NTSC) or 1/50th second (PAL) that typically originate as telecast streams.


Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30 (or p25) and, optionally, 
smoothing the resulting p30 (or p25) frames.


Combing is fields that are temporally offset by 1/24th second (or 1/25th second) resulting from 
telecine up-conversion of p24 to p30 (or p25).


Decombing is smoothing combed frames.


I like this as an attempt to explain terms simply but clearly and 
unambiguously. Thank you!


You're very kind. Thank you.


It would be even better if you would define "frame" and "field" as part of this 
...


I actually prefer the words
"field" exclusively for a single telecast scan (thus: "odd scan", "even scan"),
"picture" (as "picture" is defined in the MPEG spec) in lieu of "frame" for a single (progressive), 
full-height image, and
"half-picture" in lieu of "field" for half-height images that are extracted from pictures on an 
odd-/even-line basis (thus: "odd half-picture", "even half-picture").
I believe that the above definitions are consistent with the MPEG spec, though even in the spec the 
terms "scan" and "field" are used somewhat interchangeably.


To me, fields are containers for encoded scans & half-pictures, and frames are containers for 
encoded pictures.


I must confess that I'm ignorant of the internal structures of fields and frames. I have a fairly 
deep understanding of MPEG PES streams:


PES Header formats & PES Stream formats & PES Flags & PES Header Extensions
- MPEG Video Sequences & MPEG Sequence Headers & MPEG Sequence Extensions
- - GOPs & User Data & MPEG Picture Headers & MPEG Picture Coding Extensions
- - - MPEG Frames.

However, the insides of MPEG Frames are unknown to me.

For example, are fields that contain encoded scans stored differently from fields that contain 
encoded half-pictures? I suspect not, but I don't really know. The reason I suspect not is a 
logically historical reason: Scans predate soft telecine. Soft telecine was not defined until late 
summer 1999 and was invented to shoehorn progressive content into the existing interlace container 
(i.e., field) formats. So, for compatibility, I imagine that fields that contain scans and fields 
that contain half-pictures are indistinguishable except at the PES Header level.


For example, are field structures stored inside frame structures?

For example, if a video is field-based, are there really even frame structures? Or do GOPs consist 
of just a serial string of field structures?


Re: [FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread Jim DeLaHunt

On 2020-04-18 13:06, Mark Filipak wrote:
Forgive me if this subject seems pedantic to you. I think it's 
important and the source of a lot of misunderstanding.


As always, correct me if I'm wrong.

According to the MPEG spec, interlace relates to fields that are 
temporally offset by 1/60th second (NTSC) or 1/50th second (PAL) that 
typically originate as telecast streams.


Deinterlacing is conversion of the i30-telecast (or i25-telecast) to 
p30 (or p25) and, optionally, smoothing the resulting p30 (or p25) 
frames.


Combing is fields that are temporally offset by 1/24th second (or 
1/25th second) resulting from telecine up-conversion of p24 to p30 (or 
p25).


Decombing is smoothing combed frames.


I like this as an attempt to explain terms simply but clearly and 
unambiguously. Thank you!


It would be even better if you would define "frame" and "field" as part 
of this, or point to definitions. Also, is part of the definition of 
"field" that it has half the visual information of a "frame", in the 
form of every other scan line of the frame?


(And what is a "scan line"? etc.  Wouldn't it be nice to have a book 
which explained all these concepts clearly and unambiguously?)


Best regards,
  —Jim DeLaHunt, software engineer, Vancouver, Canada


Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Mark Filipak

On 04/18/2020 06:30 AM, Carl Eugen Hoyos wrote:

Am Sa., 18. Apr. 2020 um 02:07 Uhr schrieb Mark Filipak
:


Never mind. MPV was able to tell me that the pcm_bluray is big endian.


While this is technically true, note that it has absolutely no relevance
for users of FFmpeg (including library users).

Carl Eugen


Here's my logic:
If a commercial blu-ray's LPCM is big endian, then it's more likely that any/all decoding software 
will handle big endian (as opposed to little endian). So, if I have a transcode choice (e.g., 
pcm_s16be versus pcm_s16le), I choose big endian (e.g., pcm_s16be).


Since the bits/sample is only 16, big endian versus little endian is not much of a performance 
issue. Big endian streams more efficiently, but so what, eh?


I'm just trying to be conservative.

Thanks for your comments,
Mark.

[FFmpeg-user] decomb versus deinterlace

2020-04-18 Thread Mark Filipak
Forgive me if this subject seems pedantic to you. I think it's important and the source of a lot of 
misunderstanding.


As always, correct me if I'm wrong.

According to the MPEG spec, interlace relates to fields that are temporally offset by 1/60th second 
(NTSC) or 1/50th second (PAL) that typically originate as telecast streams.


Deinterlacing is conversion of the i30-telecast (or i25-telecast) to p30 (or p25) and, optionally, 
smoothing the resulting p30 (or p25) frames.


Combing is fields that are temporally offset by 1/24th second (or 1/25th second) resulting from 
telecine up-conversion of p24 to p30 (or p25).


Decombing is smoothing combed frames.

My video sources are generally:
p24 movies from blu-ray, or
p24 movies from DVD (i.e., soft telecine), or
i30 movies from DVD (i.e., hard telecine).

Should I ever encounter an i30-telecast video on a blu-ray or DVD, I might (but probably won't) 
deinterlace.


It seems to me that some people call combing that results from telecine, interlace. Though they are 
superficially similar, they're different.


Regards,
Mark.

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

On 04/18/2020 01:01 PM, Carl Eugen Hoyos wrote:

Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
:


I'm not using the 46 telecine anymore because you introduced me to 
'pp=linblenddeint'
-- thanks again! -- which allowed me to decomb via the 55 telecine.


Why do you think that pp is a better de-interlacer than yadif?
(On hardware younger that's not more than ten years old.)

Carl Eugen


The subjects of prior threads are getting mixed in with this thread, "ffmpeg 
architecture question".

The architecture question is about recursion/non-recursion of filter complexes.

The prior threads were about how to decomb a telecine in general and a 55-telecine in particular. 
Oh, well. It's my fault. I shouldn't have cranked one Jack-in-the-box before closing the previous 
Jack-in-the-box.


Regarding deinterlace, Carl Eugen, I'm not trying to deinterlace. The transcode source is 
progressive video (p24), not interlace video (i30-telecast or 125-telecast).


I'm performing p24-to-p60 transcode via 55 pull-down telecine. The result has 1 combed frame in 
every set of 5 frames (P P C P P). I'm trying to decomb those combed frames.
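
A sketch of why the middle frame combs - writing out the 5 fields that
telecine=pattern=5 emits per input frame, where A and B are two consecutive
progressive source frames and t/b is the field parity:

  fields emitted:  At Ab At Ab At | Bb Bt Bb Bt Bb
  paired frames:   [At Ab] [At Ab] [At Bb] [Bt Bb] [Bt Bb]  ->  P P C P P

Only the third frame mixes fields from two different source frames, so only
it combs.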


'pp' seems to do a better job of decombing because it has a procedure, 'pp=linblenddeint', that 
seems to do a better job of mixing the combed fields. 'yadif' seems to be optimized solely for 
deinterlacing.


To be clear: I will never be processing telecast sources and will never be 
deinterlacing.

Thank you all for being patient.

Regards,
Mark.



Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Jim DeLaHunt

On 2020-04-18 02:08, Paul B Mahol wrote:

[Mark Filipak] is just genuine troller, and do not know better, I 
propose you just ignore his troll attempts. 


I disagree. What I see from Mark's messages is that he is genuinely 
using ffmpeg for reasonable purposes. He runs into limitations of the 
inadequate documentation, which is not up to the task of explaining this 
complex, capable, but inconsistent piece of software. He asks questions. 
He persists in trying to get answers, even in the face of rudeness and 
dismissal.


I think that adds up to a legitimate contributor to the list, not to a 
troll.

    —Jim DeLaHunt, software engineer, Vancouver, Canada


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Mark Filipak

Sorry, the previous post got sent accidentally by my email program. Kindly 
ignore it.

On 04/18/2020 01:01 PM, Carl Eugen Hoyos wrote:

Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
:


I'm not using the 46 telecine anymore because you introduced me to 
'pp=linblenddeint'
-- thanks again! -- which allowed me to decomb via the 55 telecine.


Why do you think that pp is a better de-interlacer than yadif?
(On hardware younger that's not more than ten years old.)

Carl Eugen


This thread, "ffmpeg architecture question", is getting mixed in

Re: [FFmpeg-user] Streaming to YouTube Live

2020-04-18 Thread First Last
Thank you so much for your help and for all of the useful information!

As I was writing, the thought momentarily occurred to me, that perhaps the
issue related to the audio, but then the thought disappeared. Perhaps
because of my inexperience.

I have added the silent audio stream, as recommended, and the video preview
manifested correctly in the YouTube live console.

This worked both with the original framerate configuration, and with the
changes you recommended.

Even though in both cases YouTube complains that it is not receiving
sufficient video, and that the bitrate is lower than recommended, the final
output stream, so far, is holding online.

For my configuration, at least at this stage, the most important factor is
stability of the stream. Occasional buffering, or degradation of image
quality is perfectly acceptable.

I believe the rationale for the setup of the original solution which I
linked to is most likely optimization for processing power and memory, which
is important for me as well. I am running one virtual processor with half a
gigabyte of memory.

In the more minimal configuration, processor usage seems to be stable at
about one quarter capacity, and memory at about one fifth capacity.

With a higher framerate setup, it is pushing the maximum ceiling of one
hundred percent. Yes, real motion video, perhaps for a future project. :)

A further interest of mine is how many lightweight streams of this kind I
can run from a single server, using multithreading, or parallel processes,
if that is even possible, which is another reason resource optimization is a
factor.

But first, this would be the foundation, and I would have to do further
thinking, reading, and possibly asking before that point.

Thank you once again! 





Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 19:27 Uhr schrieb pdr0 :
>>
>> Carl Eugen Hoyos-2 wrote
>> > Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak :
>> >
>> >> I'm not using the 46 telecine anymore because you introduced me to
>> >> 'pp=linblenddeint'
>> >> -- thanks again! -- which allowed me to decomb via the 55 telecine.
>> >
>> > Why do you think that pp is a better de-interlacer than yadif?
>> > (On hardware younger that's not more than ten years old.)
>>
>> It's not a question of "better" in his case.
>>
>> It's a very specific scenario - He needs to keep that combed frame, as a
>> single frame to retain the pattern.
> 
> I know, while I agree with all other developers that this is useless,
> I have explained how it can be done.

I dislike it too, but that's just an opinion. He's asking a technical
question - that deserves a technical answer.


>> Single rate deinterlacing by any method
>> will cause you to choose either the top field or bottom field, resulting
>> in
>> a duplicate frame or the prior or next frame - and it's counterproductive
>> for what he wanted (blend deinterlacing to keep both fields as a single
>> frame)
> 
> (To the best of my knowledge, this is technically simply not true.)
> 
> yadif by default does not change the number of frames.
> (Or in other words: It works just like the pp algorithms, only better)

Most deinterlacers have 2 modes, single and double rate. For example, yadif
has mode=0 or mode=1. E.g. if you started with a 29.97 interlaced source,
you will get 29.97p in single rate, 59.94p in double rate. Double rate is
more "proper" for interlaced content. Single rate discards half the temporal
information.

In general, blend deinterlacing is terrible, the worst type of
deinterlacing, but he "needs" it for his specific scenario. The "quality"
of yadif is quite low, with deinterlacing and aliasing artifacts. bwdif is
slightly better, and there are more complex deinterlacers not offered by
ffmpeg.






Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Koch

Am 18.04.2020 um 19:30 schrieb Michael Glenn Williams:

Wow that is so cool! Will ffmpeg generate ultrasound sounds too, or do we
know of a plugin or other that could do that, then feed the signal to
ffmpeg?


Sure, you can generate ultrasound with FFmpeg. Try the first example in 
chapter 3.24.
Of course, the frequency must be smaller than half the sample rate. Make 
a 12kHz sine file and then try to convert it down to 2kHz.
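
For example (a sketch; asetrate=7350 is 44100/6, and it is the step that
shifts the pitch):

  ffmpeg -f lavfi -i sine=frequency=12000:duration=10 sine12k.wav
  ffmpeg -i sine12k.wav -af asetrate=7350,aresample=44100 sine2k.wav

asetrate relabels the samples at one sixth of the original rate, which moves
the 12kHz tone down to 2kHz (and stretches the duration by a factor of six);
aresample then brings the stream back to a standard 44.1kHz.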


Michael

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
Am Sa., 18. Apr. 2020 um 19:27 Uhr schrieb pdr0 :
>
> Carl Eugen Hoyos-2 wrote
> > Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
> > 
>
> > markfilipak.windows+ffmpeg@
>
> > :
> >
> >> I'm not using the 46 telecine anymore because you introduced me to
> >> 'pp=linblenddeint'
> >> -- thanks again! -- which allowed me to decomb via the 55 telecine.
> >
> > Why do you think that pp is a better de-interlacer than yadif?
> > (On hardware younger that's not more than ten years old.)
>
> It's not a question of "better" in his case.
>
> It's a very specific scenario - He needs to keep that combed frame, as a
> single frame to retain the pattern.

I know, while I agree with all other developers that this is useless,
I have explained how it can be done.

> Single rate deinterlacing by any method
> will cause you to choose either the top field or bottom field, resulting in
> a duplicate frame or the prior or next frame - and it's counterproductive
> for what he wanted (blend deinterlacing to keep both fields as a single
> frame)

(To the best of my knowledge, this is technically simply not true.)

yadif by default does not change the number of frames.
(Or in other words: It works just like the pp algorithms, only better)

Carl Eugen

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Glenn Williams
Wow that is so cool! Will ffmpeg generate ultrasound sounds too, or do we
know of a plugin or other that could do that, then feed the signal to
ffmpeg?

Thank you!

On Sat, Apr 18, 2020 at 9:48 AM Michael Koch 
wrote:

> Am 18.04.2020 um 18:25 schrieb Ted Park:
> > I don't know where I can find bats nearby so I couldn't try it but how
> > does it work? The book makes it sound like you can use any mic, even
> > one built into a laptop for this? I suppose that's plausible looking
> > at a typical mic's frequency response graph, they are just cut off at
> > 20khz, and don't roll off after 20khz like I thought they would, but
> > what about the sample rate? At 44.1kHz doesn't that mean anything over
> > 22khz is more aliasing or harmonic distortion than an actual recording
> > of bat sounds?
>
> The sounds of those bats that I did record were in the 12kHz to 15kHz
> range. 44.1kHz sample rate is sufficient. I did use two Rode NT1
> microphones, connected to a Tascam DR-70D recorder. If I record the
> ultrasound in the recorder, I use 48kHz sample rate. If live processing
> is required, I connect the Tascam's output to my notebook's audio input,
> which has only 44.1kHz sample rate. That works as well. It is important
> that you disable the low pass filter in the Windows control panel
> (properties of the microphone).
>
> Michael
>

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Carl Eugen Hoyos-2 wrote
> Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak :
> 
>> I'm not using the 46 telecine anymore because you introduced me to
>> 'pp=linblenddeint'
>> -- thanks again! -- which allowed me to decomb via the 55 telecine.
> 
> Why do you think that pp is a better de-interlacer than yadif?
> (On hardware younger that's not more than ten years old.)

It's not a question of "better" in his case. 

It's a very specific scenario - He needs to keep that combed frame, as a
single frame to retain the pattern. Single rate deinterlacing by any method
will cause you to choose either the top field or the bottom field, resulting
in a duplicate of the prior or next frame - and it's counterproductive
for what he wanted (blend deinterlacing to keep both fields as a single
frame)







Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread pdr0
Paul B Mahol wrote
> On 4/18/20, pdr0 wrote:
>> Mark Filipak wrote
>>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>>> working because it is working
>>> for me.
>>
>>
>> Interleave works correctly in terms of timestamps
>>
>> Unless I'm misunderstanding the point of this thread, your "recursion
>> issue"
>> can be explained from how  interleave works
>>
>>
> 
> He is just genuine troller, and do not know better, I propose you just
> ignore his troll attempts.


I do not believe so. He is truly interested in how ffmpeg works. 

Your prior comment about interleave and timestamps was succinct and perfect
- but I can see why it would be "cryptic" for many users. If someone claims
that comment is "irrelevant", then they are not "seeing" what you see. It
deserves to be expanded upon; if not for him, then do it for other people
who search for information. 

There are different types of people, different learning styles, and
different ways of seeing things. Teach other people what you know to be
true. Explain in different words if they don't get it. A bit of tolerance
now, especially in today's crappy world, goes a long way. 






Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Eugen Hoyos
Am Sa., 18. Apr. 2020 um 00:53 Uhr schrieb Mark Filipak
:

> I'm not using the 46 telecine anymore because you introduced me to 
> 'pp=linblenddeint'
> -- thanks again! -- which allowed me to decomb via the 55 telecine.

Why do you think that pp is a better de-interlacer than yadif?
(On hardware younger that's not more than ten years old.)

Carl Eugen

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Koch

Am 18.04.2020 um 18:25 schrieb Ted Park:

I don't know where I can find bats nearby so I couldn't try it but how
does it work? The book makes it sound like you can use any mic, even
one built into a laptop for this? I suppose that's plausible looking
at a typical mic's frequency response graph, they are just cut off at
20khz, and don't roll off after 20khz like I thought they would, but
what about the sample rate? At 44.1kHz doesn't that mean anything over
22khz is more aliasing or harmonic distortion than an actual recording
of bat sounds?


The sounds of those bats that I did record were in the 12kHz to 15kHz 
range. 44.1kHz sample rate is sufficient. I did use two Rode NT1 
microphones, connected to a Tascam DR-70D recorder. If I record the 
ultrasound in the recorder, I use 48kHz sample rate. If live processing 
is required, I connect the Tascam's output to my notebook's audio input, 
which has only 44.1kHz sample rate. That works as well. It is important 
that you disable the low pass filter in the Windows control panel 
(properties of the microphone).


Michael


Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Carl Eugen Hoyos
Am Sa., 18. Apr. 2020 um 18:15 Uhr schrieb Carl Zwanzig :
>
> On 4/18/2020 3:30 AM, Carl Eugen Hoyos wrote:
> > Am Sa., 18. Apr. 2020 um 02:07 Uhr schrieb Mark Filipak
> > :
> >>
> >> Never mind. MPV was able to tell me that the pcm_bluray is big endian.
> >
> > While this is technically true, note that it has absolutely no relevance
> > for users of FFmpeg (including library users).
>
> That info -is- potentially useful to ffmpeg users and was something that I,
> at least, did not know yesterday. And the thread is relevant because it tells
> me something that ffmpeg does not do, so I won't ask it to.

Again: It is technically true that pcm_bluray stores data big-endian-wise.
This information was (very) important ten years ago for the developer who
implemented pcm_bluray support in FFmpeg, and it may still be interesting
to curious people (and there is nothing wrong with asking about it).
The fact that pcm_bluray stores data big-endian-wise is completely
irrelevant when you use FFmpeg, whether you use the command-line
application "ffmpeg" or libavcodec.
(The big-endian fact has no relevant effect, no matter the hardware
you are using or the output format. The only effect is that the
extremely fast "decoding" of pcm_bluray is in theory, for some input
files, faster on big-endian hardware than on little-endian hardware;
this will be difficult to prove because of the very high decoding speed
of pcm codecs in general.)

And since there were indications in this mailing list thread that the
following is not completely obvious: From FFmpeg's pov, the pcm
codecs are audio codecs just like mp3, aac or alac. The fact that the
pcm codecs do not compress data made implementing them
(much) easier, but when you are using a pcm codec (for decoding or
encoding), the usage is exactly the same as if the codec did
compress data.
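
(Concretely, extracting such a track is the same one-liner as for any other
audio codec; the file names here are made up:

  ffmpeg -i movie.m2ts -map 0:a:0 -c:a pcm_s24le audio.wav

The decoder hands out samples in native order and the WAV muxer writes them
little-endian; the user never deals with byte order. It is also why a plain
"-c:a copy" into .wav fails with the "no wav codec tag" error from this
thread's subject: the RIFF tag list simply has no entry for pcm_bluray, so
the stream has to be transcoded to a pcm_* flavour that WAV knows.)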

Carl Eugen

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Ted Park
I don't know where I can find bats nearby so I couldn't try it but how
does it work? The book makes it sound like you can use any mic, even
one built into a laptop for this? I suppose that's plausible looking
at a typical mic's frequency response graph, they are just cut off at
20 kHz, and don't roll off after 20 kHz like I thought they would, but
what about the sample rate? At 44.1 kHz, doesn't that mean anything over
22 kHz is more aliasing or harmonic distortion than an actual recording
of bat sounds?

On Sat, Apr 18, 2020 at 11:37 AM Michael Koch
 wrote:
>
> On 18 Apr 2020 at 16:52, Michael Glenn Williams wrote:
> > The subject line about ultrasound caught my eye on this thread that woke up
> > from last year.
> > Can anyone tell us what the original interest in ffmpeg and ultrasound is?
>
> Well, you can use FFmpeg to convert ultrasound to lower frequencies, for
> example if you want to hear bats. I have described that in my book, see
> chapters 3.14 and 3.19 (but the chapter numbers may change in future)
> www.astro-electronic.de/FFmpeg_Book.pdf
>
> Michael

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Carl Zwanzig

On 4/18/2020 2:08 AM, Paul B Mahol wrote:

On 4/18/20, pdr0  wrote:

Mark Filipak wrote

Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
working because it is working for me.



Interleave works correctly in terms of timestamps

Unless I'm misunderstanding the point of this thread, your "recursion issue"
can be explained from how interleave works



He is just a genuine troll, and does not know better; I propose you just
ignore his troll attempts.


Which "he" are you referring to? pdr0 or Mark? (Paul?) Or someone else?

z!

Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Carl Zwanzig

On 4/18/2020 3:30 AM, Carl Eugen Hoyos wrote:

On Sat, 18 Apr 2020 at 02:07, Mark Filipak wrote:


Never mind. MPV was able to tell me that the pcm_bluray is big endian.


While this is technically true, note that it has absolutely no relevance
for users of FFmpeg (including library users).


That info -is- potentially useful to ffmpeg users and was something that I, 
at least, did not know yesterday. And the thread is relevant because it tells 
me something that ffmpeg does not do, so I won't ask it to.


z!

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Koch

On 18 Apr 2020 at 16:52, Michael Glenn Williams wrote:

The subject line about ultrasound caught my eye on this thread that woke up
from last year.
Can anyone tell us what the original interest in ffmpeg and ultrasound is?


Well, you can use FFmpeg to convert ultrasound to lower frequencies, for 
example if you want to hear bats. I have described that in my book, see 
chapters 3.14 and 3.19 (but the chapter numbers may change in future)

www.astro-electronic.de/FFmpeg_Book.pdf

Michael

Re: [FFmpeg-user] Testing transcode speeds on Raspberry Pi 4

2020-04-18 Thread MediaMouth

> On Apr 17, 2020, at 1:57 AM, Ted Park  wrote:
> 
> I think I did the same thing, or similar at least. I thought the specs said 
> 4k decode & encode, but I might have been misreading "2 4k displays & 4k 
> decode"; for hardware encoding it says 1080p60 max. Apparently the SoC on the 
> Pi 4 is basically the same as the one on the 3B+ with better thermal & power 
> management, so not as significant an upgrade as I thought when I got it.
> 
> For anything up to 1920x1080 though, I feel like there must be a hardware-
> accelerated scaler that can use the same format as the input for encoding.
> If you have a way to manage workers and segment the transcoding job, then 
> with some way of organizing the units (basically a chassis and power-
> distribution solution) plus "mini-fleet" management, a portable mini render 
> farm on a dolly is feasible for something like a "dailies farm in a 
> backpack". (The "reference" hw-accelerated encoding tool included in 
> Raspbian produced output that didn't look as good as I expected from the 
> bitrate.)

Thanks for all that.  Very helpful.
Managing the transcoding wouldn't be an issue -- we have a pretty solid system 
that can distinguish between source resolutions and codecs and traffic any file 
to the most appropriate machine.  I'll test a few 1080p files to see what the 
speeds are, but so far my impression is that an RPi-powered render farm might 
not have a practical or even financial advantage over more expensive machines.
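
(A quick way to time such a test on the Pi itself, as a sketch -- the encoder
name assumes the V4L2 M2M hardware encoder shipped in Raspbian builds of
FFmpeg, and the input file is a placeholder:

  ffmpeg -benchmark -i in-1080p.mp4 -c:v h264_v4l2m2m -b:v 5M -f null -

-benchmark prints user/real time at the end, and "-f null -" discards the
encoded output so SD-card write speed doesn't skew the numbers.)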


Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Glenn Williams
The subject line about ultrasound caught my eye on this thread that woke up
from last year.
Can anyone tell us what the original interest in ffmpeg and ultrasound is?

Thank you!

On Fri, Apr 17, 2020 at 3:55 PM Roger Pack  wrote:

> On Thu, Aug 22, 2019 at 3:16 PM Michael Koch
>  wrote:
> >
> > Hello Paul,
> >
> > > ffplay and using pipe gives you huge delay. By using mpv and
> filtergraph
> > > directly you would get much lesser delay.
> > > Default delay introduced by this filter is the one set by win_size in
> > > number of samples, which is by default 4096.
> > > If you need less delay even than this one and can not use lower
> win_size
> > > because of lesser precision use amultiply solution which
> > > inherently have 0 delay.
> >
> > In my test the FFT method has a shorter delay time than the amultiply
> > method.
> > I just found out that the delay can be minimized by setting
> > -audio_buffer_size to a very small value (10ms).
> > Delay is now about 0.5 seconds. Short enough to see and hear the bats
> > simultaneously.
>
> See also https://trac.ffmpeg.org/wiki/DirectShow#BufferingLatency
> GL! :)

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-18 Thread Carl Eugen Hoyos
On Fri, 17 Apr 2020 at 14:37, su.00048--- via ffmpeg-user wrote:
>
>
> i have ffmpeg building successfully on osx (10.14.6; xcode 11.3.1),
> and am using the generated libraries in a personal video project i'm
> working on.  now i'm ready to start thinking about implementing hardware
> decoding ...  i tried adapting some of the code from hw_decode.c, but
> ran into runtime errors.  thinking that it was my code that was at
> fault, i then compiled hw_decode by itself, but i get the same errors
> even there.  is anyone aware of hw_decode.c not working on osx?
>
> trying to decode known good videos, either h264 or hevc, results in
> some internal errors (see below).  the videos behave as expected when
> opened with my locally built versions of ffplay or ffmpeg, however.
>
> h264 example:
>
> % hwdecode videotoolbox h264.mov h264.out
>
> [h264 @ 0x7f8a2b003e00] get_buffer() failed
> Assertion src->f->buf[0] failed at libavcodec/h264_picture.c:70
> Abort
>
> hevc fails differently:
>
> % hwdecode videotoolbox hevc.mov hevc.out
>
> [hevc @ 0x7f8464814600] get_buffer() failed
> [hevc @ 0x7f8464814600] videotoolbox: invalid state
> [hevc @ 0x7f8464814600] hardware accelerator failed to decode picture
> Error during decoding
> Error during decoding
>
> there are no significant warnings, etc., when building the executable,
> so i'm pretty sure everything is ok there.  videotoolbox shows up as
> being available in the output of configure.
>
> any suggestions as to what might be going wrong?

Did you see ticket #8615?

Carl Eugen

Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Kieran O Leary
On Sat, Apr 18, 2020 at 12:53 AM Mark Filipak <
markfilipak.windows+ffm...@gmail.com> wrote:

> On 04/17/2020 07:50 PM, Carl Eugen Hoyos wrote:
> > On Sat, 18 Apr 2020 at 01:42, Mark Filipak wrote:
> >
> >> I know that PCM was never used for DVDs
> >
> > DVDs with PCM audio exist.
>
> Cool! I've never seen one but, cool. What flavor of PCM? BE or LE?
>
As Carl said, endianness doesn't really matter here. Why were you curious?
But as an aside, PCM on DVDs was somewhat common for music. I remember Pink
Floyd's The Wall and This is Spinal Tap having PCM options. PowerDVD was
pretty good at giving stream info back in the day.

Best,

Kieran

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-18 Thread Ted Park
Hi,

> videotoolbox decodes to CMSampleBuffer, which is CoreMedia's generic buffer
> wrapper class.  a couple levels down, it's probably a CVPixelBuffer.  if it's
> working for you, i'd be curious to know what hardware and what os version
> you're running on, and what type of file you're feeding to hw_decode.


Oh okay, makes sense. The one I tried it on had a W5700; I tried again on a 
dual-Nehalem Xserve with no GPU (almost: it has a GT120 for the console/desktop 
monitor) and got similar errors:

xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-h264.mov hd-bars-h264.raw
[h264 @ 0x7fb74d001800] Failed setup for format videotoolbox_vld: hwaccel 
initialisation returned error.
Failed to get HW surface format.
[h264 @ 0x7fb74d001800] decode_slice_header error
[h264 @ 0x7fb74d001800] no frame!
Error during decoding

xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-hevc.mp4 hd-bars-hevc.raw
[hevc @ 0x7fd8a8018a00] get_buffer() failed
[hevc @ 0x7fd8a8018a00] videotoolbox: invalid state
[hevc @ 0x7fd8a8018a00] hardware accelerator failed to decode picture
Error during decoding
Error during decoding

The difference would be that I didn't expect it to work in the first place, I 
guess. Do you know if hardware decoding works in ffplay? It's harder to tell 
for Mac frameworks IMO; I'd try attaching to ffplay and seeing if you can get 
it to use a hardware decoder. Which GPU does the machine have?
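
(One quick sanity check with the plain ffmpeg CLI, as a sketch reusing your
test file name:

  ffmpeg -hwaccel videotoolbox -i hd-bars-h264.mov -f null -

Watch the log: if VideoToolbox initialisation fails, ffmpeg should print a
"Failed setup" warning and fall back to software decoding rather than abort,
which makes it easy to see whether the hwaccel itself is usable on that
machine.)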

Regards,
Ted Park


Re: [FFmpeg-user] Failure: No wav codec tag found

2020-04-18 Thread Carl Eugen Hoyos
On Sat, 18 Apr 2020 at 02:07, Mark Filipak wrote:
>
> Never mind. MPV was able to tell me that the pcm_bluray is big endian.

While this is technically true, note that it has absolutely no relevance
for users of FFmpeg (including library users).

Carl Eugen

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-18 Thread Paul B Mahol
On 4/18/20, pdr0  wrote:
> Mark Filipak wrote
>> Gee, pdr0, I'm sorry you took the time to write about 'interleave' not
>> working because it is working
>> for me.
>
>
> Interleave works correctly in terms of timestamps
>
> Unless I'm misunderstanding the point of this thread, your "recursion issue"
> can be explained from how interleave works
>
>

He is just a genuine troll, and does not know better; I propose you just
ignore his troll attempts.

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Koch

On 18 Apr 2020 at 08:41, Michael Koch wrote:

On 18 Apr 2020 at 00:54, Roger Pack wrote:

In my test the FFT method has a shorter delay time than the amultiply
method.
I just found out that the delay can be minimized by setting
-audio_buffer_size to a very small value (10ms).
Delay is now about 0.5 seconds. Short enough to see and hear the bats
simultaneously.

See also https://trac.ffmpeg.org/wiki/DirectShow#BufferingLatency


Yes, that's the same thing that I found out some time ago by trial and error.
The above article mentions a "-rtbufsize" parameter; I can't 
find any documentation for it in ffmpeg-all.html


I just realized that it's in the documentation without the leading minus 
sign.
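
For the archive: both buffering knobs are dshow input options and can be
combined; the device name below is a placeholder:

  ffmpeg -f dshow -rtbufsize 100M -audio_buffer_size 10 -i audio="Microphone" out.wav

A larger rtbufsize prevents dropped packets when downstream processing stalls,
while audio_buffer_size (in milliseconds) sets the capture buffer size and so
bounds the latency it adds.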


Michael


Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Michael Koch

On 18 Apr 2020 at 00:54, Roger Pack wrote:

In my test the FFT method has a shorter delay time than the amultiply
method.
I just found out that the delay can be minimized by setting
-audio_buffer_size to a very small value (10ms).
Delay is now about 0.5 seconds. Short enough to see and hear the bats
simultaneously.

See also https://trac.ffmpeg.org/wiki/DirectShow#BufferingLatency


Yes, that's the same thing that I found out some time ago by trial and error.
The above article mentions a "-rtbufsize" parameter; I can't find 
any documentation for it in ffmpeg-all.html


Michael
