Re: [FFmpeg-user] v360 / perspective

2020-12-02 Thread Michael Koch

On 01.12.2020 at 19:56, Michael Koch wrote:

Hello,

there seem to be some problems in the v360 filter with "perspective" output. Tested with the latest Windows build, two days old. You can reproduce this with any equirectangular input image.


ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective -y out1.png

The output of the above command has two problems:
-- The center of the input image is not mapped to the center of the 
output image. For most other projections the image center is preserved.

-- The output is mirror reversed (which means you cannot read text).

Both problems can be corrected with this workaround:

ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective:pitch=90:v_flip=1 -y out2.png


Now I want to add a yaw rotation after the pitch rotation:

ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective:rorder=pyr:pitch=90:yaw=20:v_flip=1 -y out3.png


But in the output you can see that a roll rotation was done.


Today I figured out for the very first time how to compile FFmpeg.
The above problem can be solved by changing two lines in vf_v360.c.
Sorry, I haven't yet figured out how to use git. One thing after the other...


Old version:
line 3136:   vec[1] = sin_theta;
line 3137:   vec[2] = cos_theta * cos_phi;

New version:
line 3136:   vec[1] = cos_theta * cos_phi;
line 3137:   vec[2] = sin_theta;

I'm not sure if lines 3139 to 3141 must also be changed.
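
For context, a hedged sketch of the full direction vector this code presumably builds; only the vec[1]/vec[2] lines are quoted from vf_v360.c, the vec[0] line and the comments are my assumptions:

/* sketch only, not the actual vf_v360.c code;
 * theta = latitude, phi = longitude of the sampled direction */
vec[0] = cos_theta * sin_phi;   /* assumed horizontal component */
vec[1] = cos_theta * cos_phi;   /* new line 3136 */
vec[2] = sin_theta;             /* new line 3137 */

Swapping vec[1] and vec[2] changes which axis is treated as vertical, which would explain why a yaw rotation behaved like a roll rotation before the change.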

Michael


[FFmpeg-user] v360 / perspective

2020-12-01 Thread Michael Koch

Hello,

there seem to be some problems in the v360 filter with "perspective" output. Tested with the latest Windows build, two days old. You can reproduce this with any equirectangular input image.


ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective -y out1.png

The output of the above command has two problems:
-- The center of the input image is not mapped to the center of the 
output image. For most other projections the image center is preserved.

-- The output is mirror reversed (which means you cannot read text).

Both problems can be corrected with this workaround:

ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective:pitch=90:v_flip=1 -y out2.png


Now I want to add a yaw rotation after the pitch rotation:

ffmpeg -i equirectangular_test.png -lavfi v360=e:perspective:rorder=pyr:pitch=90:yaw=20:v_flip=1 -y out3.png


But in the output you can see that a roll rotation was done.

Michael

Re: [FFmpeg-user] asubboost and asupercut

2020-11-29 Thread Michael Koch

On 28.11.2020 at 22:09, Paul B Mahol wrote:
Ok, made the wet option actually apply gain to the final output.


I think the asubboost block diagram now looks like this (view with a fixed-width font):


IN -o-> DRY ------------> ADD -> WET -> OUT
    |                      ^
    |                      |
    +-> LP -> ADD -o-> FEEDBACK
               ^    |
               |    +-> DELAYLINE -> DECAY -+
               |                            |
               +----------------------------+

The name "feedback" is misleading because this signal isn't going back.
It would make sense to rename "feedback" to "wet".
What's now "wet" could be renamed "gain", or it could be removed because 
it's unnecessary.
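
For illustration, here is a minimal per-sample sketch of the diagram above in C. Everything in it (the names, the external lowpass step, the delay-line handling) is my own illustrative assumption, not the actual libavfilter code:

#include <stddef.h>

typedef struct {
    double dry, wet, feedback, decay;  /* filter options */
    double *buf;                       /* DELAYLINE, delay_len samples */
    size_t pos, delay_len;
} SubBoostSketch;

/* 'sub' is the already lowpassed input sample (the LP box above). */
static double process_sample(SubBoostSketch *s, double in, double sub)
{
    double node = sub + s->decay * s->buf[s->pos];       /* second ADD */
    s->buf[s->pos] = node;                               /* into DELAYLINE */
    s->pos = (s->pos + 1) % s->delay_len;
    return s->wet * (s->dry * in + s->feedback * node);  /* ADD -> WET -> OUT */
}

In this structure "feedback" is a forward gain on the sub signal and "wet" is a plain output gain, which is what the renaming suggestion above reflects.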


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-28 Thread Michael Koch

On 28.11.2020 at 19:18, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 5:18 PM Michael Koch wrote:

On 28.11.2020 at 14:48, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 2:46 PM Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 2:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 28.11.2020 at 13:44, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 1:35 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 28.11.2020 at 12:57, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 12:41 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

I tried to reverse engineer the asubboost filter from its output signal. Is the attached sketch correct?
It seems the "feedback" parameter is unnecessary, because it does exactly the same thing as "wet".


No, your reasoning is invalid.

I do not have time to draw graphs or do consulting for free.

When you swap the values of "wet" and "feedback", the output always remains the same.
If you think that I'm wrong, please show an example to prove the opposite.

Make sure that you take the decay parameter into account; the delay buffer is still used.

When you swap the values of "wet" and "feedback", the output always remains the same, regardless of which values you use for "dry", "decay" and "delay".
This can be shown with the following example:

set "A=0.4"
set "B=0.7"

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%A%:decay=0.4:feedback=%B%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out1.png

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%B%:decay=0.4:feedback=%A%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out2.png

ffmpeg -i out1.png -i out2.png -lavfi vstack -y out.png


Red is the input step signal, green is the output step response of the asubboost filter. In the lower half of the output image the "wet" and "feedback" values are swapped.


What did we say previously about delay buffers and that command above?
It was literally less than 24 hours ago.

The previous example used "showfreqs", which has a frequency-domain output and is not suitable for analyzing filters that contain delays.
Now I'm using "showwaves", which has a time-domain output. That's a different thing. Of course delay lines can be analyzed in the time domain.



That command cannot show you the action of the delayed input, as it is not designed for it.

A step signal contains all frequencies and is the best possible source for analyzing unknown black boxes that may contain delays. You can replace the input with any other source, but you will never find any difference in the two outputs. It's a fact: "wet" and "feedback" are interchangeable.


Nope, you are very mistaken. Try with real audio.


You can use this example with any real audio input you want:

set "A=0.4"
set "B=0.7"

ffmpeg -i dog.mp3 -lavfi asubboost=dry=0.3:wet=%A%:decay=0.4:feedback=%B%:delay=50 -y out1.wav
ffmpeg -i dog.mp3 -lavfi asubboost=dry=0.3:wet=%B%:decay=0.4:feedback=%A%:delay=50 -y out2.wav
ffmpeg -i out1.wav -i out2.wav -lavfi [0][1]amerge,aeval=val(0)-val(1) -y out.wav


out.wav is the difference between out1.wav and out2.wav, and it's perfect silence, which means the two files are identical.
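
By the way, this is exactly the behaviour you would get if the two gains only ever enter the output as the product wet*feedback on the sub path, e.g. out(t) = dry*in(t) + wet*feedback*s(t), where s(t) is the lowpassed sub signal plus its decayed echoes; swapping the two factors of a product cannot change the result. (This is only my inference from the measurement, not a statement about the actual source code.)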


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-28 Thread Michael Koch

On 28.11.2020 at 14:48, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 2:46 PM Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 2:24 PM Michael Koch wrote:

On 28.11.2020 at 13:44, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 1:35 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 28.11.2020 at 12:57, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 12:41 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

I tried to reverse engineer the asubboost filter from its output signal. Is the attached sketch correct?
It seems the "feedback" parameter is unnecessary, because it does exactly the same thing as "wet".


No, your reasoning is invalid.

I do not have time to draw graphs or do consulting for free.

When you swap the values of "wet" and "feedback", the output always remains the same.
If you think that I'm wrong, please show an example to prove the opposite.

Make sure that you take the decay parameter into account; the delay buffer is still used.

When you swap the values of "wet" and "feedback", the output always remains the same, regardless of which values you use for "dry", "decay" and "delay".
This can be shown with the following example:

set "A=0.4"
set "B=0.7"

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%A%:decay=0.4:feedback=%B%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out1.png

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%B%:decay=0.4:feedback=%A%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out2.png

ffmpeg -i out1.png -i out2.png -lavfi vstack -y out.png


Red is the input step signal, green is the output step response of the asubboost filter. In the lower half of the output image the "wet" and "feedback" values are swapped.


What did we say previously about delay buffers and that command above?
It was literally less than 24 hours ago.


The previous example used "showfreqs", which has a frequency-domain output and is not suitable for analyzing filters that contain delays.
Now I'm using "showwaves", which has a time-domain output. That's a different thing. Of course delay lines can be analyzed in the time domain.




That command cannot show you the action of the delayed input, as it is not designed for it.


A step signal contains all frequencies and is the best possible source for analyzing unknown black boxes that may contain delays. You can replace the input with any other source, but you will never find any difference in the two outputs. It's a fact: "wet" and "feedback" are interchangeable.


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-28 Thread Michael Koch

On 28.11.2020 at 13:44, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 1:35 PM Michael Koch wrote:

On 28.11.2020 at 12:57, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 12:41 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

I tried to reverse engineer the asubboost filter from its output signal. Is the attached sketch correct?
It seems the "feedback" parameter is unnecessary, because it does exactly the same thing as "wet".


No, your reasoning is invalid.

I do not have time to draw graphs or do consulting for free.

When you swap the values of "wet" and "feedback", the output always remains the same.
If you think that I'm wrong, please show an example to prove the opposite.


Make sure that you take the decay parameter into account; the delay buffer is still used.


When you swap the values of "wet" and "feedback", the output always remains the same, regardless of which values you use for "dry", "decay" and "delay".
This can be shown with the following example:

set "A=0.4"
set "B=0.7"

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%A%:decay=0.4:feedback=%B%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out1.png

ffmpeg -f lavfi -i aevalsrc='0.5*gt(t,0.1)':d=1 -lavfi asplit[a][b];[b]asubboost=dry=0.3:wet=%B%:decay=0.4:feedback=%A%:delay=50[c],[a][c]join,showwaves=draw=full:s=800x300:r=1 -frames 1 -y out2.png

ffmpeg -i out1.png -i out2.png -lavfi vstack -y out.png


Red is the input step signal, green is the output step response of the asubboost filter. In the lower half of the output image the "wet" and "feedback" values are swapped.


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-28 Thread Michael Koch

On 28.11.2020 at 12:57, Paul B Mahol wrote:

On Sat, Nov 28, 2020 at 12:41 PM Michael Koch wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

I tried to reverse engineer the asubboost filter from its output signal. Is the attached sketch correct?
It seems the "feedback" parameter is unnecessary, because it does exactly the same thing as "wet".


No, your reasoning is invalid.

I do not have time to draw graphs or do consulting for free.


When you swap the values of "wet" and "feedback", the output always remains the same.

If you think that I'm wrong, please show an example to prove the opposite.

Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-27 Thread Michael Koch

On 27.11.2020 at 23:27, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 11:20 PM Michael Koch wrote:

On 27.11.2020 at 22:47, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 10:40 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

Sorry, I didn't understand it. I think this filter could be explained
much better with a block diagram.



-- Is it correct that the "slope" parameter effectively changes the cutoff frequency, but doesn't change the steepness (dB/octave) of the filter?


It changes steepness, not cutoff.

What's the unit of the "slope" option?



1 / octave; it can be set as a dB unit, e.g. -20dB; 0dB or 1 is the max value.

Use the mentioned mpv command for that, as it displays the curve.

The command does not work on my computer. I downloaded mpv.exe and ran the command line. mpv exits immediately: no output, no error message. When I run mpv without any options, an mpv window opens.


Make sure you run it inside PowerShell or cmd.exe, and make sure you tried mpv.com too.

Now with mpv.com it works. But the version doesn't yet have the
asubboost filter. With other filters it works.


Download the latest release version of mpv. Also note that asubboost is not a simple IIR filter and thus it cannot really show its response.


That's clear. If the filter has a delay line, then a simple frequency 
response makes no sense.


It would really help if you could draw a block diagram for asubboost. 
That would be much easier to understand than a description with words.


Michael

--
**********************************************
  ASTRO ELECTRONIC   Dipl.-Ing. Michael Koch
  Raabestr. 43   37412 Herzberg
  www.astro-electronic.de
  Tel. +49 5521 854265   Fax +49 5521 854266
**********************************************


Re: [FFmpeg-user] asubboost and asupercut

2020-11-27 Thread Michael Koch

On 27.11.2020 at 22:47, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 10:40 PM Michael Koch wrote:

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch <astroelectro...@t-online.de> wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

Sorry, I didn't understand it. I think this filter could be explained
much better with a block diagram.



-- Is it correct that the "slope" parameter effectively changes the cutoff frequency, but doesn't change the steepness (dB/octave) of the filter?


It changes steepness, not cutoff.

What's the unit of the "slope" option?



1 / octave; it can be set as a dB unit, e.g. -20dB; 0dB or 1 is the max value.

Use the mentioned mpv command for that, as it displays the curve.

The command does not work on my computer. I downloaded mpv.exe and ran the command line. mpv exits immediately: no output, no error message. When I run mpv without any options, an mpv window opens.


Make sure you run it inside PowerShell or cmd.exe, and make sure you tried mpv.com too.


Now with mpv.com it works. But the version doesn't yet have the 
asubboost filter. With other filters it works.


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-27 Thread Michael Koch

On 27.11.2020 at 20:50, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 8:24 PM Michael Koch wrote:

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch <astroelectro...@t-online.de> wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.

Sorry, I didn't understand it. I think this filter could be explained
much better with a block diagram.



-- Is it correct that the "slope" parameter effectively changes the
cutoff frequency, but doesn't change the steepness (dB / octave) of the
filter?


It changes steepness, not cutoff.

What's the unit of the "slope" option?



1 / octave; it can be set as a dB unit, e.g. -20dB; 0dB or 1 is the max value.

Use the mentioned mpv command for that, as it displays the curve.


The command does not work on my computer. I downloaded mpv.exe and ran the command line. mpv exits immediately: no output, no error message. When I run mpv without any options, an mpv window opens.


Michael


Re: [FFmpeg-user] asubboost and asupercut

2020-11-27 Thread Michael Koch

On 27.11.2020 at 19:25, Paul B Mahol wrote:

On Fri, Nov 27, 2020 at 7:09 PM Michael Koch wrote:


Hello,

I have a few questions about the asubboost and asupercut filters.

-- In asubboost it's not yet clear what the block diagram of the filter looks like. Especially the "decay" and "feedback" options are unclear. What's the input of the delay line? Before or after the lowpass filter? Where does the feedback go to? Before or after the lowpass filter? I have attached a sketch of a possible block diagram, but it's only a wild guess.


This filter just adds delayed sub frequencies, set by the cutoff frequency, back to the output. Decay sets the decay of the old sub echo in the buffer, and feedback sets how much of the new sub frequencies are added to the delay buffer.


Sorry, I didn't understand it. I think this filter could be explained 
much better with a block diagram.




-- Is it correct that the "slope" parameter effectively changes the
cutoff frequency, but doesn't change the steepness (dB / octave) of the
filter?


It changes steepness, not cutoff.


What's the unit of the "slope" option?

Michael

Re: [FFmpeg-user] emerge has unconnected output? Works on FFMPEG 4, not 3

2020-11-23 Thread Michael Koch

On 23.11.2020 at 16:14, Karl Messner wrote:

Michael,

Thank you so much for your help. I made the changes you suggested and now I'm 
getting this:


ffmpeg -report -i videos/intro720.mp4 -i videos/raw.mp4 -i videos/outro720.mp4 -i videos/music.mp3 -filter_complex " \
[0]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v0];\
[1]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v1];\
[2]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v2];\
[3:a]volume=.015[a0];\
[1:a][a0]amerge[amix];\
[v0][0:a:0] [v1][amix] [v2][2:a:0]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" -f mp4 videos/out1606144376.mp4 2>&1





[Parsed_amerge_10 @ 0x21c96c0] No channel layout for input 1
[Parsed_amerge_10 @ 0x21c96c0] No channel layout for input 2
[AVFilterGraph @ 0x1a76460] The following filters could not choose their 
formats: Parsed_amerge_10
Consider inserting the (a)format filter near their input or output.
Error configuring complex filters.
Input/output error


Are you sure that "amerge" is the correct filter for your purpose? Why 
not the "amix" filter?


Michael


Re: [FFmpeg-user] emerge has unconnected output? Works on FFMPEG 4, not 3

2020-11-23 Thread Michael Koch

On 23.11.2020 at 13:50, KarlMessner wrote:

Good morning folks,

Any idea what I'm doing wrong here:

/usr/local/bin/ffmpeg -report -i videos/intro720.mp4 -i videos/raw.mp4 -i videos/outro720.mp4 -i videos/music.mp3 -filter_complex " \
[0]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v0]; \
[1]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v1]; \
[2]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v2]; \
[3:a]volume=.015[a0]; \
[1:a][a0]amerge[amix]; \
[v0][0:a:0] [v1][aMix] [v2][2:a:0] concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" -f mp4 videos/out1606134367mp4 2>&1


It's another error: amix != aMix. Filter pad labels are case-sensitive.
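
With the label spelled consistently, the last line of the filtergraph would read:

[v0][0:a:0] [v1][amix] [v2][2:a:0] concat=n=3:v=1:a=1[v][a]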

Michael


Re: [FFmpeg-user] emerge has unconnected output? Works on FFMPEG 4, not 3

2020-11-23 Thread Michael Koch

On 23.11.2020 at 13:50, KarlMessner wrote:

Good morning folks,

Any idea what I'm doing wrong here:

/usr/local/bin/ffmpeg -report -i videos/intro720.mp4 -i videos/raw.mp4 -i videos/outro720.mp4 -i videos/music.mp3 -filter_complex " \
[0]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v0]; \
[1]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v1]; \
[2]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[v2]; \
[3:a]volume=.015[a0]; \
[1:a][a0]amerge[amix]; \
[v0][0:a:0] [v1][aMix] [v2][2:a:0] concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" -f mp4 videos/out1606134367mp4 2>&1


Try to remove all space characters from the filter chain. Especially 
those between the inputs of the concat filter. I'm not sure if this 
solves the problem, but it's worth a try.


Michael


Re: [FFmpeg-user] Directly converting DNGs to MP4

2020-11-22 Thread Michael Koch

On 08.11.2020 at 18:12, ZEITFELD wrote:


Hello nice girls and guys!

I have a high-speed camera that records raw data to DNG files. Now I try to get the fastest possible preview video of what is recorded. I succeeded using the GPU via FFmpeg, which is absolutely the fastest way I have found so far.

This is the command line I use:

ffmpeg -r 25 -f image2 -i frame_%06d.dng -vcodec libx264 -crf 25 -pix_fmt yuv420p preview.mp4

This gives me an MP4, but the colors are broken, the image is too dark
and green and the quality is very bad, with a lot of banding and
artifacts.

Does anyone know how to make ffmpeg apply reasonably balanced settings
when reading the DNGs?


Did you (or anybody else) find a solution for this problem?

Paul already gave a few hints:
-- swscale is old and ignores color_trc metadata that is required for 
correct display

-- use zscale for converting pixel formats
-- set output trc
-- make sure to set right pixel format prior to calling zscale as it 
does not support bayer formats


Open questions:
-- How can zscale be used for converting the pixel format, if it doesn't 
support the "bayer_rggb16le" format that the input images have? zscale 
doesn't have any options for pixel formats.
-- Is "output trc" a synonym for "transfer characteristic"? Is that the 
"transfer" option in zscale? To which value must it be set?
-- How can the pixel format be set prior to calling zscale? With 
"-pix_fmt" before the input? Or after the input? Or with 
"-pixel_format"? Or with "format=" at the beginning of the filter chain?

-- The input color space is "sRGB". How can this be set?

I already spent about 20 hours of trial and error on this problem before I gave up.
Part of the problem is the missing documentation for the zscale filter. It's only a list of options and possible values, with no explanation of what the options mean and no examples.
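
For the record, one untested guess at how those hints might combine, assuming the bayer-to-RGB step is left to the format filter (i.e. swscale) and that bt709 is an acceptable output transfer characteristic; treat it as a starting point, not a verified answer to the questions above:

ffmpeg -i frame_%06d.dng -vf format=rgb48le,zscale=transfer=bt709,format=yuv420p preview.mp4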


Michael


Re: [FFmpeg-user] {Spam?} Re: crop and fade

2020-11-20 Thread Michael Koch

On 20.11.2020 at 11:59, RAPPAZ Francois via ffmpeg-user wrote:

I think the command line is correct, and the hint from Gyan was of course correct.
The problem might be that your video player can't display 25 fps at this extremely high resolution.
Try a smaller resolution. It makes no sense to use a higher resolution than your monitor can display.
Michael

I did

ffmpeg -y -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=10:fps=1:s=5152x3864,framerate=fps=5:interp_start=0:interp_end=255:scene=100 ^
-pix_fmt yuv420p out.mp4

which gives a nice result.
Thanks!


What I meant was to reduce the size, for example to 2000x1500. Reducing 
the framerate to 5 might be a problem for some players.



Is there a way to say: do the fade effect for all pictures, except the last (which is 14.JPG)?


Duplicate the last picture, so that the last two pictures are the same, 
and/or limit the total duration of the video with the -t option to a 
suitable length.


Michael


Re: [FFmpeg-user] {Spam?} Re: crop and fade

2020-11-20 Thread Michael Koch

On 20.11.2020 at 10:58, RAPPAZ Francois via ffmpeg-user wrote:

On 19.11.2020 at 16:37, RAPPAZ Francois via ffmpeg-user wrote:

Hi

I tried the fading between pictures and cropping to their original size (5152 x 3864), so that they are not distorted

ffmpeg -y -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=10:fps=1,framerate=fps=25:interp_start=0:interp_end=255:scene=100 ^
-pix_fmt yuv420p -s 5152x3864 out.mp4

It seems that -s 5152x3864 destroys the fading effect achieved with the second line.

How can I correct this?

You must bring all pictures to the same size _before_ you read them with FFmpeg.
Re-scaling one image with FFmpeg:
ffmpeg -i input.jpg -s 5152x3864 output.jpg
Re-scaling all images with FFmpeg:
I think that's only possible with a loop in a script. Too complicated.
Re-scaling one image with IrfanView:
Open the image, Image --> Resize/Resample, Save as...

Michael

But my individual pictures are already at 5152x3864. The problem is that when I do the above command without -s ...x... they are stretched along the horizontal axis.
What size should I use for my pictures to be displayed without distortion?

The zoompan filter automatically rescales its input to 1280x720.
Override it by adding `s=5152x3864` as an option to the zoompan filter.
Remove the `-s` option from the command.


I did this

ffmpeg -y -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=10:fps=1:s=5152x3864,framerate=fps=25:interp_start=0:interp_end=255:scene=100 ^
-pix_fmt yuv420p out.mp4

which does not give any error but destroys the fading effect.
Could you give the complete command line?


I think the command line is correct, and the hint from Gyan was of course correct. The problem might be that your video player can't display 25 fps at this extremely high resolution. Try a smaller resolution. It makes no sense to use a higher resolution than your monitor can display.


Michael


Re: [FFmpeg-user] {Spam?} Re: crop and fade

2020-11-19 Thread Michael Koch

On 20.11.2020 at 07:27, RAPPAZ Francois via ffmpeg-user wrote:

On 19.11.2020 at 16:37, RAPPAZ Francois via ffmpeg-user wrote:

Hi

I tried the fading between pictures and cropping to their original size (5152 x 3864), so that they are not distorted

ffmpeg -y -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=10:fps=1,framerate=fps=25:interp_start=0:interp_end=255:scene=100 ^
-pix_fmt yuv420p -s 5152x3864 out.mp4

It seems that -s 5152x3864 destroys the fading effect achieved with the second line.

How can I correct this?

You must bring all pictures to the same size _before_ you read them with FFmpeg.
Re-scaling one image with FFmpeg:
ffmpeg -i input.jpg -s 5152x3864 output.jpg
Re-scaling all images with FFmpeg:
I think that's only possible with a loop in a script. Too complicated.
Re-scaling one image with IrfanView:
Open the image, Image --> Resize/Resample, Save as...

Michael

But my individual pictures are already at 5152x3864. The problem is that when I do the above command without -s ...x... they are stretched along the horizontal axis.
What size should I use for my pictures to be displayed without distortion?


And the individual images are not stretched in the horizontal axis?
Please show the uncut console output.

Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-19 Thread Michael Koch

On 19.11.2020 at 13:19, Paul B Mahol wrote:

Added commands to despill and frei0r filters.


Thank you! I'll test it in a few days when the Windows build is available.

Michael


Re: [FFmpeg-user] crop and fade

2020-11-19 Thread Michael Koch

On 19.11.2020 at 16:37, RAPPAZ Francois via ffmpeg-user wrote:

Hi

I tried the fading between pictures and cropping to their original size (5152 x 3864), so that they are not distorted

ffmpeg -y -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=10:fps=1,framerate=fps=25:interp_start=0:interp_end=255:scene=100 ^
-pix_fmt yuv420p -s 5152x3864 out.mp4

It seems that -s 5152x3864 destroys the fading effect achieved with the second line.

How can I correct this?


You must bring all pictures to the same size _before_ you read them with 
FFmpeg.


Re-scaling one image with FFmpeg:
ffmpeg -i input.jpg -s 5152x3864 output.jpg

Re-scaling all images with FFmpeg:
I think that's only possible with a loop in a script. Too complicated.

Re-scaling one image with IrfanView:
Open the image, Image --> Resize/Resample, Save as...

Re-scaling all images with IrfanView:
File --> Batch Conversion, then select all images, then click on "advanced"; the rest is self-explanatory.


Michael


Re: [FFmpeg-user] Slide show with transition

2020-11-18 Thread Michael Koch

On 18.11.2020 at 16:37, RAPPAZ Francois via ffmpeg-user wrote:

The problem is that your images 3 and 4 have different sizes. As far as I know, FFmpeg can't combine images with different sizes into a slideshow (please correct me if I'm wrong). By the way, all images must also have the same pixel format. I didn't check that in your images. For example, if one image has yuv420 and another has yuv422, that won't work.
Possible workaround: Use batch processing in IrfanView to bring all images to the same size.
Michael

Is "size" the size in pixels or in bytes?


Width and height.

Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-18 Thread Michael Koch

On 18.11.2020 at 14:40, Moritz Barsnick wrote:

On Mon, Nov 16, 2020 at 20:15:54 +0100, Michael Koch wrote:

Oh, what a pity. We need more filters with support for commands. Now
that I have found out how to send zmq commands from C# code, there are
so many nice things that could be done with FFmpeg in real time.
Especially when adjusting parameters that need visual feedback, for
example brightness, contrast, gamma, color corrections, colorkey
parameters, despill parameters and so on...

Paul has worked heavily over the last few months to add command support
to filters (and to properly document it). I guess it just isn't trivial
in many cases, as the filters sometimes have not been designed with
runtime reconfiguration in mind. (And it may not be possible at all in
certain cases.)


Yes, I know that Paul has added command support to many filters and I 
really appreciate this work!

Paul, is it possible to add command support also for the despill filter?
I'm asking because I've written a C# tool for bluescreening in real 
time. The foreground video comes from a camera via HDMI to USB converter 
(these cheap Chinese converters are great!). The background comes from a
file and is played in an endless loop. I have attached a screenshot. The 
color bars are the built-in test image from the HDMI to USB converter. 
All parameters can be adjusted with the scrollbars in real time and are 
sent to the FFmpeg process as zmq messages. It's already running fine. 
The only problem is that the despill parameters can't be updated in real 
time. They become effective after stopping and re-starting FFmpeg (which 
takes about 2 seconds).


If you want to see how this was programmed, have a look at chapter 2.56 
of my book:

www.astro-electronic.de/FFmpeg_Book.pdf

Michael

Re: [FFmpeg-user] Slide show with transition

2020-11-18 Thread Michael Koch

On 18.11.2020 at 09:50, RAPPAZ Francois via ffmpeg-user wrote:

I have 15 JPEG files to be used for a slideshow (duration: 135 sec): each image is displayed for 9 seconds, and without transition the mkv file is ok.
I tried this for a transition 1 second long:

ffmpeg -y -framerate 1/9 -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=9:fps=1,framerate=fps=15:interp_start=0:interp_end=255:scene=100 ^
-r 15 -vsync vfr -pix_fmt yuv420p output.mkv

The transition is ok for the first 3 images, then the slideshow jumps to the image at position 10 and that picture remains stuck...
What am I doing wrong?

this works for me:

rem  make 15 test images
ffmpeg -f lavfi -i testsrc2=size=vga:duration=15:rate=1 -y test%%2d.jpg

rem  make slideshow, each image is shown 9 sec + 1 sec crossfade
ffmpeg -i test%%02d.jpg -vf zoompan=d=10:fps=1,framerate=fps=25:interp_start=0:interp_end=255:scene=100 out.mp4

Michael

My image files are 01.JPG, 02.JPG ... 10.JPG up to 15.JPG.
I wonder if "-i %%02d.JPG" is really working or correct. I still have a jump to 10.JPG after 03.JPG.
You can access my files here if you have the time:
https://drive.switch.ch/index.php/s/DbAVwtai3mOT43j


The problem is that your images 3 and 4 have different sizes. As far as I know, FFmpeg can't combine images with different sizes into a slideshow (please correct me if I'm wrong). By the way, all images must also have the same pixel format. I didn't check that in your images. For example, if one image has yuv420 and another has yuv422, that won't work.

Possible workaround: Use batch processing in IrfanView to bring all images to the same size.


Michael


Re: [FFmpeg-user] Slide show with transition

2020-11-17 Thread Michael Koch

On 17.11.2020 at 16:05, RAPPAZ Francois via ffmpeg-user wrote:

I have 15 JPEG files to be used for a slideshow (duration: 135 sec): each image is displayed for 9 seconds, and without transition the mkv file is ok.
I tried this for a transition 1 second long:

ffmpeg -y -framerate 1/9 -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 ^
-vf zoompan=d=9:fps=1,framerate=fps=15:interp_start=0:interp_end=255:scene=100 ^
-r 15 -vsync vfr -pix_fmt yuv420p output.mkv

The transition is ok for the first 3 images, then the slideshow jumps to the image at position 10 and that picture remains stuck...
What am I doing wrong?


this works for me:

rem  make 15 test images
ffmpeg -f lavfi -i testsrc2=size=vga:duration=15:rate=1 -y test%%2d.jpg

rem  make slideshow, each image is shown 9 sec + 1 sec crossfade
ffmpeg -i test%%02d.jpg -vf zoompan=d=10:fps=1,framerate=fps=25:interp_start=0:interp_end=255:scene=100 out.mp4


Michael


Re: [FFmpeg-user] ffmpeg: slide show on Windows

2020-11-17 Thread Michael Koch

On 17.11.2020 at 14:09, RAPPAZ Francois via ffmpeg-user wrote:

Thanks, that was it.

And if I would like to have each image shown for 15 seconds (I have 15 pictures and want the total time to be 225 sec), how should I specify -framerate and -r? I tried

ffmpeg -y -f image2 -framerate 1/5 -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 -vsync vfr -r 15 -pix_fmt yuv420p output.mkv

But that's too short a time for each picture.


"-framerate 1/5" specifies the framerate at which you read the pictures, 
1/5 means 5 seconds.
"-r" is the output framerate. If you omit this option, then the default 
25fps is used. Some players have problems with small framerates.
If you want crossfadings between the pictures, have a look at chapter 
2.3 in my book:

http://www.astro-electronic.de/FFmpeg_Book.pdf
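
For 15 pictures shown 15 seconds each (225 sec total), a minimal untested sketch would be:

ffmpeg -y -framerate 1/15 -i %%02d.JPG -i SligoAir_WhiteBlanket.mp3 -r 15 -pix_fmt yuv420p output.mkv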

Michael



Re: [FFmpeg-user] ffmpeg: slide show on Windows

2020-11-17 Thread Michael Koch

On 17.11.2020 at 11:50, RAPPAZ Francois via ffmpeg-user wrote:

I'm on Windows 10 with ffmpeg ...



Then I tried with
ffmpeg -y -f image2 -framerate 8 -i "%02d.JPG" -i SligoAir_WhiteBlanket.mp3 -vsync vfr -pix_fmt yuv420p output.mkv


If you are starting this command line from a batch file, you must escape the % character:

-i %%02d.jpg

The "" quotation marks can be omitted.
"-f image2" can also be omitted.
I think "-vsync vfr" can also be omitted (but here I'm not sure).

Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-16 Thread Michael Koch

On 16.11.2020 at 20:01, Paul B Mahol wrote:

Is it possible to send zmq messages to the despill filter? I want to
adjust the mix, expand and brightness parameters in real time. I'm
sending the zmq messages from my own C# program. Messages to colorkey
(color, similarity) and overlay (x, y) are already working. When I send
messages to the despill filter, there is no error message but nothing
changes in the output video. Either I have something wrong in my code,
or the despill filter doesn't use the new values.


Nowhere is it documented that despill has support for commands, and thus it does not have support for commands.


Oh, what a pity. We need more filters with support for commands. Now 
that I have found out how to send zmq commands from C# code, there are 
so many nice things that could be done with FFmpeg in real time. 
Especially when adjusting parameters that need visual feedback, for 
example brightness, contrast, gamma, color corrections, colorkey 
parameters, despill parameters and so on...


Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-16 Thread Michael Koch

On 16.11.2020 at 16:12, Paul B Mahol wrote:

On Mon, Nov 16, 2020 at 12:41 PM Michael Koch wrote:

On 16.11.2020 at 12:22, Paul B Mahol wrote:

On Mon, Nov 16, 2020 at 12:13 PM Michael Koch <astroelectro...@t-online.de> wrote:


When I overlay a bluescreen foreground video over a background video,
which is the best place for the despill filter? (a) before colorkey or
(b) after colorkey or (c) after overlay?

[foreground](a),colorkey,(b)[fg];[background][fg]overlay,(c)

I found here an example for (c) but I doubt that this makes sense:
https://gist.github.com/kerbeh/fbe0cd0d89c424708c119e0a0d00ca88



Why do you doubt that it does make sense?

I doubt that (c) is the right place because at position (c) the despill
filter also affects the background video, which surely doesn't have any
contamination from the bluescreen.


Then use it after colorkey.


Is it possible to send zmq messages to the despill filter? I want to 
adjust the mix, expand and brightness parameters in real time. I'm 
sending the zmq messages from my own C# program. Messages to colorkey 
(color, similarity) and overlay (x, y) are already working. When I send 
messages to the despill filter, there is no error message but nothing 
changes in the output video. Either I have something wrong in my code, 
or the despill filter doesn't use the new values.


Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-16 Thread Michael Koch

On 16.11.2020 at 16:12, Paul B Mahol wrote:

On Mon, Nov 16, 2020 at 12:41 PM Michael Koch wrote:

On 16.11.2020 at 12:22, Paul B Mahol wrote:

On Mon, Nov 16, 2020 at 12:13 PM Michael Koch <astroelectro...@t-online.de> wrote:


When I overlay a bluescreen foreground video over a background video,
which is the best place for the despill filter? (a) before colorkey or
(b) after colorkey or (c) after overlay?

[foreground](a),colorkey,(b)[fg];[background][fg]overlay,(c)

I found here an example for (c) but I doubt that this makes sense:
https://gist.github.com/kerbeh/fbe0cd0d89c424708c119e0a0d00ca88



Why do you doubt that it does make sense?

I doubt that (c) is the right place because at position (c) the despill
filter also affects the background video, which surely doesn't have any
contamination from the bluescreen.


Then use it after colorkey.


Thank you, you confirmed what I also thought was the best place for the despill filter, but I wasn't sure.
It might be a good idea to add a short note to the documentation of the despill filter.
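
For reference, an untested sketch of the resulting chain with despill directly after colorkey (the colorkey values and the despill type are placeholders):

[foreground]colorkey=color=blue:similarity=0.3,despill=type=blue[fg];[background][fg]overlay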


Michael


Re: [FFmpeg-user] Best place for despill filter?

2020-11-16 Thread Michael Koch

On 16.11.2020 at 12:22, Paul B Mahol wrote:

On Mon, Nov 16, 2020 at 12:13 PM Michael Koch wrote:


When I overlay a bluescreen foreground video over a background video,
which is the best place for the despill filter? (a) before colorkey or
(b) after colorkey or (c) after overlay?

[foreground](a),colorkey,(b)[fg];[background][fg]overlay,(c)

I found here an example for (c) but I doubt that this makes sense:
https://gist.github.com/kerbeh/fbe0cd0d89c424708c119e0a0d00ca88



Why do you doubt that it does make sense?


I doubt that (c) is the right place because at position (c) the despill 
filter also affects the background video, which surely doesn't have any 
contamination from the bluescreen.


Michael


[FFmpeg-user] Best place for despill filter?

2020-11-16 Thread Michael Koch
When I overlay a bluescreen foreground video over a background video, 
which is the best place for the despill filter? (a) before colorkey or 
(b) after colorkey or (c) after overlay?


[foreground](a),colorkey,(b)[fg];[background][fg]overlay,(c)

I found here an example for (c) but I doubt that this makes sense:
https://gist.github.com/kerbeh/fbe0cd0d89c424708c119e0a0d00ca88

Michael


[FFmpeg-user] Add a silent audio stream if no audio stream exists

2020-11-11 Thread Michael Koch

On 11.11.2020 at 21:59, Jim DeLaHunt wrote:


2. Generate a more elaborate FFmpeg invocation which will generate 
silent audio streams for input files which have none. The details of 
this should be in a separate thread with a separate Subject: line.




That's an interesting question. Can this be done in one command line?
If the input video has an audio stream, then leave it as it is. If no 
audio stream exists, then add silent audio.
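
One untested sketch that comes close: it always appends a silent stream, so it only behaves as desired for players that pick the first audio track. "-map 0:a?" keeps the original audio only if it exists:

ffmpeg -i in.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -map 0:v -map 0:a? -map 1:a -c:v copy -shortest out.mp4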


Michael


Re: [FFmpeg-user] Stream specifier ':a' in filtergraph Error

2020-11-10 Thread Michael Koch

On 10.11.2020 at 22:27, Randy Johnson via ffmpeg-user wrote:

Hello,

When running the following command:

```
ffmpeg \
-loop 1 -framerate 30 -t 1.4 -i /assets/img/filler640480.jpg -i 0d.mp4 -y \
-loop 1 -framerate 30 -t 891.113 -i /assets/img/filler640480.jpg -i 9f.mp4 -y \
-f lavfi -t 0.1 -i anullsrc=channel_layout=mono:sample_rate=48000 \
-filter_complex "
[0:v]setsar=1[v0];[2:v]setsar=1[v1];[v0][4:a][1:v][1:a][v1][4:a][3:v][3:a]concat=n=4:v=1:a=1"
-vsync 2 -vcodec libx264 -pix_fmt yuv420p 0245-grid.mp4

```

I am getting the following error:

```
Stream specifier ':a' in filtergraph description
[0:v]setsar=1[v0];[2:v]setsar=1[v1];[v0][4:a][1:v][1:a][v1][4:a][3:v][3:a]concat=n=4:v=1:a=1
matches no streams.
```

This only happens with some videos, not all; I cannot seem to figure out why.


You could check those videos with FFprobe. Maybe they have no audio stream, or something else is different.
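
For example (filename assumed):

ffprobe -show_streams -select_streams a input.mp4

If this prints no audio streams, then labels like [1:a] or [3:a] in the filtergraph cannot match anything.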


Michael


Re: [FFmpeg-user] dshow video source in FFplay

2020-11-01 Thread Michael Koch

On 01.11.2020 at 19:07, Carl Eugen Hoyos wrote:

On Sun, 1 Nov 2020 at 11:03, Michael Koch wrote:


I'm using a cheap HDMI to USB converter for video input. The converter
supports different sizes and framerates and two different video codecs.
This command line works as expected:

ffplay -f dshow -video_size 1920x1080 -framerate 30 -vcodec mjpeg
video="USB Video"

(Complete, uncut console output missing.)


However for my application I need two input streams, and because FFplay
doesn't allow filter_complex

Use ffmpeg instead of ffplay, there is an sdl output device.


That's a good idea, it's already working. Thank you!

Michael


[FFmpeg-user] dshow video source in FFplay

2020-11-01 Thread Michael Koch

Hello,

I'm using a cheap HDMI to USB converter for video input. The converter 
supports different sizes and framerates and two different video codecs. 
This command line works as expected:


ffplay -f dshow -video_size 1920x1080 -framerate 30 -vcodec mjpeg 
video="USB Video"


However for my application I need two input streams, and because FFplay 
doesn't allow filter_complex, I have to use the workaround with a 
"movie" source inside "-lavfi". The second input will be added later. 
This command line also works as expected:


ffplay -f lavfi movie=filename="USB Video":f=dshow:s=dv

When I use the "movie" source, how can I specify size, framerate and 
codec? The movie source doesn't have these options.


Michael



Re: [FFmpeg-user] start FFplay without console window?

2020-10-30 Thread Michael Koch

On 30.10.2020 at 13:53, Moritz Barsnick wrote:

Hello Michael,

On Fri, Oct 30, 2020 at 01:29:04 +0100, Michael Koch wrote:

Is it possible to start FFplay (from a C# application) so that only the
video window is visible, while the console window is hidden or minimized?

That's more of a C#/Windows question than an ffmpeg question. (Assuming
you're not using C# on a different platform here. ;-)) Windows seems to
require opening a console for launching such an external GUI.

I think this SO answer, with its amendments and comments, should cover
it (untested):

https://stackoverflow.com/a/19756925


In the meantime I found another solution.
This starts FFplay with two windows (console and video):

ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = "ffplay";
startInfo.Arguments = "-f lavfi testsrc2=s=vga";
Process p = Process.Start(startInfo);

This starts FFplay without the console window:

ProcessStartInfo startInfo = new ProcessStartInfo();
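// Note: CreateNoWindow only takes effect when UseShellExecute is false.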
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
startInfo.FileName = "ffplay";
startInfo.Arguments = "-f lavfi testsrc2=s=vga";
Process p = Process.Start(startInfo);

Michael


[FFmpeg-user] start FFplay without console window?

2020-10-29 Thread Michael Koch
Is it possible to start FFplay (from a C# application) so that only the 
video window is visible, while the console window is hidden or minimized?


Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] zmq example

2020-10-28 Thread Michael Koch

Am 28.10.2020 um 21:07 schrieb Gyan Doshi:



On 28-10-2020 12:21 am, Michael Koch wrote:

Hello,

I'm testing this command line which is copied from the documentation:
https://www.ffmpeg.org/ffmpeg-all.html#zmq_002c-azmq

ffplay -dumpgraph 1 -f lavfi 
"color=s=100x100:c=red[l];color=s=100x100:c=blue[r];nullsrc=s=200x100,zmq[bg];[bg][l]overlay[bg+l];[bg+l][r]overlay@my=x=100"


The problem is that ffplay exits after 1-2 seconds without any error 
message. I don't understand why. Shouldn't this command run forever? 
I'm not sending any commands to the zmq filter. The console output is 
below.


Crashes here. Open a ticket at trac.


I have already done that, ticket #8955.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-28 Thread Michael Koch

Am 23.10.2020 um 17:19 schrieb Michael Koch:

Hello,

It seems there is a problem in the v360 filter. This command line 
doesn't rotate the correct point to the image center. The default 
rotation order should be yaw pitch roll, which means after the 
rotation the point with azimuth 180° and height +45° should be in the 
center. It is nearby, but not in the center. Tested with latest 
Windows build, 2 days old.


ffmpeg -i equirectangular_test.png -vf v360=e:e:yaw=-90:pitch=45 -y 
out.png


Input image: http://www.astro-electronic.de/equirectangular_test.png



I've just tested the latest build and now it's working fine again. 
Thanks for fixing this!


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] zmq example

2020-10-27 Thread Michael Koch

Hello,

I'm testing this command line which is copied from the documentation:
https://www.ffmpeg.org/ffmpeg-all.html#zmq_002c-azmq

ffplay -dumpgraph 1 -f lavfi 
"color=s=100x100:c=red[l];color=s=100x100:c=blue[r];nullsrc=s=200x100,zmq[bg];[bg][l]overlay[bg+l];[bg+l][r]overlay@my=x=100"


The problem is that ffplay exits after 1-2 seconds without any error 
message. I don't understand why. Shouldn't this command run forever? I'm 
not sending any commands to the zmq filter. The console output is below.


Michael

P.S. It works when I change the sizes to 200x200 and 400x200!



C:\Users\astro\Desktop>c:\ffmpeg\ffplay -dumpgraph 1 -f lavfi 
"color=s=100x100:c=red[l];color=s=100x100:c=blue[r];nullsrc=s=200x100,zmq[bg];[bg][l]overlay[bg+l];[bg+l][r]overlay@my=x=100"
ffplay version 2020-10-21-git-289e964873-essentials_build-www.gyan.dev 
Copyright (c) 2003-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-zlib --enable-libsrt --enable-libssh 
--enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom 
--enable-libopenjpeg --enable-libvpx --enable-libass 
--enable-libfreetype --enable-libfribidi --enable-libvidstab 
--enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm 
--enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc 
--enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme 
--enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame 
--enable-libtheora --enable-libvo-amrwbenc --enable-libgsm 
--enable-libopencore-amrnb --enable-libopus --enable-libspeex 
--enable-libvorbis --enable-librubberband

  libavutil  56. 60.100 / 56. 60.100
  libavcodec 58.111.101 / 58.111.101
  libavformat    58. 62.100 / 58. 62.100
  libavdevice    58. 11.102 / 58. 11.102
  libavfilter 7. 88.100 /  7. 88.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
++0 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0
| Parsed_color_0 |default--[100x100 1:1 yuva420p]--Parsed_overlay_4:overlay
|    (color) |
++

++
| Parsed_color_1 |default--[100x100 1:1 yuva420p]--overlay@my:overlay
|    (color) |
++

+--+
| Parsed_nullsrc_2 |default--[200x100 1:1 yuva420p]--Parsed_zmq_3:default
|    (nullsrc) |
+--+

+--+
Parsed_nullsrc_2:default--[200x100 1:1 yuva420p]--default| Parsed_zmq_3 
|default--[200x100 1:1 yuva420p]--Parsed_overlay_4:main

 | (zmq) |
+--+

+--+
Parsed_zmq_3:default[200x100 1:1 yuva420p]-main| 
Parsed_overlay_4 |default--[200x100 1:1 yuva420p]--overlay@my:main

Parsed_color_0:default--[100x100 1:1 yuva420p]--overlay| (overlay) |
+--+

++
Parsed_overlay_4:default--[200x100 1:1 yuva420p]-main| overlay@my 
|default--[200x100 1:1 yuva420p]--out:default

Parsed_color_1:default[100x100 1:1 yuva420p]--overlay| (overlay)  |
++

   +--+
overlay@my:default--[200x100 1:1 yuva420p]--default| out  |
   | (buffersink) |
   +--+

Input #0, lavfi, from 
'color=s=100x100:c=red[l];color=s=100x100:c=blue[r];nullsrc=s=200x100,zmq[bg];[bg][l]overlay[bg+l];[bg+l][r]overlay@my=x=100':

  Duration: N/A, start: 0.00, bitrate: N/A
    Stream #0:0: Video: rawvideo (Y4[11][8] / 0x80B3459), yuva420p, 
200x100 [SAR 1:1 DAR 2:1], 25 tbr, 25 tbn, 25 tbc



___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] repair black frames?

2020-10-25 Thread Michael Koch

Am 25.10.2020 um 16:23 schrieb Simon Roberts:

Greetings all,

I have just noticed that some of my video recordings have a few random
black frames in them. These will be spaced out a few minutes, don't seem to
have any obvious cause (though my recording system is a little Heath
Robinson / Rube Goldberg, so I have some ideas)

Anyway, the thought occurred to me that rather than watching, never
blinking, all the material, and looking for this "manually", ffmpeg might
be able to help.

The solution I have in mind would probably be something that recognizes a
black frame (I *think* it's "pure" black, but I don't have numbers) and
replaces it with the immediately preceding or following frame).

Does such a filter exist, or could such behavior be tied together out of
the features that ffmpeg has?


search for "blackdetect" in the documentation
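
For example, something like this should print the timestamps of all 
black intervals (a sketch; the thresholds may need tuning for your 
material):

ffmpeg -i input.mp4 -vf blackdetect=d=0.02:pix_th=0.00 -an -f null -

With d=0.02 even a single black frame at 50 or 60 fps is reported, and 
the detected timestamps tell you where to cut or patch.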

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-25 Thread Michael Koch

Am 25.10.2020 um 09:05 schrieb Michael Koch:

Am 25.10.2020 um 03:02 schrieb Edward Park:

Hi,

After a deeper look into the quaternion multiplication, it seems to 
be correct. The terms are only written in a different order in the 
Wikipedia article.


But it's a fact that the v360 rotations are broken, as can be shown 
by this simple test:

You can use any equirectangular input image.
First make a combined rotation yaw=90 and pitch=45 in the default 
rotation order (ypr),

then rotate back pitch = -45,
then rotate back yaw = -90.
The output image should be the same as the input image. But it isn't.


It seems to me it assumes a 45° vertical fov and 90° horizontal fov 
on both input and output, does that affect the behavior at all? (Or 
am I not understanding right)


If the input and output images are specified as "equirectangular" 
(V360=e:e:...) then the field of view is always 360° x 180°. There are 
options for input and output field of view, but these aren't used 
in the case of equirectangular images. Something seems to be wrong 
with the quaternions that were added October 7th.

This is difficult stuff. A wild guess that might be worth testing:
Swap the order of quaternion multiplication, 2nd and 3rd argument of 
multiply_quaternion()


In this FFmpeg version from October 1st the v360 rotations are working 
correctly:


C:\Users\astro\Desktop>c:\ffmpeg\ffmpeg -i equirectangular_test.png -vf 
v360=e:e:yaw=90:pitch=45,v360=e:e:pitch=-45,v360=e:e:yaw=-90 -y out.png
ffmpeg version 4.3.1-2020-10-01-essentials_build-www.gyan.dev Copyright 
(c) 2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-zlib --enable-libsrt --enable-libssh 
--enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom 
--enable-libopenjpeg --enable-libvpx --enable-libass 
--enable-libfreetype --enable-libfribidi --enable-libvidstab 
--enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm 
--enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc 
--enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libgme 
--enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame 
--enable-libtheora --enable-libvo-amrwbenc --enable-libgsm 
--enable-libopencore-amrnb --enable-libopus --enable-libspeex 
--enable-libvorbis --enable-librubberband

  libavutil  56. 51.100 / 56. 51.100
  libavcodec 58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter 7. 85.100 /  7. 85.100
  libswscale  5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, png_pipe, from 'equirectangular_test.png':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: png, rgb24(pc), 2400x1200, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> png (native))
Press [q] to stop, [?] for help
Output #0, image2, to 'out.png':
  Metadata:
    encoder : Lavf58.45.100
    Stream #0:0: Video: png, rgb24, 2400x1200, q=2-31, 200 kb/s, 25 
fps, 25 tbn, 25 tbc

    Metadata:
  encoder : Lavc58.91.100 png
frame=    1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.04 bitrate=N/A 
speed=0.0459x
video:1851kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: unknown



___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-25 Thread Michael Koch

Am 25.10.2020 um 03:02 schrieb Edward Park:

Hi,


After a deeper look into the quaternion multiplication, it seems to be correct. 
The terms are only written in a different order in the Wikipedia article.

But it's a fact that the v360 rotations are broken, as can be shown by this 
simple test:
You can use any equirectangular input image.
First make a combined rotation yaw=90 and pitch=45 in the default rotation 
order (ypr),
then rotate back pitch = -45,
then rotate back yaw = -90.
The output image should be the same as the input image. But it isn't.


It seems to me it assumes a 45° vertical fov and 90° horizontal fov on both 
input and output, does that affect the behavior at all? (Or am I not 
understanding right)


If the input and output images are specified as "equirectangular" 
(V360=e:e:...) then the field of view is always 360° x 180°. There are 
options for input and output field of view, but these aren't used in 
the case of equirectangular images. Something seems to be wrong with the 
quaternions that were added October 7th.

This is difficult stuff. A wild guess that might be worth testing:
Swap the order of quaternion multiplication, 2nd and 3rd argument of 
multiply_quaternion()


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-24 Thread Michael Koch

Am 24.10.2020 um 11:03 schrieb Michael Koch:

Am 23.10.2020 um 18:46 schrieb Michael Koch:

Am 23.10.2020 um 17:19 schrieb Michael Koch:

Hello,

It seems there is a problem in the v360 filter. This command line 
doesn't rotate the correct point to the image center. The default 
rotation order should be yaw pitch roll, which means after the 
rotation the point with azimuth 180° and height +45° should be in the 
center. It is nearby, but not in the center. Tested with latest 
Windows build, 2 days old.


ffmpeg -i equirectangular_test.png -vf v360=e:e:yaw=-90:pitch=45 -y 
out.png


If the rotation is only yaw or only pitch or only roll, then it's 
working fine. But all rotations around two or more axes fail.


I'm not sure, but the problem might be in the function 
multiply_quaternion() in vf_v360.c

Please compare the signs with this Wikipedia article:
https://en.wikipedia.org/wiki/Quaternion#Hamilton_product


After a deeper look into the quaternion multiplication, it seems to be 
correct. The terms are only written in a different order in the 
Wikipedia article.


But it's a fact that the v360 rotations are broken, as can be shown by 
this simple test:

You can use any equirectangular input image.
First make a combined rotation yaw=90 and pitch=45 in the default 
rotation order (ypr),

then rotate back pitch = -45,
then rotate back yaw = -90.
The output image should be the same as the input image. But it isn't.

ffmpeg -i input.png -vf 
v360=e:e:yaw=90:pitch=45,v360=e:e:pitch=-45,v360=e:e:yaw=-90 -y out.png


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-24 Thread Michael Koch

Am 23.10.2020 um 18:46 schrieb Michael Koch:

Am 23.10.2020 um 17:19 schrieb Michael Koch:

Hello,

It seems there is a problem in the v360 filter. This command line 
doesn't rotate the correct point to the image center. The default 
rotation order should be yaw pitch roll, which means after the 
rotation the point with azimuth 180° and height +45° should be in the 
center. It is nearby, but not in the center. Tested with latest 
Windows build, 2 days old.


ffmpeg -i equirectangular_test.png -vf v360=e:e:yaw=-90:pitch=45 -y 
out.png


If the rotation is only yaw or only pitch or only roll, then it's 
working fine. But all rotations around two or more axes fail.


I'm not sure, but the problem might be in the function 
multiply_quaternion() in vf_v360.c

Please compare the signs with this Wikipedia article:
https://en.wikipedia.org/wiki/Quaternion#Hamilton_product
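
For reference, the Hamilton product of q1 = a1 + b1*i + c1*j + d1*k and 
q2 = a2 + b2*i + c2*j + d2*k expands to:

q1*q2 = (a1*a2 - b1*b2 - c1*c2 - d1*d2)
      + (a1*b2 + b1*a2 + c1*d2 - d1*c2)*i
      + (a1*c2 - b1*d2 + c1*a2 + d1*b2)*j
      + (a1*d2 + b1*c2 - c1*b2 + d1*a2)*k

Note that the product is not commutative, so the order of the two 
arguments matters.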

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem in v360?

2020-10-23 Thread Michael Koch

Am 23.10.2020 um 17:19 schrieb Michael Koch:

Hello,

It seems there is a problem in the v360 filter. This command line 
doesn't rotate the correct point to the image center. The default 
rotation order should be yaw pitch roll, which means after the 
rotation the point with azimuth 180° and height +45° should be in the 
center. It is nearby, but not in the center. Tested with latest 
Windows build, 2 days old.


ffmpeg -i equirectangular_test.png -vf v360=e:e:yaw=-90:pitch=45 -y 
out.png


If the rotation is only yaw or only pitch or only roll, then it's 
working fine. But all rotations around two or more axes fail.


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Problem in v360?

2020-10-23 Thread Michael Koch

Hello,

It seems there is a problem in the v360 filter. This command line 
doesn't rotate the correct point to the image center. The default 
rotation order should be yaw pitch roll, which means after the rotation 
the point with azimuth 180° and height +45° should be in the center. It 
is nearby, but not in the center. Tested with latest Windows build, 2 
days old.


ffmpeg -i equirectangular_test.png -vf v360=e:e:yaw=-90:pitch=45 -y out.png

Input image: http://www.astro-electronic.de/equirectangular_test.png

Michael




___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] afreqshift

2020-10-21 Thread Michael Koch

Am 21.10.2020 um 21:02 schrieb Michael Koch:

Am 21.10.2020 um 20:36 schrieb Paul B Mahol:

On Wed, Oct 21, 2020 at 08:14:50PM +0200, Michael Koch wrote:

Hi,

I'm just testing the new afreqshift filter.
In my first test I used only *.wav audio files. This test was 
successful.


In my next test I'm using MP4 files. First I make a test video with a 
5kHz tone:

ffmpeg -f lavfi -i sine=f=5000:r=48000:d=5 -f lavfi -i color=black 
-lavfi showspectrum=legend=1 -y test.mp4

Now I try to shift the frequency down by 4kHz:
ffmpeg -i test.mp4 -lavfi afreqshift=-4000,showspectrum=legend=1 -y 
out.mp4


The spectrum display is showing the expected 1kHz, but what I hear 
is 5kHz.

What's wrong with my command line? Why is the output of the afreqshift
filter not mapped to the output file?


Something like this:

ffmpeg -i test.mp4 -lavfi 
"asplit=2[a][b],[b]afreqshift=-4000,showspectrum=legend=1[b]" -map 
"[a]" -map "[b]" -y out.mp4


You ignored the fact that showspectrum is just an A->V filter.


That also did not work; the audio output was still 5kHz. But thanks for 
pointing me in the right direction. The asplit filter must come after 
the afreqshift filter. This works:


ffmpeg -i test.mp4 -lavfi 
afreqshift=-4000,asplit[a][b];[b]showspectrum=legend=1 -map "[a]" -y 
out.mp4




Would it be possible to add an option for deleting all frequencies which 
are smaller than the shift frequency, before shifting down? Because 
otherwise these frequencies are shifted into the negative frequency range 
and can still be heard.
Example: An audio signal contains normal audio frequencies in the 
[0...12kHz] range and ultrasonic frequencies in the [12kHz...20kHz] 
range. If we want to hear these ultrasonic frequencies, we must shift by 
-12kHz. But then the [0...12kHz] range is shifted to the [-12kHz...0] 
range which means we can still hear it. That could be solved by an 
additional steep highpass filter before the afreqshift filter.
Maybe it's easier to do that in the frequency domain in the afreqshift 
filter?
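
In the meantime, a steep highpass can be approximated by cascading 
several highpass filters before the shift (a sketch for the 12kHz 
example above):

ffmpeg -i in.wav -af highpass=f=12000,highpass=f=12000,highpass=f=12000,afreqshift=shift=-12000 out.wav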


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Converting text to video for dictation

2020-10-20 Thread Michael Koch

Am 20.10.2020 um 06:46 schrieb John B Morris:

Hello all,

typography effects based video
So that one is basically what I'm looking to do with ffmpeg - turn a
subtitle file (of any type) into a transparent backgrounded (?) video for
use in my video editor to combine the necessary contents together - or some
other file/method that has/involves time stamps/tags or "keyframes". Since
I don't intend to use my voice in my videos, I "voice" them via text
elements either in the center of the screen or some other places where it
makes sense to do so. (I do this with my current video editor already, but
it isn't very efficient, time and complexity wise, especially for videos
that heavily rely on this... so hopefully ffmpeg can help with that?) Also
if possible (not needed), font and color changing, and maybe some movement.

Maybe subtitles in the future, but I sort of already know how to do those,
however the first point would be good to know.


I don't yet understand why you want to create a video with transparent 
background. I think that can be done, but I don't have an example ready.
It seems easier to write the texts/subtitles directly into the final 
video. You'll find a few examples in chapters 2.118 and 2.119 of my book: 
http://www.astro-electronic.de/FFmpeg_Book.pdf
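
For example, burning an existing subtitle file directly into the video 
can be as simple as this (a sketch; assumes an FFmpeg build with libass 
and a subs.srt file as a placeholder name):

ffmpeg -i in.mp4 -vf subtitles=subs.srt out.mp4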


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] find_rect, suggestion for improvement

2020-10-19 Thread Michael Koch

Hello,

I'd like to propose an improvement for the find_rect filter.
In my application the object is quite small (10x10 pixels) and moves 
slowly across half of the image.
I did already narrow down the search area with the xmin, xmax, ymin, 
ymax options, and I did optimize the threshold value. But sometimes the 
algorithm is still finding false positives, because other parts of the 
image look similar to the object. I know that my object can't move 
faster than about 10 pixels from one frame to the next.


My suggestion:
-- Add a new option "radius" which is 0 by default.
-- If radius = 0, then the behavior is the same as before.
-- If radius > 0, then use the xmin, xmax, ymin, ymax values only for 
the first frame.
-- For all subsequent frames, the search area is defined by the x,y 
result from the previous frame, plus and minus the radius value.
-- If the algorithm doesn't find the object, it shall stay in this 
"object lost" state for all subsequent frames. In this case it would be 
nice to see a warning in the log file.


That means for all frames (except the first one) the search area can be 
made quite small, so that the algorithm becomes much faster and the 
probability of finding false positives is reduced.
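
A call could then look like this (hypothetical syntax: the "radius" 
option doesn't exist yet, and the file name and numbers are 
placeholders; the other options are existing find_rect options):

ffmpeg -i in.mp4 -vf find_rect=object=needle.pgm:threshold=0.3:xmin=200:ymin=100:xmax=600:ymax=400:radius=10 -y out.mp4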


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] DNG images

2020-10-13 Thread Michael Koch

Am 11.10.2020 um 21:07 schrieb Michael Koch:

Am 11.10.2020 um 20:56 schrieb Paul B Mahol:

On Sun, Oct 11, 2020 at 07:37:25PM +0200, Michael Koch wrote:
I tested all 4 DNG images again with the latest FFmpeg version (Gyan's 
build from today).
I tested with and without the zscale filter. All tests failed. When 
testing with IrfanView, all DNG input images seem to be ok.


Your issue is that you do not know how to use zscale properly.

DNG decoding is just fine, what is missing is colorspace stuff in 
swscale,

for proper display.


Can you please give an example for the correct zscale options?



Finally I found a command line that converts DNG (from Adobe DNG 
converter) to JPG with a more or less acceptable result:


ffmpeg -i IMG_3459.dng -vf 
zscale=t=linear,tonemap=gamma:param=1.85,tonemap=linear:param=64,colorlevels=rimin=0.0:gimin=0.0:bimin=0.0:rimax=0.53:gimax=1:bimax=0.57 
-y out.jpg


But I think everything after "zscale" is only a dirty workaround to 
correct errors that were made somewhere else. Especially the strong 
amplification by a factor of 64 and the strong color correction don't 
look right. Please also note that the size of the output image is 
wrong. It's bigger than the input image and has black borders at the 
left and top. The console output is below.


Michael



C:\Users\astro\Desktop\dng>c:\ffmpeg\ffmpeg -i IMG_3459.dng -vf 
zscale=t=linear,tonemap=gamma:param=1.85,tonemap=linear:param=64,colorlevels=rimin=0.0:gimin=0.0:bimin=0.0:rimax=0.53:gimax=1:bimax=0.57 
-y out.jpg
ffmpeg version 2020-10-11-git-7ea4bcff7b-full_build-www.gyan.dev 
Copyright (c) 2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray 
--enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi 
--enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 
--enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg 
--enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype 
--enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan 
--enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libshine 
--enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc 
--enable-libilbc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa 
--enable-libbs2b --enable-libflite --enable-libmysofa 
--enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil  56. 60.100 / 56. 60.100
  libavcodec 58.111.100 / 58.111.100
  libavformat    58. 62.100 / 58. 62.100
  libavdevice    58. 11.102 / 58. 11.102
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[tiff @ 01d36e8cf800] Assuming black level pattern values are identical
[tiff @ 01d36e8cf800] Tiled TIFF is not allowed to strip
[tiff_pipe @ 01d36e8cd800] Stream #0: not enough frames to estimate 
rate; consider increasing probesize

Input #0, tiff_pipe, from 'IMG_3459.dng':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: tiff, bayer_rggb16le, 5568x3708, 25 tbr, 25 
tbn, 25 tbc

Stream mapping:
  Stream #0:0 -> #0:0 (tiff (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[tiff @ 01d36e8d4900] Assuming black level pattern values are identical
[tiff @ 01d36e8d4900] Tiled TIFF is not allowed to strip
[swscaler @ 01d3706eed40] deprecated pixel format used, make sure 
you did set range correctly
[tonemap @ 01d36e901040] Missing color space information, 
desaturation is disabled
[tonemap @ 01d36e900240] Missing color space information, 
desaturation is disabled

Output #0, image2, to 'out.jpg':
  Metadata:
    encoder : Lavf58.62.100
    Stream #0:0: Video: mjpeg, yuvj444p(pc), 5568x3708, q=2-31, 200 
kb/s, 25 fps, 25 tbn, 25 tbc

    Metadata:
  encoder : Lavc58.111.100 mjpeg
    Side data:
  cpb: bitrate max/min/avg: 0/0/20 buffer size: 0 vbv_delay: N/A
frame=    1 fps=0.5 q=12.1 Lsize=N/A time=00:00:00.04 bitrate=N/A 
speed=0.0185x
video:853kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: unknown



___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] DNG images

2020-10-11 Thread Michael Koch

Am 11.10.2020 um 20:56 schrieb Paul B Mahol:

On Sun, Oct 11, 2020 at 07:37:25PM +0200, Michael Koch wrote:

I tested all 4 DNG images again with the latest FFmpeg version (Gyan's build
from today).
I tested with and without the zscale filter. All tests failed. When testing with
IrfanView, all DNG input images seem to be ok.


Your issue is that you do not know how to use zscale properly.

DNG decoding is just fine, what is missing is colorspace stuff in swscale,
for proper display.


Can you please give an example for the correct zscale options?



[tiff @ 019996d6fe40] non increasing IFD offset
[tiff @ 019996d6fe40]  is not implemented. Update your FFmpeg version to
the newest one from Git. If the problem still occurs, it means that your
file has a feature which has not been implemented.
[tiff @ 019996d6fe40] If you want to help, upload a sample of this file
to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing
list. (ffmpeg-de...@ffmpeg.org)
Error while decoding stream #0:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished

As message simple says, if you want to help upload that image that fails to
decode somewhere and post link to upload here.


The link was already in my last message.
www.astro-electronic.de/Pentax_K5.DNG

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-11 Thread Michael Koch

Am 29.09.2020 um 22:54 schrieb Michael Koch:

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The 
procedure is as follows:


1. FFmpeg: From each frame of the equirectangular input video, extract 
two small images which are 90° apart in the input video. I call them A 
and B images.


2. C# code: Analyze the x and y image shift from subsequent A and B 
images. Calculate how the equirectangular frames must be rotated (yaw, 
pitch, roll) to compensate the image shifts. This part wasn't easy. 
Two rotation matrices and one matrix multiplication are required. 
Write the results to a *.cmd file.


3. FFmpeg: Read the *.cmd file and apply the rotations with the v360 
filter. The output video is stabilized.


For details and source code please have a look at chapter 2.78 in my 
book:

http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.


I've written and tested an improved version for 360° stabilization; 
it's in chapter 2.79.


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] DNG images

2020-10-11 Thread Michael Koch
I tested all 4 DNG images again with the latest FFmpeg version (Gyan's 
build from today).
I tested with and without the zscale filter. All tests failed. When testing 
with IrfanView, all DNG input images seem to be ok.



Test 1:

Images from 
https://drive.google.com/drive/folders/1u5m7aVAFPpEsL4YtDVsfSpfcve9PFG4m


c:\ffmpeg\ffmpeg -i sample_r0.dng -y out.jpg

The image is converted without an error message, but the result is much too 
dark.


c:\ffmpeg\ffmpeg -i sample_r0.dng -vf zscale -y out.jpg

Error message: "code 3074: no path between colorspaces"
The full console output is copied below.


Test 2:

This is a RAW image from a Canon 6D which was converted to DNG with 
Adobe DNG Converter V12.4:  www.astro-electronic.de/IMG_3459.dng

c:\ffmpeg\ffmpeg -i IMG_3459.dng -y out.jpg

The image is converted without an error message, but the result is much too 
dark.


c:\ffmpeg\ffmpeg -i IMG_3459.dng -vf zscale -y out.jpg

Error message: "code 3074: no path between colorspaces"
The full console output is copied below.


Test 3:

This is a RAW image from a Canon 5D-MK4 which was converted to DNG with 
Adobe DNG Converter V12.4. The problem is exactly the same as in test 2.


Test 4:

This is a DNG image that was directly written by a Pentax K5 camera.
www.astro-electronic.de/Pentax_K5.DNG

c:\ffmpeg\ffmpeg -i Pentax_K5.DNG -y out.jpg

Error message: "Error while decoding stream #0:0: Invalid data found 
when processing input"

The full console output is copied below.

c:\ffmpeg\ffmpeg -i Pentax_K5.DNG -vf zscale -y out.jpg

Same error message as above.


Michael





C:\Users\astro\Desktop\dng>c:\ffmpeg\ffmpeg -i sample_r0.dng -vf 
zscale -y out.jpg
ffmpeg version 2020-10-11-git-7ea4bcff7b-full_build-www.gyan.dev 
Copyright (c) 2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray 
--enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi 
--enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 
--enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg 
--enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype 
--enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan 
--enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libshine 
--enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc 
--enable-libilbc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa 
--enable-libbs2b --enable-libflite --enable-libmysofa 
--enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil  56. 60.100 / 56. 60.100
  libavcodec 58.111.100 / 58.111.100
  libavformat    58. 62.100 / 58. 62.100
  libavdevice    58. 11.102 / 58. 11.102
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[tiff @ 0234bf57f780] Assuming black level pattern values are identical
[tiff_pipe @ 0234bf57d700] Stream #0: not enough frames to estimate 
rate; consider increasing probesize

Input #0, tiff_pipe, from 'sample_r0.dng':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: tiff, bayer_rggb16le, 6016x3200, 25 tbr, 25 
tbn, 25 tbc

Stream mapping:
  Stream #0:0 -> #0:0 (tiff (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[tiff @ 0234bf5900c0] Assuming black level pattern values are identical
code 3074: no path between colorspaces
Error while filtering: Generic error in an external library
Failed to inject frame into filter network: Generic error in an external 
library

Error while processing the decoded data for stream #0:0
Conversion failed!





C:\Users\astro\Desktop\dng>c:\ffmpeg\ffmpeg -i IMG_3459.dng -vf zscale 
-y out.jpg
ffmpeg version 2020-10-11-git-7ea4bcff7b-full_build-www.gyan.dev 
Copyright (c) 2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray 
--enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi 
--enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 

Re: [FFmpeg-user] Glossary: Nyquist

2020-10-03 Thread Michael Koch

Am 03.10.2020 um 22:43 schrieb Mark Filipak (ffmpeg):

On 10/03/2020 02:05 PM, Anatoly wrote:

On Sat, 3 Oct 2020 11:05:03 -0400

-snip-

You should then learn what a spectrum is.


Oh, please. Be easy with me. I'm just a simple electrical engineer.


And how any complex waveform
(with its "information density") may be represented as a sum of many
simple sine waves.


Ah, now that would be a Taylor series, no?


Joseph Fourier just turned over in his grave...

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-03 Thread Michael Koch

Am 03.10.2020 um 13:25 schrieb Paul B Mahol:

On Sat, Oct 03, 2020 at 08:27:20AM +0200, Michael Koch wrote:

Use correct player like mpv, which does not ignore color_trc.

A player? The output is a jpg image and it's too dark.

Yes, player, ffmpeg is not correct in conversion. Because swscale is old
and ignores color_trc metadata that is required for correct display.

Use zscale instead for converting pixel formats.

Can you please give an example how to convert a DNG image to a JPG image
with FFmpeg?

I did try
ffmpeg -i sample_r0.dng -vf zscale -y out.jpg
but it gives an error message.
The input image is from
https://drive.google.com/drive/folders/1u5m7aVAFPpEsL4YtDVsfSpfcve9PFG4m
and IrfanView has no problem to open this image.

Then join IrfanView mailing list.


Why should I? IrfanView has no problem.


Every single DNG file you posted is invalid in some part.


Adobe is the creator of the DNG format and it sounds quite unlikely that 
their own DNG Converter produces invalid DNG files. I tested that the 
files can be opened with IrfanView, Windows Media Player, Fitswork, 3D 
LUT Creator and GIMP with Darktable plugin.

I saw that you fixed it (thanks for that!) and will test it soon.

The images from the above link are a different thing, as I don't know 
how they were created. I agree that they might be invalid.


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-03 Thread Michael Koch



Use correct player like mpv, which does not ignore color_trc.

A player? The output is a jpg image and it's too dark.

Yes, player, ffmpeg is not correct in conversion. Because swscale is old
and ignores color_trc metadata that is required for correct display.

Use zscale instead for converting pixel formats.


Can you please give an example how to convert a DNG image to a JPG 
image with FFmpeg?


I did try
ffmpeg -i sample_r0.dng -vf zscale -y out.jpg
but it gives an error message.
The input image is from
https://drive.google.com/drive/folders/1u5m7aVAFPpEsL4YtDVsfSpfcve9PFG4m
and IrfanView has no problem to open this image.

Michael


C:\Users\astro\Desktop\dng>c:\ffmpeg\ffmpeg -i sample_r0.dng -vf 
zscale -y out.jpg
ffmpeg version 2020-09-20-git-ef29e5bf42-full_build-www.gyan.dev 
Copyright (c) 2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev3, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray 
--enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi 
--enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 
--enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg 
--enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype 
--enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libglslang --enable-vulkan 
--enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libshine 
--enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc 
--enable-libwavpack --enable-libilbc --enable-libgsm 
--enable-libopencore-amrnb --enable-libopus --enable-libspeex 
--enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite 
--enable-libmysofa --enable-librubberband --enable-libsoxr 
--enable-chromaprint

  libavutil  56. 59.100 / 56. 59.100
  libavcodec 58.106.100 / 58.106.100
  libavformat    58. 58.100 / 58. 58.100
  libavdevice    58. 11.102 / 58. 11.102
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[tiff @ 025f5dbdf680] Assuming black level pattern values are identical
[mjpeg @ 025f5dbe4ec0] mjpeg_decode_dc: bad vlc: 0:0 (025f5dbe5588)
    Last message repeated 1 times
[tiff_pipe @ 025f5dbdd640] Stream #0: not enough frames to estimate 
rate; consider increasing probesize

Input #0, tiff_pipe, from 'sample_r0.dng':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: tiff, bayer_rggb16le, 6016x3200, 25 tbr, 25 
tbn, 25 tbc

Stream mapping:
  Stream #0:0 -> #0:0 (tiff (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[tiff @ 025f5dbefe40] Assuming black level pattern values are identical
[mjpeg @ 025f5dc15180] mjpeg_decode_dc: bad vlc: 0:0 (025f5dc38248)
    Last message repeated 1 times
code 3074: no path between colorspaces
Error while filtering: Generic error in an external library
Failed to inject frame into filter network: Generic error in an external 
library

Error while processing the decoded data for stream #0:0
Conversion failed!

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-02 Thread Michael Koch

Am 02.10.2020 um 22:01 schrieb Paul B Mahol:



Use correct player like mpv, which does not ignore color_trc.

A player? The output is a jpg image and it's too dark.

Yes, player, ffmpeg is not correct in conversion. Because swscale is old
and ignores color_trc metadata that is required for correct display.

Use zscale instead for converting pixel formats.


Can you please give an example how to convert a DNG image to a JPG image 
with FFmpeg?


Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-02 Thread Michael Koch

Am 02.10.2020 um 10:10 schrieb Paul B Mahol:

On Thu, Oct 01, 2020 at 11:40:57PM +0200, Michael Koch wrote:

Am 01.10.2020 um 22:11 schrieb Paul B Mahol:

On Tue, Sep 29, 2020 at 10:54:54PM +0200, Michael Koch wrote:

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The
procedure is as follows:

1. FFmpeg: From each frame of the equirectangular input video, extract two
small images which are 90° apart in the input video. I call them A and B
images.

2. C# code: Analyze the x and y image shift from subsequent A and B images.
Calculate how the equirectangular frames must be rotated (yaw, pitch, roll)
to compensate the image shifts. This part wasn't easy. Two rotation matrices
and one matrix multiplication are required. Write the results to a *.cmd
file.

3. FFmpeg: Read the *.cmd file and apply the rotations with the v360 filter.
The output video is stabilized.

For details and source code please have a look at chapter 2.78 in my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.

Better upload DNG files that do not decode with FFmpeg.

In this message
http://ffmpeg.org/pipermail/ffmpeg-user/2020-August/049681.html
you find a link to many DNG images which FFmpeg can't decode correctly.
There is no error message, but the result is much too dark with low
saturation.

Use correct player like mpv, which does not ignore color_trc.


A player? The output is a jpg image and it's too dark.

ffmpeg -i input.dng output.jpg



I did convert a RAW image from a Canon 6D to DNG with Adobe DNG Converter
V12.4. FFmpeg is unable to decode this DNG image. See this message for
details:
http://ffmpeg.org/pipermail/ffmpeg-user/2020-August/049738.html
You can download the DNG image here (I will delete it from my webspace in a
few days):
www.astro-electronic.de/IMG_3459.dng

I did also try a RAW image from a Canon 5D-MK4 with the same negative
result.

A friend gave me a DNG image that was written by his Pentax K5 camera. Same
negative result.

Summary: I did try DNG images from 4 different sources and in 4 of 4 cases

In my testcases, 30 out of 30 DNGs decoded just fine.


Did any of your 30 DNGs come from Adobe DNG converter V12.4, or from a 
Pentax K5 camera?


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] V360 stabilization

2020-10-01 Thread Michael Koch

Am 01.10.2020 um 22:11 schrieb Paul B Mahol:

On Tue, Sep 29, 2020 at 10:54:54PM +0200, Michael Koch wrote:

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The
procedure is as follows:

1. FFmpeg: From each frame of the equirectangular input video, extract two
small images which are 90° apart in the input video. I call them A and B
images.

2. C# code: Analyze the x and y image shift from subsequent A and B images.
Calculate how the equirectangular frames must be rotated (yaw, pitch, roll)
to compensate the image shifts. This part wasn't easy. Two rotation matrices
and one matrix multiplication are required. Write the results to a *.cmd
file.

3. FFmpeg: Read the *.cmd file and apply the rotations with the v360 filter.
The output video is stabilized.

For details and source code please have a look at chapter 2.78 in my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.

Better upload DNG files that do not decode with FFmpeg.


In this message
http://ffmpeg.org/pipermail/ffmpeg-user/2020-August/049681.html
you find a link to many DNG images which FFmpeg can't decode correctly. 
There is no error message, but the result is much too dark with low 
saturation.


I did convert a RAW image from a Canon 6D to DNG with Adobe DNG 
Converter V12.4. FFmpeg is unable to decode this DNG image. See this 
message for details:

http://ffmpeg.org/pipermail/ffmpeg-user/2020-August/049738.html
You can download the DNG image here (I will delete it from my webspace 
in a few days):

www.astro-electronic.de/IMG_3459.dng

I did also try a RAW image from a Canon 5D-MK4 with the same negative 
result.


A friend gave me a DNG image that was written by his Pentax K5 camera. 
Same negative result.


Summary: I did try DNG images from 4 different sources and in 4 of 4 
cases FFmpeg failed.


Michael


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] slow motion by changing frame rate from 60 to 30

2020-10-01 Thread Michael Koch

Am 01.10.2020 um 16:57 schrieb Cristian Secară:

I want to make a short .mp4 clip filmed in 1080p/60 format, then somehow change 
the playing frame rate to 30Hz so that the playing speed becomes twice as slow. 
Audio is not important (may be dropped completely).

As far as I know (?), a "simple" frame rate change will try to preserve the 
playing speed, so when changing from 60Hz to 30Hz, it will drop half of the movie.
At the same time, a "simple" slow motion processing will try to keep the 
original frame rate, so in this case it will interpolate the transitions or something, 
while keeping the frame rate still at 60Hz.

What options are available for preserving all frames "as is" and just slowing 
down the playing speed?


It's described here:
https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
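
In short, for 60 fps to 30 fps with all frames preserved it comes down 
to something like this (a sketch following that wiki page):

ffmpeg -i in.mp4 -vf setpts=2.0*PTS -r 30 -an out.mp4

setpts=2.0*PTS doubles every timestamp (half speed), -r 30 sets the 
output rate so that no frames are dropped or duplicated, and -an drops 
the audio.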

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] HIGHPASS FILTER AND ID£ METADATA

2020-10-01 Thread Michael Koch

Am 01.10.2020 um 14:06 schrieb Michael Koch:

Am 01.10.2020 um 13:25 schrieb Marco Mircoli:

Hello,
I'm a newbie.
Just bought a PHP script that uses ffmpeg.
It converts all uploaded media to a uniform format, MP3 at 96 kbps.
This is the line, and now it normalizes to R128 (thanks Moritz):

$shell = shell_exec("$ffmpeg_b -i $audio_file_full_path -map 0:a:0 -af loudnorm -b:a 96k $audio_output_mp3 2>&1");

I'm wondering if it is possible to include a simple hi-pass filter 
(highest dB per octave @ 70Hz)


add "highpass=70," before "loudnorm"

This filter attenuates 6dB per octave. If you need higher attenuation, 
you can cascade several highpass filters, for example 
highpass=70,highpass=70,highpass=70,loudnorm


Correction: The filter has 12dB per octave.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] HIGHPASS FILTER AND ID£ METADATA

2020-10-01 Thread Michael Koch

Am 01.10.2020 um 13:25 schrieb Marco Mircoli:

Hello,
I'm a newbie.
Just bought a PHP script that uses ffmpeg.
It converts all uploaded media to a uniform format, MP3 at 96 kbps.
This is the line, and now it normalizes to R128 (thanks Moritz):

$shell = shell_exec("$ffmpeg_b -i $audio_file_full_path -map 0:a:0 -af loudnorm -b:a 96k $audio_output_mp3 2>&1");

I'm wondering if it is possible to include a simple hi-pass filter (highest
dB per octave @ 70Hz)


add "highpass=70," before "loudnorm"

This filter attenuates 6dB per octave. If you need higher attenuation, 
you can cascade several highpass filters, for example 
highpass=70,highpass=70,highpass=70,loudnorm


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] V360 stabilization

2020-09-29 Thread Michael Koch

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The 
procedure is as follows:


1. FFmpeg: From each frame of the equirectangular input video, extract 
two small images which are 90° apart in the input video. I call them A 
and B images.


2. C# code: Analyze the x and y image shift from subsequent A and B 
images. Calculate how the equirectangular frames must be rotated (yaw, 
pitch, roll) to compensate the image shifts. This part wasn't easy. Two 
rotation matrices and one matrix multiplication are required. Write the 
results to a *.cmd file.


3. FFmpeg: Read the *.cmd file and apply the rotations with the v360 
filter. The output video is stabilized.
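
A sketch of the step 3 command (assuming the *.cmd file uses the 
sendcmd syntax and an FFmpeg build where the v360 rotation options are 
supported as commands; "rotations.cmd" is just my file name):

ffmpeg -i in.mp4 -lavfi sendcmd=f=rotations.cmd,v360=e:e -y out.mp4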


For details and source code please have a look at chapter 2.78 in my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.

Michael




___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] should I shoot the dog?

2020-09-29 Thread Michael Koch

Am 29.09.2020 um 16:26 schrieb Mark Filipak (ffmpeg):

On 09/29/2020 09:37 AM, Michael Koch wrote:

Am 29.09.2020 um 14:58 schrieb Mark Filipak (ffmpeg):

On 09/29/2020 04:06 AM, Michael Koch wrote:

Am 29.09.2020 um 04:28 schrieb Mark Filipak (ffmpeg):


I just want to understand the frame structures that ffmpeg 
creates, and that ffmpeg uses in processing and filtering. Are Y, 
Cb, Cr separate buffers? That would be logical. Or are the Y, Cb, 
Cr values combined and organized similarly to macroblocks? I've 
found some code that supports that. Or are the Y, Cb, Cr values 
thrown together, pixel-by-pixel. That would be logical, too.


As far as I understood it, that depends on the pixel format.
For example there are "packed" pixel formats rgb24, bgr24, argb, 
rgba, abgr, bgra,rgb48be, rgb48le, bgr48be, bgr48le.

And there are "planar" pixel formats gbrp, bgrp16be, bgrp16le.


Hi Michael,

"Packed" and "planar", eh? What evidence do you have? ...Share the 
candy!


As far as I know, this is not described in the official 
documentation. You can find it for example here:
https://video.stackexchange.com/questions/16374/ffmpeg-pixel-format-definitions 



Thanks for that. It saved me some time. ...So, what does "planar" 
mean? What does "packed" mean?


Here is an example for a very small image with 3 x 2 = 6 pixels.
In the (packed) RGB24 format:  RGBRGBRGBRGBRGBRGB
In the (planar) GBRP format:   GGGGGGBBBBBBRRRRRR
(first all six G samples, then all six B samples, then all six R samples)
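
If in doubt, libavutil can report for each pixel format whether it is 
packed or planar. A minimal C sketch (my own example, untested; assumes 
the FFmpeg development headers are installed, link with -lavutil):

#include <stdio.h>
#include <libavutil/pixdesc.h>

int main(void)
{
    /* ask libavutil for the descriptors of one packed and one planar format */
    const enum AVPixelFormat fmts[] = { AV_PIX_FMT_RGB24, AV_PIX_FMT_GBRP };
    for (int i = 0; i < 2; i++) {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmts[i]);
        printf("%s: %s, %d components\n", d->name,
               (d->flags & AV_PIX_FMT_FLAG_PLANAR) ? "planar" : "packed",
               d->nb_components);
    }
    return 0;
}

This should print "rgb24: packed, 3 components" and "gbrp: planar, 3 
components".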

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] should I shoot the dog?

2020-09-29 Thread Michael Koch

Am 29.09.2020 um 14:58 schrieb Mark Filipak (ffmpeg):

On 09/29/2020 04:06 AM, Michael Koch wrote:

Am 29.09.2020 um 04:28 schrieb Mark Filipak (ffmpeg):


I just want to understand the frame structures that ffmpeg creates, 
and that ffmpeg uses in processing and filtering. Are Y, Cb, Cr 
separate buffers? That would be logical. Or are the Y, Cb, Cr values 
combined and organized similarly to macroblocks? I've found some 
code that supports that. Or are the Y, Cb, Cr values thrown 
together, pixel-by-pixel. That would be logical, too.


As far as I understood it, that depends on the pixel format.
For example there are "packed" pixel formats rgb24, bgr24, argb, 
rgba, abgr, bgra,rgb48be, rgb48le, bgr48be, bgr48le.

And there are "planar" pixel formats gbrp, bgrp16be, bgrp16le.


Hi Michael,

"Packed" and "planar", eh? What evidence do you have? ...Share the candy!


As far as I know, this is not described in the official documentation. 
You can find it for example here:

https://video.stackexchange.com/questions/16374/ffmpeg-pixel-format-definitions



Now, I'm not talking about streams. I'm talking about after decoding. 
I'm talking about the buffers. I would think that a single, consistent 
format would be used.




There is no single consistent format used internally. See Gyan's answer 
here:

http://ffmpeg.org/pipermail/ffmpeg-user/2020-September/050031.html

Michael


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] should I shoot the dog?

2020-09-29 Thread Michael Koch

Am 29.09.2020 um 04:28 schrieb Mark Filipak (ffmpeg):


I just want to understand the frame structures that ffmpeg creates, 
and that ffmpeg uses in processing and filtering. Are Y, Cb, Cr 
separate buffers? That would be logical. Or are the Y, Cb, Cr values 
combined and organized similarly to macroblocks? I've found some code 
that supports that. Or are the Y, Cb, Cr values thrown together, 
pixel-by-pixel. That would be logical, too.


As far as I understood it, that depends on the pixel format.
For example there are "packed" pixel formats rgb24, bgr24, argb, rgba, 
abgr, bgra,rgb48be, rgb48le, bgr48be, bgr48le.

And there are "planar" pixel formats gbrp, bgrp16be, bgrp16le.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] cover_rect without find_rect?

2020-09-26 Thread Michael Koch

Am 26.09.2020 um 13:02 schrieb Michael Koch:

Hello,

is it possible to use the cover_rect filter without using the 
find_rect filter?
I don't want to use find_rect because I already know the coordinates 
and size of the object to be covered.
Or is there another filter for covering an object with "blur" mode, 
using interpolation from the surrounding pixels?




Found the solution myself: delogo filter
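
For example (a sketch; x, y, w, h are placeholders for the known 
position and size of the rectangle):

ffmpeg -i in.mp4 -vf delogo=x=100:y=50:w=120:h=40 out.mp4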

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] cover_rect without find_rect?

2020-09-26 Thread Michael Koch

Hello,

is it possible to use the cover_rect filter without using the find_rect 
filter?
I don't want to use find_rect because I already know the coordinates and 
size of the object to be covered.
Or is there another filter for covering an object with "blur" mode, 
using interpolation from the surrounding pixels?


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] chroma colour

2020-09-24 Thread Michael Koch

Am 24.09.2020 um 14:52 schrieb Paul Bourke:

Also, I really dunno why people keep using remap filter with its only
available neighbour pixel interpolator.

Yeah, not ideal.

I don't use it for images, I have my own solution for that with
supersampling antialiasing.
But for movie conversion, what is the alternative?


If it can be done with the v360 filter, then this is the recommended 
alternative.

For other transformations, I don't know.
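For example, an equirectangular-to-flat extraction with a better
interpolator could look like this (filenames and angles are only
placeholders):

ffmpeg -i equirect.mp4 -vf v360=e:flat:yaw=30:pitch=10:interp=lanczos -y out.mp4

The interp option accepts among others nearest, linear, cubic and
lanczos.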

Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] chroma colour

2020-09-24 Thread Michael Koch

Am 24.09.2020 um 14:30 schrieb Paul Bourke:

Hmmm

ffmpeg -i Ajanta_cave26b.jpg -i x.pgm -i y.pgm -lavfi remap=fill=green a.png

I get

[Parsed_remap_0 @ 0x7fd6cac05200] Option 'fill' not found
[AVFilterGraph @ 0x7fd6cae1c5c0] Error initializing filter 'remap'
with args 'fill=green'
Error initializing complex filters.
Option not found

ffmpeg 4.2.2


4.2.2 is too old; the fill option was added in a later version. Get a
newer build.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] chroma colour

2020-09-24 Thread Michael Koch

Am 24.09.2020 um 13:50 schrieb Paul Bourke:

How do I change the colour actually used as the chromakey colour?
I am creating remap filters to do various image mappings and would
like undefined pixels to be black rather than green.
Apologies in advance if I simply didn't search the documentation
sufficiently well.


Use the "fill" option of the remap filter.
There is an example in chapter 2.66 of my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf
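In short, with a build that is new enough (filenames are placeholders):

ffmpeg -i in.png -i xmap.pgm -i ymap.pgm -lavfi remap=fill=black -y out.png

Pixels which the maps send outside of the input are then filled with
the given color.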

Michael


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Create a 10-bit VLog test video

2020-09-20 Thread Michael Koch

Am 20.09.2020 um 15:19 schrieb Michael Koch:

Am 19.09.2020 um 17:01 schrieb Paul B Mahol:
On Sat, Sep 19, 2020 at 12:10 PM Michael Koch 


wrote:


Hello all,

I want to create a 10-bit VLog test video which contains 18 levels of
gray, according to Panasonic's VLog curve.
The curve is documented here:

https://pro-av.panasonic.net/en/cinema_camera_varicam_eva/support/pdf/VARICAM_V-Log_V-Gamut.pdf 



I want the leftmost bar to be black (128) and the other 17 bars are for
-8 to +8 stops, so that the rightmost bar is at level 1023. I'm using
this command line and the console output is copied below:

ffmpeg -f lavfi -i nullsrc=s=svga,format=gray16 -lavfi
geq=lum='st(0,trunc(18*X/W));64*(128*eq(ld(0),0)+132*eq(ld(0),1)+136*eq(ld(0),2)+144*eq(ld(0),3)+160*eq(ld(0),4)+192*eq(ld(0),5)+240*eq(ld(0),6)+298*eq(ld(0),7)+363*eq(ld(0),8)+433*eq(ld(0),9)+505*eq(ld(0),10)+578*eq(ld(0),11)+652*eq(ld(0),12)+726*eq(ld(0),13)+800*eq(ld(0),14)+874*eq(ld(0),15)+949*eq(ld(0),16)+1023*eq(ld(0),17))' 



-pix_fmt yuv444p10le -crf 10 -c:v h264 -t 5 -y VLog_10bit.mov

The video looks perfect when played with FFplay. The leftmost bars are
indistinguishable on an 8-bit monitor because the levels are too close
together, but the rightmost bars look as expected.

However when I play the same video with VLC, the two brightest bars at
the right side have the same shade of gray. I don't understand why. Is
this a problem in my video, or is it a problem in VLC?

When I extract a 16-bit PNG image from the video, it looks as expected
with 18 levels of gray.
ffmpeg -i VLog_10bit.mov -frames 1 -y out.png

Unrelated: I did try to add the oscilloscope filter at the end of the
filter chain, but it seems it doesn't work with 16-bit data. There
should be a warning or an error message. oscilloscope=tw=1:s=1


That happened only with gray>8 formats and has been fixed already.


Thanks, oscilloscope works fine now.
Do you have any idea why the brightest bar looks correct in FFplay and 
wrong in VLC?

Is the problem reproducible on other computers?


I've solved the problem. If "-color_range pc" is added to the command 
line, the video plays fine in FFplay and also in VLC.
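For reference, the complete working command then looks like this (same
geq expression as above, abbreviated here as '...'):

ffmpeg -f lavfi -i nullsrc=s=svga,format=gray16 -lavfi geq=lum='...'
-pix_fmt yuv444p10le -color_range pc -crf 10 -c:v h264 -t 5 -y VLog_10bit.mov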


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Create a 10-bit VLog test video

2020-09-20 Thread Michael Koch

Am 19.09.2020 um 15:06 schrieb Mick Finn:

Does VLC even support 10-bit playback on supported monitors? I have a
10-bit calibrated monitor and don't notice the same difference with VLC
as with, say, Resolve.


I don't have a 10-bit monitor. I'm assuming that FFplay and VLC
automatically convert 10-bit videos to 8-bit.
The question is: Why do the two brightest bars look correct when played 
with FFplay, but wrong when played with VLC?


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Create a 10-bit VLog test video

2020-09-20 Thread Michael Koch

Am 19.09.2020 um 17:01 schrieb Paul B Mahol:

On Sat, Sep 19, 2020 at 12:10 PM Michael Koch 
wrote:


Hello all,

I want to create a 10-bit VLog test video which contains 18 levels of
gray, according to Panasonic's VLog curve.
The curve is documented here:

https://pro-av.panasonic.net/en/cinema_camera_varicam_eva/support/pdf/VARICAM_V-Log_V-Gamut.pdf

I want the leftmost bar to be black (128) and the other 17 bars are for
-8 to +8 stops, so that the rightmost bar is at level 1023. I'm using
this command line and the console output is copied below:

ffmpeg -f lavfi -i nullsrc=s=svga,format=gray16 -lavfi
geq=lum='st(0,trunc(18*X/W));64*(128*eq(ld(0),0)+132*eq(ld(0),1)+136*eq(ld(0),2)+144*eq(ld(0),3)+160*eq(ld(0),4)+192*eq(ld(0),5)+240*eq(ld(0),6)+298*eq(ld(0),7)+363*eq(ld(0),8)+433*eq(ld(0),9)+505*eq(ld(0),10)+578*eq(ld(0),11)+652*eq(ld(0),12)+726*eq(ld(0),13)+800*eq(ld(0),14)+874*eq(ld(0),15)+949*eq(ld(0),16)+1023*eq(ld(0),17))'

-pix_fmt yuv444p10le -crf 10 -c:v h264 -t 5 -y VLog_10bit.mov

The video looks perfect when played with FFplay. The leftmost bars are
indistinguishable on an 8-bit monitor because the levels are too close
together, but the rightmost bars look as expected.

However when I play the same video with VLC, the two brightest bars at
the right side have the same shade of gray. I don't understand why. Is
this a problem in my video, or is it a problem in VLC?

When I extract a 16-bit PNG image from the video, it looks as expected
with 18 levels of gray.
ffmpeg -i VLog_10bit.mov -frames 1 -y out.png

Unrelated: I did try to add the oscilloscope filter at the end of the
filter chain, but it seems it doesn't work with 16-bit data. There
should be a warning or an error message. oscilloscope=tw=1:s=1


That happened only with gray>8 formats and has been fixed already.


Thanks, oscilloscope works fine now.
Do you have any idea why the brightest bar looks correct in FFplay and 
wrong in VLC?

Is the problem reproducible on other computers?

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Create a 10-bit VLog test video

2020-09-19 Thread Michael Koch

Hello all,

I want to create a 10-bit VLog test video which contains 18 levels of 
gray, according to Panasonic's VLog curve.

The curve is documented here:
https://pro-av.panasonic.net/en/cinema_camera_varicam_eva/support/pdf/VARICAM_V-Log_V-Gamut.pdf

I want the leftmost bar to be black (128) and the other 17 bars are for 
-8 to +8 stops, so that the rightmost bar is at level 1023. I'm using 
this command line and the console output is copied below:


ffmpeg -f lavfi -i nullsrc=s=svga,format=gray16 -lavfi 
geq=lum='st(0,trunc(18*X/W));64*(128*eq(ld(0),0)+132*eq(ld(0),1)+136*eq(ld(0),2)+144*eq(ld(0),3)+160*eq(ld(0),4)+192*eq(ld(0),5)+240*eq(ld(0),6)+298*eq(ld(0),7)+363*eq(ld(0),8)+433*eq(ld(0),9)+505*eq(ld(0),10)+578*eq(ld(0),11)+652*eq(ld(0),12)+726*eq(ld(0),13)+800*eq(ld(0),14)+874*eq(ld(0),15)+949*eq(ld(0),16)+1023*eq(ld(0),17))' 
-pix_fmt yuv444p10le -crf 10 -c:v h264 -t 5 -y VLog_10bit.mov
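(The factor 64 simply shifts the 10-bit code values 0..1023 into the
16-bit gray range 0..65472, because 2^16 / 2^10 = 64.)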


The video looks perfect when played with FFplay. The leftmost bars are 
indistinguishable on an 8-bit monitor because the levels are too close
together, but the rightmost bars look as expected.


However when I play the same video with VLC, the two brightest bars at 
the right side have the same shade of gray. I don't understand why. Is 
this a problem in my video, or is it a problem in VLC?


When I extract a 16-bit PNG image from the video, it looks as expected
with 18 levels of gray.

ffmpeg -i VLog_10bit.mov -frames 1 -y out.png

Unrelated: I did try to add the oscilloscope filter at the end of the 
filter chain, but it seems it doesn't work with 16-bit data. There 
should be a warning or an error message. oscilloscope=tw=1:s=1


Michael



C:\Users\astro\Desktop\Test_10-bit>c:\ffmpeg\ffmpeg -f lavfi -i 
nullsrc=s=svga,format=gray16 -lavfi 
geq=lum='st(0,trunc(18*X/W));64*(128*eq(ld(0),0)+132*eq(ld(0),1)+136*eq(ld(0),2)+144*eq(ld(0),3)+160*eq(ld(0),4)+192*eq(ld(0),5)+240*eq(ld(0),6)+298*eq(ld(0),7)+363*eq(ld(0),8)+433*eq(ld(0),9)+505*eq(ld(0),10)+578*eq(ld(0),11)+652*eq(ld(0),12)+726*eq(ld(0),13)+800*eq(ld(0),14)+874*eq(ld(0),15)+949*eq(ld(0),16)+1023*eq(ld(0),17))' 
-pix_fmt yuv444p10le -crf 10 -c:v h264 -t 10 -y VLog_10bit.mov
ffmpeg version N-99135-gaa8935b395-2020-09-13-gyan-beta2 Copyright (c) 
2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev1, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 
--enable-fontconfig --enable-gnutls --enable-iconv --enable-libdav1d 
--enable-libbluray --enable-libfreetype --enable-libmp3lame 
--enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenjpeg --enable-libopus --enable-libshine 
--enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora 
--enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg 
--enable-lzma --enable-zlib --enable-gmp --enable-libvidstab 
--enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc 
--enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom 
--enable-libgsm --enable-librav1e --enable-libsvtav1 --enable-avisynth 
--enable-libopenmpt --enable-chromaprint --enable-frei0r 
--enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite 
--enable-libfribidi --enable-libgme --enable-libilbc --enable-libmodplug 
--enable-librubberband --enable-libssh --enable-libzmq --enable-libzvbi 
--enable-ladspa --enable-libglslang --enable-vulkan --disable-w32threads 
--disable-autodetect --enable-libmfx --enable-ffnvcodec 
--enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc 
--enable-nvdec --enable-dxva2 --enable-amf --enable-static

  libavutil  56. 58.100 / 56. 58.100
  libavcodec 58.106.100 / 58.106.100
  libavformat    58. 54.100 / 58. 54.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
Input #0, lavfi, from 'nullsrc=s=svga,format=gray16':
  Duration: N/A, start: 0.00, bitrate: N/A
    Stream #0:0: Video: rawvideo (Y1[0][16] / 0x10003159), gray16le, 
800x600 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc

Stream mapping:
  Stream #0:0 (rawvideo) -> geq
  geq -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
[libx264 @ 018100489740] using SAR=1/1
[libx264 @ 018100489740] using cpu capabilities: MMX2 SSE2Fast SSSE3 
SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 018100489740] profile High 4:4:4 Predictive, level 3.1, 
4:4:4, 10-bit
[libx264 @ 018100489740] 264 - core 161 r3018 db0d417 - H.264/MPEG-4 
AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - 
options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 
psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 
8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=12 
lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 
bluray_compat=0 

Re: [FFmpeg-user] Question about -noauto_conversion_filters

2020-09-14 Thread Michael Koch

Am 14.09.2020 um 13:24 schrieb Gyan Doshi:



On 14-09-2020 03:41 pm, Michael Koch wrote:

Am 14.09.2020 um 11:26 schrieb Gyan Doshi:



On 14-09-2020 02:47 pm, Michael Koch wrote:
ffmpeg -v verbose -f lavfi -i 
testsrc2=s=svga:d=5,format=yuv422p10le -vf 
format=rgb48le,lut3d="VLog_to_V709.cube",format=yuv422p10le 
-noauto_conversion_filters -pix_fmt yuv422p10le -c:v h264 -y out.mov 


Format conversion is carried out by libswscale and auto conversion 
inserts the scale filter.


So,

    ffmpeg -v verbose -f lavfi -i 
testsrc2=s=svga:d=5,format=yuv422p10le -vf 
scale,format=rgb48le,lut3d="VLog_to_V709.cube",scale 
-noauto_conversion_filters -pix_fmt yuv422p10le -c:v h264 -y out.mov


The final format filter is redundant with -pix_fmt, so I removed one 
of them.


Thank you, with "scale" it works fine. Although it's hard to 
understand what "scale" (without any options) is actually doing.


I have another question. Is this the correct and easiest way to make 
a 10-bit test video?

-f lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le

The documentation says:
"The testsrc2 source is similar to testsrc, but supports more pixel
formats instead of just rgb24. This allows using it as an input for
other tests without requiring a format conversion."

But in my above command, I think "format=yuv422p10le" is a format 
conversion.


Each filter presents a list of input formats they can work with and a 
list of output formats they can directly generate. The framework 
inspects adjacent filters and sets a compatible common format for the 
outputs and inputs when possible. If not, it sets one of the available 
output formats for the preceding filter and one from input formats for 
the following filter and inserts a scale filter to convert  between 
those. This process is format negotiation. The format filter doesn't 
carry out the conversion itself - it inserts scale which in turn 
invokes libswscale. scale without any args defaults to the source W 
and H. But for pixel formats, its output format is constrained by the 
following format filter. That triggers a format conversion by libswscale.


Thanks for the good explanation!
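For anyone who wants to watch this negotiation happen, a minimal sketch
(any recent build should do):

ffmpeg -v verbose -f lavfi -i testsrc2=d=1,format=yuv422p10le -vf format=rgb48le -f null -

The verbose log then shows the auto-inserted auto_scaler filter and the
formats it converts between.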

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Question about -noauto_conversion_filters

2020-09-14 Thread Michael Koch

Am 14.09.2020 um 11:26 schrieb Gyan Doshi:



On 14-09-2020 02:47 pm, Michael Koch wrote:
ffmpeg -v verbose -f lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le 
-vf format=rgb48le,lut3d="VLog_to_V709.cube",format=yuv422p10le 
-noauto_conversion_filters -pix_fmt yuv422p10le -c:v h264 -y out.mov 


Format conversion is carried out by libswscale and auto conversion 
inserts the scale filter.


So,

    ffmpeg -v verbose -f lavfi -i 
testsrc2=s=svga:d=5,format=yuv422p10le -vf 
scale,format=rgb48le,lut3d="VLog_to_V709.cube",scale 
-noauto_conversion_filters -pix_fmt yuv422p10le -c:v h264 -y out.mov


The final format filter is redundant with -pix_fmt, so I removed one 
of them.


Thank you, with "scale" it works fine. Although it's hard to understand 
what "scale" (without any options) is actually doing.


I have another question. Is this the correct and easiest way to make a 
10-bit test video?

-f lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le

The documentation says:
"The testsrc2 source is similar to testsrc, but supports more pixel
formats instead of just rgb24. This allows using it as an input for
other tests without requiring a format conversion."

But in my above command, I think "format=yuv422p10le" is a format 
conversion.


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Question about -noauto_conversion_filters

2020-09-14 Thread Michael Koch

Am 13.09.2020 um 20:18 schrieb Michael Koch:

Hello all,

I'm just testing the new -noauto_conversion_filters option.

As the first step, I'm running this command line (without the new 
option):


ffmpeg -v verbose -f lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le 
-vf lut3d="VLog_to_V709.cube" -pix_fmt yuv422p10le -c:v h264 -y out.mov


This is running fine and here is the console output:

C:\Users\astro\Desktop\Format_Test>c:\ffmpeg2\ffmpeg -v verbose -f 
lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le -vf 
lut3d="VLog_to_V709.cube" -pix_fmt yuv422p10le -c:v h264 -y out.mov
ffmpeg version N-99135-gaa8935b395-2020-09-13-gyan-beta2 Copyright (c) 
2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev1, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 
--enable-fontconfig --enable-gnutls --enable-iconv --enable-libdav1d 
--enable-libbluray --enable-libfreetype --enable-libmp3lame 
--enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenjpeg --enable-libopus --enable-libshine 
--enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora 
--enable-libtwolame --enable-libvpx --enable-libwavpack 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 
--enable-libzimg --enable-lzma --enable-zlib --enable-gmp 
--enable-libvidstab --enable-libvmaf --enable-libvorbis 
--enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex 
--enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e 
--enable-libsvtav1 --enable-avisynth --enable-libopenmpt 
--enable-chromaprint --enable-frei0r --enable-libbs2b --enable-libcaca 
--enable-libcdio --enable-libflite --enable-libfribidi --enable-libgme 
--enable-libilbc --enable-libmodplug --enable-librubberband 
--enable-libssh --enable-libzmq --enable-libzvbi --enable-ladspa 
--enable-libglslang --enable-vulkan --disable-w32threads 
--disable-autodetect --enable-libmfx --enable-ffnvcodec 
--enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc 
--enable-nvdec --enable-dxva2 --enable-amf --enable-static

  libavutil  56. 58.100 / 56. 58.100
  libavcodec 58.106.100 / 58.106.100
  libavformat    58. 54.100 / 58. 54.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[Parsed_testsrc2_0 @ 017dee1d1e80] size:800x600 rate:25/1 
duration:5.00 sar:1/1

Input #0, lavfi, from 'testsrc2=s=svga:d=5,format=yuv422p10le':
  Duration: N/A, start: 0.00, bitrate: N/A
    Stream #0:0: Video: rawvideo, 1 reference frame (Y3[10][10] / 
0xA0A3359), yuv422p10le, 800x600 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 
tbc

Matched encoder 'libx264' for codec 'h264'.
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[graph 0 input from stream 0:0 @ 017df0369580] w:800 h:600 
pixfmt:yuv422p10le tb:1/25 fr:25/1 sar:1/1

[auto_scaler_0 @ 017dee220180] w:iw h:ih flags:'bicubic' interl:0
[Parsed_lut3d_0 @ 017dee2a7540] auto-inserting filter 
'auto_scaler_0' between the filter 'graph 0 input from stream 0:0' and 
the filter 'Parsed_lut3d_0'

[auto_scaler_1 @ 017dee220280] w:iw h:ih flags:'bicubic' interl:0
[format @ 017dee21f880] auto-inserting filter 'auto_scaler_1' 
between the filter 'Parsed_lut3d_0' and the filter 'format'
[auto_scaler_0 @ 017dee220180] w:800 h:600 fmt:yuv422p10le sar:1/1 
-> w:800 h:600 fmt:rgb48le sar:1/1 flags:0x4
[auto_scaler_1 @ 017dee220280] w:800 h:600 fmt:rgb48le sar:1/1 -> 
w:800 h:600 fmt:yuv422p10le sar:1/1 flags:0x4

[libx264 @ 017dee1d8300] using SAR=1/1
[libx264 @ 017dee1d8300] using cpu capabilities: MMX2 SSE2Fast 
SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2

[libx264 @ 017dee1d8300] profile High 4:2:2, level 3.1, 4:2:2, 10-bit
[libx264 @ 017dee1d8300] 264 - core 161 r3018 db0d417 - 
H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - 
http://www.videolan.org/x264.html - options: cabac=1 ref=3 
deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 
mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 
deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 
lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 
bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 
b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 
keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf 
mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=81 qpstep=4 ip_ratio=1.40 
aq=1:1.00

Output #0, mov, to 'out.mov':
  Metadata:
    encoder : Lavf58.54.100
    Stream #0:0: Video: h264 (libx264), 1 reference frame (avc1 / 
0x31637661), yuv422p10le, 800x600 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 
12800 tbn, 25 tbc

    Metadata:
  encoder : Lavc58.106.100 libx264
    Side data:
  cpb: bitrate max/min/avg: 0/0/0 buffer si

[FFmpeg-user] Question about -noauto_conversion_filters

2020-09-13 Thread Michael Koch

Hello all,

I'm just testing the new -noauto_conversion_filters option.

As the first step, I'm running this command line (without the new option):

ffmpeg -v verbose -f lavfi -i testsrc2=s=svga:d=5,format=yuv422p10le -vf 
lut3d="VLog_to_V709.cube" -pix_fmt yuv422p10le -c:v h264 -y out.mov


This is running fine and here is the console output:

C:\Users\astro\Desktop\Format_Test>c:\ffmpeg2\ffmpeg -v verbose -f lavfi 
-i testsrc2=s=svga:d=5,format=yuv422p10le -vf lut3d="VLog_to_V709.cube" 
-pix_fmt yuv422p10le -c:v h264 -y out.mov
ffmpeg version N-99135-gaa8935b395-2020-09-13-gyan-beta2 Copyright (c) 
2000-2020 the FFmpeg developers

  built with gcc 10.2.0 (Rev1, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 
--enable-fontconfig --enable-gnutls --enable-iconv --enable-libdav1d 
--enable-libbluray --enable-libfreetype --enable-libmp3lame 
--enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb 
--enable-libopenjpeg --enable-libopus --enable-libshine 
--enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora 
--enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp 
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg 
--enable-lzma --enable-zlib --enable-gmp --enable-libvidstab 
--enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc 
--enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom 
--enable-libgsm --enable-librav1e --enable-libsvtav1 --enable-avisynth 
--enable-libopenmpt --enable-chromaprint --enable-frei0r 
--enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite 
--enable-libfribidi --enable-libgme --enable-libilbc --enable-libmodplug 
--enable-librubberband --enable-libssh --enable-libzmq --enable-libzvbi 
--enable-ladspa --enable-libglslang --enable-vulkan --disable-w32threads 
--disable-autodetect --enable-libmfx --enable-ffnvcodec 
--enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc 
--enable-nvdec --enable-dxva2 --enable-amf --enable-static

  libavutil  56. 58.100 / 56. 58.100
  libavcodec 58.106.100 / 58.106.100
  libavformat    58. 54.100 / 58. 54.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[Parsed_testsrc2_0 @ 017dee1d1e80] size:800x600 rate:25/1 
duration:5.00 sar:1/1

Input #0, lavfi, from 'testsrc2=s=svga:d=5,format=yuv422p10le':
  Duration: N/A, start: 0.00, bitrate: N/A
    Stream #0:0: Video: rawvideo, 1 reference frame (Y3[10][10] / 
0xA0A3359), yuv422p10le, 800x600 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc

Matched encoder 'libx264' for codec 'h264'.
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[graph 0 input from stream 0:0 @ 017df0369580] w:800 h:600 
pixfmt:yuv422p10le tb:1/25 fr:25/1 sar:1/1

[auto_scaler_0 @ 017dee220180] w:iw h:ih flags:'bicubic' interl:0
[Parsed_lut3d_0 @ 017dee2a7540] auto-inserting filter 
'auto_scaler_0' between the filter 'graph 0 input from stream 0:0' and 
the filter 'Parsed_lut3d_0'

[auto_scaler_1 @ 017dee220280] w:iw h:ih flags:'bicubic' interl:0
[format @ 017dee21f880] auto-inserting filter 'auto_scaler_1' 
between the filter 'Parsed_lut3d_0' and the filter 'format'
[auto_scaler_0 @ 017dee220180] w:800 h:600 fmt:yuv422p10le sar:1/1 
-> w:800 h:600 fmt:rgb48le sar:1/1 flags:0x4
[auto_scaler_1 @ 017dee220280] w:800 h:600 fmt:rgb48le sar:1/1 -> 
w:800 h:600 fmt:yuv422p10le sar:1/1 flags:0x4

[libx264 @ 017dee1d8300] using SAR=1/1
[libx264 @ 017dee1d8300] using cpu capabilities: MMX2 SSE2Fast SSSE3 
SSE4.2 AVX FMA3 BMI2 AVX2

[libx264 @ 017dee1d8300] profile High 4:2:2, level 3.1, 4:2:2, 10-bit
[libx264 @ 017dee1d8300] 264 - core 161 r3018 db0d417 - H.264/MPEG-4 
AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - 
options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 
psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 
8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 
threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 
interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 
b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 
keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf 
mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=81 qpstep=4 ip_ratio=1.40 
aq=1:1.00

Output #0, mov, to 'out.mov':
  Metadata:
    encoder : Lavf58.54.100
    Stream #0:0: Video: h264 (libx264), 1 reference frame (avc1 / 
0x31637661), yuv422p10le, 800x600 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 
12800 tbn, 25 tbc

    Metadata:
  encoder : Lavc58.106.100 libx264
    Side data:
  cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
No more output streams to write to, finishing.

Re: [FFmpeg-user] Pixel format: default and filter?

2020-09-13 Thread Michael Koch

Am 13.09.2020 um 17:27 schrieb amin...@mailbox.org:

On Sun, Sep 13, 2020 at 07:06:56AM -0400, Edward Park wrote:

Hi,


ffprobe now reports out.mov being yuv420p. Is this an implicit conversion to a 
lower bit depth?

It's just the default output format for overlay. It's commonly used for stuff 
across colorspaces (like yuv420p video and argb png logos overlaid) especially 
with alpha.

You can set the format option in the filter itself to force the output
format. I assume it doesn't do any conversions internally when you
overlay two sources with the same format and no alpha.


Thanks both, that's helpful. How can I determine the pixel formats that ffmpeg 
has chosen for the filters' input and output pads?


Add -v verbose to the command line; then you get a longer console
listing. Look for the green lines.
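For example (in.mp4 and logo.png are placeholders):

ffmpeg -v verbose -i in.mp4 -i logo.png -lavfi overlay -f null -

The lines of the auto-inserted scalers show the pixel format on each
side, e.g. "fmt:yuv420p ... -> fmt:...", which tells you what the
filters negotiated.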


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem with time-lapse

2020-09-10 Thread Michael Koch

Am 10.09.2020 um 19:10 schrieb Jippo12:

Hi!

Yes,  I was confused and didn't realize that ffmpeg can do all this :)
because I first made a python script that creates star trails videos and
they are copied when processing the next step.

But for normal video, I use

ffmpeg -i /mnt/ramdisk3g/workdir/%d.jpg -r 16 -vcodec mpeg4 -qscale 1 -y
-filter:v "framestep=1,setpts=1.0*PTS" /mnt/ramdisk/mp4/

So I left out the image multiplying step (on the Python side) and now
there is only 1 of each image in the workdir. I dropped tblend because
it makes the image blur or something.

https://drive.google.com/file/d/1HfG9WsUtE70DZT62SsqIUeIiZ9ZMwvbR/view?usp=sharing

I'm going to use the above to finish star trails videos later too.


You can also use the zoompan filter for duplicating the frames and then
the framerate filter for crossfading between successive frames. I've
described that in chapter 2.3 of my book:

http://www.astro-electronic.de/FFmpeg_Book.pdf
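A minimal sketch of the idea (the numbers are only an example): read
the images at a low rate and let the framerate filter blend up to the
output rate,

ffmpeg -framerate 5 -i img%04d.jpg -vf framerate=fps=25 -y out.mp4

or first duplicate each frame with zoompan (d=N) and then crossfade
with framerate, as described in the book.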

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Convert DNG

2020-09-07 Thread Michael Koch

Am 07.09.2020 um 10:41 schrieb Paul B Mahol:

On Mon, Sep 07, 2020 at 09:12:47AM +0200, Michael Koch wrote:

Am 07.09.2020 um 08:46 schrieb Frei Sibylle Nora:

Hello,


I'm working in an archive where we digitize analog movies. Our master
files are usually DPX, but now we get a new scanner which saves the
movies in DNG files. Is there a possibility to convert the DNG files
with FFmpeg to DPX files?

Do you mean CinemaDNG? That's not the same as DNG. DNG is a file format for
RAW still images, while CinemaDNG is for videos.
FFmpeg has an undocumented decoder for DNG images, but it's in an early
experimental state and doesn't work with most images (in fact, I didn't find
any DNG image that could be decoded correctly).

Again spreading misinformation, this needs to stop.

I kindly ask people who do not wish this project well
to just unsubscribe from the list.


What's wrong in my answer?

Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Convert DNG

2020-09-07 Thread Michael Koch

Am 07.09.2020 um 08:46 schrieb Frei Sibylle Nora:

Hello,


I'm working in an archive where we digitize analog movies. Our master
files are usually DPX, but now we get a new scanner which saves the
movies in DNG files. Is there a possibility to convert the DNG files
with FFmpeg to DPX files?


Do you mean CinemaDNG? That's not the same as DNG. DNG is a file format 
for RAW still images, while CinemaDNG is for videos.
FFmpeg has an undocumented decoder for DNG images, but it's in an early 
experimental state and doesn't work with most images (in fact, I didn't 
find any DNG image that could be decoded correctly).

I don't know if FFmpeg has a decoder for CinemaDNG.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] builds for Windows

2020-09-03 Thread Michael Koch

Am 03.09.2020 um 07:42 schrieb Gyan Doshi:



On 02-09-2020 11:53 pm, Michael Koch wrote:
"ffmpeg.zeranoe.com will close on Sep 18, 2020, and all builds will 
be removed."


Any idea where we can get builds for Windows after this date?


I plan to provide 64-bit static builds starting the 18th. Will update 
doc with link once I make arrangements.




That's very good!

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] builds for Windows

2020-09-02 Thread Michael Koch
"ffmpeg.zeranoe.com will close on Sep 18, 2020, and all builds will be 
removed."


Any idea where we can get builds for Windows after this date?

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem while converting DNG sequnece to video file

2020-08-28 Thread Michael Koch

Am 21.08.2020 um 21:34 schrieb Michael Koch:

Am 20.08.2020 um 09:57 schrieb Masala Kentaro:

hello!

I have problems using FFmpeg while trying to convert a DNG sequence
into an mp4/mov/avi video file.
While converting, I also need to downgrade the resolution of the video
from original 6016x3200 to 2030x1080.

First problem: I got almost black screen in the resulting video. Had
to play with gamma and brightness options, but there were still
obvious problems with saturation and contrast after that.
(I tried simple DNG to PNG conversion of a single file, and it also
results in almost black screen, command:
ffmpeg -v 9 -loglevel 99 -i e:\12345\sample_r1.dng e:\output.tga)



Let's not forget that the original poster of this thread had a 
question. I can't answer it, but I did try to reproduce it.
I took a RAW CR2 image from my Canon 6D, converted it to DNG with 
Adobe DNG Converter 12.4, and then tried to convert it to JPG with 
FFmpeg:


ffmpeg -i IMG_3459.dng out_6D.jpg

This didn't work, see the console output below.
I also tested with a RAW image from a Canon 5D-MK4, with the same 
negative result.


I made another test with a DNG image which came directly out of a Pentax 
K5 camera. Same negative result.
I also tested a few of the images from the original poster. With these 
images ffmpeg doesn't throw an error message, but the output image is 
either black, or has very low contrast and saturation. Just as he described.

It seems FFmpeg's DNG decoder has many problems.
Workaround: Convert the DNG images to PNG or (if 8-bit is sufficient) to 
high quality JPG. Then it should be no problem for FFmpeg to read them.
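For example with dcraw, if it's installed (the -T switch writes a TIFF
instead of a PPM):

dcraw -T input.dng
ffmpeg -i input.tiff out.png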


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Some questions about PTS

2020-08-28 Thread Michael Koch

Am 28.08.2020 um 13:38 schrieb Edward Park:

Hi,


Let's assume the framerate is constant. For example, I want to delay a video by 
5 frames and then hstack the original video and the delayed version:

ffmpeg -i test.mp4 -vf "split[a][b];[b]setpts=PTS+5/(FR*TB)[c];[a][c]hstack" -y 
out.mp4


I would try tpad=start=5, but I'm not sure what happens for the first 5 
frames... If your example works I'm pretty sure it would work.


ffmpeg -i test.mp4 -vf "split[a],tpad=start=5[b];[a][b]hstack" -y out.mp4


Yes, that also works fine. The first 5 frames are black, or the
first frame can be cloned if "start_mode=clone" is added.
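So the complete command with cloning would be:

ffmpeg -i test.mp4 -vf "split[a][b];[b]tpad=start=5:start_mode=clone[c];[a][c]hstack" -y out.mp4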


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Some questions about PTS

2020-08-28 Thread Michael Koch

Am 28.08.2020 um 11:25 schrieb Edward Park:

Hello,

I am not confident about this info but I've always thought the timebase is usually the 
reciprocal of the framerate *or smaller*. As in, the duration of a frame can be 
represented accurately enough as the difference between the timestamps, which aren't 
counted using seconds, but "ticks" in whatever timebase. So smaller fractions 
could be used as the timebase as long as the system is capable, and on the other hand, 
when you create low-fps video like 4fps, obviously the timebase isn't going to be 1/4, 
it'll probably have the same timebase as any other export. (I think of it as an analog 
wall clock, except it doesn't go tick-tock every second, it goes tick-tock every 1/9 
seconds.)

Actually, I think for the codec timebase, it is more common for it to be 1/2 
the reciprocal of the frame rate; if that's codec-specific, I don't know why 
that is. Maybe you've also seen some setpts examples where you divide/multiply 
something by 2 for some arcane reason? Hopefully someone can explain further..

When you delay some frames by whatever amount, it necessarily effects a change 
in the frame rate (but not the timebase). I'm not sure where the FR value for 
setpts comes from, maybe it wouldn't matter if it stays the same as the nominal 
framerate if indicated by the media, but if it is something that can change, 
maybe the effective rate at the end of the chain, obviously it wouldn't work as 
expected.

Just for the sake of curiosity, what has you looking to delay frames using 
setpts? I feel there are easier methods.


Let's assume the framerate is constant. For example, I want to delay a 
video by 5 frames and then hstack the original video and the delayed 
version:


ffmpeg -i test.mp4 -vf 
"split[a][b];[b]setpts=PTS+5/(FR*TB)[c];[a][c]hstack" -y out.mp4


Are there other / better / easier methods to do the same thing?

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-user] Some questions about PTS

2020-08-28 Thread Michael Koch

Hello all,

please comment if the following is right or wrong:

-- The timebase of a video (TB in setpts filter) is expressed in the 
unit [s] (seconds).
-- The framerate of a video (FR in setpts filter) is expressed in the 
unit [s^-1] (1/seconds).
-- In many cases the timebase is the reciprocal of the framerate, but 
this isn't always the case.
-- If the timebase is the reciprocal of the framerate, a stream can be 
delayed by x frames using setpts=PTS+x
-- In the more general case, for arbitrary timebase and framerate, a
stream can be delayed by x frames using setpts=PTS+x/(FR*TB) (see the
worked example below)
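Worked example for the last point: with FR = 25 and the MOV-typical
TB = 1/12800 (see the "12800 tbn" in the console outputs above), a
delay of x = 5 frames is setpts=PTS+5/(25*(1/12800)) = PTS+2560, i.e.
2560 timebase ticks, which is 5 frames of 512 ticks each.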


Michael


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-22 Thread Michael Koch

Am 22.08.2020 um 11:03 schrieb Nicolas George:

Carl Zwanzig (12020-08-21):

IMHO, it would make more sense to remove the -all file as it's unweildly and
doesn't seem to contain to contain everything anyway.

It does contain everything, or there is a bug somewhere. And it is
useful precisely because it contains everything: it stays.


I agree that it's useful to have one file which contains _all_ 
documentation. Searching in one file is easier than searching in
multiple places. But the content of general.html is missing in
ffmpeg-all.html (and also in ffplay-all.html and ffprobe-all.html).


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 21:38 schrieb Gyan Doshi:



Git mystifies me, too. I also did a lot of reading, but it was written
by Martians.


For the purposes of only generating doc patches, you only need to know 
and apply a handful of git commands.
Most of the supposed difficulties of working with git revolves around 
nonlinear histories. I see very little potential for that with pure 
docs work.


You aren't tied to Linux either. ffmpeg is perfectly compilable and 
testable on Windows. I do it regularly, but even that's not needed here.


It's nice to know help is available, and that's appreciated. But let 
me first look around at my old notes and see what all needs to be done.
What can be done in the meantime is for someone to start a wiki or 
blog or forum and solicit, collect & curate requests for documentation 
changes.

What do users most want answered or documented first?


I have collected some things in chapter 2.115 of my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

An important first step would be to re-write git-howto.html and explain
the git commands in more detail, the whole workflow step by step with
many examples. Only the handful of commands that we really need. Also
the newly introduced terms should be explained (cloning, push back,
remote repository, checkout, tracked branch, rebasing, local branches,
commit, master tree, merge commits, patchset).


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem while converting DNG sequnece to video file

2020-08-21 Thread Michael Koch

Am 20.08.2020 um 09:57 schrieb Masala Kentaro:

hello!

I have problems using FFmpeg while trying to convert a DNG sequence
into an mp4/mov/avi video file.
While converting, I also need to downgrade the resolution of the video
from original 6016x3200 to 2030x1080.

First problem: I got almost black screen in the resulting video. Had
to play with gamma and brightness options, but there were still
obvious problems with saturation and contrast after that.
(I tried simple DNG to PNG conversion of a single file, and it also
results in almost black screen, command:
ffmpeg -v 9 -loglevel 99 -i e:\12345\sample_r1.dng e:\output.tga)



Let's not forget that the original poster of this thread had a question. 
I can't answer it, but I did try to reproduce it.
I took a RAW CR2 image from my Canon 6D, converted it to DNG with Adobe 
DNG Converter 12.4, and then tried to convert it to JPG with FFmpeg:


ffmpeg -i IMG_3459.dng out_6D.jpg

This didn't work, see the console output below.
I also tested with a RAW image from a Canon 5D-MK4, with the same 
negative result.


Michael


C:\Users\astro\Desktop\dng>c:\ffmpeg\ffmpeg -i IMG_3459.dng out_6D.jpg
ffmpeg version git-2020-08-21-412d63f Copyright (c) 2000-2020 the FFmpeg 
developers

  built with gcc 10.2.1 (GCC) 20200805
  configuration: --enable-gpl --enable-version3 --enable-sdl2 
--enable-fontconfig --enable-gnutls --enable-iconv --enable-libass 
--enable-libdav1d --enable-libbluray --enable-libfreetype 
--enable-libmp3lame --enable-libopencore-amrnb 
--enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus 
--enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt 
--enable-libtheora --enable-libtwolame --enable-libvpx 
--enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 
--enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib 
--enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis 
--enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex 
--enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e 
--enable-libsvtav1 --disable-w32threads --enable-libmfx 
--enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va 
--enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth 
--enable-libopenmpt --enable-amf

  libavutil  56. 58.100 / 56. 58.100
  libavcodec 58.100.100 / 58.100.100
  libavformat    58. 51.100 / 58. 51.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter 7. 87.100 /  7. 87.100
  libswscale  5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[tiff @ 01f73e39f440] Assuming black level pattern values are identical
[tiff @ 01f73e39f440] Tiled TIFF is not allowed to strip
[tiff_pipe @ 01f73e39d400] Stream #0: not enough frames to estimate 
rate; consider increasing probesize

[tiff_pipe @ 01f73e39d400] decoding for stream 0 failed
[tiff_pipe @ 01f73e39d400] Could not find codec parameters for 
stream 0 (Video: tiff, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 
'probesize' (500) options

Input #0, tiff_pipe, from 'IMG_3459.dng':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: tiff, none, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (tiff (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[tiff @ 01f73e3a5f80] Assuming black level pattern values are identical
[tiff @ 01f73e3a5f80] Tiled TIFF is not allowed to strip
Error while decoding stream #0:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!


___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 18:41 schrieb Nicolas George:

Michael Koch (12020-08-21):

The page https://www.ffmpeg.org/git-howto.html  was obviously not written
for beginners. Many terms are introduced without explaining them, and
examples are missing.

I think it was written for people who were familiar with subversion.
Have you considered following a general-purpose Git tutorial?


yes, I did read that some time ago, without success.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 17:06 schrieb Gyan Doshi:



On 21-08-2020 07:30 pm, Michael Koch wrote:

Am 21.08.2020 um 15:42 schrieb Mark Filipak:

On 08/21/2020 06:44 AM, Gyan Doshi wrote:



On 21-08-2020 04:03 pm, Michael Koch wrote:

Am 21.08.2020 um 12:09 schrieb Gyan Doshi:



On 21-08-2020 02:36 pm, Michael Koch wrote:

Am 21.08.2020 um 10:50 schrieb Paul B Mahol:

On 8/21/20, Michael Koch  wrote:
Please add this to the documentation (for example in chapter 
20.9 image2)


FFmpeg supports the following image formats: BMP, DNG, GIF, 
JPG, PAM,
PGM (binary P5 files are read/write, ascii P2 files are 
read-only),

PGMYUV, PNG, PPM, TGA, TIFF.

Sorry but that list is incomplete.


Then please add what's missing.


FFmpeg's documentation is very incomplete. I had a rough roadmap 
at the start of the year for plugging the gaps, but other events 
transpired.


I hope to start a systematic effort in September.


May I assist you?


I'm also ready to help, if it can be done without git.


It would be cumbersome, having someone else merge the changes in a git 
repo.


What concerns do you have about using git?


I don't understand how git works. I tried it some time ago, but it
didn't work, so I gave up. No good instructions were available. A
second computer with Linux seems to be required. Too complicated. I
have no experience with Linux. I know C and C# programming, but I'm not
a professional programmer. The page https://www.ffmpeg.org/git-howto.html
was obviously not written for beginners. Many terms are introduced
without explaining them, and examples are missing.


Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 15:42 schrieb Mark Filipak:

On 08/21/2020 06:44 AM, Gyan Doshi wrote:



On 21-08-2020 04:03 pm, Michael Koch wrote:

Am 21.08.2020 um 12:09 schrieb Gyan Doshi:



On 21-08-2020 02:36 pm, Michael Koch wrote:

Am 21.08.2020 um 10:50 schrieb Paul B Mahol:

On 8/21/20, Michael Koch  wrote:
Please add this to the documentation (for example in chapter 
20.9 image2)


FFmpeg supports the following image formats: BMP, DNG, GIF, JPG, 
PAM,

PGM (binary P5 files are read/write, ascii P2 files are read-only),
PGMYUV, PNG, PPM, TGA, TIFF.

Sorry but that list is incomplete.


Then please add what's missing.


FFmpeg's documentation is very incomplete. I had a rough roadmap at 
the start of the year for plugging the gaps, but other events 
transpired.


I hope to start a systematic effort in September.


May I assist you?


I'm also ready to help, if it can be done without git.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 12:44 schrieb Gyan Doshi:



On 21-08-2020 04:03 pm, Michael Koch wrote:

Am 21.08.2020 um 12:09 schrieb Gyan Doshi:



On 21-08-2020 02:36 pm, Michael Koch wrote:

Am 21.08.2020 um 10:50 schrieb Paul B Mahol:

On 8/21/20, Michael Koch  wrote:
Please add this to the documentation (for example in chapter 20.9 
image2)


FFmpeg supports the following image formats: BMP, DNG, GIF, JPG, 
PAM,

PGM (binary P5 files are read/write, ascii P2 files are read-only),
PGMYUV, PNG, PPM, TGA, TIFF.

Sorry but that list is incomplete.


Then please add what's missing.


FFmpeg's documentation is very incomplete. I had a rough roadmap at 
the start of the year for plugging the gaps, but other events 
transpired.


I hope to start a systematic effort in September.

Gyan

P.S. ffmpeg supports roughly a few dozen image formats. I'll add the 
missing ones to https://ffmpeg.org/general.html#Image-Formats


That's very interesting. I never found that "General Documentation" 
file because I thought "ffmpeg-all" contains what the filename 
implies: All. This is obviously not the case. The filename is 
misleading if the file doesn't contain all documentation. Wouldn't 
it be better to combine these two files into one file which really 
contains everything?


ffmpeg-all consolidates the documentation for all component options 
and the options of the ffmpeg core tool. Start exploring at 
https://ffmpeg.org/documentation.html


It would be helpful if you add a link in ffmpeg-all.html, chapter 20.9 
(image2) which points to general.html, chapter 2.2 where the supported 
image formats are listed.


Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 12:09 schrieb Gyan Doshi:



On 21-08-2020 02:36 pm, Michael Koch wrote:

Am 21.08.2020 um 10:50 schrieb Paul B Mahol:

On 8/21/20, Michael Koch  wrote:
Please add this to the documentation (for example in chapter 20.9 
image2)


FFmpeg supports the following image formats: BMP, DNG, GIF, JPG, PAM,
PGM (binary P5 files are read/write, ascii P2 files are read-only),
PGMYUV, PNG, PPM, TGA, TIFF.

Sorry but that list is incomplete.


Then please add what's missing.


FFmpeg's documentation is very incomplete. I had a rough roadmap at 
the start of the year for plugging the gaps, but other events transpired.


I hope to start a systematic effort in September.

Gyan

P.S. ffmpeg supports roughly a few dozen image formats. I'll add the 
missing ones to https://ffmpeg.org/general.html#Image-Formats


That's very interesting. I never found that "General Documentation" file 
because I thought "ffmpeg-all" contains what the filename implies: All. 
This is obviously not the case. The filename is misleading if the file 
doesn't contain all documentation. Wouldn't it be better to combine 
these two files into one file which really contains everything?


Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Am 21.08.2020 um 10:50 schrieb Paul B Mahol:

On 8/21/20, Michael Koch  wrote:

Please add this to the documentation (for example in chapter 20.9 image2)

FFmpeg supports the following image formats: BMP, DNG, GIF, JPG, PAM,
PGM (binary P5 files are read/write, ascii P2 files are read-only),
PGMYUV, PNG, PPM, TGA, TIFF.

Sorry but that list is incomplete.


Then please add what's missing.

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is format=rgb24 required after maskedmerge?

2020-08-21 Thread Michael Koch

Please add this to the documentation (for example in chapter 20.9 image2)

FFmpeg supports the following image formats: BMP, DNG, GIF, JPG, PAM, 
PGM (binary P5 files are read/write, ascii P2 files are read-only), 
PGMYUV, PNG, PPM, TGA, TIFF.


Michael
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

<    1   2   3   4   5   6   7   8   9   10   >