Re: [FFmpeg-user] Question about "normalize" filter

2023-01-30 Thread Michael Koch

On 30.01.2023 at 08:47, Paul B Mahol wrote:

On Mon, Jan 30, 2023 at 12:23 AM Michael Koch 
wrote:


On 29.01.2023 at 23:36, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 23:07, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 22:05, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 19:32, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

Hello,

if I understood the documentation correctly, the normalize filter maps
the darkest input pixel to blackpt and the brightest input pixel to
whitept:
darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input pixel
shall become white.
black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.
Is this possible?

Known workaround: Make sure that the input frame contains a black
pixel, by inserting one in a corner.

Try attached patch.

How must I set the options for the desired behaviour?

Set the first strength to the reverse of the second strength.
So 1.0 and 0.0, or 0.0 and 1.0.

I tried with strength=0:strength2=1, but the output isn't as expected.

I'm using this input image:
http://www.astro-electronic.de/flat.png

The pixel values are about 171 in the center and 107 in the top right
corner.
The center-to-corner ratio is 171 / 107 = 1.6.

In the output image I measure 248 in the center (which is almost as
expected, probably correct because I'm measuring the average of a 7x7
neighborhood), but I measure 122 in the top right corner.
The center-to-corner ratio is 248 / 122 = 2.03.
The corner is too dark.


I checked with the oscilloscope filter (s=1:tw=1:t=1:x=0): the far left
pixels (as they are darkest) are not changing (min values are the same
with and without the filter).
With default parameters and just strength(2) set to your values, the
darkest pixels are left untouched. I did not check the brightest
pixels' output, but they should be correct too.

But that's not the behaviour I need. All pixels shall be multiplied by
the same suitable constant, so that the brightest pixel becomes white.

Input center: 171
Input corner: 107

constant c = 255 / 171 = 1.49
Output center: 171 * c = 255
Output corner: 107 * c = 160
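This pure-scale mapping is easy to check with a few lines of shell arithmetic; a sketch using the measured values above (awk only performs the floating-point math):

```shell
# Pure scaling: out = in * (255 / brightest), so black (0) stays black.
brightest=171
for v in 171 107 0; do
  awk -v v="$v" -v m="$brightest" 'BEGIN { printf "%d -> %d\n", v, v * 255 / m + 0.5 }'
done
# 171 -> 255, 107 -> 160, 0 -> 0
```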


Normalization does not do that, and that functionality does not belong
in such a filter; it stretches the ranges of all pixel values so they
reach the maximal possible range.

Can't you just add an option that disables the minimum-finding
algorithm and sets the minimum to zero (= black)? That would do the job.


I just did, but for whatever reason you think it's incorrect.


You wrote:

"With default parameters and just strength(2) set to your values, so
the darkest pixels are left untouched."

However I want black to remain untouched. That's not the same, because the
darkest pixels in the image aren't black.

The workaround is to insert a black pixel before normalizing:
-vf drawbox=w=1:h=1:color=black,normalize
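A complete invocation of this workaround might look like the following sketch (flat.png and flat_norm.png are placeholder filenames; the single black pixel forces the frame minimum to 0, so normalize effectively only multiplies):

```shell
# Untested sketch: draw one black pixel in the top-left corner, then normalize.
# With the minimum forced to 0, normalize's stretch reduces to a pure scale.
ffmpeg -i flat.png -vf "drawbox=x=0:y=0:w=1:h=1:color=black:t=fill,normalize" -y flat_norm.png
```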

Michael

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] Question about "normalize" filter

2023-01-29 Thread Michael Koch

On 29.01.2023 at 23:36, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 23:07, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 22:05, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 19:32, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

Hello,

if I understood the documentation correctly, the normalize filter
maps
the darkest input pixel to blackpt and the brightest input pixel to
whitept:
darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input
pixel
shall become white.
black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.
Is this possible?

Known workaround: Make sure that the input frame contains a black
pixel,
by inserting one in a corner.

Try attached patch.

How must I set the options for the desired behaviour?

Set the first strength to the reverse of the second strength.
So 1.0 and 0.0, or 0.0 and 1.0.

I tried with strength=0:strength2=1, but the output isn't as expected.

I'm using this input image:
http://www.astro-electronic.de/flat.png

The pixel values are about 171 in the center and 107 in the top right
corner.
The center-to-corner ratio is 171 / 107 = 1.6.

In the output image I measure 248 in the center (which is almost as
expected, probably correct because I'm measuring the average of a 7x7
neighborhood), but I measure 122 in the top right corner.
The center-to-corner ratio is 248 / 122 = 2.03.
The corner is too dark.


I checked with the oscilloscope filter (s=1:tw=1:t=1:x=0): the far left
pixels (as they are darkest) are not changing (min values are the same
with and without the filter).
With default parameters and just strength(2) set to your values, the
darkest pixels are left untouched. I did not check the brightest
pixels' output, but they should be correct too.

But that's not the behaviour I need. All pixels shall be multiplied by
the same suitable constant, so that the brightest pixel becomes white.

Input center: 171
Input corner: 107

constant c = 255 / 171 = 1.49
Output center: 171 * c = 255
Output corner: 107 * c = 160


Normalization does not do that, and that functionality does not belong
in such a filter; it stretches the ranges of all pixel values so they
reach the maximal possible range.


Can't you just add an option that disables the minimum-finding
algorithm and sets the minimum to zero (= black)? That would do the job.


Let me explain why I need this kind of normalization.
Most lenses have vignetting. Especially in astronomical images,
vignetting must be corrected before any other image processing can be
done. For this purpose a flatfield image is taken with the same lens at
the same aperture, but with a uniform white screen in front of the lens.
Normally the flatfield image is exposed at roughly 50% gray level, to
avoid problems with nonlinearity near the white level. The linked image
is the flatfield.
Vignetting in an astronomical image is corrected by dividing the image
by the flatfield. That can be done with the blend filter.
If I use the flatfield as-is (with roughly 50% gray level), then the
image is effectively multiplied by a factor of 2. That's a problem
because bright pixels might get clipped. To minimize this problem, I
want to normalize the flatfield as close to the white level as
possible, before using it.
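This workflow might be sketched as two commands (all filenames are placeholders, and the operand order and scaling of blend's divide mode should be verified before relying on this):

```shell
# 1) Normalize the flatfield so its brightest pixel becomes white.
#    The inserted black pixel forces the minimum to 0, i.e. pure scaling.
ffmpeg -i flat.png -vf "drawbox=w=1:h=1:color=black,normalize" -y flat_norm.png

# 2) Divide the astronomical image by the normalized flatfield.
#    Untested sketch: blend's divide mode is assumed to compute first/second.
ffmpeg -i image.png -i flat_norm.png -lavfi "[0][1]blend=all_mode=divide" -y corrected.png
```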


Michael



Re: [FFmpeg-user] Question about "normalize" filter

2023-01-29 Thread Michael Koch

On 29.01.2023 at 23:07, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 22:05, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 19:32, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

Hello,

if I understood the documentation correctly, the normalize filter maps
the darkest input pixel to blackpt and the brightest input pixel to
whitept:
darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input pixel
shall become white.
black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.
Is this possible?

Known workaround: Make sure that the input frame contains a black
pixel,
by inserting one in a corner.

Try attached patch.

How must I set the options for the desired behaviour?

Set the first strength to the reverse of the second strength.
So 1.0 and 0.0, or 0.0 and 1.0.

I tried with strength=0:strength2=1, but the output isn't as expected.

I'm using this input image:
http://www.astro-electronic.de/flat.png

The pixel values are about 171 in the center and 107 in the top right
corner.
The center-to-corner ratio is 171 / 107 = 1.6.

In the output image I measure 248 in the center (which is almost as
expected, probably correct because I'm measuring the average of a 7x7
neighborhood), but I measure 122 in the top right corner.
The center-to-corner ratio is 248 / 122 = 2.03.
The corner is too dark.


I checked with the oscilloscope filter (s=1:tw=1:t=1:x=0): the far left
pixels (as they are darkest) are not changing (min values are the same
with and without the filter).
With default parameters and just strength(2) set to your values, the
darkest pixels are left untouched. I did not check the brightest
pixels' output, but they should be correct too.


But that's not the behaviour I need. All pixels shall be multiplied by 
the same suitable constant, so that the brightest pixel becomes white.


Input center: 171
Input corner: 107

constant c = 255 / 171 = 1.49
Output center: 171 * c = 255
Output corner: 107 * c = 160

Michael



Re: [FFmpeg-user] Question about "normalize" filter

2023-01-29 Thread Michael Koch

On 29.01.2023 at 22:05, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

On 29.01.2023 at 19:32, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

Hello,

if I understood the documentation correctly, the normalize filter maps
the darkest input pixel to blackpt and the brightest input pixel to
whitept:
darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input pixel
shall become white.
black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.
Is this possible?

Known workaround: Make sure that the input frame contains a black pixel,
by inserting one in a corner.

Try attached patch.

How must I set the options for the desired behaviour?

Set the first strength to the reverse of the second strength.
So 1.0 and 0.0, or 0.0 and 1.0.


I tried with strength=0:strength2=1, but the output isn't as expected.

I'm using this input image:
http://www.astro-electronic.de/flat.png

The pixel values are about 171 in the center and 107 in the top right 
corner.

The center-to-corner ratio is 171 / 107 = 1.6.

In the output image I measure 248 in the center (which is almost as 
expected, probably correct because I'm measuring the average of a 7x7 
neighborhood), but I measure 122 in the top right corner.

The center-to-corner ratio is 248 / 122 = 2.03.
The corner is too dark.

Michael



Re: [FFmpeg-user] Question about "normalize" filter

2023-01-29 Thread Michael Koch

On 29.01.2023 at 19:32, Paul B Mahol wrote:

On 1/29/23, Michael Koch  wrote:

Hello,

if I understood the documentation correctly, the normalize filter maps
the darkest input pixel to blackpt and the brightest input pixel to
whitept:
darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input pixel
shall become white.
black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.
Is this possible?

Known workaround: Make sure that the input frame contains a black pixel,
by inserting one in a corner.

Try attached patch.


How must I set the options for the desired behaviour?

Michael



[FFmpeg-user] Question about "normalize" filter

2023-01-29 Thread Michael Koch

Hello,

if I understood the documentation correctly, the normalize filter maps 
the darkest input pixel to blackpt and the brightest input pixel to whitept:

darkest pixel --> blackpt
brightest pixel --> whitept

However I need a slightly different mapping:
A black input pixel shall remain black, and the brightest input pixel 
shall become white.

black --> blackpt
brightest pixel --> whitept

In other words: Just multiply all pixels by a suitable constant. Don't
add or subtract anything.

Is this possible?

Known workaround: Make sure that the input frame contains a black pixel, 
by inserting one in a corner.


Michael


[FFmpeg-user] Mathematical Art

2023-01-28 Thread Michael Koch

Hello all,

this is not a question. This cool equation was designed by Hamid Naderi
Yeganeh. I only translated it into an FFmpeg command. Let it run and
see what comes out.


ffmpeg -f lavfi -i color=black:s=hd1080,format=gray -vf 
geq=lum='st(0,(X-W/3)/450);st(1,(H/2-Y)/450);255*gt(-7/20-1/PI*atan(175*pow((ld(0)-63/100),2)+3500*pow(abs(ld(1)-1/10),(26/10-6/10*ld(0)))+160*pow(sin(ld(0)+2/5),8)-50*pow(sin(ld(0)+4/5),100)-500)-exp(-2000*(pow((ld(0)-7/4),2)+pow((ld(1)-33/200),2)-1/1000))-exp(1000*(ld(0)-2/5*ld(1)-43/20))+10*(exp(-exp(1000*(-1/4+pow(ld(1)+1/2,2)))-300*(1/10+pow(ld(1),2))*pow((ld(0)-1/2+13/10+(9/100+23*-1/100)*(ld(1)+2/5)+1/10*pow(cos(ld(1)+7/10),20)),2))+exp(-exp(1000*(-1/4+pow(ld(1)+1/2,2)))-300*(1/10+pow(ld(1),2))*pow((ld(0)-2/2+13/10+(9/100+23*1/100)*(ld(1)+2/5)+1/10*pow(cos(ld(1)+7/10),20)),2))+exp(-exp(1000*(-1/4+pow(ld(1)+1/2,2)))-300*(1/10+pow(ld(1),2))*pow((ld(0)-3/2+13/10+(9/100+23*-1/100)*(ld(1)+2/5)+1/10*pow(cos(ld(1)+7/10),20)),2))+exp(-exp(1000*(-1/4+pow(ld(1)+1/2,2)))-300*(1/10+pow(ld(1),2))*pow((ld(0)-4/2+13/10+(9/100+23*1/100)*(ld(1)+2/5)+1/10*pow(cos(ld(1)+7/10),20)),2))),0)' 
-frames 1 -y out.png


You can find more mathematical art on his Facebook page:
https://www.facebook.com/HamidNaderiYeganeh

Michael



Re: [FFmpeg-user] Replace part of the audio

2023-01-23 Thread Michael Koch

On 22.01.2023 at 23:52, Reino Wijnsma wrote:

Hello Michael,

On 2023-01-22T18:50:20+0100, Michael Koch  wrote:

This command line works with asendcmd and astreamselect:

ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 astreamselect map 
1",asendcmd="6 astreamselect map 0",astreamselect=map=0 -y out.wav


However, with the amix filter I have no idea what the syntax is for the
string inside the string. It doesn't work.

ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 amix weights '0 
1'",amix=weights='1 0' -y out.wav

You've got your quotes all wrong. Always surround the complete filter-chain 
with double quotes and use single quotes for the individual filters.

ffmpeg -i audio1.wav -i audio2.wav -lavfi "asendcmd='4 astreamselect map 
1',asendcmd='6 astreamselect map 0',astreamselect=map=0" -y out.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi "asendcmd='4 amix weights '\\\'0 
1\\\',amix=weights='1 0'" -y out.wav


Thank you!

Michael



Re: [FFmpeg-user] Replace part of the audio

2023-01-22 Thread Michael Koch

On 22.01.2023 at 19:21, Paul B Mahol wrote:

On 1/22/23, Michael Koch  wrote:

On 22.01.2023 at 16:56, Michael Koch wrote:

On 22.01.2023 at 16:25, Paul B Mahol wrote:

On 1/22/23, Michael Koch  wrote:

On 21.01.2023 at 17:05, Paul B Mahol wrote:

On 1/19/23, Michael Koch  wrote:

On 19.01.2023 at 15:49, Alexander Bieliaev via ffmpeg-user wrote:

How can I replace a part of the audio from and to a specific time with
some other audio/sound (I want to replace it with beep in this case)?


Here is an example:

ffmpeg -lavfi sine=500:d=10 -y audio1.wav
ffmpeg -lavfi sine=2000:d=10 -y audio2.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi
[0]volume='1-between(t,4,6)':eval=frame[a];[1]volume='between(t,4,6)':eval=frame[b];[a][b]amix
-y out.wav

That is never going to give smooth transitions.

Also overly complicated, as it can be simplified with commands to the
amix filter.

If my example is too complicated, then please show a simpler example.


asendcmd=10.0 amix weights 'X Y',

X/Y being the wanted volume of the first/second input to the amix filter.

That doesn't work because the argument of asendcmd must be
encapsulated in quotes (because it contains spaces), and X Y must also
be encapsulated in quotes. Please show the whole command line.


This command line works with asendcmd and astreamselect:

ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 astreamselect map
1",asendcmd="6 astreamselect map 0",astreamselect=map=0 -y out.wav


However with amix filter I have no idea what's the syntax for the string
inside the string. It doesn't work.

ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 amix weights '0
1'",amix=weights='1 0' -y out.wav

There is a way to escape stuff.


Above you wrote that the command line can be simplified, and now 
escaping is required? That's too complicated. I'm out.


Michael



Re: [FFmpeg-user] Replace part of the audio

2023-01-22 Thread Michael Koch

On 22.01.2023 at 16:56, Michael Koch wrote:

On 22.01.2023 at 16:25, Paul B Mahol wrote:

On 1/22/23, Michael Koch  wrote:

On 21.01.2023 at 17:05, Paul B Mahol wrote:

On 1/19/23, Michael Koch  wrote:

On 19.01.2023 at 15:49, Alexander Bieliaev via ffmpeg-user wrote:

How can I replace a part of the audio from and to a specific time with
some other audio/sound (I want to replace it with beep in this case)?


Here is an example:

ffmpeg -lavfi sine=500:d=10 -y audio1.wav
ffmpeg -lavfi sine=2000:d=10 -y audio2.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi
[0]volume='1-between(t,4,6)':eval=frame[a];[1]volume='between(t,4,6)':eval=frame[b];[a][b]amix
-y out.wav

That is never going to give smooth transitions.

Also overly complicated, as it can be simplified with commands to the
amix filter.

If my example is too complicated, then please show a simpler example.


asendcmd=10.0 amix weights 'X Y',

X/Y being the wanted volume of the first/second input to the amix filter.


That doesn't work because the argument of asendcmd must be 
encapsulated in quotes (because it contains spaces), and X Y must also 
be encapsulated in quotes. Please show the whole command line.




This command line works with asendcmd and astreamselect:

ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 astreamselect map 
1",asendcmd="6 astreamselect map 0",astreamselect=map=0 -y out.wav



However, with the amix filter I have no idea what the syntax is for the
string inside the string. It doesn't work.


ffmpeg -i audio1.wav -i audio2.wav -lavfi asendcmd="4 amix weights '0 
1'",amix=weights='1 0' -y out.wav


Michael




Re: [FFmpeg-user] Replace part of the audio

2023-01-22 Thread Michael Koch

On 22.01.2023 at 16:25, Paul B Mahol wrote:

On 1/22/23, Michael Koch  wrote:

On 21.01.2023 at 17:05, Paul B Mahol wrote:

On 1/19/23, Michael Koch  wrote:

On 19.01.2023 at 15:49, Alexander Bieliaev via ffmpeg-user wrote:

How can I replace a part of the audio from and to a specific time with
some other audio/sound (I want to replace it with beep in this case)?


Here is an example:

ffmpeg -lavfi sine=500:d=10 -y audio1.wav
ffmpeg -lavfi sine=2000:d=10 -y audio2.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi
[0]volume='1-between(t,4,6)':eval=frame[a];[1]volume='between(t,4,6)':eval=frame[b];[a][b]amix

-y out.wav

That is never going to give smooth transitions.

Also overly complicated, as it can be simplified with commands to the
amix filter.

If my example is too complicated, then please show a simpler example.


asendcmd=10.0 amix weights 'X Y',

X/Y being the wanted volume of the first/second input to the amix filter.


That doesn't work because the argument of asendcmd must be encapsulated 
in quotes (because it contains spaces), and X Y must also be 
encapsulated in quotes. Please show the whole command line.


Michael


Re: [FFmpeg-user] Replace part of the audio

2023-01-22 Thread Michael Koch

On 21.01.2023 at 17:05, Paul B Mahol wrote:

On 1/19/23, Michael Koch  wrote:

On 19.01.2023 at 15:49, Alexander Bieliaev via ffmpeg-user wrote:

How can I replace a part of the audio from and to a specific time with
some other audio/sound (I want to replace it with beep in this case)?


Here is an example:

ffmpeg -lavfi sine=500:d=10 -y audio1.wav
ffmpeg -lavfi sine=2000:d=10 -y audio2.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi
[0]volume='1-between(t,4,6)':eval=frame[a];[1]volume='between(t,4,6)':eval=frame[b];[a][b]amix

-y out.wav

That is never going to give smooth transitions.

Also overly complicated, as it can be simplified with commands to the amix filter.


If my example is too complicated, then please show a simpler example.

Michael



Re: [FFmpeg-user] Issue with "%04d" when creating image sequence from a video

2023-01-20 Thread Michael Koch

On 20.01.2023 at 00:37, JJ jo wrote:

Hi,
I'm new and I don't know if this is the right way to ask about a problem. I have
a problem when attempting to create an image sequence from an mkv file.
This is the code I put in a *.bat file:

ffmpeg -i input.mkv %04d.jpg


If used in a (Windows) batch file, you must replace % with %%.
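For example, inside the .bat file the line above would become (a sketch of the escaped form):

```
ffmpeg -i input.mkv %%04d.jpg
```

On the interactive command prompt the single % form is correct; the doubling is only needed inside batch files, where % introduces variables.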

Michael




Re: [FFmpeg-user] Replace part of the audio

2023-01-19 Thread Michael Koch

On 19.01.2023 at 15:49, Alexander Bieliaev via ffmpeg-user wrote:

How can I replace a part of the audio from and to a specific time with some
other audio/sound (I want to replace it with beep in this case)?



Here is an example:

ffmpeg -lavfi sine=500:d=10 -y audio1.wav
ffmpeg -lavfi sine=2000:d=10 -y audio2.wav

ffmpeg -i audio1.wav -i audio2.wav -lavfi 
[0]volume='1-between(t,4,6)':eval=frame[a];[1]volume='between(t,4,6)':eval=frame[b];[a][b]amix 
-y out.wav


Michael



Re: [FFmpeg-user] Create a black box over top of the video for a set duration at a set time.

2022-12-31 Thread Michael Koch

On 31.12.2022 at 10:31, Bouke / Videotoolshed wrote:

On 31 Dec 2022, at 04:53, Gyan Doshi  wrote:

On 2022-12-31 08:15 am, David Niklas wrote:

Sorry for taking so long to reply.
It is not working. Terminal output below.

% ffmpeg -i Ducksinarow.mp4 -filter_complex 
"[0:0]drawbox=color=black:t=fill=enable='between(t,1.0,2.0)'" -c:a copy -c:v 
libvpx ducks.mp4

There should be a colon after fill, not =

drawbox=color=black:t=fill:enable='between(t,1.0,2.0)'

And for your next question that might come up (since you want to do multiple
filters; I do too and just got bitten):
Windows has a limitation and won’t accept long arguments.


Do you have an example? I have used command lines _much_ longer than the 
above example, and never found a limitation.


Michael



Re: [FFmpeg-user] Extract chapter names to a text file?

2022-12-05 Thread Michael Koch

On 06.12.2022 at 01:10, Carl Zwanzig wrote:

On 12/5/2022 1:36 PM, Laine wrote:

If you are able to generate “chapters.txt” but observe an overabundance
of information in that file, you might try the options that I used to get
just the video title and a list of the chapters.


'grep' does wonders for pulling info out of files:
  grep '^title=' chapters.txt

(returns all lines that start with "title=")
All *nix and the Mac should have grep; Windows doesn't, unless you
installed it yourself.


In Windows you can pipe the console output to "findstr". There is an
example in chapter 8 of my book:

http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael


Re: [FFmpeg-user] Sending 2 camera outputs side by side to loopback device

2022-11-24 Thread Michael Koch

On 25.11.2022 at 08:02, Jim Ruxton wrote:

I am on Ubuntu 22.04 and trying to combine 2 cameras into one stream that
is being sent to a loopback device. The command appears to work but I can't
see the stream in any way. I've tried VLC Cheese ffplay guvcview and none
of them work. The command I am using is:

ffmpeg -f v4l2 -vcodec rawvideo -i /dev/video0 -f v4l2 -vcodec rawvideo -i
/dev/video2 -filter_complex hstack,"scale=iw*.5:ih*1" -f v4l2 -vcodec mjpeg
/dev/video4


It seems you forgot to specify an output file or pipe at the end of the 
command line.


Michael



Re: [FFmpeg-user] datascope 16-bit

2022-11-22 Thread Michael Koch

On 22.11.2022 at 02:15, Jim DeLaHunt wrote:

On 2022-11-20 00:15, Michael Koch wrote:


On 20.11.2022 at 02:39, list+ffmpeg-u...@jdlh.com wrote:


On 2022-11-19 07:34, Michael Koch wrote:

On 03.11.2022 at 10:10, Paul B Mahol wrote:


Some things in sea of myriad others:
[...]
Why do you claim that datascope does not support >8-bit formats when in
fact it does support it?


In some cases datascope works with 16-bit data, but not in all cases.
Here is an example where it doesn't work:

ffmpeg -f lavfi -i color=black:s=32x32,format=gray10le -lavfi 
geq=lum='X+32*Y',format=gray10le -frames 1 -y test.png


ffmpeg -i test.png -vf 
format=rgb48,showinfo,datascope=s=1280x1152:mode=color2:format=hex 
-y out.png


Perhaps clarify what you observe as "doesn't work", and what 
behaviour you expect?


Both those commands run for me without error, and I can view both 
test.png and out.png without problem. out.png has a cascade of 32 
2-digit hex numbers on the left half, and solid black on the right 
half. The hex numbers run from 00 to 7F, in white on a black to grey 
gradient background, and from 80 to FF, in black on a grey to white 
background.


I would expect 4-digit hex numbers, because the rgb48 pixel format is 
16-bit per channel.

For example, it works fine if "rgb48" is replaced by "gray16".


The phrase "works fine" appeals to some notion of what is "correct" 
behaviour. It seems that you have your own idea of "correct" behaviour 
for this filter. But it seems more helpful for communication on this 
list to use FFmpeg's idea of "correct" behaviour. And the best source 
we have for FFmpeg's idea of "correct" is the filter documentation 
<https://ffmpeg.org/ffmpeg-all.html#datascope>: "Video data analysis 
filter. This filter shows hexadecimal pixel values of part of video."


I think the description in the documentation is incomplete and 
unclear. I wish FFmpeg had a better description. But the actual 
behaviour does not conflict with this incomplete description. The 
description does not promise that the datascope filter shows the 
full-precision, untruncated pixel values. It might be (I did not 
check) that the 8-bit values which datascope displays for an rgb48 
input image are the correct upper 8 bits of 16-bit pixel values.


So, you said, "In some cases datascope works with 16-bit data, but not 
in all cases."  If you had instead said, "In some cases datascope 
gives useful results with 16-bit data, but not in all cases", then I 
would be completely with you. It is clear that truncated 8 bit values 
for an rbg48 input are not as helpful as full-precision, untruncated 
16-bit pixel values.


But the sad reality is that FFmpeg does not always document its
intended behaviour clearly, and does not seem to have a goal of always
providing the most helpful behaviour. The culture here understands
"doesn't support" or "doesn't work" to mean "ffmpeg terminates
prematurely with an error" or "ffmpeg fails to generate output". If
you use those phrases when your objection is actually "runs to
completion and generates output, but the output is not as helpful as it
could be", then your message gets diluted by misunderstanding.


Sorry for not describing the issue clearly enough.
In ticket #10057 it should be clear.

Michael



Re: [FFmpeg-user] datascope 16-bit

2022-11-20 Thread Michael Koch

On 20.11.2022 at 02:39, list+ffmpeg-u...@jdlh.com wrote:


On 2022-11-19 07:34, Michael Koch wrote:

On 03.11.2022 at 10:10, Paul B Mahol wrote:


Some things in sea of myriad others:
[...]
Why do you claim that datascope does not support >8-bit formats when in
fact it does support it?


In some cases datascope works with 16-bit data, but not in all cases.
Here is an example where it doesn't work:

ffmpeg -f lavfi -i color=black:s=32x32,format=gray10le -lavfi 
geq=lum='X+32*Y',format=gray10le -frames 1 -y test.png


ffmpeg -i test.png -vf 
format=rgb48,showinfo,datascope=s=1280x1152:mode=color2:format=hex -y 
out.png


Perhaps clarify what you observe as "doesn't work", and what behaviour 
you expect?


Both those commands run for me without error, and I can view both 
test.png and out.png without problem. out.png has a cascade of 32 
2-digit hex numbers on the left half, and solid black on the right 
half. The hex numbers run from 00 to 7F, in white on a black to grey 
gradient background, and from 80 to FF, in black on a grey to white 
background.


I would expect 4-digit hex numbers, because the rgb48 pixel format is 
16-bit per channel.

For example, it works fine if "rgb48" is replaced by "gray16".

Michael



[FFmpeg-user] datascope 16-bit

2022-11-19 Thread Michael Koch

Am 03.11.2022 um 10:10 schrieb Paul B Mahol:


Some things in a sea of myriad others:
[...]
Why do you claim that datascope does not support >8-bit formats when in
fact it does?


In some cases datascope works with 16-bit data, but not in all cases.
Here is an example where it doesn't work:

ffmpeg -f lavfi -i color=black:s=32x32,format=gray10le -lavfi 
geq=lum='X+32*Y',format=gray10le -frames 1 -y test.png


ffmpeg -i test.png -vf 
format=rgb48,showinfo,datascope=s=1280x1152:mode=color2:format=hex -y 
out.png


Michael



Re: [FFmpeg-user] Piping from FFmpeg to FFplay

2022-11-13 Thread Michael Koch

Am 13.11.2022 um 22:52 schrieb Reino Wijnsma:

On 2022-11-13T20:45:42+0100, Michael Koch  wrote:

ffmpeg -f lavfi -i testsrc2 -f nut | ffplay -

Btw, I hope you know that ffplay can do this too:

ffplay -f lavfi testsrc2



yes, that's known.

Michael



Re: [FFmpeg-user] Piping from FFmpeg to FFplay

2022-11-13 Thread Michael Koch

Am 13.11.2022 um 22:47 schrieb Reino Wijnsma:

On 2022-11-13T22:18:15+0100, Michael Koch  wrote:

Is the "-" at the end an (undocumented) option of FFplay, or is it a batch file 
operator?

The first, I guess.
I've found 'opusenc' (Opus command-line encoder) to print:

Usage: opusenc [options] input_file output_file.opus

[...]

input_file can be:
  filename.wav  file
  - stdin

output_file can be:
  filename.opus compressed file
  - stdout

[...]

But I also have lots of other binaries where the help-section doesn't mention 
the dash or stdin.
So I guess the developers assume it's common knowledge.



thank you!

Michael



Re: [FFmpeg-user] Piping from FFmpeg to FFplay

2022-11-13 Thread Michael Koch

Am 13.11.2022 um 21:57 schrieb Reino Wijnsma:

Hello Michael,

On 2022-11-13T20:45:42+0100, Michael Koch  wrote:

ffmpeg -f lavfi -i testsrc2 -f nut | ffplay -

This command line works fine and I have used it many times.

It shouldn't, because you forgot a "-" after "-f nut".


Oops, sorry, I forgot to type that "-" in the email. You are right that 
it doesn't work without it.






But I don't know what's the meaning of the - character after ffplay.

Input or output: a URL or a file path. In this case, stdout and stdin. 
FFmpeg's stdout is piped to FFplay's stdin.
I'm surprised you didn't know.
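The "-" convention is generic across command-line tools: it names stdin or stdout, and the shell's | connects one process's stdout to the next process's stdin. A minimal illustration with Python's subprocess module, using printf and cat as stand-ins for ffmpeg and ffplay:

```python
import subprocess

# Producer writes to its stdout (like "ffmpeg ... -f nut -").
producer = subprocess.Popen(["printf", "hello"], stdout=subprocess.PIPE)

# Consumer is told to read "-", i.e. its stdin (like "ffplay -").
consumer = subprocess.run(["cat", "-"], stdin=producer.stdout,
                          capture_output=True)
producer.wait()

print(consumer.stdout.decode())  # -> hello
```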


Is the "-" at the end an (undocumented) option of FFplay, or is it a 
batch file operator?


Michael



[FFmpeg-user] Piping from FFmpeg to FFplay

2022-11-13 Thread Michael Koch
I have a question about piping from FFmpeg to FFplay (in a Windows batch 
file):


ffmpeg -f lavfi -i testsrc2 -f nut | ffplay -

This command line works fine and I have used it many times. But I don't 
know what the - character after ffplay means. Does this - character 
belong to the ffplay command? What does it mean? Or is it a 
batch file operator, like the | character?
I can't find a bare - (without any following characters) in the FFmpeg or 
FFplay documentation.


Michael




Re: [FFmpeg-user] Change fps from 50 to 25

2022-11-12 Thread Michael Koch

Am 12.11.2022 um 12:39 schrieb Cecil Westerhof via ffmpeg-user:

I have a few videos that are 50 fps, but the computer I am playing
them on cannot handle that properly. Can I use ffmpeg to change the
fps to 25, while keeping the same length?



Sure, that's easy:
ffmpeg -i in.mp4 -r 25 out.mp4
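A quick sanity check that dropping to 25 fps preserves the duration: with -r 25 on the output, ffmpeg drops every second frame of a 50 fps source, so the frame count halves while playback time stays the same (illustrative numbers for a hypothetical 10-second clip):

```python
src_fps, dst_fps = 50, 25
duration = 10.0                        # seconds, hypothetical clip length

src_frames = int(src_fps * duration)   # 500 frames at 50 fps
dst_frames = int(dst_fps * duration)   # 250 frames left after dropping

# Duration is unchanged even though half the frames are gone.
assert src_frames / src_fps == dst_frames / dst_fps == duration
```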

Michael



Re: [FFmpeg-user] 16:9 to 2.35:1 and true HD 7.1 to DTS

2022-11-03 Thread Michael Koch

Am 03.11.2022 um 20:17 schrieb Bartosz Trzebuchowski:

Hi,
I’m completely new to ffmpeg and have 2 questions:

1. How can I convert 16:9 movies to 2.35:1? My projector can’t scale down and 
16:9 movies run outside my 2.35:1 canvas at the top and the bottom. I need to 
black out top and bottom so only 2.35:1 is visible.


Have a look at the documentation for the "pad" filter.
The following example might work, but I haven't tested it:

ffmpeg -i input.mp4 -vf pad=x=-1:y=-1:aspect=2.35 output.mp4
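For reference, the size that pad computes from aspect=2.35 can be sketched as follows (assuming a hypothetical 1920x1080 source; the filter widens the frame until the frame aspect ratio reaches 2.35:1 and centers the image):

```python
w, h = 1920, 1080          # hypothetical 16:9 source frame
target_aspect = 2.35

# pad keeps the height and widens the canvas to reach the target aspect.
pad_w = round(h * target_aspect)   # 2538
pad_h = h                          # 1080
x_offset = (pad_w - w) // 2        # 309, black border on each side

assert pad_w / pad_h >= target_aspect
print(pad_w, pad_h, x_offset)
```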

Michael



Re: [FFmpeg-user] V360 stabilization

2022-11-03 Thread Michael Koch

Am 03.11.2022 um 10:10 schrieb Paul B Mahol:

Your book is full of factual errors,

Notice to anyone: do not use this thing for anything serious.

Thank you for so much promotion for my book.
Sure, it's likely that in more than 900 pages there are a few errors. We
all make errors. If anyone finds an error, please let me know. You will find
my e-mail address on the first page.

http://www.astro-loltronic.de/FFmpeg_Book.pdf


I did not write what you quoted above. You edited the link.
It's very bad behaviour to edit what someone else has written and make 
it look as if I wrote it.


Michael



Re: [FFmpeg-user] V360 stabilization

2022-11-03 Thread Michael Koch

Am 03.11.2022 um 10:10 schrieb Paul B Mahol:

On 11/3/22, Michael Koch  wrote:

Am 03.11.2022 um 09:35 schrieb Paul B Mahol:

On 10/11/20, Michael Koch  wrote:

Am 29.09.2020 um 22:54 schrieb Michael Koch:

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The
procedure is as follows:

1. FFmpeg: From each frame of the equirectangular input video, extract
two small images which are 90° apart in the input video. I call them A
and B images.

2. C# code: Analyze the x and y image shift from subsequent A and B
images. Calculate how the equirectangular frames must be rotated (yaw,
pitch, roll) to compensate the image shifts. This part wasn't easy.
Two rotation matrices and one matrix multiplication are required.
Write the results to a *.cmd file.

3. FFmpeg: Read the *.cmd file and apply the rotations with the v360
filter. The output video is stabilized.
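The rotation step (step 2) can be sketched as follows. This is a generic illustration with assumed axis conventions and stdlib-only 3x3 matrices, not the book's actual C# code:

```python
import math

def rot_yaw(a):
    """Rotation about the vertical axis (assumed convention)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_pitch(a):
    """Rotation about the horizontal axis (assumed convention)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Compose two rotations, as the procedure above requires.
R = matmul(rot_yaw(0.1), rot_pitch(0.2))

# A rotation matrix is orthogonal: R * R^T must be the identity.
Rt = [list(row) for row in zip(*R)]
I = matmul(R, Rt)
for i in range(3):
    for j in range(3):
        assert abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```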

For details and source code please have a look at chapter 2.78 in my
book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.

I've written and tested an improved version for 360° stabilization, it's
in chapter 2.79.

Your book is full of factual errors,

Notice to anyone: do not use this thing for anything serious.

Thank you for so much promotion for my book.
Sure, it's likely that in more than 900 pages there are a few errors. We
all make errors. If anyone finds an error, please let me know. You will find
my e-mail address on the first page.

http://www.astro-loltronic.de/FFmpeg_Book.pdf

Some things in a sea of myriad others:
Why do you claim that FFmpeg does not have SER format support when in
fact it does?


You are right, it is supported now. But my first test to convert a 
16-bit grayscale SER file to MP4 failed. I will provide a short sample 
file later. At the moment I only have SER files that are too big for 
uploading.



Why do you claim that datascope does not support >8-bit formats when in
fact it does?
Why do you claim that drawbox does not support RGB formats when in fact
it does?


Perhaps these features weren't supported at the time when I wrote that, 
but they are supported now.
All three issues are already corrected in the book and an updated 
version is online.


Michael



Re: [FFmpeg-user] V360 stabilization

2022-11-03 Thread Michael Koch

Am 03.11.2022 um 09:35 schrieb Paul B Mahol:

On 10/11/20, Michael Koch  wrote:

Am 29.09.2020 um 22:54 schrieb Michael Koch:

Hello all,

I've programmed a C# workaround for stabilization of 360° videos. The
procedure is as follows:

1. FFmpeg: From each frame of the equirectangular input video, extract
two small images which are 90° apart in the input video. I call them A
and B images.

2. C# code: Analyze the x and y image shift from subsequent A and B
images. Calculate how the equirectangular frames must be rotated (yaw,
pitch, roll) to compensate the image shifts. This part wasn't easy.
Two rotation matrices and one matrix multiplication are required.
Write the results to a *.cmd file.

3. FFmpeg: Read the *.cmd file and apply the rotations with the v360
filter. The output video is stabilized.

For details and source code please have a look at chapter 2.78 in my
book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

If anyone wants to implement this in FFmpeg, please feel free to do it.

I've written and tested an improved version for 360° stabilization, it's
in chapter 2.79.

Your book is full of factual errors,

Notice to anyone: do not use this thing for anything serious.


Thank you for so much promotion for my book.
Sure, it's likely that in more than 900 pages there are a few errors. We 
all make errors. If anyone finds an error, please let me know. You will find 
my e-mail address on the first page.


http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael



Re: [FFmpeg-user] Error in aeval?

2022-10-29 Thread Michael Koch

Am 29.10.2022 um 12:02 schrieb Paul B Mahol:

On 10/29/22, Michael Koch  wrote:

This command line works as expected:

ffmpeg -f lavfi -i sine,aeval="val(0)|val(0)*sin(2)*sin(2)" -ac 2 -t 5
-y out.wav


Why does it no longer work when I replace sin(2)*sin(2) with pow(sin(2),2)?

ffmpeg -f lavfi -i sine,aeval="val(0)|val(0)*pow(sin(2),2)" -ac 2 -t 5
-y out.wav

Is this a bug? The console output is below.

PEBKAC

Escape ','


now it works, thank you
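For readers who hit the same problem: in an ffmpeg filtergraph an unescaped ',' separates filters, which is why the expression was cut off at the comma in pow(sin(2),2); inside a filter argument the comma must be written as '\,'. A small sketch of the escaping rule (the helper name is hypothetical):

```python
def escape_filter_commas(expr: str) -> str:
    """Escape ',' so ffmpeg's filtergraph parser keeps it inside the
    filter argument instead of treating it as a filter separator."""
    return expr.replace(",", r"\,")

expr = "val(0)|val(0)*pow(sin(2),2)"
print(escape_filter_commas(expr))  # -> val(0)|val(0)*pow(sin(2)\,2)
```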

Michael



[FFmpeg-user] Error in aeval?

2022-10-29 Thread Michael Koch

This command line works as expected:

ffmpeg -f lavfi -i sine,aeval="val(0)|val(0)*sin(2)*sin(2)" -ac 2 -t 5 
-y out.wav



Why does it no longer work when I replace sin(2)*sin(2) with pow(sin(2),2)?

ffmpeg -f lavfi -i sine,aeval="val(0)|val(0)*pow(sin(2),2)" -ac 2 -t 5 
-y out.wav


Is this a bug? The console output is below.

Michael


C:\Users\astro\Desktop\Mosquito>ffmpeg -f lavfi -i 
sine,aeval="val(0)|val(0)*pow(sin(2),2)" -ac 2 -t 5 -y out.wav
ffmpeg version 2022-10-24-git-d79c240196-full_build-www.gyan.dev 
Copyright (c) 2000-2022 the FFmpeg developers

  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib 
--enable-librist --enable-libsrt --enable-libssh --enable-libzmq 
--enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 
--enable-libaribb24 --enable-libdav1d --enable-libdavs2 
--enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 
--enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg 
--enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r 
--enable-libfreetype --enable-libfribidi --enable-liblensfun 
--enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf 
--enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec 
--enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl 
--enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl 
--enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libshine 
--enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc 
--enable-libilbc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa 
--enable-libbs2b --enable-libflite --enable-libmysofa 
--enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil  57. 39.101 / 57. 39.101
  libavcodec 59. 51.100 / 59. 51.100
  libavformat    59. 34.101 / 59. 34.101
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter 8. 49.101 /  8. 49.101
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
[Parsed_aeval_1 @ 02d8c2939000] [Eval @ 00f7499fe440] Missing 
')' or too many args in 'pow(sin(2)'
[lavfi @ 02d8c29261c0] Error initializing filter 'aeval' with args 
'val(0)|val(0)*pow(sin(2)'

sine,aeval=val(0)|val(0)*pow(sin(2),2): Invalid argument




Re: [FFmpeg-user] What is a "pad" in the context of an "input pad", an "output pad" and a "filter pad"

2022-10-27 Thread Michael Koch

Am 27.10.2022 um 21:57 schrieb Clay via ffmpeg-user:

Dumb ffmpeg question alert:

What is a "pad" in the context of an "input pad", an "output pad" and a
"filter pad"?


"input pad" and "output pad" are described in chapter 32:
https://www.ffmpeg.org/ffmpeg-all.html#Filtergraph-description

"filter pad"? I don't know.

Michael



Re: [FFmpeg-user] encode to RAW video

2022-10-25 Thread Michael Koch

Am 25.10.2022 um 09:29 schrieb Naveen.B:


What do you want to do with the RAW video? You know that you can't play
it with most players?

Yes, I know it cannot be played with most of the players. I need to see the
size of the RAW video and make some compression on it for our further
analysis.

Could you please help me with the missing parameters to generate the RAW
video?


I did some tests and came to the conclusion that it's impossible to 
import a group of numbered raw images.

It's possible to import a single raw image.
It's possible to import a group of numbered images (JPG, PNG, TIF...)
But it's not possible to import a numbered group of raw images.
(To the experts: Please correct me if I'm wrong).

Anyway, it's a bad idea to save the images in raw format. You should use 
PNG images, which use lossless compression and support 16-bit depth.
I recommend that you convert your images to PNG; then things become 
much easier.


Here is a (Windows) batch file with a few examples:

rem  Make 30 PNG images (48-bit per pixel):
ffmpeg -f lavfi -i testsrc2=s=320x200,format=rgb48 -frames 30 -f image2 
-y image-%%03d.png


rem  Convert these 30 images to a raw video (this conversion is lossless):
ffmpeg -i image-%%03d.png -s 320x200 -pixel_format rgb48 -f rawvideo 
-c:v rawvideo -y video.raw


rem  The size of the raw video is exactly 320 * 200 * 2 * 3 * 30 = 11520000 bytes


rem  Convert the first frame from the raw video to a PNG image (48-bit 
per pixel):
ffmpeg -video_size 320x200 -pixel_format rgb48 -f rawvideo -i video.raw 
-frames 1 -y image.png


rem  Convert the raw video to a (almost lossless) MP4 video:
ffmpeg -video_size 320x200 -pixel_format rgb48 -f rawvideo -framerate 30 
-i video.raw -crf 0 -y video.mp4


pause
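The size stated in the comments above can be verified with a short script (assuming rgb48 stores three channels of 2 bytes each, which matches the batch file's arithmetic):

```python
width, height = 320, 200
bytes_per_sample = 2   # 16 bits per channel
channels = 3           # rgb48: R, G, B
frames = 30

# Raw video has no header: the file is exactly the sum of its frames.
size = width * height * bytes_per_sample * channels * frames
assert size == 11_520_000  # bytes
print(size)
```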


I have a question to the experts. The above example works fine with 
pixel format rgb48, but with pixel formats gray16 or gray10le the MP4 
output looks wrong. I don't yet understand why.


Michael



Re: [FFmpeg-user] encode to RAW video

2022-10-24 Thread Michael Koch

Am 24.10.2022 um 18:36 schrieb Naveen.B:

Are you sure that there is a file CapturedImage-000.raw ?
You forgot to specify the input pixel format.
Many parameters are missing for the raw output. But the error message is
from the input.

Do you really want raw video? That's not the easiest format for a beginner.
Everything must be specified for the input (file format, pixel format,
size, framerate, and whatever I may have forgotten...) and the same
things must be specified for the output again.

[image: image.png]
this error pops up when I add the flag -f rawvideo (I have raw files;
should this be used for raw video?)


I tried by giving the input with -pix_fmt, it's the same error.

Yes, I want raw video. I have converted RAW files to .mp4 successfully with
uncompressed, the size of the .mp4 format video was less comparatively.
I have 30 RAW files with each file is around 4Mbps (so, 30 RAW
filesx4Mbps=180Mbps for one second), the output of the video file size
(.mp4) is coming around 18 Mbps, so I am assuming .mp4 video format is
doing some compression and hence I need to try this with RAW video.


What do you want to do with the RAW video? You know that you can't play 
it with most players?


Michael



Re: [FFmpeg-user] encode to RAW video

2022-10-24 Thread Michael Koch

Am 24.10.2022 um 17:50 schrieb Naveen.B:



Hello Team,

I have managed to convert raw Images files to .mp4 video,
I need to encode to a RAW video instead, could you please let me know

the

command for this?

RAW Image file is,
resolution - 1600x1300
fps-30
bit depth-16bit


Do not specify a video codec, or set -c:v rawvideo, set your frame rate /
resolution / bit depth params and use .raw as extension.

I tried as you suggested, I get the below error,

You must specify all relevant parameters for the output file at the
correct place in the command line. After the input file name, and before
the output file name. I think you must also add -f rawvideo
Remove -crf 1, as it makes no sense for a raw file.

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>ffmpeg -f rawvideo
-s 1600x1300 -r 30 -i CapturedImage-%03d.raw raw_video.raw
ffmpeg version 2022-06-20-git-56419428a8-full_build-www.gyan.dev
Copyright (c) 2000-2022 the FFmpeg developers
   built with gcc 11.3.0 (Rev1, Built by MSYS2 project)
   configuration: --enable-gpl --enable-version3 --enable-static
--disable-w32threads --disable-autodetect --enable-fontconfig
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib
--enable-lzma --enable-libsnappy --enable-zlib --enable-librist
--enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth
--enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d
--enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e
--enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265
--enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl
--enable-libopenjpeg --enable-libvpx --enable-mediafoundation
--enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi
--enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg
--enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec
--enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2
--enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo
--enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug
--enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame
--enable-libshine --enable-libtheora --enable-libtwolame
--enable-libvo-amrwbenc --enable-libilbc --enable-libgsm
--enable-libopencore-amrnb --enable-libopus --enable-libspeex
--enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite
--enable-libmysofa --enable-librubberband --enable-libsoxr
--enable-chromaprint
   libavutil  57. 27.100 / 57. 27.100
   libavcodec 59. 33.100 / 59. 33.100
   libavformat59. 25.100 / 59. 25.100
   libavdevice59.  6.100 / 59.  6.100
   libavfilter 8. 41.100 /  8. 41.100
   libswscale  6.  6.100 /  6.  6.100
   libswresample   4.  6.100 /  4.  6.100
   libpostproc56.  5.100 / 56.  5.100
CapturedImage-%03d.raw: No such file or directory


These files are present in the directory, but still its throwing this
error, No such file or directory.


Are you sure that there is a file CapturedImage-000.raw ?
You forgot to specify the input pixel format.
Many parameters are missing for the raw output. But the error message is 
from the input.


Do you really want raw video? That's not the easiest format for a beginner.
Everything must be specified for the input (file format, pixel format, 
size, framerate, and whatever I may have forgotten...) and the same 
things must be specified for the output again.


Michael



[FFmpeg-user] Convert from TIF to DNG

2022-10-24 Thread Michael Koch
On a website (1) I found the idea that a TIF file can be converted to a 
DNG file just by adding some tags and renaming it. Let's test it. Make a 
16-bit TIF file:


ffmpeg -f lavfi -i nullsrc=s=320x320,format=yuv444p10le -vf 
geq=lum='X/W*32+32*floor(Y/H*32)':cr=512:cb=512,format=rgb48be -frames 1 
-y test16.tif


Add some tags:

exiftool -DNGVersion="1.4.0.0" -PhotometricInterpretation="Linear Raw" 
test16.tif


Rename the file to DNG:

copy test16.tif test16.dng

The DNG file can be opened with FFmpeg without any problems. But all 
other programs complain that the file is corrupt. I tested with 
IrfanView, FastStone, Gimp and Fitswork.

What else might be required to make it a valid DNG file?

Michael

(1) 
https://rawpedia.rawtherapee.com/Film_Simulation#Advanced_-_Identity_DNG



Re: [FFmpeg-user] encode to RAW video

2022-10-24 Thread Michael Koch

Am 24.10.2022 um 14:29 schrieb Naveen.B:


On 24 Oct 2022, at 12:52, Naveen.B  wrote:

Hello Team,

I have managed to convert raw Images files to .mp4 video,
I need to encode to a RAW video instead, could you please let me know the
command for this?

RAW Image file is,
resolution - 1600x1300
fps-30
bit depth-16bit


Do not specify a video codec, or set -c:v rawvideo, set your frame rate /
resolution / bit depth params and use .raw as extension.

I tried as you suggested, I get the below error,


You must specify all relevant parameters for the output file at the 
correct place in the command line. After the input file name, and before 
the output file name. I think you must also add -f rawvideo

Remove -crf 1, as it makes no sense for a raw file.

Michael


mail 17.09

2022-10-20 Thread Michael Koch
 

 

Mit freundlichen Grüßen / Best regards 

 

Michael Koch

 

--

 

Michael Koch

Senior Network Administrator

 

Mitutoyo CTL Germany GmbH

Von-Gunzert-Straße 17

D-78727 Oberndorf am Neckar

Tel.: +49-7423-8776-54

 <http://www.mitutoyo-ctl.de/> www.mitutoyo-ctl.de 

 

HRB 734157, Stuttgart

UST-Id: DE272400711

Geschäftsführer: Hans-Peter Klein, Swen Haubold

 

Informationen zum Umgang mit Ihren Daten finden Sie
<https://www.mitutoyo-ctl.de/datenschutzerklaerung/datenschutzerklaerung-fue
r-geschaeftspartner/> hier.

 



smime.p7s
Description: S/MIME cryptographic signature




Re: [FFmpeg-user] ScaleUp the Raw file

2022-10-19 Thread Michael Koch



The -s option in your command line is for the output size because it's
written after -i.
You must also set the input size before -i.
Your RAW image contains only pixel data and no header for size and pixel
format.

I didn't understand the header for size and pixel format; the width error is
resolved but the error "Invalid argument" is still coming.


This example works for me:

rem  Make a RAW image:
ffmpeg -f lavfi -i testsrc2=s=320x200 -pixel_format gray10be -frames 1 
-f rawvideo -y raw.raw


rem  The file raw.raw contains exactly 320 x 200 x 2 = 128000 bytes
rem  It doesn't contain any information about width, height and pixel 
format.


rem  Convert the RAW image to JPG:
ffmpeg -s 320x200 -pixel_format gray10be -f rawvideo -i raw.raw -y out.jpg
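The byte count mentioned in the batch file follows directly from the pixel format; a quick check (gray10be packs its 10 significant bits into a 2-byte container per pixel):

```python
width, height = 320, 200
bytes_per_pixel = 2   # gray10be: 10 used bits inside a 16-bit container

# One raw frame = pixels * container size, with no header bytes.
size = width * height * bytes_per_pixel
assert size == 128_000  # bytes, matching the comment above
print(size)
```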

Michael



Re: [FFmpeg-user] ScaleUp the Raw file

2022-10-19 Thread Michael Koch

Am 19.10.2022 um 21:50 schrieb Naveen.B:

I got an error by trying this command. Could you kindly let me know if

this

is feasible?

Always always always include the complete output of ffmpeg when asking

for

help. For one, we don't know what error you got.

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>ffmpeg -i
CapturedImage-001.raw -vf scale=1920:1080 CapturedImageScale-001.raw
ffmpeg version 2022-06-20-git-56419428a8-full_build-www.gyan.dev
Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 11.3.0 (Rev1, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static
--disable-w32threads --disable-autodetect --enable-fontconfig
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp

--enable-bzlib

--enable-lzma --enable-libsnappy --enable-zlib --enable-librist
--enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth
--enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d
--enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e
--enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265
--enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl
--enable-libopenjpeg --enable-libvpx --enable-mediafoundation
--enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi
--enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg
--enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec
--enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2
--enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo
--enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug
--enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame
--enable-libshine --enable-libtheora --enable-libtwolame
--enable-libvo-amrwbenc --enable-libilbc --enable-libgsm
--enable-libopencore-amrnb --enable-libopus --enable-libspeex
--enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite
--enable-libmysofa --enable-librubberband --enable-libsoxr
--enable-chromaprint
libavutil      57. 27.100 / 57. 27.100
libavcodec     59. 33.100 / 59. 33.100
libavformat    59. 25.100 / 59. 25.100
libavdevice    59.  6.100 / 59.  6.100
libavfilter     8. 41.100 /  8. 41.100
libswscale      6.  6.100 /  6.  6.100
libswresample   4.  6.100 /  4.  6.100
libpostproc    56.  5.100 / 56.  5.100
[image2 @ 023a84c00880] Format image2 detected only with low score of
5, misdetection possible!
[rawvideo @ 023a84c14b40] Invalid pixel format.
[image2 @ 023a84c00880] Failed to open codec in
avformat_find_stream_info
[rawvideo @ 023a84c14b40] Invalid pixel format.
  Last message repeated 1 times
[image2 @ 023a84c00880] Failed to open codec in
avformat_find_stream_info
[image2 @ 023a84c00880] Could not find codec parameters for stream 0
(Video: rawvideo, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and
'probesize' (500) options
Input #0, image2, from 'CapturedImage-001.raw':
Duration: 00:00:00.04, start: 0.00, bitrate: 832000 kb/s
Stream #0:0: Video: rawvideo, none, 25 fps, 25 tbr, 25 tbn
[NULL @ 023a84c15b80] Unable to find a suitable output format for
'CapturedImageScale-001.raw'
CapturedImageScale-001.raw: Invalid argument

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>


I think you must tell FFmpeg the pixel format and the size of the input
image. If you would use JPG or PNG as input format, then FFmpeg would
detect these details automatically. But for RAW format FFmpeg can't know
the details.


I gave the pixel_format, and the input size is 1920x1080, but I get the error
"width not set" and one more error which I don't understand.

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>ffmpeg -pixel_format
gray10le -i CapturedImage-001.raw -r 30 -s 1920x1080
CapturedImageScale-001.raw


The -s option in your command line is for the output size because it's 
written after -i.

You must also set the input size before -i.
Your RAW image contains only pixel data and no header for size and pixel 
format.
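
As a sketch of the corrected invocation (filenames and sizes taken from this thread; the actual subprocess call is left commented out, and the JPG output name is my own choice of a container that carries a header):

```python
# Sizes and filenames as used in the thread; gray10le packs each
# 10-bit sample into a 16-bit little-endian word (2 bytes/pixel).
width, height, bytes_per_pixel = 1920, 1080, 2

expected = width * height * bytes_per_pixel
print(expected)  # 4147200 bytes for one 1920x1080 gray10le frame

# For a headerless RAW input, every property must appear BEFORE -i,
# because ffmpeg cannot probe it from the data:
cmd = [
    "ffmpeg",
    "-f", "rawvideo",
    "-pixel_format", "gray10le",
    "-s", f"{width}x{height}",   # input size, placed before -i
    "-i", "CapturedImage-001.raw",
    "-y", "out.jpg",             # an output container with a header
]
assert cmd.index("-s") < cmd.index("-i")
# subprocess.run(cmd, check=True)  # uncomment to run ffmpeg for real
```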


Michael



Re: [FFmpeg-user] ScaleUp the Raw file

2022-10-19 Thread Michael Koch

Am 19.10.2022 um 21:26 schrieb Naveen.B:

I got an error by trying this command. Could you kindly let me know if
this is feasible?

Always always always include the complete output of ffmpeg when asking for
help. For one, we don't know what error you got.

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>ffmpeg -i
CapturedImage-001.raw -vf scale=1920:1080 CapturedImageScale-001.raw
ffmpeg version 2022-06-20-git-56419428a8-full_build-www.gyan.dev
Copyright (c) 2000-2022 the FFmpeg developers
   built with gcc 11.3.0 (Rev1, Built by MSYS2 project)
   configuration: --enable-gpl --enable-version3 --enable-static
--disable-w32threads --disable-autodetect --enable-fontconfig
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib
--enable-lzma --enable-libsnappy --enable-zlib --enable-librist
--enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth
--enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d
--enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e
--enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265
--enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl
--enable-libopenjpeg --enable-libvpx --enable-mediafoundation
--enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi
--enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg
--enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec
--enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2
--enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo
--enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug
--enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame
--enable-libshine --enable-libtheora --enable-libtwolame
--enable-libvo-amrwbenc --enable-libilbc --enable-libgsm
--enable-libopencore-amrnb --enable-libopus --enable-libspeex
--enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite
--enable-libmysofa --enable-librubberband --enable-libsoxr
--enable-chromaprint
   libavutil  57. 27.100 / 57. 27.100
   libavcodec 59. 33.100 / 59. 33.100
   libavformat59. 25.100 / 59. 25.100
   libavdevice59.  6.100 / 59.  6.100
   libavfilter 8. 41.100 /  8. 41.100
   libswscale  6.  6.100 /  6.  6.100
   libswresample   4.  6.100 /  4.  6.100
   libpostproc56.  5.100 / 56.  5.100
[image2 @ 023a84c00880] Format image2 detected only with low score of
5, misdetection possible!
[rawvideo @ 023a84c14b40] Invalid pixel format.
[image2 @ 023a84c00880] Failed to open codec in
avformat_find_stream_info
[rawvideo @ 023a84c14b40] Invalid pixel format.
 Last message repeated 1 times
[image2 @ 023a84c00880] Failed to open codec in
avformat_find_stream_info
[image2 @ 023a84c00880] Could not find codec parameters for stream 0
(Video: rawvideo, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and
'probesize' (500) options
Input #0, image2, from 'CapturedImage-001.raw':
   Duration: 00:00:00.04, start: 0.00, bitrate: 832000 kb/s
   Stream #0:0: Video: rawvideo, none, 25 fps, 25 tbr, 25 tbn
[NULL @ 023a84c15b80] Unable to find a suitable output format for
'CapturedImageScale-001.raw'
CapturedImageScale-001.raw: Invalid argument

C:\Naveen\projects\DMS\software\ffmpeg_full\ffmpeg\bin>



I think you must tell FFmpeg the pixel format and the size of the input 
image.
If you would use JPG or PNG as input format, then FFmpeg would detect 
these details automatically. But for RAW format FFmpeg can't know the 
details.


Michael



Mailtest 2 extern- please ignore EOM

2022-10-19 Thread Michael Koch
Hello there,

 

This is the content

 

Mit freundlichen Grüßen / Best regards 

 

Michael Koch

 

--

 

Michael Koch

Senior Network Administrator

 

Mitutoyo CTL Germany GmbH

Von-Gunzert-Straße 17

D-78727 Oberndorf am Neckar

Tel.: +49-7423-8776-54

 <http://www.mitutoyo-ctl.de/> www.mitutoyo-ctl.de 

 

HRB 734157, Stuttgart

UST-Id: DE272400711

Managing directors: Hans-Peter Klein, Swen Haubold

 

Information on the handling of your data can be found
<https://www.mitutoyo-ctl.de/datenschutzerklaerung/datenschutzerklaerung-fue
r-geschaeftspartner/> here.

 



smime.p7s
Description: S/MIME cryptographic signature


Mailtest please ignore EOM

2022-10-19 Thread Michael Koch
 

 

Mit freundlichen Grüßen / Best regards 

 

Michael Koch

 

--

 

Michael Koch

Senior Network Administrator

 

Mitutoyo CTL Germany GmbH

Von-Gunzert-Straße 17

D-78727 Oberndorf am Neckar

Tel.: +49-7423-8776-54

 <http://www.mitutoyo-ctl.de/> www.mitutoyo-ctl.de 

 

HRB 734157, Stuttgart

UST-Id: DE272400711

Managing directors: Hans-Peter Klein, Swen Haubold

 

Information on the handling of your data can be found
<https://www.mitutoyo-ctl.de/datenschutzerklaerung/datenschutzerklaerung-fue
r-geschaeftspartner/> here.

 



smime.p7s
Description: S/MIME cryptographic signature


Re: [FFmpeg-user] Help: is it possibile to join an image and a live stream?

2022-10-09 Thread Michael Koch

Am 07.10.2022 um 20:28 schrieb Michael Koch:

Am 07.10.2022 um 19:32 schrieb i...@mbsoft.biz:

i have 2 inputs

first input image  -i logo.png

second input dshow webcam  -f dshow -i video="my webcam"

is it possible to show the logo for 5 seconds and subsequently start the webcam?


It can also be done with the "streamselect" filter; have a look at the 
documentation for an example.




I haven't yet found a working example for sendcmd / streamselect.
What's the problem in this command line?

ffmpeg -re -f lavfi -i color=red:s=1280x720 -f lavfi -i 
testsrc2=s=1280x720 -lavfi sendcmd='5.0 streamselect map 
1';[0][1]streamselect=map=0 -f sdl2 -


The error message is "Unable to find a suitable output format for 
'streamselect' "

I'm not sure what the meaning of this message is. Does it mean
a) The previous filter has no suitable output format for the input of 
the streamselect filter

or
b) The streamselect filter has no suitable output format for the next 
filter?


The console output is below.

Michael



C:\Users\astro\Desktop\test>ffmpeg -re -f lavfi -i color=red:s=1280x720 
-f lavfi -i testsrc2=s=1280x720 -lavfi sendcmd='5.0 streamselect map 
1';[0][1]streamselect=map=0 -f sdl2 -
ffmpeg version 2022-10-02-git-5f02a261a2-essentials_build-www.gyan.dev 
Copyright (c) 2000-2022 the FFmpeg developers

  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-bzlib --enable-lzma --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid 
--enable-libaom --enable-libopenjpeg --enable-libvpx 
--enable-mediafoundation --enable-libass --enable-libfreetype 
--enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora 
--enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband

  libavutil  57. 38.100 / 57. 38.100
  libavcodec 59. 49.100 / 59. 49.100
  libavformat    59. 33.100 / 59. 33.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter 8. 49.100 /  8. 49.100
  libswscale  6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
Input #0, lavfi, from 'color=red:s=1280x720':
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #0:0: Video: wrapped_avframe, yuv420p, 1280x720 [SAR 1:1 DAR 
16:9], 25 fps, 25 tbr, 25 tbn

Input #1, lavfi, from 'testsrc2=s=1280x720':
  Duration: N/A, start: 0.00, bitrate: N/A
  Stream #1:0: Video: wrapped_avframe, yuv420p, 1280x720 [SAR 1:1 DAR 
16:9], 25 fps, 25 tbr, 25 tbn
[NULL @ 023291ccd4c0] Unable to find a suitable output format for 
'streamselect'

streamselect: Invalid argument


Re: [FFmpeg-user] R: Help: is it possibile to join an image and a live stream?

2022-10-07 Thread Michael Koch

Am 07.10.2022 um 21:32 schrieb i...@mbsoft.biz:

I have tried

ffmpeg.exe" -loglevel quiet -analyzeduration 0 -probesize 32 -report -i
"e:\\logo.png" -f dshow -video_size 1280x720 -framerate 30
-video_device_number 0 -i "video=Full HD 1080P PC Camera" "sendcmd='5.0"
streamselect map "1',streamselect=inputs=2:map=0" out.mp4

but it does not work


I think there are several errors in your command line. But even after I 
corrected them, I didn't find a working example with streamselect.


But I found a working example with "overlay" filter. You must change 
your camera name. Make sure that the image has the same size as the camera.


ffmpeg -loop 1 -framerate 10 -i img.jpg -f dshow -video_size 1280x720 
-framerate 10 -i video="BisonCam,NB Pro" -lavfi 
[0]format=rgb24[a];[1]format=rgb24[b];[b][a]overlay=enable='lt(t,5)':format=rgb 
-f sdl2 -


Michael




Re: [FFmpeg-user] Help: is it possibile to join an image and a live stream?

2022-10-07 Thread Michael Koch

Am 07.10.2022 um 19:32 schrieb i...@mbsoft.biz:

i have 2 inputs

first input image  -i logo.png

second input dshow webcam  -f dshow -i video="my webcam"

is it possible to show the logo for 5 seconds and subsequently start the webcam?


It can also be done with the "streamselect" filter; have a look at the 
documentation for an example.


Michael



Re: [FFmpeg-user] Help: is it possibile to join an image and a live stream?

2022-10-07 Thread Michael Koch

Am 07.10.2022 um 19:32 schrieb i...@mbsoft.biz:

i have 2 inputs

first input image  -i logo.png

second input dshow webcam  -f dshow -i video="my webcam"

is it possible to show the logo for 5 seconds and subsequently start the webcam?


Here is an example for switching between two inputs after 5 seconds:

ffmpeg -f lavfi -i color=red -f lavfi -i color=yellow -lavfi 
blend=all_expr='if(lt(T,5),A,B)' -y -t 10 test.mp4
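
The blend expression's behaviour can be sketched as plain per-frame logic (a toy model, not FFmpeg code; the input labels and 0.2 s frame interval are illustrative):

```python
# Per-frame logic of blend=all_expr='if(lt(T,5),A,B)': every pixel
# comes from the first input while T < 5 seconds, then from the
# second input.

def select_source(t, a, b, switch_at=5.0):
    return a if t < switch_at else b

frames = [select_source(n * 0.2, "red", "yellow") for n in range(50)]
print(frames[0], frames[-1])  # red yellow
```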


Michael



Re: [FFmpeg-user] datascope

2022-09-18 Thread Michael Koch

Am 18.09.2022 um 18:50 schrieb Carl Zwanzig:

On 9/18/2022 5:28 AM, Michael Koch wrote:

It seems the crop filter did change the pixel format to GBR.


Which brings the follow-up question: why did crop...?


geq or crop, one of them.

IIRC, some/many/most/all filters operate on a limited number of pix 
formats and ffmpeg automatically inserts conversion steps as needed. 
If this is true, it would be helpful for the user docs to show the 
native in/out pix formats of each filter.


I agree. That should be in the documentation.

Michael



Re: [FFmpeg-user] datascope

2022-09-18 Thread Michael Koch

Am 18.09.2022 um 13:36 schrieb Michael Koch:
Why does datascope show the color components in the unusual order GBR 
(from top to bottom)? It's confusing.

Can the order be changed to RGB?

ffmpeg -f lavfi -i color=black:s=26x6 -lavfi 
format=rgb24,geq=r='clip(64*mod(X,5),0,255)':g='clip(64*Y,0,255)':b='clip(64*trunc(X/5),0,255)',crop=25:5:0:0,datascope=s=750x180:mode=color2:format=dec 
-frames 1 -y test.png


I found a solution. It seems the crop filter did change the pixel format 
to GBR. It works if format=rgb24 is inserted before datascope.




[FFmpeg-user] datascope

2022-09-18 Thread Michael Koch
Why does datascope show the color components in the unusual order GBR 
(from top to bottom)? It's confusing.

Can the order be changed to RGB?

ffmpeg -f lavfi -i color=black:s=26x6 -lavfi 
format=rgb24,geq=r='clip(64*mod(X,5),0,255)':g='clip(64*Y,0,255)':b='clip(64*trunc(X/5),0,255)',crop=25:5:0:0,datascope=s=750x180:mode=color2:format=dec 
-frames 1 -y test.png






Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-16 Thread Michael Koch

Am 16.09.2022 um 16:51 schrieb Erik Dobberkau:


Apart from width, height, PAR, DAR and SAR, bit depth and subsampling
method, the color encoding parameters are responsible for the visual
representation of an image: color space, primaries, white point, transfer
function, color range, and color matrix (if something other than RGB
encoding is used).


Aren't colorspace and color matrix the same thing? If not, how can the color 
matrix be specified on the command line?

How can white point be specified in the command line?

I did specify -pix_fmt, -colorspace, -color_trc, -color_primaries and 
-color_range. But that seems to be not sufficient. The color in VLC 
player is different for height=576 and height=720.


ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -pix_fmt yuv420p -colorspace 
rgb -color_trc bt709 -color_primaries bt709 -color_range pc -crf 0 
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x720 -pix_fmt yuv420p -colorspace 
rgb -color_trc bt709 -color_primaries bt709 -color_range pc -crf 0 
-vcodec libx264 -t 5 -y out2.mp4


Michael




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-16 Thread Michael Koch

Am 16.09.2022 um 16:34 schrieb Dan:
On Fri, 16 Sep 2022 14:55:20 +0100, Michael Koch 
 wrote:



Am 16.09.2022 um 15:20 schrieb Dan:

showinfo filter shows video frame color metadata to output of console,
nothing less - nothing more. showinfo cant produce incorrect video, if
you ever bothered to read documentation you would know.


By the way, just to clarify, I wasn't talking about the console output, but
the window it opened up alongside (on the right), which showed the green
was darker.


Nobody is wrong. Everybody is correct. Your file is encoded so badly
that it should be immediately removed from existence.


In the end, I used "-colorspace fcc" which seems to have fixed the 
issue.


Please show your complete command line, because for me it doesn't work:
different colors for heights 576 and 720.


Sure:

ffmpeg.exe -f lavfi -i color=0x19be0f:s=400x720 -crf 0 -vcodec libx264 
-t 5 -y -colorspace fcc 720fcc.mp4




For me that doesn't work in Windows. The color in VLC player is 
18,190,18 for height=576 and 13,163,11 for height=720.

It doesn't help if I specify -color_trc, -color_primaries and -color_range.

In VLC player there are the functions Tools / Media Information and Tools / 
Codec Information. What's shown there is exactly the same for height=576 and 
height=720. But the colors are different.
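
The measured values are consistent with a colour-matrix mismatch: when the colorspace is unflagged, players commonly assume BT.601 for SD heights (576) and BT.709 for HD heights (720). A simplified full-range sketch (ignoring chroma subsampling and limited-range quantization) reproduces the measured 13,163,11 almost exactly:

```python
# Full-range R'G'B' <-> Y'CbCr with selectable luma coefficients.

def rgb_to_ycbcr(rgb, kr, kb):
    r, g, b = rgb
    y = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

def ycbcr_to_rgb(ycbcr, kr, kb):
    y, cb, cr = ycbcr
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

BT601 = (0.299, 0.114)      # (Kr, Kb) luma coefficients
BT709 = (0.2126, 0.0722)

src = (0x19, 0xBE, 0x0F)            # 25, 190, 15
enc = rgb_to_ycbcr(src, *BT601)     # encoded with the BT.601 matrix ...
dec = ycbcr_to_rgb(enc, *BT709)     # ... but decoded as BT.709
print([round(c) for c in dec])      # [13, 164, 10] -- close to VLC's 13,163,11
```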


Michael



Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-16 Thread Michael Koch

Am 16.09.2022 um 15:20 schrieb Dan:

showinfo filter shows video frame color metadata to output of console,
nothing less - nothing more. showinfo cant produce incorrect video, if
you ever bothered to read documentation you would know.


By the way, just to clarify, I wasn't talking about the console output, but
the window it opened up alongside (on the right), which showed the green
was darker.


Nobody is wrong. Everybody is correct. Your file is encoded so badly
that it should be immediately removed from existence.


In the end, I used "-colorspace fcc" which seems to have fixed the issue.


Please show your complete command line, because for me it doesn't work: 
different colors for heights 576 and 720.


Michael



Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 13:43 schrieb Dan:
On Thu, 15 Sep 2022 12:07:08 +0100, Michael Koch 
 wrote:



Am 15.09.2022 um 11:53 schrieb Dan:

This seems to work with VLC player:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out2.mp4


Unfortunately, doesn't work for me in MPC or Chrome. Can you try a
height of 720 instead of 578 to see if that fails for you?


Please try this, for me it works with VLC player, the color is exactly
the same:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709
-color_primaries bt709 -color_trc bt709 -crf 0 -vcodec libx264 -t 5 -y
out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x720 -colorspace bt709
-color_primaries bt709 -color_trc bt709 -crf 0 -vcodec libx264 -t 5 -y
out2.mp4


Sure.

The colour is now the same in both (for both Chrome and MPC), except it's
the darker, wrong colour (green=164 instead of 190).


Try some other values instead of "bt709".




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 11:53 schrieb Dan:

This seems to work with VLC player:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out2.mp4


Unfortunately, doesn't work for me in MPC or Chrome. Can you try a
height of 720 instead of 578 to see if that fails for you?


Please try this, for me it works with VLC player, the color is exactly 
the same:


ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 
-color_primaries bt709 -color_trc bt709 -crf 0 -vcodec libx264 -t 5 -y 
out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x720 -colorspace bt709 
-color_primaries bt709 -color_trc bt709 -crf 0 -vcodec libx264 -t 5 -y 
out2.mp4



___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 11:53 schrieb Dan:

This seems to work with VLC player:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out2.mp4


Unfortunately, doesn't work for me in MPC or Chrome. Can you try a
height of 720 instead of 578 to see if that fails for you?


Not the same color, but the difference is small. Not visible without 
measuring.

It becomes even smaller when you add -color_primaries bt709




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 11:41 schrieb Paul B Mahol:

On 9/15/22, Michael Koch  wrote:

Am 15.09.2022 um 11:26 schrieb Dan:

You are right that datascope shows no difference. But the issue is also
reproducible with VLC player.

As well as VLC Player and FFplay, I've tried MediaPlayerClassic,
Google Chrome,
Microsoft Edge and Vegas Pro, and the problem occurs with each of
those. Could
there be a "takes two to tango" thing going on, where both the players
and ffmpeg
are at fault due to miscommunication? It's somewhat hard to conceive
they'd all
interpret the file incorrectly otherwise. Only one player I tried -
Irfanview - interpreted
it correctly.

I can't get to test it with Firefox or Waterfox unfortunately. They
think the file
is corrupt (output mp4 produced using the "-f lavfi -i
color=0x19be0f:s=400x720" technique).
Maybe there's a way around that.

Also, showinfo produced an incorrect colour too. See:
https://i.imgur.com/LF43udT.png

(I use: "ffplay.exe -vf showinfo 576.mp4" ).

In summary, I'd love a workaround at least, or at least some
reassurance that
Chrome et al. will come round to fix this, or that ffmpeg will
communicate the file
better to them.

This seems to work with VLC player:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -colorspace bt709 -crf 0
-vcodec libx264 -t 5 -y out2.mp4

mostly because of extra -colorspace flag supplied.


Wouldn't it be a good idea for FFprobe to show a message 
"colorspace=unknown" if the colorspace isn't set?




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 11:26 schrieb Dan:

You are right that datascope shows no difference. But the issue is also
reproducible with VLC player.


As well as VLC Player and FFplay, I've tried MediaPlayerClassic, 
Google Chrome,
Microsoft Edge and Vegas Pro, and the problem occurs with each of 
those. Could
there be a "takes two to tango" thing going on, where both the players 
and ffmpeg
are at fault due to miscommunication? It's somewhat hard to conceive 
they'd all
interpret the file incorrectly otherwise. Only one player I tried - 
Irfanview - interpreted

it correctly.

I can't get to test it with Firefox or Waterfox unfortunately. They 
think the file
is corrupt (output mp4 produced using the "-f lavfi -i 
color=0x19be0f:s=400x720" technique).

Maybe there's a way around that.

Also, showinfo produced an incorrect colour too. See: 
https://i.imgur.com/LF43udT.png


(I use: "ffplay.exe -vf showinfo 576.mp4" ).

In summary, I'd love a workaround at least, or at least some 
reassurance that
Chrome et al. will come round to fix this, or that ffmpeg will 
communicate the file

better to them.


This seems to work with VLC player:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -colorspace bt709 -crf 0 
-vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -colorspace bt709 -crf 0 
-vcodec libx264 -t 5 -y out2.mp4




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 11:01 schrieb Paul B Mahol:

On 9/15/22, Michael Koch  wrote:

Am 15.09.2022 um 00:30 schrieb Paul B Mahol:

On 9/15/22, Dan  wrote:

zscale=...,format=yuv420p

O okay. I tried that before (except using two -vf commands),
because
I
suspected you might've meant that.

Just tried it again, still no luck. Let me know if I need to tweak
anything:

ffmpeg.exe -f lavfi -i color=0x19be0f:s=400x578 -crf 0 -vcodec libx264
-vf
zscale=w=-1:h=-1,format=yuv420p -t 5 -y "578.mp4"

I'm using Media Player Classic to test the colours, which breaks the pic
using the 578 pixel
height. Chrome actually works with both the 576 and 578 pixel height, but
as
soon as I
change the height to 984, then both Media Player Classic AND Chrome show
it
broken.

Datascope shows the strange and seemingly unrelated 78,4C,44 values for
all
three sizes,
but it does that even without using zscale at all.

Good, we move forward, that are real values encoded in bitstream.

Anything RGB values you complain about have nothing directly related
about ffmpeg.

Also make sure that all software are reinterpreting your files correctly.
The files need to signal correct encoded colorspace/primaries/range/etc
so
it can be correctly displayed on screen.

Below is a Windows batch file for reproducing with FFplay. I did use the
latest version from Gyan. The color difference is clearly visible.

Looks like you had hard time understanding text you quoted above.


ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -vf
zscale,setrange=full,format=yuv420p -colorspace bt709 -color_primaries
bt709 -crf 0 -vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -vf
zscale,setrange=full,format=yuv420p -colorspace bt709 -color_primaries
bt709 -crf 0 -vcodec libx264 -t 5 -y out2.mp4
start ffplay -left 0 -top 0 out1.mp4
start ffplay -left 400 -top 0 out2.mp4

When inspecting the files with FFprobe or ExifTool, I see no differences
except height and small differences in bitrate and filesize.

ffplay is broken/buggy and should not be used here, it will display
differently stuff
all the time, and uses SDL library by default for video output.

Using datascope filter it clearly shows pixels are exactly same.


You are right that datascope shows no difference. But the issue is also 
reproducible with VLC player.




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-15 Thread Michael Koch

Am 15.09.2022 um 00:30 schrieb Paul B Mahol:

On 9/15/22, Dan  wrote:

zscale=...,format=yuv420p

O okay. I tried that before (except using two -vf commands), because
I
suspected you might've meant that.

Just tried it again, still no luck. Let me know if I need to tweak
anything:

ffmpeg.exe -f lavfi -i color=0x19be0f:s=400x578 -crf 0 -vcodec libx264 -vf
zscale=w=-1:h=-1,format=yuv420p -t 5 -y "578.mp4"

I'm using Media Player Classic to test the colours, which breaks the pic
using the 578 pixel
height. Chrome actually works with both the 576 and 578 pixel height, but as
soon as I
change the height to 984, then both Media Player Classic AND Chrome show it
broken.

Datascope shows the strange and seemingly unrelated 78,4C,44 values for all
three sizes,
but it does that even without using zscale at all.

Good, we move forward, that are real values encoded in bitstream.

Anything RGB values you complain about have nothing directly related
about ffmpeg.

Also make sure that all software are reinterpreting your files correctly.
The files need to signal correct encoded colorspace/primaries/range/etc so
it can be correctly displayed on screen.


Below is a Windows batch file for reproducing with FFplay. I did use the 
latest version from Gyan. The color difference is clearly visible.


ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -vf 
zscale,setrange=full,format=yuv420p -colorspace bt709 -color_primaries 
bt709 -crf 0 -vcodec libx264 -t 5 -y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -vf 
zscale,setrange=full,format=yuv420p -colorspace bt709 -color_primaries 
bt709 -crf 0 -vcodec libx264 -t 5 -y out2.mp4

start ffplay -left 0 -top 0 out1.mp4
start ffplay -left 400 -top 0 out2.mp4

When inspecting the files with FFprobe or ExifTool, I see no differences 
except height and small differences in bitrate and filesize.




Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-14 Thread Michael Koch

Am 14.09.2022 um 19:00 schrieb Dan:

That's a great idea to use specified colours directly supported by ffmpeg
and helps narrow down the issue a lot!

Would you count this as a bug?


It seems to be a bug.


If this is a known legacy quirk, it seems one of the ugliest and most
misleading quirks I have seen in a while, and I could imagine it being
responsible for wasting thousands of hours of thought based upon
mis-diagnosed colour-profiling assumptions.

I've spoken with two separate long-time video professionals so far,
and they were both convinced this was a colour profile thing and that
my source images were the problem, not ffmpeg's behaviour, despite my
saying I checked both BMPs in a hex editor and found they were
byte-identical (other than resolution/filesize).


Are you using Windows, by any chance?


yes, I did also test on Windows.


To make things even more confusing, one
of the aforementioned pros says he can't spot a difference on his 
non-Windows
system (presuming Mac, could be Linux) between the two output mp4s' 
colours.


The crazy thing is that when you vertically stack the two videos together,
the colors become identical again in the output:


ffmpeg -i out1.mp4 -i out2.mp4 -lavfi vstack -y out.mp4

Sorry, I have no idea for a solution or workaround.

Michael



Re: [FFmpeg-user] ffmpeg MP4/x264 output colours change when input source is different resolution (bug?)

2022-09-14 Thread Michael Koch

Am 14.09.2022 um 11:21 schrieb Dan:

Using the latest 5.1.1 "essentials build" by www.gyan.dev.

Hi all, I'm a beginner to ffmpeg so I'm having a hard time believing 
that a utility so old and so widely used has such a fundamental bug, 
but the evidence is staring me in the face and leads me to no other 
conclusion.


Thankfully, it's incredibly easy to replicate. I want to convert
numerous frames to make an animation, but I've simplified the problem
down to using a single image to make a '1 frame video' for
debugging purposes.


Simply perform this command line:

ffmpeg.exe -i original.png -crf 0 -vcodec libx264 output.mp4

...With this "original.png" ("fC2Tj") image: 
https://i.stack.imgur.com/5jkct.png


And this command line:

ffmpeg.exe -i doubleHeight.png -crf 0 -vcodec libx264 output.mp4

...On this "doubleHeight" ("RGIvA") image: 
https://i.stack.imgur.com/PLdsb.png


The double height version is darker than it should be. I've checked 
the resulting video in both Media Player Classic and Chrome.


The issue can be reproduced without input images as follows:

ffmpeg -f lavfi -i color=0x19be0f:s=400x576 -crf 0 -vcodec libx264 -t 5 
-y out1.mp4
ffmpeg -f lavfi -i color=0x19be0f:s=400x578 -crf 0 -vcodec libx264 -t 5 
-y out2.mp4


The color seems to be brighter if the height is 576 or smaller, and
darker if the height is 578 or larger.
It's clearly visible when you play both videos side by side. I tested
with VLC and FFplay. I don't see how zscale could fix this issue.
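The 576-pixel threshold resembles the common SD/HD heuristic where unflagged
content is assumed to be BT.601 up to 576 lines and BT.709 above (my
assumption; the thread does not confirm this is the cause). The two matrices
weigh this particular green quite differently, which would explain a visible
brightness change. A quick sketch of the luma under both matrices (full
range, illustration only; awk is used for the float math):

```shell
# Luma of 0x19be0f under BT.601 and BT.709 coefficients (sketch)
R=25; G=190; B=15   # 0x19, 0xbe, 0x0f
Y601=$(awk -v r=$R -v g=$G -v b=$B 'BEGIN{printf "%.0f", 0.299*r+0.587*g+0.114*b}')
Y709=$(awk -v r=$R -v g=$G -v b=$B 'BEGIN{printf "%.0f", 0.2126*r+0.7152*g+0.0722*b}')
echo "BT.601 luma: $Y601, BT.709 luma: $Y709"
```

If the encoder stores one luma and the player reconstructs RGB with the other
matrix, the displayed color shifts accordingly.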


Michael



Re: [FFmpeg-devel] [PATCH]lavfi/rotate: Fix undefined behaviour

2022-09-04 Thread Michael Koch

Also, shouldn't the same change be done to interpolate_bilinear8?

I was unable to reproduce with 8-bit input.


When I tested it, the issue was reproducible only with 14-bit and 16-bit input. 
12-bit did work.

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] multiple flat videos in a equirectangular projection - v360 filter

2022-07-17 Thread Michael Koch

Am 17.07.2022 um 12:09 schrieb Denis Połeć:



Am 16.07.2022 um 22:12 schrieb Michael Koch :

Am 16.07.2022 um 20:57 schrieb Denis Połeć:

Am 16.07.2022 um 11:46 schrieb Paul B Mahol :

On Sat, Jul 16, 2022 at 11:22 AM Denis Połeć  wrote:


Hello,
I wouldn't call myself a beginner, but I still need a little bit to become
a pro. :)
I have a question that might be easy to answer.

I am working on a script to bring multiple flat videos into an
equirectangular projection using the v360 filter.
I have the problem that the edges of the input are very jagged.

This does not give a good result when I play the equirectangular
projection in a 360 player. I have also tried different interpolation
modes, which do not lead to a better result.
Does anyone have an idea how I can avoid this? Is there a better way to
do this task?


Here is an example code with the result. The video in the example has a
resolution of 1080p:


ffmpeg -i BRAIN.mp4 -lavfi "\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw='0':alpha_mask=1[fg1];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:alpha_mask=1[fg2];[fg2][fg1]overlay[a];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:alpha_mask=1[fg3];[fg3][a]overlay[b];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:alpha_mask=1[fg4];[fg4][b]overlay[c];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=45:alpha_mask=1[fg5];[fg5][c]overlay[d];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=45:alpha_mask=1[fg6];[fg6][d]overlay[e];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=-45:alpha_mask=1[fg7];[fg7][e]overlay[f];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=-45:alpha_mask=1[fg8];[fg8][f]overlay[g];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-45:alpha_mask=1[fg9];[fg9][g]overlay[h];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=-45:alpha_mask=1[fg10];[fg10][h]overlay[i];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=45:alpha_mask=1[fg11];[fg11][i]overlay[j];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=45:alpha_mask=1[fg12];[fg12][j]overlay[k];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=90:alpha_mask=1[fg13];[fg13][k]overlay[l];\
[0]drawbox=w=1:h=1:color=black,v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-90:alpha_mask=1[fg14];[fg14][l]overlay"
-q:v 4 -vframes 1 -y test11.jpg

Your filtergraph is extremely inefficient with that many cascaded
overlays and gives poor results. For proper stitching, the borders need
a small transition from full opacity to no opacity.

To add small opacity transitions at the borders, you could use blurring
filters, for example on the alpha plane only.

How could I achieve that? How can I blur just the borders?

You could apply some preprocessing to your input video, before you feed it to 
your script.
The trick is to multiply all pixels at the edges by 0.5. This can be done with 
a white mask which has gray pixels at the edge.


Thank you for your reply.


ffmpeg -f lavfi -i color=color=white:size=1920x1080 -lavfi 
drawbox=color=gray:t=1 -frames 1 -y mask.png

This works.


ffmpeg -f lavfi -i testsrc2=size=1920x1080 -i mask.png -lavfi multiply=offset=0 
-frames 1 -y test.png

But ffmpeg says multiply filter doesn’t exist.

I tried that:
ffmpeg -f lavfi -i testsrc2=size=1920x1080 -i mask.png -lavfi 
blend=all_mode=multiply -frames 1 -y test.png

The output turns green.


Convert the pixel format of both inputs to RGB24 before using the blend 
filter. Multiply makes no sense in YUV format.
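Spelled out, that suggestion could look like the following (a sketch, untested
here; input names taken from the earlier commands in this thread):

```shell
# Convert both inputs to rgb24 before the multiply blend (sketch)
graph='[0]format=rgb24[a];[1]format=rgb24[b];[a][b]blend=all_mode=multiply'
echo "$graph"
# Usage (hypothetical, inputs as in the earlier command):
#   ffmpeg -f lavfi -i testsrc2=size=1920x1080 -i mask.png -lavfi "$graph" -frames 1 -y test.png
```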


Michael



Re: [FFmpeg-user] multiple flat videos in a equirectangular projection - v360 filter

2022-07-17 Thread Michael Koch

Am 17.07.2022 um 12:09 schrieb Denis Połeć:



Am 16.07.2022 um 22:12 schrieb Michael Koch :

Am 16.07.2022 um 20:57 schrieb Denis Połeć:

Am 16.07.2022 um 11:46 schrieb Paul B Mahol :

On Sat, Jul 16, 2022 at 11:22 AM Denis Połeć  wrote:


Hello,
I wouldn't call myself a beginner, but I still need a little bit to become
a pro. :)
I have a question that might be easy to answer.

I am working on a script to bring multiple flat videos into an
equirectangular projection using the v360 filter.
I have the problem that the edges of the input are very jagged.

This does not give a good result when I play the equirectangular
projection in a 360 player. I have also tried different interpolation
modes, which do not lead to a better result.
Does anyone have an idea how I can avoid this? Is there a better way to
do this task?


Here is an example code with the result. The video in the example has a
resolution of 1080p:


ffmpeg -i BRAIN.mp4 -lavfi "\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw='0':alpha_mask=1[fg1];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:alpha_mask=1[fg2];[fg2][fg1]overlay[a];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:alpha_mask=1[fg3];[fg3][a]overlay[b];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:alpha_mask=1[fg4];[fg4][b]overlay[c];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=45:alpha_mask=1[fg5];[fg5][c]overlay[d];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=45:alpha_mask=1[fg6];[fg6][d]overlay[e];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=-45:alpha_mask=1[fg7];[fg7][e]overlay[f];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=-45:alpha_mask=1[fg8];[fg8][f]overlay[g];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-45:alpha_mask=1[fg9];[fg9][g]overlay[h];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=-45:alpha_mask=1[fg10];[fg10][h]overlay[i];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=45:alpha_mask=1[fg11];[fg11][i]overlay[j];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=45:alpha_mask=1[fg12];[fg12][j]overlay[k];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=90:alpha_mask=1[fg13];[fg13][k]overlay[l];\
[0]drawbox=w=1:h=1:color=black,v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-90:alpha_mask=1[fg14];[fg14][l]overlay"
-q:v 4 -vframes 1 -y test11.jpg

Your filtergraph is extremely inefficient with that many cascaded
overlays and gives poor results. For proper stitching, the borders need
a small transition from full opacity to no opacity.

To add small opacity transitions at the borders, you could use blurring
filters, for example on the alpha plane only.

How could I achieve that? How can I blur just the borders?

You could apply some preprocessing to your input video, before you feed it to 
your script.
The trick is to multiply all pixels at the edges by 0.5. This can be done with 
a white mask which has gray pixels at the edge.


Thank you for your reply.


ffmpeg -f lavfi -i color=color=white:size=1920x1080 -lavfi 
drawbox=color=gray:t=1 -frames 1 -y mask.png

This works.


ffmpeg -f lavfi -i testsrc2=size=1920x1080 -i mask.png -lavfi multiply=offset=0 
-frames 1 -y test.png

But ffmpeg says multiply filter doesn’t exist.


Get a newer version. The multiply filter was added not so long ago.

Michael



Re: [FFmpeg-user] multiple flat videos in a equirectangular projection - v360 filter

2022-07-16 Thread Michael Koch

Am 16.07.2022 um 20:57 schrieb Denis Połeć:



Am 16.07.2022 um 11:46 schrieb Paul B Mahol :

On Sat, Jul 16, 2022 at 11:22 AM Denis Połeć  wrote:


Hello,
I wouldn't call myself a beginner, but I still need a little bit to become
a pro. :)
I have a question that might be easy to answer.

I am working on a script to bring multiple flat videos into an
equirectangular projection using the v360 filter.
I have the problem that the edges of the input are very jagged.

This does not give a good result when I play the equirectangular
projection in a 360 player. I have also tried different interpolation
modes, which do not lead to a better result.
Does anyone have an idea how I can avoid this? Is there a better way to
do this task?


Here is an example code with the result. The video in the example has a
resolution of 1080p:


ffmpeg -i BRAIN.mp4 -lavfi "\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw='0':alpha_mask=1[fg1];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:alpha_mask=1[fg2];[fg2][fg1]overlay[a];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:alpha_mask=1[fg3];[fg3][a]overlay[b];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:alpha_mask=1[fg4];[fg4][b]overlay[c];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=45:alpha_mask=1[fg5];[fg5][c]overlay[d];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=45:alpha_mask=1[fg6];[fg6][d]overlay[e];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=-45:alpha_mask=1[fg7];[fg7][e]overlay[f];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=-45:alpha_mask=1[fg8];[fg8][f]overlay[g];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-45:alpha_mask=1[fg9];[fg9][g]overlay[h];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=90:roll=-45:alpha_mask=1[fg10];[fg10][h]overlay[i];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=-90:roll=45:alpha_mask=1[fg11];[fg11][i]overlay[j];\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=180:pitch=45:alpha_mask=1[fg12];[fg12][j]overlay[k];\
\

[0]v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=90:alpha_mask=1[fg13];[fg13][k]overlay[l];\
[0]drawbox=w=1:h=1:color=black,v360=input=flat:output=e:id_fov=45:w=5120:h=2560:yaw=0:pitch=-90:alpha_mask=1[fg14];[fg14][l]overlay"
-q:v 4  -vframes 1 -y test11.jpg

Your filtergraph is extremely inefficient with that many cascaded
overlays and gives poor results. For proper stitching, the borders need
a small transition from full opacity to no opacity.

To add small opacity transitions at the borders, you could use blurring
filters, for example on the alpha plane only.

How could I achieve that? How can I blur just the borders?


You could apply some preprocessing to your input video, before you feed 
it to your script.
The trick is to multiply all pixels at the edges by 0.5. This can be 
done with a white mask which has gray pixels at the edge.


ffmpeg -f lavfi -i color=color=white:size=1920x1080 -lavfi 
drawbox=color=gray:t=1 -frames 1 -y mask.png


ffmpeg -f lavfi -i testsrc2=size=1920x1080 -i mask.png -lavfi 
multiply=offset=0 -frames 1 -y test.png


Then use test.png as input for your script.

Michael



Re: [FFmpeg-user] Multiple xfade in one run

2022-07-04 Thread Michael Koch

Am 04.07.2022 um 18:46 schrieb Cecil Westerhof via ffmpeg-user:

Paul B Mahol  writes:


On Mon, Jul 4, 2022 at 6:15 PM Cecil Westerhof via ffmpeg-user <
ffmpeg-user@ffmpeg.org> wrote:


Some time ago I was experimenting with xfade. I wanted to know how to
use several in one run. Now I really needed it, so I did some digging
and found this:
  ffmpeg -y  \
   -i input0.mkv \
   -i input1.mkv \
   -i input2.mkv \
   -i input3.mkv \
   -i input4.mkv \
   -i input5.mkv \
   -i input6.mkv \
   -i input7.mkv \
   -i input8.mkv \
   -i input9.mkv \
   -vcodec libx264   \
   -crf 26\
   -preset veryfast  \
   -filter_complex "
 [0:a][1:a] acrossfade=d=4[a1];
 [0:v][1:v] xfade=transition=hlslice:
   duration=4:
   offset=308[v1];

 [a1][2:a] acrossfade=d=4[a2];
 [v1][2:v] xfade=transition=vertopen:
   duration=4:
   offset=357[v2];

 [a2][3:a] acrossfade=d=4[a3];
 [v2][3:v] xfade=transition=circlecrop:
   duration=4:
   offset=533[v3];

 [a3][4:a] acrossfade=d=4[a4];
 [v3][4:v] xfade=transition=rectcrop:
   duration=4:
   offset=1016[v4];

 [a4][5:a] acrossfade=d=4[a5];
 [v4][5:v] xfade=transition=slideup:
   duration=4:
   offset=1158[v5];

 [a5][6:a] acrossfade=d=4[a6];
 [v5][6:v] xfade=transition=wiperight:
   duration=4:
   offset=1473[v6];

 [a6][7:a] acrossfade=d=4[a7];
 [v6][7:v] xfade=transition=horzclose:
   duration=4:
   offset=1661[v7];

 [a7][8:a] acrossfade=d=4[a8];
 [v7][8:v] xfade=transition=diagbl:
   duration=4:
   offset=2082[v8];

 [a8][9:a] acrossfade=d=4[a9];
 [v8][9:v] xfade=transition=slideright:
   duration=4:
   offset=2211[v9]
   " \
   -map '[v9]' -map '[a9]'   \
   output.mkv

I hope there are better ways, because there are some problems with it.
For example it needs a lot of memory. (24 GB)


Could use (a)movie filters and only use such filter when actually needed in
graph.

Concerning ffmpeg, I am still a newbie. What do you mean by this?



I also didn't understand it.

Michael
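For what it's worth, the accumulating offsets in a chain like the one above
follow a simple rule: each offset is the previous offset plus the next
segment's length minus the fade duration. A sketch with hypothetical clip
lengths (chosen here so the first two offsets match the 308 and 357 in the
command above):

```shell
# Offsets for chained xfades: offset_n = offset_(n-1) + len_n - duration
dur=4
len1=312; len2=53         # hypothetical lengths (s) of the first two clips
o1=$(( len1 - dur ))
o2=$(( o1 + len2 - dur ))
graph="[0:v][1:v]xfade=transition=hlslice:duration=${dur}:offset=${o1}[v1];"
graph="${graph}[v1][2:v]xfade=transition=vertopen:duration=${dur}:offset=${o2}[v2]"
echo "$graph"
```

Generating the graph from a list of lengths avoids hand-calculating each
offset, though it does not address the memory usage of the cascaded filters.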



Re: [FFmpeg-user] show an image with FFplay for 5 seconds

2022-07-01 Thread Michael Koch

Am 01.07.2022 um 16:03 schrieb Michael Koch:

Hello,

is it possible to show an image with FFplay for 5 seconds, and then 
exit? I did try this command in a Windows batch file, but it doesn't 
stop after 5 seconds. I drag and drop the image on the batch file:


ffplay -autoexit -loop 0 -t 5 %1


A workaround with -loop 125 works.

The problem in my first command line seems to be that -loop 0 has a 
higher priority than -t 5. Shouldn't the -t option have higher priority?


Michael



[FFmpeg-user] show an image with FFplay for 5 seconds

2022-07-01 Thread Michael Koch

Hello,

is it possible to show an image with FFplay for 5 seconds, and then 
exit? I did try this command in a Windows batch file, but it doesn't 
stop after 5 seconds. I drag and drop the image on the batch file:


ffplay -autoexit -loop 0 -t 5 %1

Michael



Re: [FFmpeg-user] remap with pixel interpolation

2022-06-29 Thread Michael Koch

Am 29.06.2022 um 09:15 schrieb Paul B Mahol:

On Wed, Jun 29, 2022 at 8:56 AM Michael Koch 
wrote:


Suggestion for improvement:
Add pixel interpolation to the remap filter. For example if the output
size is 800x600, multiply the values in the mapping files by 16 so that
the range is 12800x9600, and use the 4 least significant bits for pixel
interpolation.



Very bad suggestion.

Better make remap filter use float pixel format for map files.


Which image format supports float grayscale?

Michael



[FFmpeg-user] remap with pixel interpolation

2022-06-29 Thread Michael Koch

Suggestion for improvement:
Add pixel interpolation to the remap filter. For example if the output 
size is 800x600, multiply the values in the mapping files by 16 so that 
the range is 12800x9600, and use the 4 least significant bits for pixel 
interpolation.
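As a sketch of the fixed-point idea (hypothetical pixel values; 4 fractional
bits as proposed above), the interpolation along one axis would look like:

```shell
# 1-D bilinear step with 4-bit fixed-point map values (sketch).
# x is a map entry scaled by 16: integer part x>>4, fraction x&15.
p0=100; p1=116            # two neighbouring source pixels (hypothetical)
x=37                      # 37 = 2*16 + 5, i.e. position 2 + 5/16
frac=$(( x & 15 ))
v=$(( ( p0 * (16 - frac) + p1 * frac + 8 ) / 16 ))   # +8 rounds to nearest
echo "$v"
```

A full 2-D remap would apply this once per axis using the four surrounding
pixels.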


Michael



[FFmpeg-user] xfade

2022-06-21 Thread Michael Koch

Hi,

I don't understand the examples in https://trac.ffmpeg.org/wiki/Xfade

If the first input has length 5s and the offset is 4.5s and the duration
is 1s, then where does the first input come from between t=5s and t=5.5s?


Michael



Re: [FFmpeg-user] Using several audios after each-other in a video

2022-06-16 Thread Michael Koch
FMA3 BMI1
[libx264 @ 0x55ce8345ab40] profile High, level 3.2, 4:2:0, 8-bit
[libx264 @ 0x55ce8345ab40] 264 - core 160 r3011 cde9a93 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'sander2.mp4':
   Metadata:
 encoder : Lavf58.45.100
 Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x960 
[SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 12800 tbn, 25 tbc
 Metadata:  
encoder : Lavc58.91.100 libx264
 Side data:
   cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 
128 kb/s
 Metadata:
   encoder : Lavc58.91.100 aac
[swscaler @ 0x55ce83cb8ec0] deprecated pixel format used, make sure you did set 
range correctly
[swscaler @ 0x55ce840a0900] deprecated pixel format used, make sure you did set 
range correctly
[image2 @ 0x55ce8343e0c0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[swscaler @ 0x55ce84215400] deprecated pixel format used, make sure you did set 
range correctly
.
.
.
frame=12388 fps= 27 q=-1.0 Lsize=   86187kB time=00:08:15.40 
bitrate=1425.2kbits/s speed=1.08x
video:84124kB audio:1878kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: 0.214622%
[libx264 @ 0x55ce8345ab40] frame I:50Avg QP:16.28  size:315675
[libx264 @ 0x55ce8345ab40] frame P:5756  Avg QP:20.62  size: 12073
[libx264 @ 0x55ce8345ab40] frame B:6582  Avg QP:22.45  size:   131
[libx264 @ 0x55ce8345ab40] consecutive B-frames: 28.2%  2.4%  1.7% 67.7%
[libx264 @ 0x55ce8345ab40] mb I  I16..4: 11.1% 59.1% 29.8%
[libx264 @ 0x55ce8345ab40] mb P  I16..4:  0.1%  0.3%  0.0%  P16..4: 31.9%  4.1%  7.0%  0.0%  0.0%  skip:56.6%
[libx264 @ 0x55ce8345ab40] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  2.4%  0.0%  0.0%  direct: 0.0%  skip:97.6%  L0:13.0% L1:85.8% BI: 1.2%
[libx264 @ 0x55ce8345ab40] 8x8 transform intra:65.0% inter:61.4%
[libx264 @ 0x55ce8345ab40] coded y,uvDC,uvAC intra: 66.5% 70.1% 56.9% inter: 
10.3% 11.4% 2.1%
[libx264 @ 0x55ce8345ab40] i16 v,h,dc,p: 65% 10%  7% 19%
[libx264 @ 0x55ce8345ab40] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 17% 24%  6%  6%  
6%  8%  7%  9%
[libx264 @ 0x55ce8345ab40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 14% 17%  9%  8% 10%  
9% 11% 10% 12%
[libx264 @ 0x55ce8345ab40] i8c dc,h,v,p: 60% 18% 13%  8%
[libx264 @ 0x55ce8345ab40] Weighted P-Frames: Y:10.7% UV:7.2%
[libx264 @ 0x55ce8345ab40] ref P L0: 68.7% 25.7%  5.3%  0.3%  0.0%
[libx264 @ 0x55ce8345ab40] ref B L0: 92.4%  7.1%  0.5%
[libx264 @ 0x55ce8345ab40] ref B L1: 89.3% 10.7%
[libx264 @ 0x55ce8345ab40] kb/s:1390.74
[aac @ 0x55ce8347e7c0] Qavg: 533.159




--
**
  ASTRO ELECTRONIC   Dipl.-Ing. Michael Koch
   Raabestr. 43   37412 Herzberg
  www.astro-electronic.de
  Tel. +49 5521 854265   Fax +49 5521 854266
**



Re: [FFmpeg-user] streamselect in a realtime application

2022-06-16 Thread Michael Koch

Am 16.06.2022 um 09:33 schrieb Gyan Doshi:



On 2022-06-16 12:45 pm, Michael Koch wrote:
I would like to understand why in some cases -loop 1 is required 
before the input files, and in some cases it can be omitted.


-loop option is specific to the image sequence demuxer. Without it, a 
single image input is a video stream of 1 frame length.


But why does this example work without  -loop 1? Is it an undocumented 
feature of the remap filter, that it keeps the last mapping file alive?

ffmpeg -i in.mp4 -i xmap.pgm -i ymap.pgm -lavfi [0][1][2]remap out.mp4

You would apply loop when you need to keep alive an image sequence 
input at a timestamp beyond its natural length, usually a fade.
With the tpad or loop filters, this can be done inside a filtergraph 
instead of at the demuxer level.




Which method is better, -loop 1 before the input file, or doing it in 
the filtergraph?


Michael



Re: [FFmpeg-user] streamselect in a realtime application

2022-06-16 Thread Michael Koch

Am 13.06.2022 um 22:20 schrieb Michael Koch:

Hello,

I'm using FFmpeg for a realtime wormhole simulation in a planetarium. 
There are two inputs:

a) An equirectangular video for the environment.
b) A realtime stream from a 360° camera, this is mapped inside the 
wormhole.


Both inputs are stitched together with xstack, and then a suitable 
remap function is applied. The output is in fisheye format and 
streamed to the planetarium projector. So far, that's working fine.


But I also want to switch the wormhole on and off at certain times (t1 
and t2). To do this, I did try two approaches:


Approach 1:
Two streams are generated, one without and the other with wormhole. 
Then one of them is selected by a streamselect filter. The drawback of 
this approach is that double CPU power is required. Both streams must 
be generated by remap filters, although only one of them is used.


Approach 2:
I use two sets of xmap and ymap files for the remap filter. One set 
without and the other set with wormhole:
[xmap1][xmap2]streamselect@1=map=0[xmap];[ymap1][ymap2]streamselect@2=map=0[ymap];[a][xmap][ymap]remap... 

The drawback of this approach is that I must use -loop 1 before each 
of the four mapping file inputs. Which means the files are reloaded 
for each frame. This is a huge waste of time. If I don't use -loop 1, 
then it seems the streamselect filters don't work.


I would like to understand why in some cases -loop 1 is required before 
the input files, and in some cases it can be omitted.


This works without -loop 1:
ffmpeg -i in.mp4 -i xmap.pgm -i ymap.pgm -lavfi [0][1][2]remap out.mp4

When I use streamselect to select one of two mapping files, it doesn't
work. There is no error message, but it always uses the first input,
regardless of which commands I send to the streamselect filters:
ffmpeg -i in.mp4 -i x1.pgm -i x2.pgm -i y1.pgm -i y2.pgm -lavfi 
[1][2]streamselect@1=map=0[xmap];[3][4]streamselect@2=map=0[ymap];[0][xmap][ymap]remap 
out.mp4


However the same example does work when I add -loop 1 before each 
mapping file:
ffmpeg -i in.mp4 -loop 1 -i x1.pgm -loop 1 -i x2.pgm -loop 1 -i y1.pgm 
-loop 1 -i y2.pgm -lavfi 
[1][2]streamselect@1=map=0[xmap];[3][4]streamselect@2=map=0[ymap];[0][xmap][ymap]remap 
out.mp4


Another question:
Does -loop 1 mean the file is reloaded from disk for each frame? Or is 
it copied from an internal buffer?


Michael



Re: [FFmpeg-user] Use concat filter with a fade

2022-06-14 Thread Michael Koch

Am 14.06.2022 um 16:30 schrieb Cecil Westerhof via ffmpeg-user:

Michael Koch  writes:


Am 14.06.2022 um 15:33 schrieb Cecil Westerhof via ffmpeg-user:

Michael Koch  writes:


Am 14.06.2022 um 13:47 schrieb Cecil Westerhof via ffmpeg-user:

Sometimes I have to cut parts out of a video. I now use for this (bash on 
Debian):
   ffmpeg -y\
  -ss ${videoStart} -to ${cutStart} -i ${inputFile} \
  -ss ${cutEnd} -to ${videoEnd} -i ${inputFile} \
  -vcodec libx264   \
  -crf 26\
  -acodec libmp3lame -qscale:a 9\
  -preset veryfast  \
  -lavfi "concat=n=2:v=1:a=1"   \
  -an ${outputFile}

But the cut from one part to another is a bit abrupt. Is there a
possibility to smooth it with something like a fade?

you can use the xfade filter. :
https://www.ffmpeg.org/ffmpeg-all.html#xfade
https://trac.ffmpeg.org/wiki/Xfade

I am now using:
  offset=$((${cutStart} - ${videoStart} - ${duration}))
  xfade=xfade=transition=slideleft:duration=${duration}:offset=${offset}
  time ffmpeg -y \
-ss ${videoStart} -to ${cutStart} -i ${inputFile}\
-ss ${cutEnd} -to ${videoEnd} -i ${inputFile}\
-vcodec libx264  \
    -crf 26   \
-acodec libmp3lame -qscale:a 9   \
-preset veryfast \
-filter_complex ${xfade} \
${outputFile}

But I have a major and minor problem.
The major problem is that I do not have audio from the second part of
the video.

Audio has its own filter: acrossfade

This seems to work:
 
xfade="xfade=transition=slideleft:duration=${duration}:offset=${offset};acrossfade=d=${duration}"



The minor problem is that I have to calculate the offset. It would be
nice if I could use -duration, but that does not work sadly.

As far as I know this isn't yet implemented.

That sounds like it is going to be implemented. Or am I reading too
much into this sentence?


I really don't know.

Michael



Re: [FFmpeg-user] Use concat filter with a fade

2022-06-14 Thread Michael Koch

Am 14.06.2022 um 15:33 schrieb Cecil Westerhof via ffmpeg-user:

Michael Koch  writes:


Am 14.06.2022 um 13:47 schrieb Cecil Westerhof via ffmpeg-user:

Sometimes I have to cut parts out of a video. I now use for this (bash on 
Debian):
  ffmpeg -y\
 -ss ${videoStart} -to ${cutStart} -i ${inputFile} \
 -ss ${cutEnd} -to ${videoEnd} -i ${inputFile} \
 -vcodec libx264   \
 -crf 26\
 -acodec libmp3lame -qscale:a 9\
 -preset veryfast  \
 -lavfi "concat=n=2:v=1:a=1"   \
 -an ${outputFile}

But the cut from one part to another is a bit abrupt. Is there a
possibility to smooth it with something like a fade?

you can use the xfade filter. :
https://www.ffmpeg.org/ffmpeg-all.html#xfade
https://trac.ffmpeg.org/wiki/Xfade

I am now using:
 offset=$((${cutStart} - ${videoStart} - ${duration}))
 xfade=xfade=transition=slideleft:duration=${duration}:offset=${offset}
 time ffmpeg -y \
   -ss ${videoStart} -to ${cutStart} -i ${inputFile}\
   -ss ${cutEnd} -to ${videoEnd} -i ${inputFile}\
   -vcodec libx264  \
   -crf 26   \
   -acodec libmp3lame -qscale:a 9   \
   -preset veryfast \
   -filter_complex ${xfade} \
   ${outputFile}

But I have a major and minor problem.
The major problem is that I do not have audio from the second part of
the video.


Audio has its own filter: acrossfade
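
A combined filtergraph could look like this; a minimal sketch, assuming two placeholder input files and a 1-second fade at offset 9 (the filenames and all timing values are assumptions, not from the thread):

```shell
# Sketch (untested): crossfade video with xfade and audio with
# acrossfade in a single -filter_complex. Filenames and timing
# values are placeholders.
FC='[0:v][1:v]xfade=transition=slideleft:duration=1:offset=9[v];'
FC="$FC[0:a][1:a]acrossfade=d=1[a]"
echo ffmpeg -i part1.mp4 -i part2.mp4 -filter_complex "$FC" \
     -map '[v]' -map '[a]' out.mp4
```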


The minor problem is that I have to calculate the offset. It would be
nice if I could use -duration, but that does not work sadly.


As far as I know this isn't yet implemented.


By the way: how should I do it when I want to use five parts of the video?


I haven't tested this, but I think you must build a binary tree:
[0:v][1:v]xfade[a];[2:v][3:v]xfade[b];[a][b]xfade
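
Whether built as a tree or as a simple chain, the graph for many parts can be generated programmatically. A hedged sketch in Python that emits a chained xfade graph; the cumulative-offset formula, the segment durations, and the chaining approach itself are assumptions and untested with real files:

```python
# Sketch (untested): build a chained xfade filtergraph for N video
# inputs. Assumes input i lasts durations[i] seconds and every
# transition lasts `fade` seconds; offsets accumulate along the chain.

def xfade_chain(durations, fade, transition="slideleft"):
    parts = []
    total = durations[0]          # running end time of the chain so far
    prev = "[0:v]"
    for i in range(1, len(durations)):
        out = f"[x{i}]" if i < len(durations) - 1 else ""
        offset = total - fade     # transition starts `fade` s before the end
        parts.append(f"{prev}[{i}:v]xfade=transition={transition}"
                     f":duration={fade}:offset={offset:.3f}{out}")
        total = offset + durations[i]
        prev = f"[x{i}]"
    return ";".join(parts)

print(xfade_chain([10, 8, 6], 1))
```

Feeding the printed graph to -filter_complex with the corresponding inputs would then crossfade them in sequence.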

Michael



Re: [FFmpeg-user] Use concat filter with a fade

2022-06-14 Thread Michael Koch

On 14.06.2022 at 13:47, Cecil Westerhof via ffmpeg-user wrote:

Sometimes I have to cut parts out of a video. I now use for this (bash on 
Debian):
 ffmpeg -y\
-ss ${videoStart} -to ${cutStart} -i ${inputFile} \
-ss ${cutEnd} -to ${videoEnd} -i ${inputFile} \
-vcodec libx264   \
-crf26\
-acodec libmp3lame -qscale:a 9\
-preset veryfast  \
-lavfi "concat=n=2:v=1:a=1"   \
-an ${outputFile}

But the cut from one part to another is a bit abrupt. Is there a
possibility to smooth it with something like a fade?


You can use the xfade filter:
https://www.ffmpeg.org/ffmpeg-all.html#xfade
https://trac.ffmpeg.org/wiki/Xfade

Michael



[FFmpeg-user] streamselect in a realtime application

2022-06-13 Thread Michael Koch

Hello,

I'm using FFmpeg for a realtime wormhole simulation in a planetarium. 
There are two inputs:

a) An equirectangular video for the environment.
b) A realtime stream from a 360° camera, this is mapped inside the wormhole.

Both inputs are stitched together with xstack, and then a suitable remap 
function is applied. The output is in fisheye format and streamed to the 
planetarium projector. So far, that's working fine.


But I also want to switch the wormhole on and off at certain times (t1 
and t2). To do this, I did try two approaches:


Approach 1:
Two streams are generated, one without and the other with wormhole. Then 
one of them is selected by a streamselect filter. The drawback of this 
approach is that double CPU power is required. Both streams must be 
generated by remap filters, although only one of them is used.


Approach 2:
I use two sets of xmap and ymap files for the remap filter. One set 
without and the other set with wormhole:

[xmap1][xmap2]streamselect@1=map=0[xmap];[ymap1][ymap2]streamselect@2=map=0[ymap];[a][xmap][ymap]remap...
The drawback of this approach is that I must use -loop 1 before each of 
the four mapping file inputs, which means the files are reloaded for 
each frame. This is a huge waste of time. If I don't use -loop 1, then 
it seems the streamselect filters don't work.


Any ideas how this could be made faster?

Michael



Re: [FFmpeg-user] concat audio files

2022-06-11 Thread Michael Koch

On 11.06.2022 at 11:46, Nicolas George wrote:

Michael Koch (12022-06-11):

I always get this error message:
Stream specifier '' in filtergraph description [0][1]concat=n=2:a=1 matches
no streams.

Where did you find this "[number]" syntax, and has it ever worked for
you?


yes, it works after I added v=0. Isn't this the normal syntax to specify 
the two inputs of the concat filter? In this case [0][1] could be 
omitted because there are only these two inputs. [0:0][1:0] also works.


Michael



Re: [FFmpeg-user] concat audio files

2022-06-11 Thread Michael Koch

On 11.06.2022 at 11:45, Gyan Doshi wrote:



On 2022-06-11 03:02 pm, Michael Koch wrote:

I want to concat two audio files. What's wrong with this command line?

ffmpeg -i birds1.wav -i birds1.wav -lavfi [0][1]concat=n=2:a=1 -c:a 
aac -y sound.mp4


Default value for concat v is 1. So you need to set it to 0.

concat=n=2:a=1:v=0


oh yes, that's it.

Thank you!
Michael
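
For reference, the full corrected command; a sketch built as a string here, with the filenames from the post and the v=0 fix applied:

```shell
# Sketch: the original command with v=0 added, so the concat filter
# expects two audio-only inputs.
CMD='ffmpeg -i birds1.wav -i birds1.wav'
CMD="$CMD -lavfi [0:a][1:a]concat=n=2:v=0:a=1 -c:a aac -y sound.mp4"
echo "$CMD"
```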



[FFmpeg-user] concat audio files

2022-06-11 Thread Michael Koch

I want to concat two audio files. What's wrong with this command line?

ffmpeg -i birds1.wav -i birds1.wav -lavfi [0][1]concat=n=2:a=1 -c:a aac 
-y sound.mp4


I always get this error message:
Stream specifier '' in filtergraph description [0][1]concat=n=2:a=1 
matches no streams.


The console output is below.

Michael



D:\Bochum\audio>ffmpeg -i birds1.wav -i birds1.wav -lavfi 
[0][1]concat=n=2:a=1 -c:a aac -y sound.mp4
ffmpeg version 2022-06-06-git-73302aa193-essentials_build-www.gyan.dev 
Copyright (c) 2000-2022 the FFmpeg developers

  built with gcc 11.3.0 (Rev1, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static 
--disable-w32threads --disable-autodetect --enable-fontconfig 
--enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp 
--enable-bzlib --enable-lzma --enable-zlib --enable-libsrt 
--enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 
--enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid 
--enable-libaom --enable-libopenjpeg --enable-libvpx 
--enable-mediafoundation --enable-libass --enable-libfreetype 
--enable-libfribidi --enable-libvidstab --enable-libvmaf 
--enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid 
--enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va 
--enable-dxva2 --enable-libmfx --enable-libgme --enable-libopenmpt 
--enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora 
--enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb 
--enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband

  libavutil  57. 26.100 / 57. 26.100
  libavcodec 59. 33.100 / 59. 33.100
  libavformat    59. 24.100 / 59. 24.100
  libavdevice    59.  6.100 / 59.  6.100
  libavfilter 8. 40.100 /  8. 40.100
  libswscale  6.  6.100 /  6.  6.100
  libswresample   4.  6.100 /  4.  6.100
  libpostproc    56.  5.100 / 56.  5.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'birds1.wav':
  Duration: 00:02:01.97, bitrate: 2116 kb/s
  Stream #0:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 44100 Hz, 
stereo, s32 (24 bit), 2116 kb/s

Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'birds1.wav':
  Duration: 00:02:01.97, bitrate: 2116 kb/s
  Stream #1:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 44100 Hz, 
stereo, s32 (24 bit), 2116 kb/s
Stream specifier '' in filtergraph description [0][1]concat=n=2:a=1 
matches no streams.




Re: [FFmpeg-user] Please increase the number of variables

2022-06-07 Thread Michael Koch

On 03.06.2022 at 17:30, Paul B Mahol wrote:

On Fri, Jun 3, 2022 at 5:05 PM Michael Koch 
wrote:


Hello,

I'd like to ask if the number of variables in expressions could be
increased from 10 to 25 please. I mean the st() and ld() functions. I'm
just programming a realtime wormhole simulation and the limitation to 10
variables is becoming nasty. I did already begin to re-use the same
variables for different things, but that makes debugging complicated. It
would be so much easier to have more variables and use each one only for
one thing.
What you see below is only a small part of the code.


Why are you doing it that way? This is the most inefficient way to do it.


How would you do it instead? Sure, it's possible to generate the mapping 
files with C or C# code.


Pro C code:
+ No limit for the number of variables.
+ The code can be commented in the same line.
+ It's very helpful to get error messages for division by zero, 
sqrt(-1), asin(2), log(0), log(-1).


Pro FFmpeg code in batch file:
+ A whole project can be written in one batch file and executed with 
just one double-click. No compiler required.
+ The mapping files are automatically generated in the correct working 
folder, no problems with files in wrong folder.


In the meantime I got the code for a realtime wormhole (with fisheye 
projection in a planetarium) running with just 10 variables (see below). 
I had to re-use some of the variables, which is error-prone. I also had 
to use many if/ifnot functions to avoid overwriting of variables. With 
12 variables the code would have been simpler.


I have some ideas for even more complex transformations, but with the 
limitation to just 10 variables they are impossible to implement.


Michael


ffmpeg -f lavfi -i nullsrc=size=%OUT_W%x%OUT_H% -vf 
format=pix_fmts=gray16le,geq='^

st(0,%OUT_H_FOV%/180*((2*X+1)/%OUT_W%-1));^
st(1,%OUT_V_FOV%/180*((2*Y+1)/%OUT_H%-1));^
st(2,atan2(ld(1),ld(0)));^
st(3,PI/2*hypot(ld(0),ld(1)));^
st(4,sin(ld(3))*cos(ld(2)));^
st(5,sin(ld(3))*sin(ld(2)));^
st(6,cos(ld(3)));^
st(1,if(lt(ld(6),cos(%OUT_H_FOV%/360*PI)),32767,0));^
st(0,ld(5)*cos(%PITCH2%/180*PI)-ld(6)*sin(%PITCH2%/180*PI));^
st(6,ld(6)*cos(%PITCH2%/180*PI)+ld(5)*sin(%PITCH2%/180*PI));^
st(5,ld(0));^
st(7,atan2(ld(5),ld(4)));^
st(8,acos(ld(6)));^
st(9,if(lte(ld(8),%RS%/180*PI),1));^
if(ld(9),st(7,ld(8)*cos(ld(7))));^
ifnot(ld(9),st(8,ld(8)-%S%*(2*%RS%/180*PI/(ld(8)-%RS%/180*PI))));^
ifnot(ld(9),st(4,sin(ld(8))*cos(ld(7))));^
ifnot(ld(9),st(5,sin(ld(8))*sin(ld(7))));^
ifnot(ld(9),st(6,cos(ld(8))));^
ifnot(ld(9),st(0,ld(5)*cos(%PITCH1%/180*PI)-ld(6)*sin(%PITCH1%/180*PI)));^
ifnot(ld(9),st(6,ld(6)*cos(%PITCH1%/180*PI)+ld(5)*sin(%PITCH1%/180*PI)));^
ifnot(ld(9),st(5,ld(0)));^
ifnot(ld(9),st(7,atan2(ld(5),ld(4))));^
ifnot(ld(9),st(8,acos(ld(6))));^
ifnot(ld(9),st(7,atan2(sin(ld(8))*cos(ld(7)),cos(ld(8)))));^
if(ld(9),0.5*%IN_W2%*(1+180/PI/%RS%*ld(7)),ld(1)+0.5*%IN_W%*(1+ld(7)/PI))' 
-frames 1 -y xmap.pgm
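
The mapping files can just as well be generated outside FFmpeg, where no variable limit exists. A minimal Python sketch that writes a 16-bit PGM the remap filter can read; the output size and the trivial 180° rotation mapping are placeholders standing in for the wormhole math:

```python
# Sketch: write remap xmap/ymap files from an ordinary program, with
# no limit on intermediate variables. The 180-degree rotation stands
# in for the wormhole formulas; W and H are placeholder sizes.
import struct

W, H = 640, 480  # output size (placeholder)

def write_pgm16(name, samples):
    """Binary 16-bit PGM; sample values are big-endian per the spec."""
    with open(name, "wb") as f:
        f.write(b"P5\n%d %d\n65535\n" % (W, H))
        f.write(struct.pack(">%dH" % len(samples), *samples))

xmap = [W - 1 - x for y in range(H) for x in range(W)]
ymap = [H - 1 - y for y in range(H) for x in range(W)]
write_pgm16("xmap.pgm", xmap)
write_pgm16("ymap.pgm", ymap)
```

Something like ffmpeg -i in.mp4 -i xmap.pgm -i ymap.pgm -lavfi remap out.mp4 would then apply the maps (also a sketch).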





[FFmpeg-user] between(x,min,max)

2022-06-06 Thread Michael Koch
The function between(x,min,max) is defined as: "Return 1 if x is 
greater than or equal to min and less than or equal to max, 0 otherwise."


In some cases it would be useful to have a similar function (with a 
different name) which doesn't include the maximum: "Return 1 if x is 
greater than or equal to min and less than max, 0 otherwise."


This would be useful if a function is defined segment-wise:
ld(0)*between(t,0,10)+ld(1)*between(t,10,20)+...

The problem is at t=10, where both "between" functions become true.

Sure, there are workarounds to solve this problem, but a new "between2" 
function would make things easier.
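
Until such a function exists, the half-open behaviour can be emulated in FFmpeg expressions with gte(t,min)*lt(t,max). A small Python sketch of both semantics, checking the boundary case at t=10:

```python
# Closed-interval between() as FFmpeg defines it, and the proposed
# half-open between2(); Python stand-ins to show the t=10 boundary.

def between(x, lo, hi):
    return 1 if lo <= x <= hi else 0

def between2(x, lo, hi):
    return 1 if lo <= x < hi else 0

t = 10
print(between(t, 0, 10) + between(t, 10, 20))   # -> 2, both segments fire
print(between2(t, 0, 10) + between2(t, 10, 20)) # -> 1, exactly one fires
```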


Michael




Re: [FFmpeg-user] Please increase the number of variables

2022-06-03 Thread Michael Koch

On 03.06.2022 at 17:30, Paul B Mahol wrote:

On Fri, Jun 3, 2022 at 5:05 PM Michael Koch 
wrote:


Hello,

I'd like to ask if the number of variables in expressions could be
increased from 10 to 25 please. I mean the st() and ld() functions. I'm
just programming a realtime wormhole simulation and the limitation to 10
variables is becoming nasty. I did already begin to re-use the same
variables for different things, but that makes debugging complicated. It
would be so much easier to have more variables and use each one only for
one thing.
What you see below is only a small part of the code.


Why are you doing it that way? This is the most inefficient way to do it.


Computing time doesn't matter in this case. The code only generates 
the mapping files for the remap filter.

In the realtime part of the simulation are no complicated functions.
I can solve the problem with just 10 variables, but it would be much 
easier (and better readable code) if more variables were available.


Michael

--
**
  ASTRO ELECTRONIC   Dipl.-Ing. Michael Koch
   Raabestr. 43   37412 Herzberg
  www.astro-electronic.de
  Tel. +49 5521 854265   Fax +49 5521 854266
**



[FFmpeg-user] Please increase the number of variables

2022-06-03 Thread Michael Koch

Hello,

I'd like to ask if the number of variables in expressions could be 
increased from 10 to 25 please. I mean the st() and ld() functions. I'm 
just programming a realtime wormhole simulation and the limitation to 10 
variables is becoming nasty. I did already begin to re-use the same 
variables for different things, but that makes debugging complicated. It 
would be so much easier to have more variables and use each one only for 
one thing.

What you see below is only a small part of the code.

Thanks,
Michael


ffmpeg -f lavfi -i nullsrc=size=%OUT_W%x%OUT_H% -vf 
format=pix_fmts=gray16le,geq='^

st(0,PI/360*%OUT_H_FOV%*((2*X+1)/%OUT_W%-1));^
st(1,PI/360*%OUT_V_FOV%*((2*Y+1)/%OUT_H%-1));^
st(4,cos(ld(1))*sin(ld(0)));^
st(5,sin(ld(1)));^
st(6,cos(ld(1))*cos(ld(0)));^
st(7,atan2(ld(5),ld(4)));^
st(8,acos(ld(6)));^
st(9,if(lte(ld(8),%RS%/180*PI),%OUT_W%,0));^
st(8,if(gt(ld(8),%RS%/180*PI),ld(8)-(2*%RS%/180*PI/(ld(8)-%RS%/180*PI)),0));^
st(4,sin(ld(8))*cos(ld(7)));^
st(5,sin(ld(8))*sin(ld(7)));^
st(6,cos(ld(8)));^
st(7,atan2(ld(4),ld(6)));^
st(8,asin(ld(5)));^
ld(9)+0.5*%IN_W%*(1+ld(7)/%IN_H_FOV%*360/PI)' -frames 1 -y xmap.pgm


Re: [FFmpeg-user] Multiple parts of a video

2022-05-31 Thread Michael Koch

On 31.05.2022 at 20:27, Cecil Westerhof via ffmpeg-user wrote:

Michael Koch  writes:


On 31.05.2022 at 18:43, Cecil Westerhof via ffmpeg-user wrote:

Michael Koch  writes:


I have a short example in chapter 2.57 of my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

Just to make sure I understand it, I should do something like:
  ffmpeg -ss %S1% -t %L1% -i %I1% \
 -ss %S2% -t %L2% -i %I1% \
 -ss %S3% -t %L3% -i %I1% \
 -lavfi "concat=n=3:v=1:a=0"  \
 -an %OUT%

But if I understand it well, this works on the iframes, would it not
be better (but longer) to use:
  ffmpeg -i %I1% -ss %S1% -t %L1% \
 -i %I1% -ss %S2% -t %L2% \
 -i %I1% -ss %S3% -t %L3% \
 -lavfi "concat=n=3:v=1:a=0"  \
 -an %OUT%

I think that won't work. If you write the options after the input file,
then they are applied to the next input file, and the options in the
third line are applied to the output file.
The concat filter also works with audio. I just didn't need audio in
my example.

But if you put them before the input file they are not precise, but
work on the iframe level. This can give quite extensive differences. (I
was bitten by that in the past.) Or is that not the case in this
specific scenario?



Then you could use the trim filter.
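
A trim-based variant decodes the file once and cuts inside the filtergraph, which is frame-accurate; a hedged sketch with placeholder times (split is needed because one decoded stream cannot feed two filters directly):

```shell
# Sketch (untested): frame-accurate cuts via split + trim + concat.
# input.mp4 and the times 8-17 / 30-40 are placeholders for the
# %S1%/%L1% variables from the post.
LAVFI='[0:v]split=2[a][b];'
LAVFI="$LAVFI[a]trim=start=8:end=17,setpts=PTS-STARTPTS[v0];"
LAVFI="$LAVFI[b]trim=start=30:end=40,setpts=PTS-STARTPTS[v1];"
LAVFI="$LAVFI[v0][v1]concat=n=2:v=1:a=0"
echo ffmpeg -i input.mp4 -lavfi "$LAVFI" -an output.mp4
```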

Michael




Re: [FFmpeg-user] Multiple parts of a video

2022-05-31 Thread Michael Koch

On 31.05.2022 at 18:43, Cecil Westerhof via ffmpeg-user wrote:

Michael Koch  writes:


I have a short example in chapter 2.57 of my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

Just to make sure I understand it, I should do something like:
 ffmpeg -ss %S1% -t %L1% -i %I1% \
-ss %S2% -t %L2% -i %I1% \
-ss %S3% -t %L3% -i %I1% \
-lavfi "concat=n=3:v=1:a=0"  \
-an %OUT%

But if I understand it well, this works on the iframes, would it not
be better (but longer) to use:
 ffmpeg -i %I1% -ss %S1% -t %L1% \
-i %I1% -ss %S2% -t %L2% \
-i %I1% -ss %S3% -t %L3% \
-lavfi "concat=n=3:v=1:a=0"  \
-an %OUT%


I think that won't work. If you write the options after the input file, 
then they are applied to the next input file, and the options in the 
third line are applied to the output file.
The concat filter also works with audio. I just didn't need audio in 
my example.


 Michael


Re: [FFmpeg-user] Multiple parts of a video

2022-05-31 Thread Michael Koch

On 31.05.2022 at 11:17, Bo Berglund wrote:

On Sun, 29 May 2022 13:17:55 +0200, Michael Koch 
wrote:

Using concat filter.

That is exactly what I already know: cutting the different parts.
Probably one command for each part and then concatenate them.
So n + 1 commands.
My question was: can it be done with one command?


Please have a look at
https://trac.ffmpeg.org/wiki/Concatenate

"Concat demuxer", "Concat protocol" and "Concat filter" are three
different things.
You did use the concat demuxer. Now if you want to do all in one line,
you must use the concat filter.

Michael

Stepping in here due to the interesting topic:

I am daily using a tool I created myself to use ffmpeg to remove ads from
recorded mp4 TV news videos.
What I do is the following:
- I manually scan the video to find the start/end times of the ads (seconds)
- From this list the tool creates the ffmpeg commands to extract the parts
*between* the ads as separate numbered mp4 files
- Then a list of these small files is written to a file $JOINFILE
- This is then used in an ffmpeg call like this:
   ffmpeg -f concat -safe 0 -i $JOINFILE -c copy $TARGETFILE
- When this is done the small files and $JOINFILE are deleted

So reading this thread I get the feeling that there is a way to use the list of
cut times in a *single ffmpeg command* to create the output mp4 file *without*
creating the list file and essentially doing everything in this single ffmpeg
command.


I have a short example in chapter 2.57 of my book:
http://www.astro-electronic.de/FFmpeg_Book.pdf

Michael



Re: [FFmpeg-user] Multiple parts of a video

2022-05-29 Thread Michael Koch

On 28.05.2022 at 21:17, Cecil Westerhof via ffmpeg-user wrote:

Paul B Mahol  writes:


On Sat, May 28, 2022 at 4:28 PM Cecil Westerhof via ffmpeg-user 
 wrote:

  When I just want to have a certain part of a video, I can do something
  like:
  ffmpeg -y -i input.MTS \
 -ss 00:08   \
 -to 00:17   \
 -acodec copy\
 -vcodec libx264 \
 -preset veryfast\
 output.mp4

  But what if I want several parts of a video in my video?
  Do I need to cut the different parts out of the video and concatenate
  them, or is it possible to do it with one command?

Using concat filter.

That is exactly what I already know: cutting the different parts.
Probably one command for each part and then concatenate them.
So n + 1 commands.
My question was: can it be done with one command?



Please have a look at
https://trac.ffmpeg.org/wiki/Concatenate

"Concat demuxer", "Concat protocol" and "Concat filter" are three 
different things.
You did use the concat demuxer. Now if you want to do all in one line, 
you must use the concat filter.


Michael



Re: [FFmpeg-user] FFplay, wrong output size on extended desktop

2022-05-26 Thread Michael Koch

On 25.05.2022 at 21:32, Gyan Doshi wrote:



On 2022-05-25 08:31 pm, Michael Koch wrote:

Hello,

we want to use FFplay in a planetarium. It's a Windows computer with 
an extended desktop. The left monitor contains the user interface 
(1920x1080) and at the right side is a 4Kx4K extended desktop, which 
is shown on two 4096x2048 monitors, upper half and lower half. The 
extended 4Kx4K desktop is mirrored to the dome with 11 projectors.


We are using this 4Kx4K test image:
http://www.paulbourke.net/dome/testpattern/4096.png

This is the FFplay command line:
ffplay -noborder -left 1920 -top 0 4096.png

The problem is that the image is shown twice as large as it should 
be, so that we see only the top left quadrant.

You can see a screenshot in this discord channel (in German):
https://discord.com/channels/968113228230045706/968113812769210398

If we set the size to 2048x2048, then the output is 4096x4096:
ffplay -noborder -x 2048 -y 2048 -left 1920 -top 0 4096.png
But then we don't have the full 4K resolution.

Who has an idea what's wrong here?
The console output is below.


What's the size value in Settings -> Display under Scale and Layout?


Problem solved. Scale was set to 200%.

Thanks,
Michael



Re: [FFmpeg-user] FFplay, wrong output size on extended desktop

2022-05-25 Thread Michael Koch

On 25.05.2022 at 21:32, Gyan Doshi wrote:



On 2022-05-25 08:31 pm, Michael Koch wrote:

Hello,

we want to use FFplay in a planetarium. It's a Windows computer with 
an extended desktop. The left monitor contains the user interface 
(1920x1080) and at the right side is a 4Kx4K extended desktop, which 
is shown on two 4096x2048 monitors, upper half and lower half. The 
extended 4Kx4K desktop is mirrored to the dome with 11 projectors.


We are using this 4Kx4K test image:
http://www.paulbourke.net/dome/testpattern/4096.png

This is the FFplay command line:
ffplay -noborder -left 1920 -top 0 4096.png

The problem is that the image is shown twice as large as it should 
be, so that we see only the top left quadrant.

You can see a screenshot in this discord channel (in German):
https://discord.com/channels/968113228230045706/968113812769210398

If we set the size to 2048x2048, then the output is 4096x4096:
ffplay -noborder -x 2048 -y 2048 -left 1920 -top 0 4096.png
But then we don't have the full 4K resolution.

Who has an idea what's wrong here?
The console output is below.


What's the size value in Settings -> Display under Scale and Layout?


I have forwarded your question to the discord channel. I can't answer it 
myself because I'm 300 km away from the planetarium. Please feel free to 
comment in the discord channel in english, if you like.


Michael


