Re: [FFmpeg-user] (no subject)

2020-11-01 Thread Edward Park
Hi,

> Good afternoon, sorry that it took so long to update this thread. Well, I am 
> still trying to use your ffmpeg in my own way, but it seems that everything is 
> going against me. Now the problem is that if I try to compile on windows, the 
> process seems to carry out, but then it doesn't actually do anything: it 
> doesn't produce the executables or the dll. On linux, however, the same set of 
> commands compiles. To compile I do the following:
> I introduce it in the form of a column so that it does not occupy too much
wut
> ./configure 
> --arch=x86_64 
> --target-os=mingw32 
> --cross-prefix=x86_64-w64-mingw32- 
> --prefix=/usr/local 
> --pkg-config=pkg-config 
> --pkg-config-flags=--static 
> --extra-cflags=-static 
> --extra-ldflags=-static 
> --extra-libs="-lm -lz -fopenmp" 
> --enable-static 
> --disable-shared 
> --enable-nonfree
> --enable-gpl 
> --enable-avisynth 
> --enable-libaom 
> --enable-libfdk-aac 
> --enable-libfribidi 
> --enable-libmp3lame 
> --enable-libopus 
> --enable-libsoxr 
> --enable-libvorbis 
> --enable-libvpx 
> --enable-libx264 
> --enable-libx265
> make
> If I compile it in linux in this way it compiles well (although along the way 
> it tells me that some codecs are deprecated), but this same set of commands on 
> windows does not compile; it indicates that it does, but does not produce the 
> final link. This only happens to me with ffmpeg.
Does that automatically cross-compile with just make? I thought you would need 
to add --enable-cross-compile. Also I’d have thought you’d want a different 
prefix at the least for cross-compiling on linux and compiling on windows.
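
I.e., something along these lines (untested sketch; the prefix path here is just 
a placeholder, and you would keep the rest of your --enable options):

  ./configure --enable-cross-compile --arch=x86_64 --target-os=mingw32 \
              --cross-prefix=x86_64-w64-mingw32- \
              --prefix=/usr/local/x86_64-w64-mingw32 \
              [...the rest of your options...]
  make
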
> I have tried both with cygwin and with the monster developed by microsoft, 
> (the wsl2)

On wsl2, you'd just compile as if you were on a linux system, and not bother 
with mingw32; it's basically a vm, isn't it?

Regards,
Ted Park


Re: [FFmpeg-user] Cannot find a matching stream for unlabeled input pad 7 on filter Parsed_concat_2

2020-10-29 Thread Edward Park
Hi,

> Sorry Nicolas, but I think I am still confused.  I changed it to 16 and
> get the same error so I am thinking I am not understanding.


The concat filter should have [# of segments] × ([# of video streams] + [# of 
audio streams]) inputs, or n × (v + a) as specified to the filter. You seem to 
have 2 segments (n=2) with 2 streams (v=1:a=1) each, so it expects 4 inputs.
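
For two segments that each have one video and one audio stream, a full command 
would look something like this (filenames are placeholders):

  ffmpeg -i seg1.mp4 -i seg2.mp4 \
    -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
    -map "[v]" -map "[a]" out.mp4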

Regards,
Ted Park


Re: [FFmpeg-user] sudo: ffmpeg: command not found

2020-10-29 Thread Edward Park
Hi,

Maybe I wasn’t being clear, I meant run the configure script without any 
arguments, since I don’t know if gnutls is the only missing requirement.

Regards,
Ted Park


Re: [FFmpeg-user] Cannot find a matching stream for unlabeled input pad 7 on filter Parsed_concat_2

2020-10-29 Thread Edward Park
Hi,

> Can you please elaborate on the (1+1) part?
> I see here https://trac.ffmpeg.org/wiki/Concatenate 
> 
> that n=x where x is the number of input segments

Since there is a video stream and an audio stream in each segment (1+1), for 4 
segments there should be 8 inputs.
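
So the pad list in front of concat should look something like this (one v/a pair 
per segment):

  [0:v][0:a][1:v][1:a][2:v][2:a][3:v][3:a]concat=n=4:v=1:a=1[v][a]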


Regards,
Ted Park


Re: [FFmpeg-user] sudo: ffmpeg: command not found

2020-10-28 Thread Edward Park
Hi,

Does that search the current directory with gnu find? I thought you need to do 
find . -type ….

I just noticed your configure script failed. I (and probably everyone else too) 
assumed that you had successfully built ffmpeg and had ffmpeg_g, etc. in 
/home/marc/ffmpeg_build because you said you compiled ffmpeg… It says command 
not found because it wasn't installed; in fact, it wasn't compiled at all.

Try again from the top, git clone a fresh working copy of the repo and try 
running ./configure to see if it works.
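
I.e., something like:

  git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
  cd ffmpeg
  ./configure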

Regards,
Ted Park


Re: [FFmpeg-user] ffplay with Havision Makito X video encoder, H.264 stream, and "intra refresh" option enabled

2020-10-28 Thread Edward Park
Hi,

> It was suggested here that I "attach a file" but I'm not sure how that
> can be done with FFMPEG since it's unable to decode the video it
> doesn't seem to write anything to a file when you try to record the
> stream.

I don't know about the issue you're describing, but to recreate the sample as a 
file from a udp stream, I think using nc (or it might be netcat depending on 
your system) to save the raw data you're using as the input would be best.
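
For example, if the encoder is sending to udp port 1234 on your machine, 
something like this should dump the raw stream to a file you can share (port 
number and filename are placeholders, and some netcat variants want -l -p 1234 
instead of -l 1234):

  nc -u -l 1234 > sample.ts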

Regards,
Ted Park


Re: [FFmpeg-user] sudo: ffmpeg: command not found

2020-10-28 Thread Edward Park
Hi,

> Am a novice.

That’s a really bad reason tbh. ~/bin/ffmpeg will probably work if $HOME/bin 
isn’t in your path.

Regards,
Ted Park


Re: [FFmpeg-user] Issue with combining mp4 audio file and an image

2020-10-28 Thread Edward Park
Hi,

> Here is the error I am getting:
> 
> 
> Stream specifier ':a' in filtergraph description 
> [0:v]setsar=1[v0];[1:v]setsar=1[v1];[v0][1:a][v1][2:a]concat=n=4:v=1:a=1
> matches no streams.


Your input #2 is a jpeg, so it has no audio stream for the ':a' specifier to 
match.

Regards,
Ted Park


Re: [FFmpeg-user] sudo: ffmpeg: command not found

2020-10-28 Thread Edward Park
Hi,

It looks like you installed to $HOME/bin, many systems (I think most Debian 
based) restrict sudo PATH to “secure” directories. Why do you need sudo anyways?
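
If you really do need root, giving the full path sidesteps the PATH restriction, 
e.g. (filenames are placeholders):

  sudo "$HOME/bin/ffmpeg" -i input.mp4 output.mp4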

Regards,
Ted Park


Re: [FFmpeg-user] how to burn subtitles and transcode live stream

2020-10-25 Thread Edward Park
Hi,

> Thank you for your email. I have tried, but I can't manage to both scale and
> burn the subtitles. If I try to only burn them with filter_complex it works,
> or if I try to scale only it works too, but chaining them doesn't work.

You need to scale after burning the subtitles; they have a set size too.
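
Something like this is what I mean, with the subtitles filter chained before 
scale (filenames and the target size here are just placeholders):

  ffmpeg -i input.ts -vf "subtitles=subs.srt,scale=1280:720" -c:a copy output.ts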

Regards,
Ted Park


Re: [FFmpeg-user] ffprobe/ffplay ERROR :: Could not find codec parameters for stream 0 unspecified pixel format,

2020-10-24 Thread Edward Park
Hi,

> I did install version 4.3.1 on Linux MINT:
> 
> 
> ffprobe version 4.3.1 Copyright (c) 2007-2020 the FFmpeg developers
>   built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
>   configuration: --disable-x86asm
> 
> 
> I still got the same error. The mp4 files do not play anywhere: W10 or Linux, 
> it does not matter which tool (media player, VLC). The mp4 files from the same 
> device which were captured some months ago did not have that issue.


Can you share a sample of a non-working file? If you can’t get it to work using 
any program it sounds like the files are corrupt or otherwise not valid, maybe 
it happened when they were being transferred or captured? Unless the working 
files that the same device produced some months ago now fail to play as well, 
that would be strange.

Regards,
Ted Park


Re: [FFmpeg-user] [hls] keepalive request failed

2020-10-24 Thread Edward Park
Hi,

>> ffmpeg -fflags +igndts -i 
>> "https://vod-kijk2-prod.talpatvcdn.nl/GIEBboXaKkD/3bf69d5f-da4f-7756-7e3d-ad8d1295f936/GIEBboXaKkD-index.ism/GIEBboXaKkD-index-audio=16-video=3031502.m3u8";
>>  -i "https://vod-kijk2-prod.talpatvcdn.nl/webvtt/760978E1.vtt"; -c copy -c:s 
>> srt "output.mkv"
>> [...]
>> [mpegts @ 0500f040] PES packet size mismatchime=00:32:30.87 
>> bitrate=3177.6kbits/s speed=2.52x
>> [...]
> To answer my own question: adding the flag "discardcorrupt" (-fflags 
> +discardcorrupt+igndts) fixes the issue and keeps the process going. 

Glad you found a workaround, but that’s a stopgap measure at best right? The 
“I/O error” messages are still encountered and new ssl sessions are 
established? Maybe there is a problem with how mbedtls is used in ffmpeg?

Regards,
Ted Park


Re: [FFmpeg-user] how to burn subtitles and transcode live stream

2020-10-24 Thread Edward Park
Hi,

> Who can I write to for help?

Assuming it’s about the FFmpeg command line tools you’re on the right list.

Regards,
Ted Park


Re: [FFmpeg-user] Problem in v360?

2020-10-24 Thread Edward Park
Hi,

> After a deeper look into the quaternion multiplication, it seems to be 
> correct. The terms are only written in a different order in the Wikipedia 
> article.
> 
> But it's a fact that the v360 rotations are broken, as can be shown by this 
> simple test:
> You can use any equirectangular input image.
> First make a combined rotation yaw=90 and pitch=45 in the default rotation 
> order (ypr),
> then rotate back pitch = -45,
> then rotate back yaw = -90.
> The output image should be the same as the input image. But it isn't.


It seems to me it assumes a 45° vertical fov and 90° horizontal fov on both 
input and output; does that affect the behavior at all? (Or am I not 
understanding right?)
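
For reference, I think the test you describe comes out to something like this 
on the command line (the input image name is a placeholder, and "e" is the 
equirect shorthand for both input and output projection):

  ffmpeg -i equirect_in.png \
    -vf "v360=e:e:yaw=90:pitch=45,v360=e:e:pitch=-45,v360=e:e:yaw=-90" out.png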

Regards,
Ted Park


Re: [FFmpeg-user] Overlay Error reoccurring

2020-10-22 Thread Edward Park
Hi,

> Its John again, I followed your Guidance and changed a bit of the code but
> I am still getting an error of Unable to find a suitable output format for
> 'box=1:shado etc etc

I thought that was just in the email; there shouldn't be any whitespace after 
the colon before box. You could combine all the filters together like

  -lavfi "[0:a][1:a]amerge=inputs=2;[2:v]drawtext=fontfile=C:\Windows\ARLRDBD.TTF:text='Snatch Media Player':fontcolor=white:fontsize=24:box=1:shadowcolor=darkblue:shadowx=1:shadowy=1:boxcolor=blue@0.6:boxborderw=5:x=50:y=H-th-50"

Not sure if it works.


Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg does nothing but using CPU

2020-10-22 Thread Edward Park
Hi,

>> just remove the redirection and look how to avoid that situation
> 
> Nope, I just added -nostdin.


But doesn’t that redirection just count as that process having another file 
open? I think with ionice -c3 it might make a difference, unless -nostdin also 
is silent

Regards,
Ted Park


Re: [FFmpeg-user] ffprobe/ffplay ERROR :: Could not find codec parameters for stream 0 unspecified pixel format,

2020-10-21 Thread Edward Park
Hi,

> I have a set of mp4 files, which can`t be handled via ffplay/ffprobe.

Try a recent version; you can easily download static builds if you don't want 
to compile on your own. The package manager's stable version is usually years 
old, I think because of the standard of vulnerability testing required. Or you 
could get the testing/unstable distro version, but if you don't have the sources 
added it's extra steps, and even the latest release is usually a few months old. 
And can you confirm that the files work with different programs?

> I did attach the complete log files


Just an fyi, usually helps if you just copy paste into the body of the email 
instead of attaching log files if you can.

Regards,
Ted Park


Re: [FFmpeg-user] Overlay Error reoccurring

2020-10-21 Thread Edward Park
Hi,

> I am not sure if I have the overlay code in the correct position or should
> I be adding it to the  "-filter_complex "[0:a][1:a]amerge=inputs=2[a]"" as
> an additional Item.

I think that would be the cleanest way, separated by a semicolon. And you could 
get rid of the output pad label and the explicit stream mappings. I’m not sure 
if you can mix simple and complex filters, maybe putting the -vf after the 
gdigrab input is valid too?

> The overlay runs correctly on its own using windows cmd.


Is that the actual command you type? What do \k and * do in the Windows command 
line?

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg audio encoding

2020-10-21 Thread Edward Park
Hi,

> i have a question what is the recommended way for transcoding only audio
> and keep video as is.
> 
> I tried this with command:
> ffmpeg -n -i something.mkv something_else.mkv -map 0 -vcodec copy -scodec
> copy -acodec ac3 -b:a 640k
> 
> The thing is that using this command with a 10+GB file the process is very
> very slow and it would take more than 1 day.
> 
> I tried manual to remove audio from mkv, encode it in ac3 and repack it
> back which was done in an h.
> 
> Do you have any idea what can be improved?


You need the output options before the output file for them to have an effect. 
The command that takes 1 day is transcoding everything (video included) using 
default settings for mkv and ignoring the trailing options.
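
Reordered, the same command would look like this (untested, but this is the 
intent):

  ffmpeg -n -i something.mkv -map 0 -c:v copy -c:s copy -c:a ac3 -b:a 640k something_else.mkv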

Regards,
Ted Park


Re: [FFmpeg-user] ERROR: srt >= 1.3.0 not found using pkg-config

2020-10-20 Thread Edward Park
Hi,

> I can't get past the error ERROR: srt >= 1.3.0 not found using pkg-config
> 
> 
> 
> Has  anyone found where it is coming from?? Log file attached but it
> does not seem to add anything to the party

That’s not the only problem, can you run configure at all, with no options?

Regards,
Ted Park


Re: [FFmpeg-user] Converting text to video for dictation

2020-10-20 Thread Edward Park
Hi,

>> typography effects based video
> So that one is basically what I'm looking to do with ffmpeg - turn a
> subtitle file (of any type) into a transparent backgrounded (?) video for
> use in my video editor to combine the necessary contents together - or some
> other file/method that has/involves time stamps/tags or "keyframes".

Well I’ll be honest I thought that was pretty unlikely when I threw that in the 
mix. 

> Since I don't intend to use my voice in my videos, I "voice" them via text
> elements either in the center of the screen or some other places where it
> makes sense to do so. (I do this with my current video editor already, but
> it isn't very efficient, time and complexity wise, especially for videos
> that heavily rely on this... so hopefully ffmpeg can help with that?) Also
> if possible (not needed), font and color changing, and maybe some movement.

So just to be clear, you’re referring to kinetic typography, right?
For example stuff like Pulp Fiction: Intonation, derivative graphics work by 
Jarett Moody (slightly NSFW, language). Apple has also been playing around with 
the style since 2016, mostly in promo campaigns for non-traditional channels. 
Don't Blink.

You say it’s not very efficient using your video editor (I assume something 
along the lines of Avid, PPro, FCPX/Motion), but I don’t think FFmpeg is the 
right tool for the job, unless you had some really complex ass subtitles 
already, using a bunch of v4+ features of the script and you just need to burn 
it into the video.

> Maybe subtitles in the future, but I sort of already know how to do those,
> however the first point would be good to know.

I think you want subtitles with heavy custom styling, maybe as opposed to 
captions like you would do in Scenarist; try Aegisub with newer ASS versions.

Regards,
Ted Park


Re: [FFmpeg-user] Converting text to video for dictation

2020-10-19 Thread Edward Park
Hi,

> So, I'm a person that makes videos and dictates them via text elements. My
> video editor makes this tedious. Is there a way to make this process faster
> with ffmpeg (probably like a subtitle file into a video with text in the
> middle?)

I’m confused as to what you mean, are you asking how to author subtitle files 
from a transcript from the video, burn subtitles into video, or typography 
effects based video?

Regards,
Ted Park


Re: [FFmpeg-user] Which graphics card for FFmpeg + Linux?

2020-09-23 Thread Edward Park
Hi,

> I've searched the web for infos about which graphics card/GPU to get for 
> using FFmpeg's hardware encoding/decoding features, but the results where 
> either heavily dated or unclear.
> If someone can point me at something, I'd be very grateful :)
> 
> Sorry if this has been asked a million times before, but:
> ---
> Which graphics card would you suggest to buy for a new PC (AMD Ryzer CPU) 
> with the intention of FFmpeg running well with it?
> What would the developers suggest?
> ---
> 
> 
> I knew about [FFmpeg on nvidia 
> developer](https://developer.nvidia.com/ffmpeg), but I'd also be curious 
> about the status with AMD GPUs?

"FFmpeg’s hardware encoding/decoding features” is kinda too broad of a use 
case, with only that query I’d say get the best “pro” line card you can.

If you’re asking about Nvidia vs Radeon, green team will recommend Quadros and 
red (now blue) team will recommend FirePro and Radeon Pro :P

There is basically a reference implementation of the nvenc/dec for directly 
using the hardware on nvidia cards, but amd tends to take a different approach 
with their sdks, abstracting away as much as possible rather than directly 
working with specific hardware, so it might not be as clear when hardware 
acceleration is happening (amf might be wrapped around vcn, and used by 
directx, Mac frameworks, vulkan, etc)

Regards,
Ted Park

P.S. lol@“and RTFMs"


Re: [FFmpeg-user] ffplay struggling with this station!

2020-09-23 Thread Edward Park
Hi,

I just realized that the station was public, so I just tried:

  % ffplay -nodisp -vn "https://jrtv-live.ercdn.net/jrradio/englishradiovideo.m3u8"

No issues.

> I did try without that buffer flag, but that had no effect. I’m going to try 
> updating ffplay, and see if that helps. 

Yeah, also update the tls library before building, and if that still doesn't 
fix it, it might be the connection speed?

> Also, I’ve noticed that vlc had a 1000 ms “network cache”, and I wonder if 
> that had anything to do with playing that station flawlessly. 

That probably means something like it's playing 1 second in the past, so if 
something happens and it can't keep up in real time there's still 1 second to 
fix it before it skips.

Regards,
Ted Park


Re: [FFmpeg-user] "cookies" option doesn't seem to work.

2020-09-23 Thread Edward Park
Hi,

> I'm trying to set cookies for a HLS request.
> 
>> ffmpeg -v 99 -cookies "test=blabla;" -i
> https://del.thumva.com/hls/20200621-0005-05/index_1.m3u8
> 
> But the request doesn't actually include my cookie:

See docs for http in ffmpeg-protocols.

> At the very least, each cookie must specify a value along with a path and 
> domain.  HTTP requests that match both the domain and path will automatically 
> include the cookie value in the HTTP Cookie header field. Multiple cookies 
> can be delimited by a newline.
> 
> The required syntax to play a stream specifying a cookie is:
> 
>   ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" 
> http://somedomain.com/somestream.m3u8
> 

Regards,
Ted Park


Re: [FFmpeg-user] ffplay struggling with this station!

2020-09-23 Thread Edward Park
Hi,

> I use the following command to play the station: 
> $ ffplay -nodisp -vn -fflags nobuffer -fflags discardcorrupt -flags low_delay 
> https://jrtv-live.ercdn.net/jrradio/englishradiovideo.m3u8


I think “-fflags nobuffer” tells ffplay not to buffer? So there will be nothing 
to play every time ffmpeg finishes playing a segment and only then starts 
fetching the next segment.

But I also thought if you use -fflags multiple times without a plus sign before 
the flag only the last one was used, so I’m not sure.

And I’m pretty sure low_delay is only a thing in some video decoders.
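
If you do want both fflags at once, I believe combining them into one option 
looks like this (untested):

  ffplay -nodisp -vn -fflags +nobuffer+discardcorrupt https://jrtv-live.ercdn.net/jrradio/englishradiovideo.m3u8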

Regards,
Ted Park


Re: [FFmpeg-user] xfade filter - custom expressions

2020-09-23 Thread Edward Park
Hi,

> "XY - The coordinates of the current sample." - Is the transition defined
> by selecting a color/state for each pixel?
> 
> "WH - The width and height of the image." - OK.
> 
> "P - Progress of transition effect." - Looking at the source, this is a
> float. Ranging from 0-1 I assume(?)
> 
> "PLANE - Currently processed plane." - Not sure what this means. How many
> planes are there? I'd say at least 2, one for each input. Are there more?
> 
> "A - Return value of first input at current location and plane." - This
> would return something like first_input[x][y] if the input were treated as
> an array? But I still don't understand what the plane refers to OR what the
> value returned would be. Would it be 0 or 1, or some color value?
> 
> "a0(x, y) a1(x, y) a2(x, y) a3(x, y) - Return the value of the pixel at
> location (x,y) of the first/second/third/fourth component of first input."
> - What does 'component' refer to? 1234 == RGBA ?
> 
> Is there an example that illustrates how to use a custom expression to
> achieve a meaningful transition?

I'm not sure how to write equations for separate components on the command line 
either, but using your interpretation as reference, I tried out 
expr='A*P+B*(1-P)' and got a basic working fade, at least looking at it 
visually with a couple of sources.
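
A full command along those lines would look something like this (filenames, 
duration and offset are arbitrary; both inputs need matching size, pixel format 
and frame rate):

  ffmpeg -i clip1.mp4 -i clip2.mp4 \
    -filter_complex "xfade=transition=custom:duration=2:offset=3:expr='A*P+B*(1-P)'" out.mp4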

Regards,
Ted Park


Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-22 Thread Edward Park
Hi,

>> 48000 is certainly a much nicer number when you compare it with the common 
>> video framerates (24, 30/1.001, etc. all divide cleanly)
> 
> Can you explain this? I'm trying to get (30/1.001) or the rounded 29.97 to 
> divide 48k cleanly or be a clean ratio but I don't see it. Maybe that with 
> 30/1.001 it's got a denominator of 5, which is pretty small?

Compared to 44.1kHz? 48kHz is 48000 samples per second, and 29.97 (30/1.001) 
fps is, obviously, 30000/1001 (≈29.97) frames per second - flip that around and 
you get 1001/30000 seconds duration for each frame.

For each frame there are 1601.6 (1600 × 1.001) samples. For 59.94fps, 800.8; for 
film, 2002 per frame. The 1.001 factor might seem a bit ugly, but that's kind 
of why 48 whole kilohertz works much better.

If you think about an mpeg-ts timebase of 1/90000 for example, common video or 
film framerates generally come out to an integer number of 1/90000 second 
“ticks.” A 29.97fps frame is 3003 “ticks”, which also matches the 1601.6 samples 
duration. The fractions of samples might make it look like the ratio is not easy 
to work with, but at 48kHz, one sample has a duration of 1.875 “ticks”, or 
15/8 = 30/16.
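
Side by side (same arithmetic as above, plus the 44.1 kHz equivalents):

  48 kHz, 29.97 fps:   48000 × 1001/30000 = 1601.6 samples per frame
                       90000 × 1001/30000 = 3003 ticks per frame
                       90000 / 48000      = 15/8 = 1.875 ticks per sample
  44.1 kHz, 29.97 fps: 44100 × 1001/30000 = 1471.47 samples per frame
                       90000 / 44100      = 100/49 ≈ 2.04 ticks per sample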

If you replace 48000 with 44100, the numbers aren’t as nice. (Sometimes not 
even rational? Not sure what combo does that though)

I might be making up the history behind it, but 44.1kHz was basically just 
workable, with 20kHz assumed to be the “bandwidth” limit of sound intended for 
people to hear, 40kHz would be needed to encode sound signals that dense, and 
the extra 4.1kHz would help get rid of artifacts due to aliasing - and probably 
the biggest factor was the CD. I’m sure they could have pressed much more 
density into the medium, but the laser tech that was commercially viable at the 
time to put in players for the general consumer sort of made 44.1kHz a decent 
detent in the sampling frequency dial in an imaginary sample rate-to-cost 
estimating machine.

If you actually do the calculations with 44.1kHz, the ratios you get aren’t 
*too* bad, instead of numbers like 2^3 or 3×5, it’s something like 3×49 or 
something.

Regards,
Ted Park


Re: [FFmpeg-user] HLS stream delivery problem

2020-09-22 Thread Edward Park
Hi,

> Do you know how to fix this?
> This is my code with ffmpeg, Y:\ is drive letter to the HLS server 
> (WebDAV)

WebDAV is good for lightweight collaboration for old-school workgroups, like 
maybe wiki pages. It's basically lots of http requests simulating a locally 
mounted drive, so maybe there is head-of-line blocking going on. If it is a 
transport issue, it would be impossible to find out without something like a 
span capture.

This doesn’t sound like a problem with ffmpeg at all, unless a different WebDAV 
share mounted in the same manner doesn’t suffer from the same problem, or if 
the problem occurs even when you save to a directly connected storage device.


Regards,
Ted Park


Re: [FFmpeg-user] bwdif filter question

2020-09-22 Thread Edward Park
Hello,

>> I'm not entirely aware of what is being discussed, but progressive_frame = 
>> !interlaced_frame kind of sent me back a bit, I do remember the discrepancy 
>> you noted in some telecined material, so I'll just quickly paraphrase from 
>> what we looked into before, hopefully it'll be relevant.
>> The AVFrame interlaced_frame flag isn't completely unrelated to mpeg 
>> progressive_frame, but it's not a simple inverse either, very 
>> context-dependent. With mpeg video, it seems it is an interlaced_frame if it 
>> is not progressive_frame ...
> 
> No so, Ted. The following two definitions are from the glossary I'm preparing 
> (and which cites H.262).

Ah okay I thought that was a bit weird, I assume it was a typo but I saw h.242 
and thought two different types of "frames" were being mixed. Before saying 
anything if the side project you mentioned was a layman’s glossary type 
reference material, I think you should base it off of the definitions section 
instead of the bitstream definitions, just my $.02. I read over what I wrote 
and I don't think it helps at all, let me try again, I am saying that there are 
the "frames" in the context of a container, and a different kind of video 
"frame" that has a width and height dimension. (When I wrote "picture frames" I 
meant to refer to physical wooden picture frames for photo prints, but with 
terms like frame pictures in play not very effective in hindsight)

> Since you capitalize "AVFrames", I assume that you cite a standard of some 
> sort. I'd very much like to see it. Do you have a link?

This was the main info I was trying to add, it's not a standard of any kind, 
quite the opposite, actually, since technically its declaration could be 
changed in a single commit, but I don't think that is a common occurrence. 
AVFrame is a struct that is used to abstract/implement all frames in the many 
different formats ffmpeg handles. it is noted that its size could change as 
fields are added to the struct.

There's documentation generated for it here: 
https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html

> H.262 refers to "frame pictures" and "field pictures" without clearly 
> delineating them. I am calling them "pictures" and "halfpictures".

I thought ISO 13818-2 was basically the identical standard, and it gives pretty 
clear definitions imo, here are some excerpts. (Wall of text coming up… 
standards are very wordy by necessity)

> 6.1.1. Video sequence
> 
> The highest syntactic structure of the coded video bitstream is the video 
> sequence.
> 
> A video sequence commences with a sequence header which may optionally be 
> followed by a group of pictures header and then by one or more coded frames. 
> The order of the coded frames in the coded bitstream is the order in which 
> the decoder processes them, but not necessarily in the correct order for 
> display. The video sequence is terminated by a sequence_end_code. At various 
> points in the video sequence a particular coded frame may be preceded by 
> either a repeat sequence header or a group of pictures header or both. (In 
> the case that both a repeat sequence header and a group of pictures header 
> immediately precede a particular picture, the group of pictures header shall 
> follow the repeat sequence header.)
> 
> 6.1.1.1. Progressive and interlaced sequences
> This specification deals with coding of both progressive and interlaced 
> sequences.
> 
> The output of the decoding process, for interlaced sequences, consists of a 
> series of reconstructed fields that are separated in time by a field period. 
> The two fields of a frame may be coded separately (field-pictures). 
> Alternatively the two fields may be coded together as a frame 
> (frame-pictures). Both frame pictures and field pictures may be used in a 
> single video sequence.
> 
> In progressive sequences each picture in the sequence shall be a frame 
> picture. The sequence, at the output of the decoding process, consists of a 
> series of reconstructed frames that are separated in time by a frame period.
> 
> 6.1.1.2. Frame
> 
> A frame consists of three rectangular matrices of integers; a luminance 
> matrix (Y), and two chrominance matrices (Cb and Cr).
> 
> The relationship between these Y, Cb and Cr components and the primary 
> (analogue) Red, Green and Blue Signals (E’R , E’G and E’B ), the chromaticity 
> of these primaries and the transfer characteristics of the source frame may 
> be specified in the bitstream (or specified by some other means). This 
> information does not affect the decoding process.
> 
> 6.1.1.3. Field
> 
> A field consists of every other line of samples in the three rectangular 
> matrices of integers representing a frame.
> 
> A frame is the union of a top field and a bottom field. The top field is the 
> field that contains the top-most line of each of the three matrices. The 
> bottom field is the other one.
> 
> 6.1.1.4. Picture
> 
> A reconstructed picture is obtained by decoding a 

Re: [FFmpeg-user] bwdif filter question

2020-09-21 Thread Edward Park
Morning,

> Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of 
> 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame' 
> but I'm not sure. Confirming it as a fact is a side project that I work on 
> only occasionally. H.242 defines "interlace" as solely the condition of PAL & 
> NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to 
> pursue that further because I don't want to be perceived as a troll. :-)

I'm not entirely aware of what is being discussed, but progressive_frame = 
!interlaced_frame kind of sent me back a bit, I do remember the discrepancy you 
noted in some telecined material, so I'll just quickly paraphrase from what we 
looked into before, hopefully it'll be relevant.

The AVFrame interlaced_frame flag isn't completely unrelated to mpeg 
progressive_frame, but it's not a simple inverse either, very 
context-dependent. With mpeg video, it seems it is an interlaced_frame if it is 
not progressive_frame, and it shouldn't occur where mpeg progressive_sequence 
is set.

Basically, the best you can generalize from that is the frame stores interlaced 
video. (Yes interlaced_frame means the frame has interlaced material) Doesn't 
help at all... But I don't think it can be helped? Since AVFrames accommodates 
many more types of video frame data than just the generations of mpeg coded.

I think it was often said (not as much anymore) that "FFmpeg doesn't output 
fields" and I think at least part of the reason is this. At the visually 
essential level, there is the "picture" described as a single instance of a 
sequence of frames/fields/lines or what have you depending on the format and 
technology; the image that you actually see. 

But that's a visual projection of the decoded and rendered video, or if you're 
encoding, it's what you want to see when you decode and render your encoding. I 
think the term itself has a very abstract(?) nuance. The picture seen at a 
certain presentation timestamp either has been decoded, or can be encoded as 
frame pictures or field pictures.

Both are stored in "frames", a red herring in the terminology imo. The AVFrame 
that ffmpeg deals with isn't necessarily a "frame" as in a rectangular picture 
frame with width and height, but closer to how the data is  temporally 
"framed," e.g. in packets with header data, where one AVFrame has one video 
frame (picture). Image data could be scanned by macroblock, unless you are 
playing actual videotape.

So when interlace-scanned fields are stored in frames, it's more that both 
fields and frames are generalized into a single structure for both types of 
pictures called "frames" – AVFrames, as the prefix might suggest, can also be 
audio frames. And though it's not a very good analogy to field-based video, 
multiple channels of sound can be interleaved.

I apologize that was a horrible job at quickly paraphrasing but if there was 
any conflation of the packet-like frames and picture-like frames or interlaced 
scanning video lines and macro block scanning I think the info might be able to 
shift your footing and give you another perspective, even if it's not 100% 
accurate.

Regards,
Ted Park


Re: [FFmpeg-user] Create lossless PNG compressed avi video - not MPNG - for ImageJ

2020-09-15 Thread Edward Park
Hi,

>> ... error in ImageJ: "Unsupported compression: 474e504d 'MPNG' in line 16”
> 
> ffmpeg -i input -vcodec png -vtag "PNG " out.avi

I'm just curious, do you know if MPNG a thing, like can you use it instead of 
original magic bytes and have a series of png's to animate it?

Regards,
Ted Park


Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-14 Thread Edward Park
Hi,

Now that I try it, it works fine for some random number of seconds, then stops. 
Sometimes 3, sometimes 300.

Something that comes to mind is that the Mojave release notes had something 
about a new model for security as it pertains to mic input (like to prevent a 
mac version of a "wiretap" type malware). I checked and I had given Terminal.app 
access at some point, but that might not be enough. I think if you enable 
DevToolsSecurity you can whitelist specific binaries to run, but I don't know 
if that also is possible for entitlements.

> I have had a chance to test the issue on friends' laptops so here are two
> more data points. They have only version 4.3.1 I believe, not the latest
> HEAD. Their built-in mic sample rate is 48000Hz (I suppose more recent
> laptops updated it?).

Yeah, I don't remember the details but apparently it's more efficient or 
something?
48000 is certainly a much nicer number when you compare it with the common 
video framerates (24, 30/1.001, etc. all divide cleanly)

> 1) MacBook Pro (13-inch, 2018, Four Thunderbolt 3 Ports) Processor  2,3 GHz
> Intel Core i5
> No issues, captured audio sounds fine.
> 
> 2) MacBook Pro (13-inch, 2019, Four Thunderbolt 3 Ports) Processor 2,4 GHz
> Quad-Core Intel Core i5
> Missing audio samples as in my case (even more so), and the captured audio
> sounds even worse...

That's interesting because I'm pretty sure that was the year they started 
marketing the "directional beamforming mic array" that looked like the same 3 
mics as before, I wonder if they are interleaved/framed differently with a new 
chip?

> So unfortunately it looks like a problem with ffmpeg's avfoundation
> implementation at this point...

Well you can look at it that way, but another way to look at it might be that 
apple makes breaking changes to their system framework apis all the time :p

> I am making some recorded lectures using the webcam output of ATEM mini and 
> so the sound capture has to be flawless. FFmpeg is such a great tool for 
> encoding and I hoped to use it to grap 1080p webcam instead of the too simple 
> QuickTime, which offers only an Apple prores codec with 2GB/min of data... 
> OBS seems to have too much overhead for 1080p on my 2core laptop. Maybe I'll 
> have to grab the audio using quicktime and video using ffmpeg and sync them 
> up in iMovie.


Okay now I am not sure what the setup is. FFmpeg or QuickTime will record 
whatever input it gets if it can mux it, I think you should take another look 
at the controller for your capture interface (software or hardware). Since they 
call it "webcam" I really don't think prores would be the only output format, 
especially on hardware with that price tag, surely it has built in h.264, 
especially over usb.

Are you using the same device to record audio? I think that would be better if 
you're not, even if you have to add a seemingly unnecessary roundtrip.

Regards,
Ted Park


Re: [FFmpeg-user] Problem about duration value of converted mp3 file

2020-09-13 Thread Edward Park
Hi,

> Does anyone know anything about it?
> I use the 96k option, but a difference of about 1s remains.
> The file size is large, so I want to use it with more compression, but I
> cannot apply it.

>> The contents requested for confirmation have been retested and confirmed.
>> This is the result of recording a wav file on an Android device and
>> converting it on Windows PC.
>> 
>> The length of the original file is 26:39, and the result of converting it
>> to the default option is 25:47, which is displayed in Windows Explorer and
>> the file size is 4,686KB.
>> If this is converted using the -b:a 96k option, it has the same length as
>> the original 26:39 and the file size is 18,740KB.
>> 
>> I checked and played both the original file and the converted file using
>> ocenaudio SW, it marked as 26:39 and played.
>> However, the converted file by default is displayed in the time of 25:47
>> in Media Player and played.

Okay, so the file displays 25:47 in Media Player and plays... how long? Have 
you tried actually timing it, perhaps with a stopwatch like on your phone? I 
mean, that is roughly a 50 second difference; if the number displayed is 
accurate I think you would be reporting some other issue, like the audio being 
truncated, sped up, skipping, etc.

Regards,
Ted Park


Re: [FFmpeg-user] Pixel format: default and filter?

2020-09-13 Thread Edward Park
Hi,

> How can I determine the pixel formats that ffmpeg has chosen for the filters' 
> input and output pads?


I'm not sure "chosen" is the best way to describe it, but inserting the 
showinfo filter will print the format of each frame at that point in the 
filterchain. But as format conversions are done automatically as needed, and 
you probably want to know which format it is throughout the filtering, you can 
see the info by inserting graphmonitor=f=format. (But this outputs as the video 
stream, so you may want to add a split filter, add the graphmonitor filter to 
one split and display it, and do what you were planning to with the other.
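
A rough sketch of the split approach (the input name and the scale filter here 
are placeholders for your real filterchain; one copy goes to your normal output, 
the other feeds graphmonitor and is written to its own file):

  ffmpeg -i input.mp4 \
    -filter_complex "[0:v]scale=1280:720,split[out][mon];[mon]graphmonitor=f=format[gm]" \
    -map "[out]" out.mp4 -map "[gm]" monitor.mp4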

Regards,
Ted Park


Re: [FFmpeg-user] Pixel format: default and filter?

2020-09-13 Thread Edward Park
Hi,

> ffprobe now reports out.mov being yuv420p. Is this an implicit conversion to 
> a lower bit depth?

It's just the default output format for overlay. It's commonly used for stuff 
across colorspaces (like yuv420p video and argb png logos overlaid) especially 
with alpha.

You can set the format option in the filter itself to force the output format. 
I assume it doesn't do any conversions internally when you overlay two sources 
with the same format and no alpha.
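
For example, something like this should keep 4:4:4 through the overlay (the 
exact set of accepted format values depends on your build; filenames are 
placeholders):

  ffmpeg -i base.mov -i logo.png \
    -filter_complex "[0:v][1:v]overlay=x=10:y=10:format=yuv444" out.mov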

Regards,
Ted Park


Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-12 Thread Edward Park
Hi,

You know it's confounding, I couldn't get avfoundation audio input to work at 
all, and then tried a bunch of options to get to the weird issue you describe 
(which I thought was maybe something to do with the clock source being 
different), but then I went back to supplying no extraneous options and it works 
pretty much without issue... In my case I was trying with a turntable plugged 
into an external interface, which I'm using with ffplay to listen through my 
Mac, seeing if it'll start stuttering again (???)

Some things I tried before it started working apparently on its own is 
explicitly setting decoder to pcm_f32le and setting input sample rates lower. 
Expectedly, it messes with pitch if you do this, but what I did not understand 
is it wasn't consistent, kept speeding up and back down. (And I'm sure it 
wasn't the spindle on the turntable)

I have no idea, other than that I think it's probably some other sound 
application changing system sound servers configuration with no relation to 
what I'm doing with ffmpeg...

Regards,
Ted Park


Re: [FFmpeg-user] rav1e encoding only using one core

2020-09-11 Thread Edward Park
Hi,
> I'm playing around with rav1e and noticed in my first test that only one
> core is used of the 8 (16 virtual) I have. I tried with and without
> -threads setting. Since most other codecs behave this way I was expecting
> it to use as many cores as possible if not constrained by the command line.

Unfortunately rav1e will disappoint you at the moment, as far as parallel 
encoding goes.
I don't know if most other codecs can be said to scale as well as you say; 
different codecs serve different purposes. Recent ones centered around 
consistent real-time performance will use the dozens of cores available in 
workstations nowadays, but most hit the cap pretty soon depending on config.

Regards,
Ted Park


Re: [FFmpeg-user] Error messages when generating ProRes from 16-Bit TIFF

2020-09-10 Thread Edward Park
Hi,

I only have a generic suggestion to offer; as always, try updating to the 
current code, or a nightly build.
> Log:
> 
> ffmpeg started on 2020-09-10 at 14:21:31
> Report written to "ffmpeg-20200910-142131.log"
> Log level: 48
> Command line:
> "C:\\Temp\\ffmpeg\\bin\\ffmpeg.exe" -report -i "D:\\Artus\\tif\\0%05d.tif" 
> -c:v prores_ks -profile:v  -vf "scale=2048x1550" -pix_fmt yuv444p10le 
> neu.mov
> ffmpeg version git-2020-06-15-9d80f3e Copyright (c) 2000-2020 the FFmpeg 
> developers
>  built with gcc 9.3.1 (GCC) 20200523
>  configuration: --enable-gpl --enable-version3 --enable-sdl2 
> --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass 
> --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame 
> --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg 
> --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr 
> --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx 
> --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 
> --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp 
> --enable-libvidstab --enable-libvmaf --enable-libvorbis 
> --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid 
> --enable-libaom --disable-w32threads --enable-libmfx --enable-ffnvcodec 
> --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc 
> --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt 
> --enable-amf
I was curious about a few things though, why might someone add 
"--disable-w32threads"? Can you use posix threads instead?

Also, is 2048x1550 not a typo? (1550 doesn't factor very well, 2×25×31)
I can't articulate a clear rationale for this (and so it might not be optimal) 
but I would personally convert formats then change the size, putting the format 
filter before scale instead of pix_fmt. Or more likely I wouldn't even think of 
that and expect ffmpeg to figure that part out for me.
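
Putting the format filter before scale would look something like this (the 
prores_ks 4444 profile is just an example choice for a 10-bit 4:4:4 target; 
adjust it to whatever you intended for -profile:v):

  ffmpeg -i 0%05d.tif -vf "format=yuv444p10le,scale=2048:1550" -c:v prores_ks -profile:v 4444 neu.mov
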
> [AVIOContext @ 0219ad3f0c40] Statistics: 146089658 bytes read, 0 seeks
> [tiff @ 0219ad19ea80] compression: 1
> frame=0 fps=0.0 q=0.0 size=   0kB time=-577014:32:22.77 bitrate=  
> -0.0kbits/s speed=N/A
> cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if 
> it occurs once at the start per stream)
Again, not sure what it means if anything at all, but the time looks like it 
rolled over.

Regards,
Ted Park


Re: [FFmpeg-user] Problem with time-lapse

2020-09-09 Thread Edward Park
Hi,

> ffmpeg -i /mnt/ramdisk3g/workdir/%d.jpg -r 120 -vcodec mpeg4 -qsale 1 -y
> -filter:v " setpts=0.23*PTS"'' /mnt/ramdisk/mp4/
> 
> And the result is this
> https://drive.google.com/file/d/1acRm7vWwJAXz1jEU66nuD7Db9mrtPuz-/view?usp=sharing
> 
> In workdir i have multiplied every image 6 times with python counter loop
> and total number of files are 1374 when there is only 229
> Iam pretty happy to results now. Not sure how that minterpolate works but i
> will try use that too.


Glad you got something that works for you, but tbh what I suggested was 
basically to do nothing, just make a super fast slideshow, essentially.

That example video wasn't even made using FFmpeg, I opened 110 frames from your 
sample in an image viewer and held down the arrow key to "animate" them, a 
book-corner doodle flipbook on a computer, if you will.

It's fine as long as it works, but I think you can still eliminate some steps.

For example you said you multiplied each image 6 times, I'm guessing that means 
you made 6 identical files, so the same frame is on screen 6 times, which makes 
the video 6× slower.

IIRC, images read in get timestamped as if 25fps by default, but you can change 
it, instead of actually duplicating the images (-framerate 25/6).

Then the "setpts=0.23*PTS" basically increases the speed by 1/0.23 ≈ 4.3×, 
around 109fps.

The final framerate is fixed at 120fps with "-r 120" (which I assume is 
necessary) which duplicates frames to fill 120 from the ~109fps that is 
available.

I just feel like this is too much stretching and shrinking the "tape" to get an 
effective 25/6/0.23 fps, or 12.61 seconds if you are adjusting by length. All 
you need to do is set the rate you want before the input.
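
Roughly like this (18 fps approximates the effective 25/6/0.23 rate; the output 
filename is just a placeholder, since your command only gave a directory):

  ffmpeg -framerate 18 -i /mnt/ramdisk3g/workdir/%d.jpg -vcodec mpeg4 -qscale:v 1 -y /mnt/ramdisk/mp4/timelapse.mp4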

https://media.kumowoon1025.com/videos/example/starvideo-deduplication-cf.mp4 


The "-r" option will duplicate the frames as needed to reach the fps you set. 
The minterpolate filter will try to improve upon that by interpolating between 
frames to generate the "in-between" frames to fill in by analyzing the existing 
ones instead of simply duplicating them. There's not a lot of obvious motion 
here, so try blend as the mode (I think this may have been what you had in mind 
with tblend at the start)
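
Something like this, untested (again, paths and the target fps are placeholders):

  ffmpeg -framerate 18 -i /mnt/ramdisk3g/workdir/%d.jpg -vf "minterpolate=fps=30:mi_mode=blend" -vcodec mpeg4 -qscale:v 1 -y /mnt/ramdisk/mp4/timelapse_blend.mp4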

https://media.kumowoon1025.com/videos/example/starvideo-minterpolate.mp4 


Regards,
Ted Park


Re: [FFmpeg-user] Output to S3

2020-09-09 Thread Edward Park
Hi,

>> Is setting up a sort of proxy server that will intermediate and upload
>> to s3 an option?
> It might be the option, but it will be the last option I'd like to accept.
> I'm seeking for more simple solution.
> I'm afraid that I'll stuck on sending a video to that server too.
Is the aversion to using the official s3 utilities due to resources and lack of 
access, or the additional integration and associated learning curve that it 
brings? If it's the latter, and you can configure the server to your needs, I 
think you might find any of the solutions that mount s3-compatible buckets as 
network shares, or even as local fuse filesystems, to your liking. Performance 
is not the best, and you will see the request graph spike like mad, but it is 
in my opinion the most simple solution (since the s3 storage will be just like 
any other local path on the machine). Worth some consideration imo.
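
As one concrete illustration of the fuse approach (s3fs is named here only as an 
example, the bucket name and mount point are placeholders, and any other 
s3-mounting tool works the same way as far as ffmpeg is concerned):

  s3fs my-bucket /mnt/s3 -o passwd_file=$HOME/.passwd-s3fs
  ffmpeg -i sample_640x360.ts -c copy /mnt/s3/video/output.ts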

>> That is a very small window of view into the error you got, I'd try
>> using the send_expect_100 option and confirming which ssl library is used,
>> no real basis but what I might start with when I do the throw all of it on
>> the wall and see what sticks routine.
> The error says:
> Error in the push function.
> av_interleaved_write_frame(): I/O error
> Error writing trailer of
> https://bogdan-public.s3.us-east-2.amazonaws.com/video/output.ts: I/O error
> frame=  218 fps=184 q=10.9 Lsize= 404kB time=00:00:07.17 bitrate=
> 461.4kbits/s speed=6.06x
> video:352kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB
> muxing overhead: 14.843175%
> [tls @ 01b41de52940] The specified session has been invalidated for
> some reason.
> [tls @ 01b41de52940] Error in the pull function.
> [https @ 01b41de52840] URL read error: I/O error
> Conversion failed!
I am thinking that the "Error in the push function" is not from FFmpeg. I 
missed it earlier, the quoting was messed up in my email client, but the only 
problem FFmpeg seems to be reporting is "I/O error," and consistently before or 
after sending the main data body. (Let me know if I am wrong about this)

So in addition to the -send_expect_100 true, I would add -multiple_requests 
false to disable pipelining
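
I.e., with both options added to your earlier command (they are http protocol 
options, so they go before the output URL):

  ffmpeg -i sample_640x360.ts -send_expect_100 true -multiple_requests false -method PUT https://bogdan-public.s3.us-east-2.amazonaws.com/video/output.ts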

> This is the whole output:
> 
> d:\Programs\ffmpeg-20200831-4a11a6f-win64-static\bin>
> ffmpeg -i sample_640x360.ts -method PUT
> https://bogdan-public.s3.us-east-2.amazonaws.com/video/output.ts
But the problem is the invalidated session. It suggests that you successfully 
got a valid session authorized beforehand; if it was invalidated, there's no way 
to know why unless we start from there. Is this command unabridged as well? I 
think you mentioned you altered the url, but there's no api key or anything I 
would expect to be in the header of a request that uploads a file to an endpoint.

> I tried several video formats with this command:
> ffmpeg -i sample_640x360.ts -method PUT
> https://bogdan-public.s3.us-east-2.amazonaws.com/video/output.ts
> 
> The bucket (its name is changed here) is public and I was able to send data
> to it by Http Put request from the C# code sample.

I have a feeling the C# code sample was part of a whole AWS S3 C# SDK, handling 
all AAA with credentials hardcoded in a configuration file or something. Doing 
the same with a REST api will require more hands-on operation.

Regards,
Ted Park


Re: [FFmpeg-user] Unable to play or import video files in an app that uses ffmpeg

2020-09-08 Thread Edward Park
Hi,

> Issue: For me specifically on my Windows 10 PC, these video playback and
> import features do not work. When I select a video to play, nothing happens
> (the app is supposed to have an inner window that displays the playback but
> that is non-existent when I hit play). 

Perhaps some differential diagnosis could be helpful. It sounds like the 
program worked, and you used it successfully at some point, what has changed? 
Which operating system did you run before?

Regards,
Ted Park

Re: [FFmpeg-user] Problem with time-lapse

2020-09-08 Thread Edward Park
Hi,

> I'm trying to automate time-lapse video from still images. I have all sky
> camera that captures around 200 images / night. I've managed to do videos
> like this
> https://drive.google.com/file/d/1yyihZNypBy0r5y_JJiTHgtvc8N1RAZ3e/view?usp=sharing

Timelapse can be counterintuitive sometimes.

Sometimes you need to slow the framerate down, sometimes you need to raise it. 

Sometimes you need less frequent captures (or a step in the middle where you 
approve/reject the pictures that are going to be your frames), and sometimes, 
you need to capture at a much higher rate. 

> However what i'm looking for is for smoother video. Is it possible with
> ffmpeg?

I think in this case it is a matter of not enough captures to make a good 
timelapse video.

> I've tried to clone every image 30 times etc but still fail to get good
> results. I've tried to search ffmpeg man pages and the web but no success.
Also, maybe interpolating/duplicating frames is actually counterproductive, 
since it gives you a less clean starting point? I think the simplest timelapse 
render looks pretty good; from there maybe you can minterpolate, but I'm not 
sure the filters you have before minterpolate help.

https://media.kumowoon1025.com/videos/simple-timelapse-example.mov

> I use python script to automate the whole process. so below is some option
> ive tried
> 
> os.system('ffmpeg -i ' + workdir2 + '%d.jpg  -vf '
> '"tblend=average,framestep=1,setpts=0.50*PTS,minterpolate"'  + ' -r 30 -b:v
> 64K -crf 10 -an ' + destinationdir + videofile)
> os.system('ffmpeg -r 30 -i ' + workdir2 + '%d.jpg -vcodec mpeg4 -qscale 1
> -y -filter:v '" setpts=2.0*PTS"' ' + destinationdir + videofile)

This is just a general tip: I can excavate the command from that, but it would 
be immensely helpful if you put it into a line you can feed into a shell, and 
also attach the output.
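
For instance, the second command above, excavated into a plain shell line (the 
paths and output name are placeholders for whatever your script fills in):

ffmpeg -r 30 -i /path/to/workdir/%d.jpg -vcodec mpeg4 -qscale 1 -filter:v "setpts=2.0*PTS" -y /path/to/output.mp4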


Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] I can't copy or embed a picture to an output audio (A-A conversion)

2020-09-08 Thread Edward Park
Hi,

> ffprobe -hide_banner -show_streams /Users/home/Music/iTunes/iTunes\ 
> Media/Music/Black\ Sabbath/Born\ Again/01\ Trashed.aiff — the command
> 
> ##Result:
> 
> Input #0, aiff, from '/Users/home/Music/iTunes/iTunes Media/Music/Black 
> Sabbath/Born Again/01 Trashed.aiff':
...
>  Duration: 00:04:19.00, start: 0.00, bitrate: 12309 kb/s
>Stream #0:0: Audio: pcm_f32be (fl32 / 0x32336C66), 192000 Hz, 2 channels, 
> flt, 12288 kb/s
>Stream #0:1: Video: png, rgb24(pc), 732x745, 90k tbr, 90k tbn, 90k tbc 
> (attached pic)
...
> [STREAM]
> index=0
> codec_name=pcm_f32be
> codec_long_name=PCM 32-bit floating point big-endian
> profile=unknown
> codec_type=audio
...
> [/STREAM]
> [STREAM]
> index=1
> codec_name=png
> codec_long_name=PNG (Portable Network Graphics) image
> profile=unknown
> codec_type=video
...
> [/STREAM]
> 
> 
> Based on that info I tried execute the following reincarnations of the 
> command to copy what ffprobe  interpreted as a video stream but 
> unsuccessfully:
...
> The commands either fail, show warnings such as “the video stream doesn’t 
> exist” when I’m trying to map it, or transcode only the audio stream when I 
> use -vcodec:copy or -c:v copy. Running ffprobe on newly created file shows 
> the single audio stream as well.

Yes, it's rather misleading, isn't it? Based on what the ff* tools tell you, 
what you tried (or at least one of the commands) should definitely have worked. 
The problem is that the output from the commands you used to plan your approach 
is not really an accurate description.

> A stunning revelation awaited me when I passed the input through MediaInfo 
> application. It showed only the audio-stream and no metadata reported by 
> ffprobe hence the “no videostream exists” - or smth to the effect of that - 
> message I mentioned above. What’s going on and where did ffprobe retrieve the 
> metadata and cover information from? Is it a way to copy or embed an artwork 
> or/and metadata to a new audio?

The output from mediainfo gives some indication that something unusual is going 
on: the video "stream" reported by ffprobe isn't really another stream. And 
when you think about it, it makes sense. It's an aiff format file, how is it 
going to have a video stream? I don't even think it can have multiple streams 
of any kind (at least the "simple" kind, not the ones that are saved by drum 
machines and sequencers and whatnot; I think those get a different extension 
anyhow).

The answer is that the metadata and cover image are in an id3 tag. Use 
-write_id3v2 true with the aiff muxer to force it to write one. It's disabled 
by default for good reason though; chances are applications other than iTunes 
(and now its various spawns) won't expect, or look for, one.
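
As a sketch of what that might look like (filenames are placeholders, and I 
haven't double-checked that the attached picture survives a straight stream 
copy):

ffmpeg -i "01 Trashed.aiff" -map 0 -c copy -write_id3v2 true output.aiff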

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Output to S3

2020-09-08 Thread Edward Park
Hi,
> I can't rely on "aws s3 cp", since it can be not installed on a machine
> where I will run ffmpeg.
Is setting up a sort of proxy server that will intermediate and upload to s3 an 
option?

> Do you know what this error means ?
> [image: image.png]

That is a very small window of view into the error you got. I'd try using the 
send_expect_100 option and confirming which SSL library is used; no real basis, 
just what I might start with when I do the 
throw-it-all-at-the-wall-and-see-what-sticks routine.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg http filter?

2020-09-07 Thread Edward Park
Hi,

> ffmpeg -i test.jpg -vf format=rgb24,http=localhost:8080 -y out.jpg


I don't think it's possible using filters, or with a single invocation like 
that. (The 'http' filter is hypothetical and just meant for illustration right?)

Depending on how you're connected to the server, I think pipes or sockets would 
be a better place to start. And also separate commands: one outputting frames 
for the server to consume, and another that takes the returned images back in.
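
As a rough sketch of the pipe idea, assuming a hypothetical program 
post_to_server that reads an image on stdin, POSTs it to the server, and writes 
the returned image to stdout (that program and the URL are made up, not 
something that ships with FFmpeg):

ffmpeg -i test.jpg -vf format=rgb24 -f image2pipe -vcodec png - | post_to_server http://localhost:8080 | ffmpeg -f image2pipe -vcodec png -i - -y out.jpg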


Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Convert DNG

2020-09-07 Thread Edward Park
Hi,

>> I'm working in an Archive where we digitize analog movies. Our master-files 
>> are usually DPX but now we get a new scanner which save the movies in DNG 
>> files. Is there a possibility to convert de DNG files with FFMPEG to DPX 
>> files
> 
> Do you mean CinemaDNG? That's not the same as DNG. DNG is a file format for 
> RAW still images, while CinemaDNG is for videos.

I'm pretty sure CinemaDNG is a series of DNG files plus audio, timing metadata, 
etc. 
And neither is an open standard, which is somewhat ironic because Adobe 
developed them as an answer to the plethora of proprietary raw image and film 
formats.


https://xkcd.com/927 

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] "interlace"

2020-09-06 Thread Edward Park

> I have great respect for you, Ted. Tell me: What is interlaced video?

Well that is unwarranted, possibly spiteful, even, to someone as insecure as me 
;)

That is the real question, isn't it? This probably won't be very satisfying, 
but I'll try to describe it. It's probably not useful to say it's video whose 
lines are interlaced (or are to be interlaced). Video as in not film? Video 
where individual frames don't correspond to a single moment in time.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] "interlace"

2020-09-06 Thread Edward Park
Hi,

> In the case of "interlace":
> "It's interlaced video" -- video in which the lines alternate (i.e. are 
> interlaced) between two (or theoretically, more) fields (e.g. 
> odd-even-odd-even...). That employs the past participle of the verb, 
> "interlace", as an adjective to modify "video".
> 
> H.262 refers to "interlace video" when referring to hard telecine. But 
> "interlace video" is a bit of a mutt. "Interlace" is clearly being used as an 
> adjective, yet "interlace" is not a participle (past or present) -- 
> "interlaced" is the past participle and "interlacing" is the present 
> participle. What it would have to be is a verbal adjective (i.e. a verb used 
> as an adjective). I may be wrong, but I don't think there exists such a thing 
> as a verbal adjective.
That's more or less what a participle is.

> A hard telecined video residing on a disc is clearly not interlaced. It is 
> clearly deinterlaced (i.e. stored in fields). Since it is deinterlaced, it 
> can be fed directly to a scanning TV (i.e. CRT) provided it is of the proper 
> frame rate, or it can be interlaced -- a verb -- by a decoder as part of the 
> decoding step.

Is it? Hard telecine is like telecining film and then recording it on a VCR, 
isn't it? And you don't need to deinterlace interlaced video to display it on 
an interlaced scanning TV. I think the confusion is when you deinterlace 
interlaced video, it is still interlaced video (or at least I think of it that 
way).

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ? About ffmpeg's prores implemention

2020-09-06 Thread Edward Park
Hi,

>> I'm not sure what you are referring to, what's there to spread fud about??
> 
> Closely read your text that you removed out.

You're reading into it too much if it provoked fear or uncertainty in your 
mind; that wasn't my intention.

>> What is, AME or one of the prores encoders included in FFmpeg? I mean
>> obviously it'd depend on the job and hardware but for what it's worth I
>> compared prores_ks with xq and it took around twice the wallclock time.
>> It did indeed take way more than twice the cpu time though, but if its
>> output has problems then the whole point is moot :/
> 
> Your comparison is not scientifically proven and thus is highly
> personally biased.

Yes it was an observation of a single test case. I did say it was pointless. 
And I think being biased towards fairness is something to be avoided, at least 
personally, that is.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Ghostly audio stream.

2020-09-06 Thread Edward Park
Hi,

Is there a file with the same name or similar (with "-2" appended or something) 
in the same dir? Maybe you confused which is which? As for how to prevent it, 
currently the default is not to load external audio files, so updating ffmpeg 
and mpv might be an option.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] HLS remote playback

2020-09-04 Thread Edward Park
Hi,

> Now, how can I open that file and watch the stream on my laptop with the VLC?
> Do I need some NGNIX or Apache server or FFMPEG can do the job?


Yes, HLS stands for HTTP Live Streaming; you will need to serve the files over 
http somehow. It doesn't have to be Apache or nginx, I think there's a 
rudimentary webserver built into Python, for instance.

For the client yes FFmpeg can just take an http scheme url as input, i.e. 
ffmpeg -i http://hostname/path/to/playlist.m3u8 ...
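
For example (the port and hostname are just placeholders), Python's built-in 
module can serve the directory that holds the playlist and segments:

cd /path/to/hls/output && python3 -m http.server 8080

and then VLC or ffmpeg on the laptop can open http://server-ip:8080/playlist.m3u8.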

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] build error at libavformat/udp.o

2020-09-01 Thread Edward Park
Hi,

>>   d. Android NDK version : r16
> 
> The issue is (for example) not reproducible with r19b here.


It turns out this is the crux of the problem: an Android idiosyncrasy in its C 
library. The error was actually describing the exact problem, the 
ip_mreq_source struct is defined differently.

https://issuetracker.google.com/issues/36987220

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem about duration value of converted mp3 file

2020-09-01 Thread Edward Park
Hi,

> When converting a wav file to MP3 using the default option, an error occurs
> in the length.
> Converted using the following command
> 
> ffmpeg.exe -i 1.wav 1.mp3
> 
> The duration of the original wav is 1:09:30, but the length of the
> converted MP3 is 1:07:16.
> The length of the file was checked through Windows Explorer and Windows
> Media Player.
> However, when checking with ocen audio and other software, it is normally
> displayed as 1:09:30.

That's more of a compromise than an error: the encoder is most likely LAME, and 
with a command like that ffmpeg would use VBR by default. So to get an accurate 
duration you pretty much need to decode, and it appears Windows Media Player 
estimates instead. If there were a difference when you play it back with a 
stopwatch in one hand, that would be strange.
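
For what it's worth, one way to check the "real" duration is to decode the 
whole file and look at the final time= value it prints, something like:

ffmpeg -i 1.mp3 -f null -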

> When I tested using the -b:a option, 64k and 96k are converted to the same
> length, but there is a problem with 32k and 48k.
> 
> ffmpeg.exe -i sample_2.wav -b:a 96k sample_2_96.mp3
> 
> In addition, when converting m4a files to MP3, a problem occurs also in 96k.
> Please help me on what to fix or give options.

I think setting the bitrate makes it encode at CBR, and that makes it possible 
to determine the duration more accurately, but I'm not sure why different 
bitrates give different results.

Actually, how confident are you about the accuracy of the input file duration 
that you are making these comparisons against?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] build error at libavformat/udp.o

2020-08-31 Thread Edward Park
Hi,

> 3. When I tried to use FFmpeg 4.0.6, I do not have any problem to build it. I 
> have even done simple smoke test successfully for our final product.

Are you saying that with the older source, following the same steps gives you 
functional output? I thought you were trying to configure the build without a 
cross-compiling toolchain for whatever Android device, but I don't know now... 
Hopefully someone better informed will help you out, but I think it's weird 
that it doesn't look like you're using an SDK for developing for Android for 
the tools; usually they have extended, arch-specific names to distinguish them. 
(If you tell clang to cross compile for arm and it uses its own as, ld, ar, C 
library, etc., I don't think you can do that for Android.)

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ? About ffmpeg's prores implemention

2020-08-30 Thread Edward Park
Hi,

> Again just typical FUD from same person.

I'm not sure what you are referring to, what's there to spread fud about??

> It is obviously slower, hahaha.


What is, AME or one of the prores encoders included in FFmpeg? I mean obviously 
it'd depend on the job and hardware but for what it's worth I compared 
prores_ks with xq and it took around twice the wallclock time. It did 
indeed take way more than twice the cpu time though, but if its output has 
problems then the whole point is moot :/

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Hoping to make multiple input seeking outputs based on a text file (inspired by concat)

2020-08-30 Thread Edward Park
Hi,

> Seems like a very simple script to do this. IMHO, this type of task shouldn't 
> be built into ffmpeg.

It seems to me that is the prevalent opinion. Why is that? To exhume a dead 
horse and start beating it again, the jumbled up program streams in a DVD's VOB 
files require this exact maneuver to play. Or is the idea that a separate 
library should deal with this? I guess that's what bluray does...

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ? About ffmpeg's prores implemention

2020-08-30 Thread Edward Park
Hi,

> After more research, I’ve found that the two other encoders that I have which 
> write Prores create files that Afterburner recognizes as Prores and plays 
> them in real time.  Unfortunately ffmpeg’s Prores isn’t recognized as such by 
> the Mac and I’ll have to switch to AME for this going forward.  Too bad 
> because ffmpeg is so beautifully multithreaded and fast… I’ll miss it!

Well, Adobe pays Apple to use their ProRes codec in AME, so that should be a 
foolproof option.

Basically, authorized partners have access to a lot more comprehensive and 
definitive specifications and architectural directives from the original 
authors when they develop their codecs, and they go through the pass/fail tests 
that all but guarantee that their output works with any and all other official 
implementations.

On the other hand, and I might be wrong about this, but the ProRes 
implementation in FFmpeg was pretty much reverse engineered by a couple 
(extremely talented) people, and it was done a long time ago. As talented as 
the authors are, obviously it is impossible to replicate the codec perfectly 
without the "blueprints." Nevertheless, it worked fine (until now), and it is 
definitely maintained, but some significant updates were made by Apple that I 
don't think have been fully realized by the changes in FFmpeg, especially in 
the last few minor versions of motion.

Since you are on a Mac, implementing the ProRes encoder through videotoolbox 
would be your solution. I tried to tackle that a few weeks ago actually, but I 
think I may have been in way over my head, haha. Since it's a feature that 
would only benefit macos builds, priority might be low, I don't know how much 
you want ProRes+FFmpeg, but I'm thinking you could hire whoever does the heavy 
lifting in macos videotoolbox to implement the prores encoder as well.

> Unfortunately ffmpeg’s Prores isn’t recognized as such by the Mac and I’ll 
> have to switch to AME for this going forward.  Too bad because ffmpeg is so 
> beautifully multithreaded and fast… I’ll miss it!

I'm a bit confused by this comment though... How beneficial additional threads 
are to performance is firstly dependent on the actual codec; FFmpeg and AME 
more or less just orchestrate the multi-threading, and I can't see AME falling 
behind very much in this specific case... Do you mean ProRes rendering doesn't 
rev the CPU usage up over 300% or 600% if you use AME? (Depending on source.) 
Ultimately it's the same as all the other "Apple authorized" ProRes apps, 
videotoolbox.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] error avformat_open_input

2020-08-30 Thread Edward Park
Hi,

> and the video its in the same path that the project.
I don't think that's the issue, because the error description says:
> ... unresolved external symbol "int __cdecl avformat_open_input(struct 
> AVFormatContext * *,char const *,struct AVInputFormat *,struct AVDictionary * 
> *)" 
> (?avformat_open_input@@YAHPAPAUAVFormatContext@@PBDPAUAVInputFormat@@PAPAUAVDictionary@@@Z)
>  referenced in function "bool __cdecl load_video(char const *)" 
> (?load_video@@YA_NPBD@Z) 
Also, I think questions on using the libraries and troubleshooting your code 
are referred to the libav list and leaves this list for help on how to use the 
command line tools themselves, unless something changed recently.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Some questions about PTS

2020-08-28 Thread Edward Park
Hi,

> Let's assume the framerate is constant. For example, I want to delay a video 
> by 5 frames and then hstack the original video and the delayed version:
> 
> ffmpeg -i test.mp4 -vf "split[a][b];[b]setpts=PTS+5/(FR*TB)[c];[a][c]hstack" 
> -y out.mp4


I would try tpad=start=5, but I'm not sure what happens for the first 5 
frames... If your example works I'm pretty sure it would work.

> ffmpeg -i test.mp4 -vf "split[a],tpad=start=5[b];[a][b]hstack" -y out.mp4

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Some questions about PTS

2020-08-28 Thread Edward Park
Hello,

I am not confident about this info but I've always thought the timebase is 
usually the reciprocal of the framerate *or smaller*. As in, the duration of a 
frame can be represented accurately enough as the difference between the 
timestamps, which aren't counted using seconds, but "ticks" in whatever 
timebase. So smaller fractions could be used as the timebase as long as the 
system is capable, and on the other hand, when you create low-fps video like 
4fps, obviously the timebase isn't going to be 1/4, it'll probably have the 
same timebase as any other export. (I think of it as an analog wall clock, 
except it doesn't go tick-tock every second, it goes tick-tock every 1/9 
seconds.)

Actually, I think for the codec timebase, it is more common for it to be 1/2 
the reciprocal of the frame rate; if that's codec-specific, I don't know why 
that is. Maybe you've also seen some setpts examples where you divide/multiply 
something by 2 for some arcane reason? Hopefully someone can explain further..

When you delay some frames by whatever amount, it necessarily effects a change 
in the frame rate (but not the timebase). I'm not sure where the FR value for 
setpts comes from; maybe it wouldn't matter if it stays the same as the nominal 
framerate indicated by the media, but if it is something that can change, say 
the effective rate at the end of the chain, then obviously it wouldn't work as 
expected.

Just for the sake of curiosity, what has you looking to delay frames using 
setpts? I feel there are easier methods.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ? About ffmpeg's prores implemention

2020-08-27 Thread Edward Park
Hi,

ProRes is proprietary, Apple actually calls FFmpeg out on having an unlicensed 
implementation:

> Using any unauthorized implementation (like the FFmpeg and derivative 
> implementations) may lead to decoding errors, performance degradation, 
> incompatibility, and instability.


But you said it worked fine until fairly recently; did you install the Pro 
Video Formats update that was released recently?

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] FFmpeg distribution help

2020-08-08 Thread Edward Park
Hi,

> We have developed software that requires the distribution of FFmpeg
> (included in the download). Our intention is to sell the software.
> Who do I contact to inquire further about how to proceed regarding
> permission, royalties, fees etc.?


Run ffmpeg -L to see which license is applicable.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] WavPack to PCM?

2020-08-08 Thread Edward Park
Hi,

> Is there a way to convert WavPack format to PCM using ffmpeg? 


If you just re-save it as wav, I think the default is 16-bit PCM.

i.e. (assuming the WavPack file has the usual .wv extension)
ffmpeg -i input.wv output.wav

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] dshow questions

2020-08-08 Thread Edward Park
Hi,

> Is -vcodec in this case an undocumented option of the dshow input device? I 
> ask because it's written before -i.
> Or is it an option for the output, and could (or should?) be written before 
> the output file?

I think it's not documented because it's deprecated; ultimately it's equivalent 
to codec:v, which is listed as an input/output option.

> Why is each line listed twice? Might there be a difference between the lines, 
> but FFmpeg can't show what the difference is?

That is weird, maybe it is listing them once for device then for the pin? Or it 
tries to list video and audio properties but the audio falls back to the video 
device?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffprobe

2020-07-17 Thread Edward Park
Hi,

> I am using ffprobe to calculate stream bps, with RTSP  every thing work fine, 
> with RTP , I need to add sdp file, I have one but couldn’t find how to tell 
> ffprobe to use it :
> 
> Ffprobe  -show_frames -rtsp_transport tcp rtsp://192.168.168.201/11   
> - work fine
> Ffprobe   -show_frames  rtp//192.168.168.201:1234--- don’t work
> need 11.sdp file


I think it's the other way around: the sdp file should tell some streaming 
software (ffmpeg might be an option) to stream rtp://192.168.168.201:1234, but 
I think you would need to specify what the codec is, how it is packetized, etc. 
as well, which is what rtsp does for you using the sdp file's contents. It 
basically takes care of controlling how the media is streamed.

i.e. Just having a text sdp file in the current directory where you run the 
command doesn't actually stream the media as specified; there is probably no 
process bound to that port at all.
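
If something actually is sending RTP to that port, then as a sketch (I haven't 
tested this exact line, and the protocols you need to whitelist may vary), the 
sdp file itself is usually what you hand to ffprobe:

ffprobe -protocol_whitelist file,udp,rtp -show_frames 11.sdp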

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Ffmpeg issues multiple HTTP requests when starting a video from a URL

2020-07-17 Thread Edward Park
Hi,

> There is no attempt to seek the video on the command line. The server returns 
> HTTP/1.1 206 Partial Content initially and ffmpeg then tries to get data at 
> different offsets before it prints "All info found" and proceeds with the 
> conversion. Multiple connections cause an additional delay which I'd like to 
> avoid no matter how small it is. Is there a mechanism to override this 
> behavior and have ffmpeg to work with the data stream as it is being read?

Information needed to parse the content can be near the end of the file, as it 
is in this case.

> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [debug] Format 
> mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [] type:'ftyp' parent:'root' sz: 
> 32 8 9495632
> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [debug] ISO: File Type Major 
> Brand: isom
> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [] type:'free' parent:'root' sz: 
> 8 40 9495632
> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [] type:'mdat' parent:'root' sz: 
> 9468817 48 9495632

At the beginning of the file, it is apparent that after 'ftyp' and 'free', 
'mdat' (the media data) will continue for another ~9 MB, but that's not really 
useful if it can't be decoded. So rather than keep downloading, it skips to the 
next section of the file, looking for the necessary info.
> [mov,mp4,m4a,3gp,3g2,mj2 @ 01faa550d080] [] type:'moov' parent:'root' sz: 
> 26775 9468865 9495632


And 'moov' contains the info needed to make sense of the rest of the file. Now 
that it has that, it goes back and downloads from where it short-circuited 
before. At <10 MB, as you say, it might make more sense to download the whole 
file and then work with it on disk. But for larger downloads, or with a 
not-so-great connection, I'd imagine opening/closing multiple connections isn't 
that big a cost, resource- or performance-wise. Maybe even more so in 
situations where one might not need the video stream, for example.

-multiple_requests is supposed to keep the same connection, but I don't know if 
it actually works, especially depending on which SSL library is used. If it 
doesn't, you can always curl or wget the whole file and avoid the extra 
connections' overhead, probably? Assuming that the media is self-contained in 
that one file, that is.
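
Also, if you control how that mp4 gets written in the first place, putting the 
moov at the front avoids the extra round trips altogether; a sketch (filenames 
are placeholders):

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4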
 

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] How do I increase the playback size of a video without rescaling it?

2020-07-16 Thread Edward Park
Hi,

> On Jul 16, 2020, at 08:39, Moritz Barsnick  wrote:
> 
> On Thu, Jul 16, 2020 at 07:22:39 -0500, fowman wrote:
>> No, I have the same on all three.
> 
> Ah, okay. *All* you players are able to scale the 720x576 video to
> fullscreen (automatically?), and *NONE* of them manages that with the
> 1280x720 video? That seems very unlikely. With every 1280x720 video? Or
> just this one?
> 
> I'm guessing the latter video is letterboxed, i.e. a large black box
> around it is encoded in the video. If you can share, we could tell.
> 
> If not, I know of no such property which would prevent such a video
> from being scaled. Generally speaking, all these players should somehow
> decode into a framebuffer, and then scale that, regardless of what the
> original video file says.
> 
> Moritz

I don't know if they are not used in all players but MP4 file format can 
contain the "clean-aperture" region info as well as matrices to specify sample 
transformations before the presentation should be displayed by default. 

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] copy encoder settings for cycling platform

2020-06-11 Thread Edward Park
Hi there,

> The Videos a very big and after uploading the Platform will re-encode it to a 
> small but well working file
> 
> I wont upload big files which they reduce from eg. 17GB to 4GB, but i dont 
> wont upload it in a lesser quality than they re-code
> 
> This mean i want upload it in the same quality or a little bit higher.
> 
> I have checked the Encoder Settings of the Platform and want ask if i can or 
> how i can simply copy this setting to produce the same quality.

The short answer is no. Each iteration of encoding (especially 
compression-heavy codecs such as h264) is all but guaranteed to have adverse 
effects on perceived quality or fidelity, on every encoding iteration. 
Replicating the encoder settings is not going to produce "lossless," or, what I 
think you are expecting, "no more loss than is necessary" output. 

The rule of thumb for codecs such as h264, hevc, vp9, etc. is to encode as few 
times as possible, if you are going for maximum fidelity to the original.

I may have misunderstood your question though, if that's the case, please let 
me know and I will try harder to understand your proposed workflow.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Question about licencing

2020-05-31 Thread Edward Park
Hi,

> Can we use ffmpeg for our demo without breaching any licence.
> 
> What documentation do I need to include on Git to ensure that users are
> aware that they are using FFMPEG and need to adhere to the licencing of
> that product.
> 
> Is there anything else we need to do to ensure that we adhere to the usage
> guidelines.

It depends on the license. Depending on the configuration, an ffmpeg build 
might not be redistributable at all in the first place.

If the demo uses the command line executables, run  ffmpeg -L  to check which 
license applies (if any) to that specific build.
For the static/dynamic libraries, I think they are necessarily distributed with 
the license text, or the api for each has av*_license() that will tell you 
which license you are granted.

I don't know how you are using FFmpeg, but just a reminder that GPL is very 
copyleft, and could possibly stipulate that you open source more than what's in 
the demo.
If you used other 3rd party libraries/components, you would need to comply with 
any accompanying license, which may not be compatible with the one for FFmpeg.

These are just a few things to think about regarding software licenses in 
general, there are definitely other aspects to consider. I don't know how big 
this project is, but you may want to consult legal if it's real big profile, as 
I append the obligatory disclaimer that I am not a lawyer and have not given 
any legal advice in this email, of which no part should be regarded as legal 
advice by anyone.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] How to analyse?

2020-05-30 Thread Edward Park
Hi,

Take a look at the silencedetect filter.
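
A minimal sketch of how it is typically invoked (the threshold and duration 
values are placeholders to tune for your material); the detected 
silence_start/silence_end times are printed to stderr:

ffmpeg -i input.wav -af silencedetect=noise=-35dB:d=1 -f null -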

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Text quality (credits etc) in h264

2020-05-29 Thread Edward Park
Hi,
> Normally I encode to a size of width 352 keeping aspect ratio, encoding to
> 704 is better, but obviously far bigger files

How many characters need to fit in the frame? Those numbers are like something 
on my old Game Boy... If increasing the width from 352 to 704 pixels is too 
expensive file-size wise, that means every 100kB matters, right? Is using the 
text itself not an option?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Text quality (credits etc) in h264

2020-05-28 Thread Edward Park
Hi,
> Are there any tips to maintain the quality of text in output files.
> For example credits or slides in zoom videos.

If mostly static text (or, for the credits, low constant motion) makes up most 
of the video content, I think multi-pass encoding would be beneficial.
Or, if you do not care about bitrate or filesize that much, you could keep 
lowering the CRF or set a higher constant or target bitrate.
It's hard to recommend starting points without knowing the pixel format for 
CRF, and also the dimensions for bitrate.
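
Just as a hedged starting point, not a recommendation tuned to your content 
(the CRF value and preset are guesses to adjust from):

ffmpeg -i input.mp4 -c:v libx264 -preset slow -tune stillimage -crf 18 -c:a copy output.mp4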

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] question about fps filter

2020-05-28 Thread Edward Park
Hi,
> The output file is how I noticed the issue it was reporting original frame
> rate as 24000/1001 and current frame rate as 3/1001.

I see, that's weird. This only happens with yadif_cuda, not yadif right? Is 
there a change if you hwdownload before fps filter (other than time)?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Converting zoom recordings

2020-05-26 Thread Edward Park
Hi,

I might be wrong, but it looks like the example you uploaded is from a session 
with at least two files, the other maybe named similarly but ending with _02 
(or maybe it is incomplete and the next part wasn't saved). I think it is 
serialized sample buffers; knowing what it is a recording of (type and number 
of streams) would help with reverse engineering it.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] question about fps filter

2020-05-26 Thread Edward Park
Hi,
> I found this in the debug output and am not sure why it sees what was
> passed by the command line but then is ignored in the filter graph can
> someone help me understand please? The complete debug output is available
> but too big to include here.
> 
> [Parsed_yadif_cuda_0 @ 0220ac5561c0] Setting 'deint' to value
> 'interlaced'
> [Parsed_fps_1 @ 0220ac74a500] Setting 'fps' to value '24000/1001'
> [Parsed_fps_1 @ 0220ac74a500] fps=24000/1001
> [graph 0 input from stream 0:0 @ 0220ac749f40] Setting 'video_size' to
> value '720x480'
> [graph 0 input from stream 0:0 @ 0220ac749f40] Setting 'pix_fmt' to
> value '119'
> [graph 0 input from stream 0:0 @ 0220ac749f40] Setting 'time_base' to
> value '1/1000'
> [graph 0 input from stream 0:0 @ 0220ac749f40] Setting 'pixel_aspect'
> to value '186/157'
> [graph 0 input from stream 0:0 @ 0220ac749f40] Setting 'frame_rate' to
> value '3/1001'
> [graph 0 input from stream 0:0 @ 0220ac749f40] w:720 h:480 pixfmt:cuda
> tb:1/1000 fr:3/1001 sar:186/157

This doesn't necessarily mean the fps was ignored; it just means the input has 
that frame rate. As you've said, it looks like the argument to the fps filter 
was parsed correctly; the message being logged before doesn't mean anything (it 
wasn't later overridden to 3/1001 or anything). Is the output file 
3/1001 fps?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem with colorhold filter

2020-05-16 Thread Edward Park
Hi,

I think this might have been a typo? In vf_colorkey.c:48 the "diff" isn't 
normalized to 0 - 1, but to 0 - sqrt(3).
double diff = sqrt((dr * dr + dg * dg + db * db) / (255.0 * 255.0));

changing it to 
double diff = sqrt((dr * dr + dg * dg + db * db) / (3 * 255.0 * 255.0));
seems to fix it for me.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Newbie needs help on capture video/audio using ffmpeg on RPI

2020-05-14 Thread Edward Park
Hi,

> you are not setting up the /dev/video0 input correctly see
>  >
> 
> Let the camera/capture card to the encoding  and save stream with -c:v
> copy -c:a copy

It doesn't look like that's an option; when a card that size lists a Haswell i5 
as a "system requirement" it's probably not an actual capture card and depends 
on QSV for its advertised encoding performance.
Maybe you could try a build using libv4l2 and using the v4l2 m2m codec names?
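
As a sketch of what that might look like on the Pi (the device node, size, 
framerate and bitrate are placeholders, and the codec name depends on what your 
build actually exposes):

ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 -c:v h264_v4l2m2m -b:v 4M output.mp4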

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] FFmpeg single threaded bottleneck

2020-05-14 Thread Edward Park
Hi,

Some values don't look right, try getting rid of them.
-thread_queue_size  seems arbitrary, it is queue length, not bytes
-indexmem  seems arbitrary, pretty sure default value is bigger
-rtbufsize 2147.48M is kind of abusive, especially for the audio inputs

I don't think you should be trying to buffer more; if the buffer keeps growing, 
buffering more won't last you long anyway.

I can't really tell what the dshow input mapping looks like, but I think this 
is about the limit of your system.
With a 6800K, assuming the GPU is full sized, are there enough lanes left for 
3 additional capture cards?
Using the hardware encoder for so many streams at once might also have 
something to do with it; you could try saving the raw input to a fast enough 
scratch disk to check that quickly.

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Subtitle metadata

2020-05-12 Thread Edward Park
Hi,
> Hi, I'm seeing a change in the way FFmpeg handles metadata options, 
> particularly for subtitles. I can't say when and which version cause it to 
> change.
> 
> FFmpeg now appears to ignore the "-disposition:s:s:0 none" option which I 
> would previously use to copy but not display subtitles by default:

I don't know if this was changed, but does replacing none with 0 and using a 
regular stream specifier after disposition work? e.g. -disposition:s:0 0

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Drawtext specific date and time with replay

2020-05-08 Thread Edward Park
Hi,
> Thank’s for reply. Yes I try it and it works fine but with this argument 
> (Timecode), it’s going back to 00:00:00:00 when you seek and this is exactly 
> that I don’t want. 

Oh, I did not know this happened; it sounds like a small bug? Then I suppose 
you could add setpts=PTS-STARTPTS before the drawtext filter and use the 
original command you had; it will set the starting pts to 0.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Drawtext specific date and time with replay

2020-05-07 Thread Edward Park
Hi,

> The Timestamp (1588305600) correspond at 2020:05:01 06:00:00. But when the 
> file is played, the date is OK, but the time does not correspond to 06:00:00 
> but 21:11:21


Apparently the second argument is an offset added to the timestamp. Have you 
tried using the timecode arguments instead of using pts?

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] multiple bitrates with different logos

2020-05-07 Thread Edward Park
Hi,

Are the logo png files transparent and the same size as the video frame? If so, 
you can use the overlay filter with the logo file as the second input.
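
For one of the variants, something like this (filenames are placeholders; 
repeat per bitrate/logo combination):

ffmpeg -i input.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=0:0" -c:a copy output_with_logo.mp4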

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] How to slow down a video

2020-05-05 Thread Edward Park
Hi,

> But I'm lost how use it correctly. I want to slow down the video by 1.4 and 
> then increase the audio frequency (pitch) by 1.25.


So if you slow the video down by 1.4, you are also slowing down the audio by 
the same amount, right? That is a tempo of 1/1.4, and to add the 1.25 pitch 
ratio on top of that you'd have to multiply the pitch by 1.75 (that is, 
1.4 * 1.25) if you use a filter that changes frequency/pitch; librubberband has 
a simple pitch ratio factor that it applies to the whole spectrum. Or you could 
resample to your output format's rate * 0.8 (1/1.25) and then asetrate back to 
the output sample rate, after stretching/shrinking the audio to be 1.25 times 
as long relative to the video; I think it will have a similar effect.

For example, slowing the video down could be -vf setpts=1.4*PTS, and for audio 
-af atempo=1/1.25/1.4,aresample=38400,asetrate=48000
(38400 being 48000 * 0.8, assuming a 48 kHz output)
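
If your build includes librubberband, an alternative sketch that keeps the 
arithmetic simpler (slow everything down by 1.4 with atempo, which preserves 
pitch, then raise the pitch by 1.25):

ffmpeg -i input.mp4 -vf "setpts=1.4*PTS" -af "atempo=1/1.4,rubberband=pitch=1.25" output.mp4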

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] change inputs or mapping while recording

2020-04-24 Thread Edward Park
Hi,

> That works only under Linux, right? It seems libzmq is not enabled in the 
> FFmpeg build for Windows from Zeranoe.

FFmpeg would need to have been built with 0mq support but I'm almost sure there 
are windows versions of the library, client and server. Any way to send 
commands to a filter would work, you can sort of do it in the shell that's 
running FFmpeg (typing "c" brings up a command line) but it pauses everything.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] change inputs or mapping while recording

2020-04-24 Thread Edward Park
Hi,

> How can this be done with FFmpeg? Do you have an example? I know how commands 
> / sendcmd works. The opacity could be toggled, or the streamselect filter 
> could be used. But where does the switching signal come from, while FFmpeg is 
> running?


The specific messaging protocol/method would depend on a variety of factors 
including personal preference, but to parrot an example given in the manual 
using zmq, 

% ffmpeg -i INPUT -filter_complex 
'null[main];movie=INPUT2,zmq,lumakey@toggle=tolerance=1,[main]overlay,realtime' 
OUTPUT

and sending commands to the named lumakey filter would mimic toggling between 
the two streams if they were the same size and position. 
i.e. using the zmqsend example program,

% zmqsend <<<"lumakey@toggle tolerance 0"

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Edward Park
Hi,

> Output is actually 29.97p with 5th frame duplicates . The repeat field flags
> are not taken into account.

> If you use direct encode, no filters, no switches, the output from soft
> telecine input video is 29.97p, where every 5th frame is a duplicate
> 
> e.g
> ffmpeg -i input.mpeg -c:v rawvideo -an output.yuv
> 
> But you can "force" it to output 23.976p by using -vf fps
> 
> Is this what you mean by "forward the correct time base" ?

I think "every 5th frame is a duplicate" is only accurate for shorter 
durations; I think you will see it if you look at the timestamps of each frame 
over a longer period. They advance by 2 of the 60 fps 'ticks', then 3 ticks, 
etc., as if the duration was determined using the repeat-first-field and 
top-field-first flags.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] change inputs or mapping while recording

2020-04-24 Thread Edward Park
Hi,

> I would now like to add stream2 as second video input and switch
> between stream1 and stream2 back and forth without interrupting the
> audio. Both streams are identical / come from identical cameras.
> 
> Is there any sane way to do this with ffmpeg? Or how would you
> recommend doing it?
If I had to do this, I would basically composite the two streams and toggle 
opacity of the top layer between 0 and 1.
It would break if either stream had reading problems, and you would constantly 
be processing both streams which might not be desirable if you were switching 
from A to B and never go back to A, for example.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] ffmpeg architecture question #2

2020-04-24 Thread Edward Park
Hi,

I don't know if the decoder outputs 30fps as-is from 24fps soft telecine, but 
if it does, it must include the flags that you need to reconstruct the original 
24fps format, or set it as metadata, because frame stepping in ffplay (using 
the "s" key on the keyboard) goes over 1/24 s progressive frames, even though 
the stream info says 29.97fps.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Can't write packet with unknown timestamp

2020-04-24 Thread Edward Park
Hi,

It is pretty much abandoned, but did you try the dvd2concat perl script in the 
tools directory? It will output a file to use with the concat demuxer (not the 
concat protocol; I think that is only good for files you can literally use cat 
to concatenate), and the output is pretty big, which should show that VOBs from 
DVDs aren't that flat.
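
Roughly, the usage is something like this (paths and output names are 
placeholders, and the exact -protocol_whitelist you need may differ):

perl tools/dvd2concat /path/to/VIDEO_TS > title.concat
ffmpeg -f concat -safe 0 -protocol_whitelist file,subfile,concat -i title.concat -map 0 -c copy title.mkv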

Example output:

ffconcat version 1.0

stream
exact_stream_id 0x1E0

stream
exact_stream_id 0x80

stream
exact_stream_id 0x81

stream
exact_stream_id 0x82

stream
exact_stream_id 0x83

stream
exact_stream_id 0x20

stream
exact_stream_id 0x21

stream
exact_stream_id 0x22

stream
exact_stream_id 0x23

stream
exact_stream_id 0x24

stream
exact_stream_id 0x25

stream
exact_stream_id 0x26

stream
exact_stream_id 0x27

stream
exact_stream_id 0x28

stream
exact_stream_id 0x29

file 'subfile,,start,382976,end,397312,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,397312,end,411648,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,411648,end,425984,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,425984,end,440320,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,440320,end,454656,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,454656,end,468992,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,468992,end,483328,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,483328,end,497664,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,497664,end,512000,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,512000,end,526336,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 'subfile,,start,526336,end,2215936,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.000

file 
'subfile,,start,2215936,end,4225024,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.033

file 
'subfile,,start,4225024,end,11036672,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.033

file 
'subfile,,start,11036672,end,16971776,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.100

file 
'subfile,,start,16971776,end,20178944,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.233

file 
'subfile,,start,20178944,end,24090624,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.233

file 
'subfile,,start,24090624,end,31891456,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.266

file 
'subfile,,start,31891456,end,34271232,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.200

file 
'subfile,,start,34271232,end,34314240,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.066

file 
'subfile,,start,34314240,end,34328576,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.133

file 
'subfile,,start,34328576,end,34392064,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:00:00.467

file 
'subfile,,start,34392064,end,187197440,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:03:27.400

file 
'subfile,,start,187197440,end,431077376,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:04:42.333

file 
'subfile,,start,431077376,end,589074432,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:03:22.200

file 
'subfile,,start,589074432,end,777889792,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:03:57.433

file 
'subfile,,start,777889792,end,940957696,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB'
duration 00:03:29.133

file 
'subfile,,start,940957696,end,1121656832,,:concat:/Volumes/CAPOTE/VIDEO_TS/VTS_01_1.VOB|/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:03:50.967

file 
'subfile,,start,47917056,end,251195392,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:04:23.500

file 
'subfile,,start,251195392,end,469211136,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:04:28.633

file 
'subfile,,start,469211136,end,694167552,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:05:09.834

file 
'subfile,,start,694167552,end,798273536,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:02:17.667

file 
'subfile,,start,798273536,end,1016700928,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB'
duration 00:04:21.700

file 
'subfile,,start,1016700928,end,1287690240,,:concat:/Volumes/CAPOTE/VIDEO_TS/VTS_01_2.VOB|/Volumes/CAPOTE/VIDEO_TS/VTS_01_3.VOB'
duration 00:05:53.633

file 
'subfile,,start,213950464,end,433152000,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_3.VOB'
duration 00:04:08.300

file 
'subfile,,start,433152000,end,621991936,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_3.VOB'
duration 00:04:07.166

file 
'subfile,,start,621991936,end,851488768,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_3.VOB'
duration 00:04:36.400

file 
'subfile,,start,851488768,end,1098213376,,:concat:/Volumes/CAPOTE/VIDEO_TS/VTS_01_3.VOB|/Volumes/CAPOTE/VIDEO_TS/VTS_01_4.VOB'
duration 00:05:25.200

file 
'subfile,,start,24473600,end,126795776,,:/Volumes/CAPOTE/VIDEO_TS/VTS_01_4.V

Re: [FFmpeg-user] Problem with recorded YouTube stream

2020-04-22 Thread Edward Park
Hi,
> [NULL @ 0x248ee00] Opening 'RRTepdBvUFE.mp4' for reading
> [file @ 0x248f620] Setting default whitelist 'file,crypto'
> Probing mov,mp4,m4a,3gp,3g2,mj2 score:100 size:2048
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] Format mov,mp4,m4a,3gp,3g2,mj2 probed 
> with size=2048 and score=100
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'ftyp' parent:'root' sz: 32 8 
> 227566402
So it looks like the file appears to be 227,566,402 bytes, most of which you'd 
expect to be taken up by the mdat atom
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] ISO: File Type Major Brand: isom
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'free' parent:'root' sz: 8 40 
> 227566402
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'mdat' parent:'root' sz: 0 48 
> 227566402
But it is shown as 0 bytes long for some reason. Maybe it is a placeholder that 
was supposed to be replaced with the actual size afterwards because you can't 
know how long the atom is until you've finished writing it.
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'ftyp' parent:'root' sz: 32 8 
> 227566402
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] ISO: File Type Major Brand: isom
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'free' parent:'root' sz: 8 40 
> 227566402
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] type:'mdat' parent:'root' sz: 0 48 
> 227566402
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x248ee00] moov atom not found
> [AVIOContext @ 0x2497800] Statistics: 65536 bytes read, 3 seeks
> RRTepdBvUFE.mp4: Invalid data found when processing input
And I guess it goes back to the start? I'm not sure, but either way it doesn't 
find the moov atom. You could try looking for it yourself by opening the file as 
plain ASCII text and literally searching for 'moov', though I can't help but 
think the correct size for the previous atom would have been filled in if the 
script's ffmpeg invocation had gotten that far. Still, if the file is taking up 
~260MB on disk, maybe the byte count is selling the file short for some reason, 
and the moov atom might be there near the end.
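
(As a rough sketch of what I mean, assuming GNU grep: you can search the raw 
bytes for the string "moov" and print the byte offset of each match, e.g.

  # -a treats the binary file as text, -b prints the byte offset of each match,
  # -o prints only the matched string
  grep -a -b -o moov RRTepdBvUFE.mp4

If nothing comes back, the moov atom really isn't in the file; if you get an 
offset near the end, the index may have been written but the sizes never fixed 
up.)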

But even your newest version of ffmpeg is still pretty old; you should try an 
up-to-date version first. That probably has as good a chance of working as any 
other method.

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Muxing multiple files and concatenating those outputs

2020-04-21 Thread Edward Park
Hi,

> I ended up just keeping it as a script and getting the developers to
> incorporate it into our code.  Thanks though!  Since it is a script, a
> single bash call is not too intrusive.  If we had some C programmers, I
> would have them use the ffmpeg libraries directly, but they are all just
> java heads..

So is the problem basically solved now? If I had known this zombie 
apocalypse-like status quo was going to persist this long, I would have 
recommended asking your VAR or sales rep (if you have an account with the vendor 
of your call management system) whether you can get a license for doing this 
automatically. Call recording is probably a built-in feature, just not 
activated, I'm thinking.

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Problem with recorded YouTube stream

2020-04-21 Thread Edward Park
Hi,
> [tls @ 0x22e0d00] The TLS connection was non-properly terminated.
> [tls @ 0x22e0d00] The specified session has been invalidated for some reason.
> [tls @ 0x2309820] The TLS connection was non-properly terminated.te= 
> 224.4kbits/s   

It doesn't look like the download was entirely successful in the first place...
It does report some kind of bitrate, though. Are you trying to salvage this 
file, or would redownloading be an easier option?

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Output image sequence duplicate first image

2020-04-20 Thread Edward Park
Hi,

> Use the output option -r or the fps filter to control the frequency of the 
> image grabs. They will show different behaviour wrt the beginning of the grab 
> cycle.

The identical frames at the start are surprising to me: why would that happen 
without being counted as a duplicate? Does it happen when the frames at the 
beginning aren't in presentation order?

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Output image sequence duplicate first image

2020-04-20 Thread Edward Park
Hi,

> /usr/src/app # ffmpeg -y -i /data/vod-storage-dev/source/testfile.mxf -r 
> 20/60 -s 283x159 -frames:v 10 -pix_fmt yuvj420p -c:v mjpeg  
> /data/vod-storage-dev/temp_out/1cq8tq69tkumv1es0zxeb0ty4qa_%03d.jpeg
...
>Stream #0:0: Video: mpeg2video, yuv422p(tv, top first), 1920x1080 [SAR 1:1 
> DAR 16:9], 5 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
>Metadata:
>  file_package_umid: 
> 0x060A2B340101010501010D001324CF5D529471346A24CF5D00529471346A2401
...
>Stream #0:0: Video: mjpeg, yuvj420p(pc), 283x159 [SAR 848:849 DAR 16:9], 
> q=2-31, 200 kb/s, 0.33 fps, 0.33 tbn, 0.33 tbc
>Metadata:
>  file_package_umid: 
> 0x060A2B340101010501010D001324CF5D529471346A24CF5D00529471346A2401
>  encoder : Lavc58.54.100 mjpeg
>Side data:
>  cpb: bitrate max/min/avg: 0/0/20 buffer size: 0 vbv_delay: -1
> frame=   10 fps=3.0 q=1.6 Lsize=N/A time=00:00:30.00 bitrate=N/A dup=0 
> drop=583 speed= 8.9x
> video:51kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing 
> overhead: unknown

If this is accurate, it looks like you have asked it to produce 20 frames per 
60 seconds, i.e. 1/3 fps, or 1 frame every 3 seconds regardless of the input 
framerate. It will drop or duplicate frames as needed to keep a constant 1/3 
frames per second on the output. You've also specified that you want 10 frames 
total, and at 1/3 fps that runs 30 seconds. I'm not sure why the first and 
second images are always identical; according to the log no frames were 
duplicated (dup=0). If two frames came out exactly the same at 0.33 fps without 
being duplicated, maybe the video was frozen in the source?
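
(For reference, a minimal, untested sketch of the two ways to grab one frame 
every 3 seconds; the input and output names here are just placeholders:

  # output option -r: resample the decoded stream to 1/3 fps
  ffmpeg -i in.mxf -r 1/3 -frames:v 10 -s 283x159 -pix_fmt yuvj420p -c:v mjpeg out_%03d.jpeg

  # fps filter: same target rate, but you can also pin where the grab cycle starts
  ffmpeg -i in.mxf -vf "fps=1/3:start_time=0,scale=283:159" -frames:v 10 -pix_fmt yuvj420p -c:v mjpeg out_%03d.jpeg

The two can differ in which source frame the first grab lands on, which might be 
related to the duplicate first image you're seeing.)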

Regards,
Ted 
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-20 Thread Edward Park
Hi,
> you've probably already seen carl eugen's post referring to ticket 8615.  
> looks like this may be a regression ...  to answer your question, though,
> i'm on a 2018 mini (i7); gpu is "Intel UHD Graphics 630", which should
> support hardware decoding for hevc (and certainly for h264).
Yes, I have, and I noticed it was about a dozen commits behind origin HEAD.
So I pulled and rebuilt, and got the exact same error: decoding would not even 
start with vda or vt, while encoding still worked fine.

Something interesting I noticed: when I ran ffmpeg -hwaccels, it only showed 
videotoolbox once. It always used to list it twice, which I thought was weird, 
but now that it's not working it shows up only once. Does that mean it was 
listed once for decompression and once for compression via VideoToolbox, and 
that this build came out without decode support but with encode support?
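
(One way I'd sanity-check that, just as a sketch: grep the generated config.h in 
each build tree and compare,

  # lists every videotoolbox-related switch; I'm going from memory on the exact
  # macro names, but I'd expect hwaccel and encoder entries to show up separately
  grep -i videotoolbox config.h

and see whether the hwaccel defines are 0 in the broken build but 1 in the 
working one.)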

Turning off avx optimizations doesn't do anything to help this time.

Worked at 55d830f69a2ff3ca191d97862200d4cc480d25b7
kumowoon1025@rfarm1 ffmpeg % ./ffmpeg_g -buildconf && ./ffmpeg_g -hide_banner 
-hwaccels
ffmpeg version N-96726-g018a42790c Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --prefix=/tmp/ffocl --enable-gpl
  libavutil  56. 40.100 / 56. 40.100
  libavcodec 58. 68.102 / 58. 68.102
  libavformat58. 38.100 / 58. 38.100
  libavdevice58.  9.103 / 58.  9.103
  libavfilter 7. 75.100 /  7. 75.100
  libswscale  5.  6.100 /  5.  6.100
  libswresample   3.  6.100 /  3.  6.100
  libpostproc55.  6.100 / 55.  6.100

  configuration:
--prefix=/tmp/ffocl
--enable-gpl
Hardware acceleration methods:
videotoolbox
videotoolbox


Not working now
kumowoon1025@rfarm1 ffmpeg % ./ffmpeg_g -buildconf && ./ffmpeg_g -hide_banner 
-hwaccels
ffmpeg version N-97394-gcacdac819f Copyright (c) 2000-2020 the FFmpeg developers
  built with Apple clang version 11.0.3 (clang-1103.0.32.29)
  configuration: --prefix=/tmp/ffocl --enable-gpl
  libavutil  56. 42.102 / 56. 42.102
  libavcodec 58. 80.100 / 58. 80.100
  libavformat58. 42.100 / 58. 42.100
  libavdevice58.  9.103 / 58.  9.103
  libavfilter 7. 79.100 /  7. 79.100
  libswscale  5.  6.101 /  5.  6.101
  libswresample   3.  6.100 /  3.  6.100
  libpostproc55.  6.100 / 55.  6.100

  configuration:
--prefix=/tmp/ffocl
--enable-gpl
Hardware acceleration methods:
videotoolbox



Diff of config.h

9c9
< #define CC_IDENT "Apple clang version 11.0.3 (clang-1103.0.32.29)"
---
> #define CC_IDENT "Apple clang version 11.0.0 (clang-1100.0.33.17)"
392c392
< #define CONFIG_AVIO_LIST_DIR_EXAMPLE 1
---
> #define CONFIG_AVIO_DIR_CMD_EXAMPLE 1
476d475
< #define CONFIG_LIBRABBITMQ 0
758d756
< #define CONFIG_CDTOONS_DECODER 1
878d875
< #define CONFIG_MV30_DECODER 1
918a916
> #define CONFIG_SDX2_DPCM_DECODER 1
1042d1039
< #define CONFIG_HCA_DECODER 1
1077d1073
< #define CONFIG_SIREN_DECODER 1
1130c1126
< #define CONFIG_DERF_DPCM_DECODER 1
---
> #define CONFIG_PCM_ZORK_DECODER 1
1134d1129
< #define CONFIG_SDX2_DPCM_DECODER 1
1155d1149
< #define CONFIG_ADPCM_IMA_ALP_DECODER 1
1157d1150
< #define CONFIG_ADPCM_IMA_APM_DECODER 1
1164d1156
< #define CONFIG_ADPCM_IMA_MTF_DECODER 1
1184d1175
< #define CONFIG_ADPCM_ZORK_DECODER 1
1578d1568
< #define CONFIG_WEBP_PARSER 1
1754d1743
< #define CONFIG_CAS_FILTER 1
1873d1861
< #define CONFIG_MASKEDTHRESHOLD_FILTER 1
1899d1886
< #define CONFIG_OVERLAY_CUDA_FILTER 0
1902d1888
< #define CONFIG_PAD_OPENCL_FILTER 0
1985d1970
< #define CONFIG_TMEDIAN_FILTER 1
2081d2065
< #define CONFIG_ALP_DEMUXER 1
2088d2071
< #define CONFIG_APM_DEMUXER 1
2127d2109
< #define CONFIG_DERF_DEMUXER 1
2155d2136
< #define CONFIG_FWSE_DEMUXER 1
2169d2149
< #define CONFIG_HCA_DEMUXER 1
2578d2557
< #define CONFIG_LIBAMQP_PROTOCOL 0


And this is the working build's config.h (the '>' side of the diff above).
/* Automatically generated by configure - do not modify! */
#ifndef FFMPEG_CONFIG_H
#define FFMPEG_CONFIG_H
#define FFMPEG_CONFIGURATION "--prefix=/tmp/ffocl --enable-gpl"
#define FFMPEG_LICENSE "GPL version 2 or later"
#define CONFIG_THIS_YEAR 2020
#define FFMPEG_DATADIR "/tmp/ffocl/share/ffmpeg"
#define AVCONV_DATADIR "/tmp/ffocl/share/ffmpeg"
#define CC_IDENT "Apple clang version 11.0.0 (clang-1100.0.33.17)"
#define av_restrict restrict
#define EXTERN_PREFIX "_"
#define EXTERN_ASM _
#define BUILDSUF ""
#define SLIBSUF ".dylib"
#define HAVE_MMX2 HAVE_MMXEXT
#define SWS_MAX_FILTER_SIZE 256
#define ARCH_AARCH64 0
#define ARCH_ALPHA 0
#define ARCH_ARM 0
#define ARCH_AVR32 0
#define ARCH_AVR32_AP 0
#define ARCH_AVR32_UC 0
#define ARCH_BFIN 0
#define ARCH_IA64 0
#define ARCH_M68K 0
#define ARCH_MIPS 0
#define ARCH_MIPS64 0
#define ARCH_PARISC 0
#define ARCH_PPC 0
#define ARCH_PPC64 0
#define ARCH_S390 0
#define ARCH_SH4 0
#define ARCH_SPARC 0
#define ARCH_SPARC64 0
#define ARCH_TILEGX 0
#define ARCH_TILEPRO 0
#define ARCH_TOMI 0
#define ARCH_X86 1
#define ARCH_X86_32 0
#define ARCH_X86_64 1
#define HAVE_ARMV5TE 0
#define HA

Re: [FFmpeg-user] ffmpeg architecture question

2020-04-20 Thread Edward Park
Hey,

>> I don't understand what you mean by "recursively".
> 
> Haven't you heard? There's no recursion. There's no problem. The 'blend' 
> filter just has some fun undocumented features. Hours and hours, days and 
> days of fun. So much fun I just can't stand it. Too much fun.

There's no recursion because a filtergraph is typically supposed to be a 
directed acyclic graph; there is no hierarchy to traverse. It's true enough that 
blend doesn't specify which of the two input frames it takes its timestamps 
from, but the only reason that poses a problem is that it leads to another 
filter getting two frames with the exact same timestamp, as they were split 
earlier on in the digraph. And it's not obvious by any means, but you can sort 
of deduce that blend takes the timestamps from the first input stream: blend 
having a "top" and a "bottom" stream (I mean on the z-axis, lest this cause any 
more confusion) kind of implies operation similar to the overlay filter applied 
to two inputs that each go through some other filter, with an added alpha 
channel, and the description of the overlay filter says the first input is the 
"main" one that the second "overlay" is composited on.

On a different note, in the interest of making the flow of frames within the 
filtergraph simple enough to picture with my rather simple brain, this is my 
attempt at simplifying a filtergraph you posted a while ago. I'm not sure if 
it's accurate; I can't tell whether I'm reproducing the same result even when 
frame stepping. (To compare frame by frame I had to compare it against another 
telecine, and the only one I'd seen is the 3-2 pulldown. I really cannot tell 
the difference when playing at speed; I can tell the two apart if I step frame 
by frame, but not identify which is which, so I had to draw a label on them.)

Could you see if it actually does do the same thing? 
telecine=pattern=5,select='n=2:e=ifnot(mod(mod(n,5)+1,3),1,2)'[C],split[AB_DE],select='not(mod(n+3,4))'[B],[C][B]blend[B/C],[AB_DE][B/C]interleave

The pads are labeled according to an ABCDE pattern at the telecine; I don't 
know if that makes sense or is correct at all.
It does make it possible to show four 1920x1080 streams, each with different 
filters, and compare them in real time without falling below ~60fps. I find it 
confusing that "split" is what actually copies a stream while "select" is what 
splits one. "Select" can also add another video stream, but splitting first and 
then using select with boolean expressions to discard the frames that weren't 
selected has to be wasteful.
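
(As a sketch of the difference, untested and with placeholder filenames: split 
duplicates every frame into each of its outputs, while select with n=2 routes 
each frame to exactly one output based on the expression value, so no copies are 
made just to be discarded later.)

  # split: both outputs receive every frame
  ffmpeg -i in.mp4 -filter_complex "split[a][b]" -map "[a]" a.mp4 -map "[b]" b.mp4

  # select with two outputs: the expression value picks the pad (1 = first,
  # 2 = second), so frames alternate between the two outputs
  ffmpeg -i in.mp4 -filter_complex "select=n=2:e='mod(n,2)+1'[x][y]" \
    -map "[x]" x.mp4 -map "[y]" y.mp4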

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".