Re: [FFmpeg-user] how to use svt-hevc in x265

2019-07-23 Thread Frank Tetzel
> > I build ffmpeg with x265 and svt-hevc. I can do transcoding via
> > ffmpeg command. But how to use svt-hevc in x265 via ffmpeg
> > command?  
> 
> I don't understand what you mean by "use svt-hevc in x265". svt-hevc
> is an HEVC/H.265 encoder. (lib)x265 is an HEVC/H.265 encoder. You
> would use either one or the other.

I thought so too, but apparently there is some integration:
http://www.x265.org/x265-svt-hevc-house/?ModPagespeed=noscript
https://x265.readthedocs.io/en/default/svthevc.html

I do not see any point in ffmpeg supporting execution of svt-hevc
through x265; ffmpeg already supports both encoders on its own.
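
If you just want to encode with one of them, each can be selected
directly. A minimal sketch (the encoder name libsvt_hevc is an
assumption based on Intel's plugin patch; check 'ffmpeg -encoders'):

ffmpeg -i input.mp4 -c:v libx265 output_x265.mp4
ffmpeg -i input.mp4 -c:v libsvt_hevc output_svt.mp4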

Regards,
Frank

Re: [FFmpeg-user] Video scaling, padding and cropping together question

2018-01-10 Thread Frank Tetzel
> The thing I am trying to achieve is a compromise between these two 
> extremes, e.g. a method which gives less cropping at the LH & RH
> picture edges for the price of a little padding above and below the
> picture.  I'm presuming this (ideally) involves scaling, padding and
> cropping together in a single filter.

Just pick a bigger width for the scale and pad filters, then add crop
as the last filter in the chain, cropping to the final resolution.
Append crop with another comma, as you did with scale and pad; see the
sketch below.
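
A hedged sketch, assuming a 1920x1080 source and a 1440x1080 (4:3)
target; the intermediate width of 1600 trades 160 px of total side
cropping for 180 px of total vertical padding (all numbers
hypothetical):

ffmpeg -i in.mp4 \
  -vf "scale=1600:-2,pad=1600:1080:(ow-iw)/2:(oh-ih)/2,crop=1440:1080" \
  out.mp4

scale keeps the aspect ratio (1600x900), pad centers that on a
1600x1080 canvas, and crop takes the centered 1440x1080 region.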

Also, have a look at the filter documentation:
http://ffmpeg.org/ffmpeg-filters.html

Re: [FFmpeg-user] html5 Canvas to ffmpeg

2018-01-10 Thread Frank Tetzel
> I am making animations in html5 canvas. I am able to get frames from
> canvas using requestAnimationFrame() and convert them to png dataurl.
> I want to send these frames to an ffmpeg running on localhost for
> live encoding (I want to send the encoded video to rtmp server
> afterward).
> 
> Does anyone have an experience here? What is the recommended setup
> here? Does it even make sense to use ffmpeg here?

Browsers nowadays have a built-in video encoder for WebRTC [1]. Consider
using that instead of ffmpeg + RTMP.

[1] https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API
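
If you do stay with ffmpeg, a rough sketch of the pipeline you
describe, under the assumption that the raw PNG frames arrive on stdin
(URL and frame rate are hypothetical):

ffmpeg -f image2pipe -framerate 30 -c:v png -i - \
  -c:v libx264 -pix_fmt yuv420p -f flv rtmp://localhost/live/stream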

Re: [FFmpeg-user] Seeking for a method to remove sections of a video

2017-06-18 Thread Frank Tetzel
> Hi,
> 
> I'm new to this list.
> 
> I'm seeking for a tool to remove the start and end sections of a movie
> recorded from TV, plus sections on the middle (commercials), in
> Linux. I have tried several GUI tools and they are either too complex
> or lack some crucial feature (like handling two language audio
> tracks). Some tools are obsolete and abandoned, do not work
> (gopchop...).
> 
> I have found that ffmpeg can do the perfect conversion and cutting.
> 
> My problem is finding out the cut points.
> 
> Thus what I seek is a GUI Wrapper for ffmpeg that allows me to move
> around the movie selecting start, end, and middle remove sections, and
> just generate a script for ffmpeg that I can then edit and adjust with
> my own options and run.
> 
> I have tried, for instance, ffmpegyag. Well, it is incapable of
> visualizing my videos, so I can't select the cut points...
> 
> 
> The closest I have is finding the points with Xine, then manually
> concoct the command line to generate sections, then concatenate using
> method in https://trac.ffmpeg.org/wiki/Concatenate
> 
> I seek a GUI to automate generating the cut points in a list or
> script.
> 
> Thanks :-)
> 

In the past I used avidemux for simple cutting jobs:
http://fixounet.free.fr/avidemux/

Not sure if it can handle multiple audio tracks.

It is a full GUI application, though, not the ffmpeg frontend you are
looking for. Just mentioning it...

Re: [FFmpeg-user] Using the flags -movflags +faststart

2017-02-26 Thread Frank Tetzel
> > The '+' sign in front of the flags is not going to work? I disagree.
> > ffmpeg's command line parser doesn't care whether the first flag is
> > prepended with a '+' or not:  
> 
> Actually, I found this comment
> (http://stackoverflow.com/questions/23419351/ffmpeg-using-movflags-faststart#comment60936769_23440682):
> 
>   The + sign indicates that ffmpeg should set the specified value in
>   addition to any values that the MOV/MP4 muxer will automatically set
>   during the course of executing the command. Omitting it means ffmpeg
>   will reset the flags to their default values, and only toggle the
>   state of faststart. Most MP4s generation doesn't involve the other
>   flags so usually it doesn't make a difference.
> 
> I couldn't find this in ffmpeg's source or docs though.

Mhmm, maybe not anymore. I think I read about it some years ago in the
docs or wiki. It still appears in some of the examples, but without an
explanation.

https://ffmpeg.org/ffmpeg-formats.html#Examples-9
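
For what it's worth, a sketch of the difference as described in that
comment (semantics per the quoted explanation, not verified in the
source):

# '+faststart' adds the flag to whatever the muxer sets by default:
ffmpeg -i in.mp4 -c copy -movflags +faststart out.mp4
# plain 'faststart' resets the flags and sets only faststart:
ffmpeg -i in.mp4 -c copy -movflags faststart out.mp4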

Re: [FFmpeg-user] stabilisation

2016-05-11 Thread Frank Tetzel
> when using -vf "deshake" a black frame is coming separating the
> original frame from the "interpolated" pixels along the boders.
> More like a line than a frame actually !
> Is there a way I can turn off these black lines ?

As far as I know you have to "zoom in" to get rid of the artifacts at
the borders, as Bouke already mentioned. The filter documentation [1]
actually mentions an option "edge" which sounds like what you want to
change. Try different values for that; maybe "original" is what you're
looking for.

As an alternative to the deshake filter you can use vid.stab [2][3],
which is integrated into ffmpeg as two filters (if compiled in the
right way). In ffmpeg it only supports two-pass operation. You also get
some "wobble" artifacts at the borders, but they are usually less
severe. You might still want to crop a bit at the edges.
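
A rough sketch of both approaches (option and filter names per the
docs linked below; file names hypothetical, untested here):

# deshake, keeping the original pixels at the borders:
ffmpeg -i in.mp4 -vf deshake=edge=original out.mp4

# vid.stab two-pass (needs ffmpeg built with --enable-libvidstab):
ffmpeg -i in.mp4 -vf vidstabdetect=result=transforms.trf -f null -
ffmpeg -i in.mp4 -vf vidstabtransform=input=transforms.trf out.mp4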

[1] http://ffmpeg.org/ffmpeg-filters.html#deshake
[2] https://github.com/georgmartius/vid.stab
[3] http://public.hronopik.de/vid.stab/

Regards,
Frank

Re: [FFmpeg-user] Build error with libmp3lame and libshine.

2015-11-09 Thread Frank Tetzel
> The error message says...
> 
> /home/user/build/lib/libmp3lame.a(takehiro.o):(.rodata+0x0): multiple
> definition of
> `slen2_tab' /home/user/build/lib/libshine.a(tables.o):(.rodata+0xc80):
> first defined
> here /home/user/build/lib/libmp3lame.a(takehiro.o):(.rodata+0x40):
> multiple definition of
> `slen1_tab' /home/user/build/lib/libshine.a(tables.o):(.rodata+0xcc0):
> first defined here collect2: ld returned 1 exit status make: ***
> [ffmpeg_g] Error 1

These are symbol collisions between the two libraries, which basically
means you can't use both at the same time (with static linking). Symbol
prefixes (or namespaces in C++) exist for a good reason...

Build shared libs and link dynamically! As these symbols are not part
of the public API, that should work.
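
A hedged sketch of that route, assuming both libraries build with
autotools and using the prefix from your error message:

# rebuild the conflicting libraries as shared objects
(cd lame  && ./configure --enable-shared --prefix=$HOME/build && make install)
(cd shine && ./configure --enable-shared --prefix=$HOME/build && make install)
# then point ffmpeg's configure at them
./configure --enable-libmp3lame --enable-libshine \
  --extra-cflags=-I$HOME/build/include --extra-ldflags=-L$HOME/build/lib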

Re: [FFmpeg-user] ffmpeg serving to Avisynth

2015-09-14 Thread Frank Tetzel
> > What processing do you want to do with ffmpeg?
> 
> At a minimum I foresee concatenating input files with it. My current
> project has 350+ video files. Avisynth cannot work with more than 25
> - 35 without crashing. I have not found an Avisynth mechanism for
> concatenating files that preserves audio.

There are multiple ways to concatenate files, depending on the input
codecs and what other processing you want to do with them [4]. Not sure
if it handles hundreds of input files well enough.
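
A minimal sketch of the concat demuxer route from [4] (paths
hypothetical); with -c copy nothing is re-encoded, so hundreds of
inputs should mostly be bound by I/O:

printf "file '%s'\n" /path/to/clips/*.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4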

> > And why do you want to
> > send it over tcp, if that's what TCPSource reads (not an avisynth
> > user)?
> 
> To avoid intermediate storage. Workspace for this project is 2TB.
> Each additional version of the project is currently ~700GB. Some form
> of inter-process communication is required to avoid intermediate
> storage. TCPSource() seems the only type of built-in IPC input
> Avisynth supports.

I don't know which data layout they expect in TCPSource, or whether it
is in any way compatible with the tcp output protocol in ffmpeg, or any
other protocol. I know this was your question in the first place, but I
can't help you there. You could play around and just try to connect
[1][2].

There's also some avisynth support in ffmpeg [3]. As I have never used
it, I don't know about its capabilities.

What are you doing after processing with avisynth? Do you pipe it back
into ffmpeg for encoding? Can't you use built-in filters [5] instead of
an avisynth script?


[1] http://avisynth.nl/index.php/TCPServer
[2] http://ffmpeg.org/ffmpeg-protocols.html#tcp
[3] http://www.ffmpeg.org/faq.html#How-can-I-read-DirectShow-files_003f
[4] http://trac.ffmpeg.org/wiki/Concatenate
[5] http://ffmpeg.org/ffmpeg-filters.html#Description

Re: [FFmpeg-user] ffmpeg serving to Avisynth

2015-09-13 Thread Frank Tetzel
> I hope to reach an expert familiar with both ffmpeg and Avisynth! I’m
> looking to perform some processing in ffmpeg, and then to continue in
> Avisynth. For a number of reasons, including disk space, I’d like to
> avoid intermediate files. Which, if any, of the ffmpeg server
> protocols are compatible with Avisynth’s TCPSource() filter, i.e.

What processing do you want to do with ffmpeg? And why do you want
to send it over tcp, if that's what TCPSource reads (not an avisynth
user)? If it's just decoding then have a look at ffms2 [1]. It has an
avisynth plugin.

[1] https://github.com/FFMS/ffms2

Re: [FFmpeg-user] Inserting Seekable Offsets in m4a Audio File

2015-07-27 Thread Frank Tetzel
> > > Does seeking in the original file work with ffplay?
> >
> > I cannot answer your question because I do
> > not know how to try seeking from a console.
>
> (I don't understand what you are saying here.)
>
> Did you try playing your input file with ffplay?
> Does it play? Did you try to seek?
> Did you try to wildly click into the window that
> opened? (Did a window open?)
> Did you try to press all buttons on your keyboard
> that might make sense when trying to seek?

Before trying more than 100 keys, just have a look here:
https://www.ffmpeg.org/ffplay.html#While-playing

Re: [FFmpeg-user] how to make moov box before mdat box in mp4 file

2015-06-26 Thread Frank Tetzel
Hi,

> I used the command 'ffmpeg -i ./file1.mp4 ./file2.mp4' to
> transcode. But I found the moov box comes after the mdat box in
> file2.mp4.
>
> How to make the moov box come before the mdat box in an mp4 file?

See the format documentation:
https://ffmpeg.org/ffmpeg-formats.html#mov_002c-mp4_002c-ismv

Either run qt-faststart after encoding, or do it directly by adding the
following to the ffmpeg command (in front of the output file):
-movflags +faststart
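
For your example, the full command would look like this (a second
muxing pass moves the moov box to the front):

ffmpeg -i ./file1.mp4 -movflags +faststart ./file2.mp4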

Regards,
Frank

Re: [FFmpeg-user] how to use alpha channel to make transparent effect

2015-06-26 Thread Frank Tetzel
> I use the below command to place one image on one video:
> ffmpeg -i ./file1.mp4 -i ./file2.jpg -filter_complex overlay ./file3.mp4
>
> But there is no transparency effect in the resulting file, i.e.
> file3.mp4. What is the command to use the alpha channel to make a
> transparency effect?

Have a look at the blend filter and its opacity and expression options:
http://ffmpeg.org/ffmpeg-filters.html#blend_002c-tblend
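
A hedged sketch of blending the image in at 50% opacity (option names
per the blend docs; blend needs both inputs at the same resolution, so
the size here is hypothetical, and this is untested):

ffmpeg -i ./file1.mp4 -loop 1 -i ./file2.jpg -filter_complex \
  "[1:v]scale=1280:720[img];[0:v][img]blend=all_mode=normal:all_opacity=0.5" \
  ./file3.mp4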

Regards,
Frank

Re: [FFmpeg-user] how to use alpha channel to make transparent effect

2015-06-26 Thread Frank Tetzel
> > > I use the below command to place one image on one video:
> > > ffmpeg -i ./file1.mp4 -i ./file2.jpg -filter_complex overlay ./file3.mp4
> > >
> > > But there is no transparency effect in the resulting file, i.e.
> > > file3.mp4. What is the command to use the alpha channel to make a
> > > transparency effect?
> >
> > Have a look at the blend filter and its opacity and expression
> > options: http://ffmpeg.org/ffmpeg-filters.html#blend_002c-tblend
>
> IMHO OP actually wants the overlay filter.

If he uses an image format with alpha support, like PNG instead of
JPEG, then yes. Just look at the second example here:
http://ffmpeg.org/ffmpeg-filters.html#Examples-55
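
That is, with a PNG that actually carries an alpha channel, the
original command should already composite transparently (file names as
in the OP's command; logo.png hypothetical):

ffmpeg -i ./file1.mp4 -i ./logo.png -filter_complex overlay ./file3.mp4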

Re: [FFmpeg-user] Question about ffmpeg

2015-01-21 Thread Frank Tetzel
> I am still looking around for other solutions, but the idea is to
> combine 30 images of 500x500 to a combined total image of 2500x3000
> pixels. The reason I would like to use ffmpeg for this is the batch
> file automation. The smaller images change content, so the new total
> map needs to be updated when that happens. Stitching these images
> every time in Photoshop can be time consuming, while ffmpeg could do
> it in 5 seconds. I'll try to use the advice I have read so far.

Hi,

I think ImageMagick is better suited for the job when it comes to
still images. See:
http://www.imagemagick.org/Usage/montage/#geometry_size
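
A minimal sketch with ImageMagick's montage tool: 30 tiles of 500x500
in a 5x6 grid gives exactly 2500x3000 (the glob pattern and its sort
order are assumptions):

montage tile_*.png -tile 5x6 -geometry 500x500+0+0 combined.png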

Greetings,
Frank.

Re: [FFmpeg-user] How to scale watermark proportionally to width and height of video frame?

2015-01-19 Thread Frank Tetzel
> How to scale a watermark proportionally to the video frame size (to
> the width and height of the video frame)?
>
> Now I use the following command:
>
> ffmpeg -i input.mkv -i logo.png -filter_complex
> [1:v]scale=w=iw/3:h=ih/3[watermark_scaled];[0:v][watermark_scaled]overlay=x=main_w-overlay_w-main_w/20:y=main_h-overlay_h-main_w/20
> output.mkv
>
> I use scale=w=iw/3:h=ih/3, but I can't use the main_w and main_h
> variables in this place. E.g. scale=w=main_w*0.05:h=main_h*0.05
> doesn't work. How to get the video frame width and height to use them
> for scaling the watermark?

Hello,

as far as I know, that is not possible. You have to get the video
dimensions in an outer layer, like a shell script, and pass them to
the scale filter. Each filter only knows about its own inputs, not
about other filter chains.

Something like:

streaminfo=`mktemp`
ffprobe -v quiet -show_streams -select_streams v:0 input.mkv > "$streaminfo"
width=`grep '^width=' "$streaminfo" | cut -d'=' -f2`
height=`grep '^height=' "$streaminfo" | cut -d'=' -f2`

ffmpeg -i input.mkv -i logo.png -filter_complex \
"[1:v]scale=w=$width/3:h=$height/3[watermark_scaled];[0:v][watermark_scaled]overlay=x=main_w-overlay_w-main_w/20:y=main_h-overlay_h-main_w/20" \
output.mkv

rm "$streaminfo"


Greetings,
Frank.