On Thu, 2 Jun 2022 18:51:23 +0000 (UTC)
james young via ffmpeg-user wrote:
> Hi, I am working on software which converts audio to mp3 at an 8k
> bitrate using ffmpeg. The audio can be of any codec and the output
> will be mp3. This conversion takes a long time to finish for codecs
> like
On Tue, 15 Feb 2022 11:34:40 +0500
Nikita Zlobin wrote:
> "format=...(from `ffmpeg -pix_fmts`)" in the filtergraph should do it,
> e.g. -filter:v "format=rgb24". Though I don't know if there are
> formats in the list which are more like hsv than rgb (yuv could b
"format=...(from `ffmpeg -pix_fmts`)" in the filtergraph should do it,
e.g. -filter:v "format=rgb24". Though I don't know if there are
formats in the list which are more like hsv than rgb (yuv could be
similar, because its luma channel resembles the hsv value or hsl
lightness channels).
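A minimal sketch of that suggestion follows; the input and output file names are placeholders, not from the thread. The command is only assembled as a string here to show the syntax (note there is no `=` between `-filter:v` and its argument):

```shell
# List the pixel formats ffmpeg supports:
#   ffmpeg -pix_fmts
# Then request one of them via the format filter (hypothetical file names):
cmd='ffmpeg -i in.mp4 -filter:v "format=rgb24" out.mp4'
printf '%s\n' "$cmd"
```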
On
- why wouldn't this choke cause CPU load?
On Tue, 25 Jan 2022 05:58:04 +0500
Nikita Zlobin wrote:
> I managed to get it working with overlay, using tee & a named fifo.
>
> In my first attempt I had two ffmpeg instances:
> 1 - creating a new image, merging it with the accumulator image and
On Tue, 25 Jan 2022 11:14:31 +0100
Bo Berglund wrote:
> On Tue, 25 Jan 2022 15:11:41 +0500, Nikita Zlobin
> wrote:
>
> >I would try separate concats for audio and video,
> >as they are demuxed anyway before filtergraph.
> >
>
> Hmm, what exact
I would try separate concats for audio and video,
as they are demuxed anyway before filtergraph.
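One way to read this suggestion is to keep video and audio in separate concat filter instances within a single filtergraph, then map both results into the output. A sketch, with placeholder input files:

```shell
# Hypothetical: separate concat instances for video (v=1:a=0) and
# audio (v=0:a=1), muxed together at the end.
cmd='ffmpeg -i a.mp4 -i b.mp4 -filter_complex
  "[0:v][1:v]concat=n=2:v=1:a=0[v];
   [0:a][1:a]concat=n=2:v=0:a=1[a]"
  -map "[v]" -map "[a]" out.mp4'
printf '%s\n' "$cmd"
```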
On Tue, 25 Jan 2022 09:35:59 +0100
Bo Berglund wrote:
> This is about ffmpeg usage on Linux...
>
> I am downloading news shows nightly from several sources, and
> sometimes a show from one
I managed to get it working with overlay, using tee & a named fifo.
In my first attempt I had two ffmpeg instances:
1 - creating the new image, merging it with the accumulator image and
sending it to tee, which does the split job, redirecting it to the
second ffmpeg.
2 - takes the output from ffmpeg-1 and shifts it, preparing for
This filtergraph is expected to contain a loop, whose purpose is to
accumulate output from the first graph part. The initial part has the
format of a one-column (1xH) image, while the accumulator image has
multiple columns.
Loop:
- the initial image is overlaid onto the rightmost accumulator column,
- the entire accumulator is shifted to
For comparison, here is how it could look with RGB for the libfreetype
subpixel layout: https://imgur.com/vFxavmR.png
Command:
stdbuf -o0 yes | SDL_AUDIODRIVER=alsa \
ffplay -v info -f rawvideo -pixel_format gray -video_size 1x1 \
  -codec:v rawvideo -framerate 2 \
  -vf "
nullsink, nullsrc =
For some reason the text is rendered grayscale, despite horizontal rgb
being the default X11 setting, and drawtext using libfreetype.
Shell command:
stdbuf -o0 yes | SDL_AUDIODRIVER=alsa \
ffplay -v quiet -f rawvideo -pixel_format gray -video_size 1x1 \
  -codec:v rawvideo -framerate 2 \
  -vf "
nullsink,