#Pitch leaves the speed as it is while the cantor either sings with the
whales or attracts all bats in the forest. Default is 1 for same tempo
and/or same pitch.
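For what it's worth, the relationship the two options decouple is simple: plain resampling ties pitch and tempo together, so playing samples back at r times the recorded rate raises pitch by r and shortens duration by r. A minimal sketch of that arithmetic in Python (the function name is mine, purely illustrative):

```python
def resample_playback(duration_s, pitch_hz, rate_factor):
    """Playing audio back at rate_factor times its recorded sample
    rate scales pitch up by rate_factor and duration down by it."""
    return duration_s / rate_factor, pitch_hz * rate_factor

# Doubling the playback rate: half the length, an octave higher.
dur, pitch = resample_playback(60.0, 440.0, 2.0)
```

A tempo-only filter (like atempo) changes the first number while holding the second; a pitch-only option does the reverse.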
On 31.10.23 15:34, Joshua Grauman wrote:
I tried this and it doesn't help. The audio and video are still not in
sync, even when you don't use the fade-out...
Josh
On Tue, 31 Oct 2023, 凯迪软件(咨询、售后) via ffmpeg-user wrote:
ffmpeg -y -i "v1-ed.mp4" -i "a1-ed.wav" -filter_complex "[1]anull[aud];
[0]fade=t=out:st=6673.87:n=24[out]" -stric
Hi all,
I have a GeForce GT 1030/PCIe/SSE2 that supports NVENC and I have ffmpeg
compiled with nvenc support.
./ffmpeg -encoders 2>/dev/null | grep nvenc
V. h264_nvenc NVIDIA NVENC H.264 encoder (codec h264)
V. nvenc NVIDIA NVENC H.264 encoder (codec h264)
V
On Tue, 18 Dec 2018, Carl Zwanzig wrote:
2018-12-18 12:36 GMT+01:00, Joshua Grauman :
I have two video cameras recording an event 30 minutes long.[...] The
audio won't
be exactly identical as they are two different cameras, different mics,
different locations, but they are both reco
Yes, exactly. Thanks for the mathematical name for what I'm trying to do.
I don't necessarily need to do it in ffmpeg, so I found this, which should
be easy to adapt for my needs (includes sample C code).
http://paulbourke.net/miscellaneous/correlate
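For anyone following along: the linked page slides one signal past the other, scores each candidate delay, and takes the delay with the best score as the offset between the recordings. A minimal pure-Python sketch of the same idea using plain (unnormalized) cross-correlation on two short sample arrays; Bourke's version normalizes the score, which matters for real audio:

```python
def best_lag(a, b, max_lag):
    """Return the shift of b (in samples) that best aligns it with a,
    by brute-force cross-correlation over lags in [-max_lag, max_lag]."""
    def score(lag):
        # Sum of products over the overlapping region at this lag.
        return sum(a[i] * b[i - lag] for i in range(len(a))
                   if 0 <= i - lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=score)

# b is a copy of a delayed by 3 samples, so the recovered lag is 3.
a = [0, 0, 0, 1, 5, 2, -4, 1, 0, 0]
b = [1, 5, 2, -4, 1, 0, 0, 0, 0, 0]
```

On real 30-minute clips you would decode the audio to mono PCM first and correlate a short window, not the whole track.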
Josh
Hi all,
I have two video cameras recording an event 30 minutes long. I hit start
on the two cameras at different times. I would like to figure out a
command-line script that can basically diff the audio to figure out the
difference between the start times and align the videos. The audio won't
Hi all,
I am looking at getting a new laptop and was wondering if getting a GPU
will increase ffmpeg speed. I am thinking of getting a laptop with a
Nvidia Geforce MX150 and I'm wondering if that will speed up the following
command:
shm is a program that outputs video frames for ffmpeg to en
Hello,
I'm trying to add code so that the PHP/Linux web video script I'm using will
add a watermark upon each video upload.
I've added this '$shell' line, in between the existing code lines here, with no
success:
$input_path = $full_dir . $file_upload['filename'];
$shell = shell_exec("$ffmp
Hi all,
I'm sure I could figure this out eventually, but I was wondering if anyone
knew offhand the easiest way to add 5 seconds of black silence to the end
of a video? I have a number of inputs and am running them all through a
filtergraph chain like this and just want to add black silence to
Hi all,
My camera automatically splits up video files at the 2 GB mark like I
think many cameras do. I use ffmpeg to combine them back using the concat
filter, which works, but there is a glitch for a fraction of a second
right at the break between the two files in the resultant output (in bot
Hi all,
From what I'm seeing, a fade in like this,
"[in]fade=t=in:st=5.0:d=1[out];" will black out everything in the video
before the 5 second start time. Is there any way to have the fade in only
affect the video from t=5 to t=6?
Josh
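For reference, the piecewise expression used for the text fades elsewhere in these mails, if(lt(t,5),0,if(lt(t,6),t-5,1)), has exactly that shape: unaffected before t=5, a linear ramp to t=6, then constant. A Python sketch of the same factor (the function name is illustrative, not an ffmpeg option):

```python
def fade_factor(t, start=5.0, dur=1.0):
    """Piecewise-linear fade factor: 0 before start, a linear ramp
    to 1 over dur seconds, then 1 afterwards."""
    if t < start:
        return 0.0
    if t < start + dur:
        return (t - start) / dur
    return 1.0
```

Whether 0 means "black" or "transparent" depends on where the factor is applied (a video fade versus an overlay's alpha).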
Hi all,
I am combining various effects, and I noticed that when I fade in a video
using the [in]fade=t=in:st=2:n=24[out] syntax, and I fade in text using
alpha='if(lt(t,2),0,if(lt(t,3),t-2,1))', that they both fade in roughly at
the same time, but both are not linear. I'm guessing the fade=in use
If anyone is interested, I figured out how to fix my problem. Since I
already had to compile ffmpeg, I just modified the source code of the
curves filter.
Basically, I compute the saturation and value for each pixel, and only
apply the curve if the saturation and value are low enough... It work
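The fix described above can be prototyped outside ffmpeg too: convert each pixel to HSV and crush it to black only when both saturation and value are low. A per-pixel Python sketch (the thresholds are illustrative guesses, not the values used in the modified filter):

```python
import colorsys

def crush_near_black(r, g, b, sat_max=0.3, val_max=0.15):
    """Map a pixel (0..1 floats) to pure black only if it is both
    dark (low value) and drab (low saturation); leave it otherwise,
    so colourful or bright foreground pixels are untouched."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s <= sat_max and v <= val_max:
        return (0.0, 0.0, 0.0)
    return (r, g, b)
```

The condition on both channels is what keeps dark but saturated foreground detail (hair, shadowed clothing) from being flattened along with the backdrop folds.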
Hi all,
I am processing some video and would love help with the curves filter. The
video is shot with a 'black' background, but it is not completely black
(notice folds in the backdrop). I want to filter the video to make the
background completely black without affecting any of the rest of the
Hello all,
I am using ffmpeg to edit videos of speeches/talks (like a TED talk). I am
using a headworn microphone. I would like to add an audio filter to get
rid of 'pop' and 'ess' sounds, especially present when pronouncing the 'p'
sound and 's' and 'sh' sounds respectively. Does anyone have
Hey. I'm trying to build a video frame by frame by passing frames to FFmpeg
via a pipe (in this case an anonymous pipe).
I did something similar in C++ using Qt.
I simply wrote raw image data to stdout using fwrite like:
fwrite((unsigned char *)sharedMemory.constData()+16, sharedMemory.size()-1
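The same pattern works from Python: pack raw pixel bytes and write them to the pipe that ffmpeg reads with -f rawvideo. A sketch that just builds one solid BGRA frame (the 16-byte header skipped in the fwrite call above is specific to that shared-memory layout and is not reproduced here):

```python
import struct

def solid_bgra_frame(width, height, b, g, r, a=255):
    """Pack one solid-colour frame as tightly packed raw BGRA bytes,
    the layout ffmpeg expects from '-f rawvideo -pixel_format bgra'."""
    return struct.pack("4B", b, g, r, a) * (width * height)

frame = solid_bgra_frame(4, 2, 0, 0, 255)  # a 4x2 frame of opaque red
# sys.stdout.buffer.write(frame) would feed it down the pipe to ffmpeg.
```

Frame size in bytes must be exactly width * height * 4 per frame, or ffmpeg's rawvideo demuxer will lose sync.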
I've made some progress with my command. But I've run into a problem. In
the docs, under the drawbox command, it lists an option 'replace'
replace
Applicable if the input has alpha. With value 1, the pixels of the painted
box will overwrite the video’s color and alpha pixels. Default is 0, whic
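The distinction the docs are drawing is overwrite versus blend. A per-pixel sketch of the two behaviours, using simple "over" compositing for the blended default (the real filter works in its own pixel formats; this only shows the idea):

```python
def paint(dst, box, replace):
    """dst and box are (value, alpha) pairs with alpha in 0..1.
    replace=1 overwrites both colour and alpha with the box's;
    replace=0 composites the box over the destination."""
    if replace:
        return box
    bv, ba = box
    dv, da = dst
    out_a = ba + da * (1 - ba)
    out_v = (bv * ba + dv * da * (1 - ba)) / out_a if out_a else 0.0
    return (out_v, out_a)
```

With replace=1 a half-transparent box punches its own alpha into the frame; with replace=0 the same box only tints what was underneath.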
Hi all,
Would someone mind helping me to get a commandline correct?
I have a video, screencast.avi, of utvideo with transparency.
I extract a couple seconds of images from screencast.avi like:
ffmpeg -ss 31 -t 2 -i screencast.avi filename%03d.png
I would like to edit the images and then "put
Thanks for the suggestion. Those look pretty close, but from what I can
see in the docs, they can't read a single pixel. Am I missing something or
are you suggesting I can modify the code of that filter for my needs?
Josh
2017-10-05 17:52 GMT+02:00 Joshua Grauman :
Also, to wr
Hi all,
I'm still working on my setup and haven't gotten anywhere. Maybe
simplifying my question will help.
Is there any way to read a single pixel value from a video on the
commandline that can be used as a variable in other filters (ie to
position overlays)?
If not, would there be generi
Hi all,
I have a setup where I am overlaying a screen record video on top of a
camera video. The overlay works great. But I want to be able to position
or turn off the screen recording differently, depending on a single pixel
set in the screen record video.
How can I get a single pixel value
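Once a raw frame is in hand as bytes (for instance from ffmpeg's rawvideo output over a pipe), reading one pixel is just index arithmetic. A sketch for packed pixels with no row padding; the layout is an assumption, since ffmpeg itself has no "read one pixel" option:

```python
def pixel_at(frame, width, x, y, bpp=4):
    """Return the bytes of pixel (x, y) from a packed raw frame
    with bpp bytes per pixel and no per-row padding."""
    off = (y * width + x) * bpp
    return frame[off:off + bpp]

# A 2x2 frame at 4 bytes/pixel; pixel (1, 1) is the last four bytes.
buf = bytes(range(16))
```

The value read this way could then drive the overlay position from a wrapper script, since filters can't consume it directly.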
There is probably a way to do it directly with ffmpeg on the commandline,
but as I'm not an expert on that, I won't confuse you with guesses.
I have figured out how to generate raw video frames with a C++ program and
have ffmpeg convert it into a video. My ffmpeg command looks like this:
./sh
Hello all,
I am working on a command that concatenates two videos from my camera, and
then overlays another video on top of the concatenated video. It is mostly
working, but the timing isn't right. For sake of argument, let's say that
MVI_0001.MOV is 5 min, and MVI_0002.MOV is 3 min (they are broke
Hello again,
One more question. Trying to squeeze every ounce of performance out I
realized that it would be somewhat faster to use the ffmpeg libs directly
rather than piping the output of my program to ffmpeg, as it would be one
less copy. I've started looking at the devel libs, but am a lit
Thanks!
Josh
Am 11.12.2016 um 07:21 schrieb Joshua Grauman:
I am thinking about upgrading my cpu from a Core i5-4690K to a Core
i7-4790K. At the same clock speed, will ffmpeg run faster with the i7?
Does ffmpeg support hyperthreading or other features of the i7?
there are no other fea
Hello all,
I am thinking about upgrading my cpu from a Core i5-4690K to a Core
i7-4790K. At the same clock speed, will ffmpeg run faster with the i7?
Does ffmpeg support hyperthreading or other features of the i7?
For reference, here is the command that I want to increase the speed of:
./shm
I'll keep that in mind if I really need extra speed. Thanks!
Josh
On Fri, 2 Dec 2016, Fred Perie wrote:
2016-12-02 17:38 GMT+01:00 Joshua Grauman :
Thanks guys, I'll look into these ideas!
Josh
On Fri, 2 Dec 2016, Paul B Mahol wrote:
On 12/2/16, Joshua Grauman wrote:
Hel
Thanks guys, I'll look into these ideas!
Josh
On Fri, 2 Dec 2016, Paul B Mahol wrote:
On 12/2/16, Joshua Grauman wrote:
Hello all,
I am using the following command successfully to generate a screencast.
The video comes from my program 'gen-vid' which outputs the raw fr
Hello all,
I am using the following command successfully to generate a screencast.
The video comes from my program 'gen-vid' which outputs the raw frames
with alpha channel. The resulting .avi has alpha channel as well which is
my goal. It all works great except that my computer can't handle d
Moritz,
Thanks so much for the hint! That's pretty nifty. Unfortunately, I was
actually hoping to do both... I wanted to change the input stream and the
output stream (I am using ffmpeg to simply encode raw frames without
resizing).
Josh
On Mon, Oct 31, 2016 at 12:09:52 -0700, J
Hello all,
I have a program that outputs raw video data to ffmpeg like this:
./gen-vid | ffmpeg -f rawvideo -pixel_format bgra -video_size 1366x713
-framerate 30 -i - -vcodec png overtest.avi
I am wondering if there is any way to change the video size mid-stream
without skipping any frames? A
I'm not really sure why, but I got it to work with the following cmd...
Thanks for your patience!
./shm | ffmpeg -f rawvideo -pixel_format bgr32 -video_size 1274x541 -framerate
30 -i - out.mp4
On Sat, 3 Sep 2016, Joshua Grauman wrote:
Sorry for another question, but I'm having a
4x541 -pix_fmt rgb32 -vcodec rawvideo -i -
out.mp4
Could find no file with path 'pipe:' and index in the range 0-4
pipe:: No such file or directory
What am I doing wrong?
Thanks!
Josh
On Fri, 2 Sep 2016, Joshua Grauman wrote:
I've been looking for a way to screencast an app w
Yes, I know I don't *need* an alpha channel. But without one, you have to
apply the same transparency to the whole frame, or use a green screen of
some sort, both of which have negative effects...
Josh
I have another different, but related question. What format would you
suggest for storing
I've been looking for a way to screencast an app with alpha, but I haven't
found one yet (but if someone knows of one, I'd love it). So I'm working
on modifying my app to generate png's at 30 frames per second.
Josh
2016-09-02 7:29 GMT+02:00 Joshua Grauman :
I ha
y, I'm not sure that I'm going to be able
to get vcdiff to run fast enough to create the diffs at 30 frames/sec :(.
Josh
On Thu, Sep 01, 2016 at 10:31:15 -0700, Joshua Grauman wrote:
> So I know ffmpeg can encode video from a list of pngs. (img1.png,
> img2.png, img3.png, e
Josh
On Thu, Sep 01, 2016 at 10:31:15 -0700, Joshua Grauman wrote:
So I know ffmpeg can encode video from a list of pngs. (img1.png,
img2.png, img3.png, etc). But what if I have one png, and then a list of
diffs (using vcdiff) like (img1.png, img2.diff, img3.d
That's exactly what I need! Hadn't heard of the png_pipe option. I was
thinking I might have to write a kernel module to create a bunch of /proc
files or something! Thanks!
Josh
On Fri, 2 Sep 2016, Cley Faye wrote:
2016-09-01 19:31 GMT+02:00 Joshua Grauman :
I could of course ge
Sorry, I just realized I forgot to mention I'm on Linux...
Josh
Hello all,
So I know ffmpeg can encode video from a list of pngs. (img1.png,
img2.png, img3.png, etc). But what if I have one png, and then a list of
diffs (using vcdiff) like (img1.png, img2.diff, img3.diff, etc.). Each
diff would be based upon the last. I could of course generate all the
pn
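Whatever diff format is used, the decode loop is the same: keep the last reconstructed frame, apply the next diff to it, and hand each full frame to ffmpeg (for example via png_pipe, as suggested in the replies). A sketch with a trivial XOR "diff" standing in for a real vcdiff decode:

```python
def make_xor_diff(prev, nxt):
    """Toy diff: byte-wise XOR of two equal-length frames."""
    return bytes(p ^ n for p, n in zip(prev, nxt))

def apply_xor_diff(prev, diff):
    """Stand-in for a real vcdiff decode: XOR the previous frame's
    bytes with the diff to recover the next full frame."""
    return bytes(p ^ d for p, d in zip(prev, diff))

frame1 = b"\x00\x10\x20\x30"
frame2 = b"\x01\x10\x21\x30"
diff = make_xor_diff(frame1, frame2)
```

A real pipeline would replace both helpers with the vcdiff library and stream each reconstructed frame straight into the encoder's pipe.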