On 6/28/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Yes and no. It is perfectly feasible to create a video
> from a stream of photographs, with each picture
> translating to a single frame of video, but you can't
> just append it to something that's already been encoded.
> The way that video is encoded, you can't just slap an
> extra frame onto the end.
I don't get it; why is the encoding process so different?
Isn't it like, instead of encoding at full speed, you "freeze"
the process after each frame? If a buffer of some pictures
is needed, I can keep it.

I guess Ivan meant a different thing than you do, something like
appending a new frame to an already encoded video WITHOUT
re-encoding the whole thing. That's feasible (with some tricks, in
transcode) only if the codec used compresses intra-frame only
(MJPEG comes to mind).
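
To make that concrete: with an intra-frame-only stream the new frame
doesn't reference any earlier picture, so appending it is just a
byte-level append. A minimal sketch (not transcode-specific, file
names are hypothetical, and it assumes a raw MJPEG stream, i.e. JPEG
images stored back to back):

    # minimal sketch: append one more captured picture to a raw
    # MJPEG stream; nothing already written gets re-encoded
    with open("snapshot_0001.jpg", "rb") as frame, \
         open("timelapse.mjpeg", "ab") as stream:
        stream.write(frame.read())

With an inter-frame codec the new frame would need to reference
earlier pictures, which is exactly why you can't just slap it onto
the end.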

But IIUC you just want to freeze the encoder after each frame is
encoded, am I correct?
If so, it should already be feasible with the socket protocol and
some shell script magic. Maybe (I have little practice in this
field, notably socket tricks) transcode lacks some features, like
notification that a frame has just been encoded; I just don't know.
Anyway, going through the socket is definitely the way to go IMHO
(and one of the cleanest ways that I see).
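
Very roughly, the controlling side could be shaped like the sketch
below. The socket path and the "pause"/"resume" command names are
hypothetical stand-ins (I don't have the socket protocol in front of
me), so check the documentation for the real commands; the point is
only the shape of the loop: keep the encoder frozen, and un-freeze it
for one frame whenever a new picture has been captured.

    import socket

    # send one command line to the control socket; the path and the
    # command names used below are hypothetical stand-ins
    def send_cmd(sock_path, cmd):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(cmd.encode() + b"\n")

    # call this after each new picture has been captured: let the
    # encoder consume one frame, then freeze it again
    def feed_one_frame(sock_path="/tmp/tc.sock"):
        send_cmd(sock_path, "resume")
        send_cmd(sock_path, "pause")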

No, there won't be much difference between frames, and I am
not going to use all I-frames; I'd like to encode the video
in a format like flashvideo or similar.

Just take into account that transcode doesn't yet support flashvideo
nor some `exotic' codecs (patches welcome :) )
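
As a workaround, you could let transcode produce a codec it does
support and convert the finished file to Flash Video (FLV) in a
separate step with another tool. A hedged sketch, assuming ffmpeg is
installed and with placeholder file names:

    import subprocess

    # ffmpeg picks the FLV muxer from the output extension, so a
    # plain conversion needs no extra flags for a silent timelapse
    subprocess.run(["ffmpeg", "-i", "timelapse.avi", "timelapse.flv"],
                   check=True)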

Here you can find some examples encoded after the capture
was finished:
http://www.manoweb.com/alesan/timelapse/

That's quite nice :)

+++

In case it helps, you can also contact me privately and in Italian
to work out this problem (since the language of the ML is English :) )

+++

Best regards,

--
Francesco Romani
