On Tue, May 08, 2001 at 03:22:30PM +0200, W. Michael Petullo wrote:
> A few very rough calculations seem to indicate that a performance
> increase of about 33% from the jpeg code will allow me to capture 25
> frames a second at a resolution of 640x480 pixels.

How tied are you to needing jpeg as the captured output?  You really only
need it as the final output, correct?

Is there any reason why you couldn't stream to an intermediate file format
and then compress it?

I was thinking of something along the lines of differential run-length encoding,
perhaps with a little bit of preprocessing.  Take something like what the
Utah Raster Toolkit does (RLE along each channel) and the old FLI stuff and
sort of combine them.  The problem I see with this approach is, well, NTSC
hasn't earned the nickname "never the same color" for nothing.
The preprocessing I referred to might be interframe adjustments (i.e., if a
pixel in consecutive frames differs by less than 2, assume it's really the
same color).  Then take the interframe differences between pixels, and RLE
that.  This should save a significant amount of space over raw frames, and
you can later process the stream with a DCT-based encoder (JPEG, MPEG).
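
To make the idea concrete, here's a rough C sketch of the interframe
difference plus RLE step for one 8-bit channel.  The (count, value) pair
format, the 255-pixel run limit, and the threshold of 2 are just assumptions
for illustration, not a proposed on-disk format:

#include <stdint.h>
#include <stdlib.h>

/* Interframe changes smaller than this are treated as NTSC noise. */
#define NOISE_THRESHOLD 2

/* Encode the difference between two frames (one 8-bit channel, npixels
 * samples) as (count, value) byte pairs.  Returns the number of bytes
 * written to out, which must hold at least 2 * npixels bytes. */
size_t
diff_rle_encode(const uint8_t *prev, const uint8_t *cur,
                uint8_t *out, size_t npixels)
{
        size_t i = 0, o = 0;

        while (i < npixels) {
                int d = (int)cur[i] - (int)prev[i];

                /* Preprocessing: small interframe changes are noise. */
                if (abs(d) < NOISE_THRESHOLD)
                        d = 0;

                /* Run-length encode identical difference values. */
                uint8_t run = 1;
                while (i + run < npixels && run < 255) {
                        int dn = (int)cur[i + run] - (int)prev[i + run];
                        if (abs(dn) < NOISE_THRESHOLD)
                                dn = 0;
                        if (dn != d)
                                break;
                        run++;
                }

                out[o++] = run;
                out[o++] = (uint8_t)d;      /* difference stored modulo 256 */
                i += run;
        }

        return o;
}

A decoder would just add each stored difference back onto the previous frame
modulo 256, so pixels where the threshold kicked in come out as
approximations of the original rather than exact copies.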

mrc
-- 
       Mike Castle       Life is like a clock:  You can work constantly
  [EMAIL PROTECTED]  and be right all the time, or not work at all
www.netcom.com/~dalgoda/ and be right at least twice a day.  -- mrc
    We are all of us living in the shadow of Manhattan.  -- Watchmen


