Hi,

I've spent the last couple of days playing with in-memory bitmap
representations in libnspng (a toy that I'm not intending that we use --
it's just a useful playground for me right now).

Currently, we store bitmaps in memory as 32bpp buffers. This is pretty
inefficient in memory usage.

What I've currently got is probably a bit too dependent on PNG, but it's
showing promise. The in-memory format of image data is as follows:

+ Each scanline is stored in its original format (after removing any 
  PNG filtering), compressed using LZF. [Original format means one of
  Greyscale, RGB, Paletted, Greyscale + Alpha, RGBA, at various bits
  per component -- see the PNG spec for the full details].
+ Interlaced images have more scanlines than the final image height
  (one reduced set per Adam7 pass). The extra scanlines are treated
  as above.

This allows easy redraw of bitmap sections -- only decompress the
relevant scanlines. 
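To make that concrete, here's a rough sketch of the idea in Python
(using zlib as a stand-in for LZF, since there's no LZF binding in the
standard library; all names here are made up for illustration, not
libnspng's actual API):

```python
import zlib

def compress_scanlines(scanlines):
    """Compress each (already defiltered) scanline independently,
    so any single row can later be decompressed on its own."""
    return [zlib.compress(row) for row in scanlines]

def rows_for_redraw(compressed, first_row, last_row):
    """Decompress only the scanlines inside the clip region."""
    return [zlib.decompress(compressed[y])
            for y in range(first_row, last_row + 1)]

# 8 scanlines of a hypothetical 4-pixel-wide RGB image
rows = [bytes([y, y, y] * 4) for y in range(8)]
stored = compress_scanlines(rows)
clip = rows_for_redraw(stored, 2, 4)
assert clip == rows[2:5]
```

The point being that a redraw of a small clip region never touches the
compressed data for rows outside it.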

The upshot of this is that the memory requirements for bitmaps become
significantly smaller -- taking the screenshots directory (and
subdirectories) from the website source tree, the compressed format
described above requires, on average, 12.26% of the memory that an
uncompressed 32bpp buffer needs. This is a pretty major improvement
(not least because it actually allows me to render huge images on a
machine with little available RAM).
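As a back-of-the-envelope check of what that ratio means for a single
(hypothetical) 1024x768 image:

```python
# 32bpp = 4 bytes per pixel; sizes in bytes.
width, height = 1024, 768
uncompressed = width * height * 4        # 3145728 bytes, i.e. 3 MiB
compressed = uncompressed * 0.1226       # ~385666 bytes, i.e. ~377 KiB
assert uncompressed == 3145728
assert round(compressed) == 385666
```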
 
On the downside, processing and rendering are slower -- I've not
measured how much slower yet, especially as there's still much scope
for improvement:

+ Deinterlace images before storing them -- avoids the need to 
  consider interlacing during redraw.
+ Make all scanlines RGB(A) before compressing them -- avoids the 
  need to consider the possible scanline types during redraw. 
  (Completely dropping the alpha channel for opaque images provides 
  a free 25% reduction in space requirements.)
+ Compress chunks of 8 scanlines, say, to amortise the cost of 
  (de)compression, particularly during redraw.
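The chunking idea in the last bullet would look something like this
(again a Python sketch with zlib standing in for LZF, and invented
names): a redraw of row y decompresses chunk y // 8 once and then
serves every row in that chunk from the result.

```python
import zlib

CHUNK = 8  # scanlines per compressed chunk; 8 is the figure above

def compress_chunks(scanlines):
    """Compress groups of CHUNK scanlines together, amortising the
    per-call (de)compression cost across the group."""
    return [zlib.compress(b"".join(scanlines[i:i + CHUNK]))
            for i in range(0, len(scanlines), CHUNK)]

def get_row(chunks, row_bytes, y):
    """Fetch one scanline by decompressing its containing chunk."""
    data = zlib.decompress(chunks[y // CHUNK])
    off = (y % CHUNK) * row_bytes
    return data[off:off + row_bytes]

rows = [bytes([y]) * 12 for y in range(20)]  # 20 rows of 12 bytes
chunks = compress_chunks(rows)               # 3 chunks: 8 + 8 + 4 rows
assert get_row(chunks, 12, 13) == rows[13]
```

A real implementation would want to cache the most recently
decompressed chunk so that a top-to-bottom redraw decompresses each
chunk only once.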

Some questions:

1) Is it worth pursuing this further?
2) What kind of API do people want? Right now, when wanting to redraw, 
   the client passes a row callback which is called for every scanline 
   in the specified clipping region. The bitmap data fed to the 
   callback is uncompressed 32bpp RGBA. The callback is responsible for 
   actually outputting the scanline (including any alpha blending, 
   colour-space conversion, and scaling).
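For the avoidance of doubt, the current callback scheme is roughly
this shape (Python sketch, invented names, rows pre-converted to
RGBA and compressed per scanline as described earlier):

```python
import zlib

def redraw(compressed_rows, y0, y1, row_callback):
    """Invoke row_callback once per scanline in the clip region,
    feeding it uncompressed 32bpp RGBA."""
    for y in range(y0, y1 + 1):
        rgba = zlib.decompress(compressed_rows[y])
        row_callback(y, rgba)

# A trivial client callback that just collects rows; a real one
# would blit the scanline, handling blending/conversion/scaling.
out = {}
rows = [zlib.compress(bytes([y, 0, 0, 255] * 4)) for y in range(6)]
redraw(rows, 1, 3, lambda y, px: out.__setitem__(y, px))
assert sorted(out) == [1, 2, 3]
```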

Given that changing the way in which bitmaps work impacts many bits of
the core and frontends, I'd not be intending to even start a transition
until we've sorted out content caching.


J.

