In article <1253233759.5804.394.ca...@duiker>,
   John-Mark Bell <[email protected]> wrote:

> What I've currently got is probably a bit too dependent on PNG, but it's
> showing promise. The in-memory format of image data is as follows:

> + Interlaced images have more scanlines than necessary. The extra
>   scanlines are treated as above.

Do you mean while an interlaced PNG is partially decoded?  Are those extra
scanlines discarded when they become unnecessary?

> This allows easy redraw of bitmap sections -- only decompress the
> relevant scanlines.

Just to be clear, you mean only decompress the relevant scanlines from the
intermediate LZF compressed state (as opposed to original source data)
when they're needed for redraw?
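
To make sure I've got it, something along these lines, assuming a
liblzf-style API (the struct and function names below are just my
guesses, not from your code)?

#include <lzf.h>

struct scanline {
	unsigned char *data;	/* LZF-compressed scanline */
	unsigned int len;	/* compressed length in bytes */
};

struct bitmap {
	struct scanline *rows;
	unsigned int width;	/* pixels */
	unsigned int height;	/* scanlines */
};

/* Decompress one scanline into a caller-supplied 32bpp RGBA buffer
 * (width * 4 bytes).  Returns 0 on success. */
static int bitmap_get_row(struct bitmap *bmp, unsigned int y,
		unsigned char *out)
{
	unsigned int outlen = bmp->width * 4;

	if (lzf_decompress(bmp->rows[y].data, bmp->rows[y].len,
			out, outlen) != outlen)
		return -1;	/* corrupt or truncated row */

	return 0;
}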

> The upshot of this is that memory requirements for bitmaps become
> significantly smaller

This is great.

> On the downside, processing and rendering are slower.  I've not measured
> how much slower yet, especially as there's much scope for improvement:

> + Deinterlace images before storing them -- avoids the need to 
>   consider interlacing during redraw.
> + Make all scanlines RGB(A) before compressing them -- avoids the 
>   need to consider the possible scanline types during redraw. 
>   (Completely dropping the alpha channel for opaque images provides 
>   a free 25% reduction in space requirements.)
> + Compress chunks of 8 scanlines, say, to amortise the cost of 
>   (de)compression, particularly during redraw.

I guess doing things in chunks of scanlines will be needed for formats
like JPEG anyway, since JPEG decodes in bands of 8 or 16 scanlines.
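
For what it's worth, here's how I'd picture the chunked storage (all
names below are mine, and I'm assuming the final chunk is padded to a
whole number of rows):

#include <lzf.h>

#define CHUNK_ROWS 8

struct chunk {
	unsigned char *data;	/* LZF-compressed CHUNK_ROWS scanlines */
	unsigned int len;	/* compressed length in bytes */
};

/* Decompress the chunk containing scanline y into buf (which must hold
 * CHUNK_ROWS uncompressed rows) and return a pointer to that row. */
static unsigned char *bitmap_row(struct chunk *chunks, unsigned int width,
		unsigned int y, unsigned char *buf)
{
	unsigned int rowlen = width * 4;	/* 32bpp RGBA */
	struct chunk *c = &chunks[y / CHUNK_ROWS];

	if (lzf_decompress(c->data, c->len, buf,
			CHUNK_ROWS * rowlen) == 0)
		return NULL;	/* decompression error */

	return buf + (y % CHUNK_ROWS) * rowlen;
}

Redrawing a clip region then costs at most one decompression per 8 rows
rather than one per row.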

> Some questions:

> 1) Is it worth pursuing this further?

Yes, the memory savings are certainly worthwhile, but I think we need to
know if the speed penalty is going to be significant.

> 2) What kind of API do people want? Right now, when wanting to redraw, 
>    the client passes a row callback which is called for every scanline 
>    in the specified clipping region. The bitmap data fed to the 
>    callback is uncompressed 32bpp RGBA. The callback is responsible for 
>    actually outputting the scanline (including any alpha blending, 
>    colour-space conversion, and scaling).
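
To check I've understood, I imagine the callback would look roughly like
this (the names and types here are my invention, not from your code):

#include <stdbool.h>

struct bitmap;

/* Called once per scanline intersecting the clip region.  row is
 * uncompressed 32bpp RGBA, width pixels wide.  Return false to stop. */
typedef bool (*bitmap_row_cb)(void *ctx, unsigned int y,
		const unsigned char *row, unsigned int width);

/* Redraw rows y0..y1 inclusive: the core decompresses each relevant
 * scanline and passes it to cb. */
int bitmap_redraw(struct bitmap *bmp, unsigned int y0, unsigned int y1,
		bitmap_row_cb cb, void *ctx);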

So the front end would make the calls to plot each scanline?  So e.g. for
the RISC OS front end, we'd call Tinct once per image scanline instead of
once per image?  Or would the front end build the scanlines into a single
32bpp bitmap and plot that in one?

I guess having 8-scanline chunks would make this more efficient too.

Maybe making the core do the image scaling is something we could consider
too.  In the future we may need to plot rotated bitmaps as well, e.g. in
SVGs.

> Given that changing the way in which bitmaps work impacts many bits of
> the core and frontends, I'd not be intending to even start a transition
> until we've sorted out content caching.

Yeah.  I also think that for bitmaps we should consider deferring the
decoding of images (from original source data) to when they're going to be
displayed.  At the moment we decode big images that may be at the bottom
of a page that the user never scrolls down to see.
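
Something like this, perhaps (decode_image and the field names are
hypothetical):

#include <stddef.h>

struct bitmap;

/* Hypothetical decoder entry point */
struct bitmap *decode_image(const unsigned char *source,
		unsigned long source_len);

struct content {
	const unsigned char *source;	/* original source data */
	unsigned long source_len;
	struct bitmap *bmp;		/* NULL until first redraw */
};

/* Decode lazily: images never scrolled into view are never decoded. */
static struct bitmap *content_get_bitmap(struct content *c)
{
	if (c->bmp == NULL)
		c->bmp = decode_image(c->source, c->source_len);

	return c->bmp;
}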

-- 

Michael Drake (tlsa)                  http://www.netsurf-browser.org/

