Hi,

My application displays very large images at different zoom factors. I have 
tiled my images into individual tiles of a much smaller size. Each image has 
hundreds of tiles. The tiles will be used to interactively display the image 
and the speed of the zooming and panning is very important.

That's why I am thinking of storing the tiles in an SQLite database. I am 
soliciting some advice on how to set up the index.

The main column of the Tile table will be its pixel blob. I can set up the 
rest of its structure as I wish. For example:

left: integer horizontal pixel offset of the tile
top: integer vertical pixel offset of the tile
width: integer horizontal size of the tile
height: integer vertical size of the tile
shrink ratio: a power of two; how much the resolution of the tile has been 
reduced from the full image resolution

width and height are typically constant, equal, and a power of two. The 
exception is the right-most column and bottom-most row of tiles, for images 
whose dimensions are not powers of two.
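For concreteness, here is one way the table could be sketched, using Python's 
built-in sqlite3 module (all table and column names are illustrative, not 
prescribed by anything). Making the composite key the primary key of a 
WITHOUT ROWID table stores the rows in the B-tree in exactly that order, with 
no separate index needed:

```python
import sqlite3

# Sketch only: names are illustrative. "left" is quoted because LEFT
# is an SQL keyword.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tile (
        shrink  INTEGER NOT NULL,   -- power-of-two shrink ratio
        "left"  INTEGER NOT NULL,   -- horizontal pixel offset
        top     INTEGER NOT NULL,   -- vertical pixel offset
        width   INTEGER NOT NULL,   -- usually constant; smaller at the right edge
        height  INTEGER NOT NULL,   -- usually constant; smaller at the bottom edge
        pixels  BLOB    NOT NULL,   -- the tile's pixel data
        PRIMARY KEY (shrink, "left", top)
    ) WITHOUT ROWID
""")
```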

In memory, my data structure is a quad tree. The leaves of the quad tree point 
to "elementary" tiles: tiles that hold exact pixels from the image at the same 
resolution. At the next tree level, I take 4 tiles (arranged in a square) and 
make a new tile from them. This new tile is shrunk in resolution from its 4 
source tiles by a factor of 2 in each direction. That is, each of its pixels 
represents 4 pixels from the source tiles. I iterate at the next level. 
Conceptually, this process stops when level n has only one tile, which can be 
considered a thumbnail of the original image.
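The four-into-one step can be sketched like this (plain Python over grayscale 
tiles represented as lists of rows, with 2x2 box averaging as the shrink 
filter; the real code would work on the actual pixel format and might use a 
better filter):

```python
def merge4(tl, tr, bl, br):
    """Combine four n*n grayscale tiles (lists of rows), arranged as
    top-left, top-right, bottom-left, bottom-right, into one n*n tile
    at half the resolution: each output pixel averages a 2x2 block of
    source pixels."""
    n = len(tl)
    # Paste the four tiles into one 2n x 2n grid.
    big = [tl[r] + tr[r] for r in range(n)] + [bl[r] + br[r] for r in range(n)]
    # Shrink by 2 in each direction by averaging 2x2 blocks.
    return [[(big[2*r][2*c] + big[2*r][2*c+1] +
              big[2*r+1][2*c] + big[2*r+1][2*c+1]) // 4
             for c in range(n)]
            for r in range(n)]
```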

When I need a tile, I know the top-left corner of the tile and I know the 
current zoom factor. The zoom factor can be 1.0 (actual pixels) or smaller 
(zooming out). From the zoom factor, I can easily compute the shrink ratio to 
use: I want the tiles that have just enough resolution, no better, as that 
would be wasteful and I am memory constrained (I can't load the full image in 
memory at any time). The shrink ratio is basically 1/zoom factor, rounded to 
the correct adjacent power of two.
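The rounding might look like this, assuming "just enough resolution" means the 
largest power-of-two shrink ratio that does not exceed 1/zoom (if the other 
rounding direction is wanted, floor becomes ceil):

```python
import math

def shrink_ratio(zoom):
    """Largest power-of-two shrink ratio whose tiles still have at
    least the displayed resolution, for 0 < zoom <= 1.0."""
    return 1 << max(0, math.floor(math.log2(1.0 / zoom)))
```

For example, at zoom 0.4 the image is displayed shrunk by 2.5, so tiles at 
shrink ratio 2 (not 4) are the coarsest that still suffice.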

The set of tiles needed at any given time is the set that covers (at least 
partially) the displayed rectangle. Since tiles have a constant size, the 
number of needed tiles is nearly constant for a given window size: edge and 
zoom rounding effects can change it slightly, but there is a known upper 
bound. That number is small, much smaller than the number of elementary 
tiles. The required tiles are adjacent: they form a connected rectangle.

Note that I can set up my Tile table differently if I want. For example, I can 
use row and column numbers rather than left and top since the pixel sizes are 
known.

My question is: is there any smart way to set up my table index? My naive idea 
is to have a composite index: shrink ratio/left/top (or ratio/row/column).

Is there anything better? For example, is there any reason to prefer 
ratio/left/top over ratio/top/left, or vice versa?
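For what it's worth, the naive version can be checked with EXPLAIN QUERY PLAN 
(a sketch; the plan text varies between SQLite versions). Since the query has 
an equality on the ratio and ranges on both coordinates, I would expect the 
index to be used for the equality plus the range on whichever coordinate 
comes first, with the other range filtered row by row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE tile ("left" INTEGER, top INTEGER, shrink INTEGER, pixels BLOB)')
# The naive composite index from the question: ratio first (equality),
# then the coordinates (ranges).
conn.execute('CREATE INDEX tile_by_pos ON tile (shrink, "left", top)')
plan = conn.execute(
    'EXPLAIN QUERY PLAN SELECT pixels FROM tile'
    ' WHERE shrink = 1 AND "left" BETWEEN 0 AND 512 AND top BETWEEN 0 AND 512'
).fetchall()
```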

Is there any reason to consider "smart" encodings similar to geohashing, 
where each additional bit in an identifier encodes a smaller area?
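A geohash-style key for tiles would be a Morton (Z-order) code: interleave 
the bits of the row and column numbers so that each additional pair of bits 
selects one quadrant of the previous square. A sketch (a hypothetical helper, 
not an established API):

```python
def morton(row, col, bits=16):
    """Interleave the bits of row and col into a single Z-order key:
    column bits go to even positions, row bits to odd positions."""
    key = 0
    for i in range(bits):
        key |= ((col >> i) & 1) << (2 * i)
        key |= ((row >> i) & 1) << (2 * i + 1)
    return key
```

Whether this buys anything depends on whether nearby tiles end up in nearby 
database pages; with only hundreds of tiles, the plain composite key is 
likely simpler and just as fast.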

I realize that since I have only hundreds - not millions - of tiles, this may 
sound like premature optimization. But I am not after optimization here (yet), 
only after sensible design.

Thanks,

Jean-Denis

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
