On 31 Dec 2011, at 4:56pm, Ivan Shmakov wrote:

>       The integers could take up to 32 bits long, but I deem them
>       likely to “cluster”, like, e. g.: 1, 1, 1, 1, 2, 2, 2, 3, 3, 3,
>       101, 101, 102, 102, 102, 103, 103, 768, 768, etc.  My guess is
>       that such sequences should be quite compressible, but the point
>       is that there'll only be a few such numbers per row, and
>       millions of such rows in the table.

Thing is, an integer like 103 takes two bytes in the record: one byte in the 
record header that says what kind of value follows (the serial type), and one 
byte for the value itself.  So the value proper is really only one byte.  You 
can't improve on this by compressing individual values, only by compressing at 
the table level.  And if you're compressing at the table level, making any 
changes is rather slow.
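To make the sizes concrete, here is a sketch of the content-byte rule from the documented SQLite record format (the serial-type byte in the record header comes on top of this). The function name is mine, not part of any SQLite API:

```python
def int_content_bytes(v):
    """How many content bytes SQLite's record format uses for an
    integer, per the documented serial types.  (Hypothetical helper,
    not part of the sqlite3 API.)"""
    # 0 and 1 have their own serial types and need zero content bytes.
    if v in (0, 1):
        return 0
    # Otherwise SQLite picks the smallest signed width that fits:
    # 8, 16, 24, 32, or 48 bits, falling back to 64 bits.
    for bits, nbytes in ((8, 1), (16, 2), (24, 3), (32, 4), (48, 6)):
        if -(1 << (bits - 1)) <= v < (1 << (bits - 1)):
            return nbytes
    return 8
```

So 103 needs one content byte plus one serial-type byte in the header, while a value that needs all 32 bits would use four content bytes.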

>       [snip] is there a way to determine the
>       filesystem space occupied by a particular table, or index, in
>       SQLite?  It now seems to me that the problem is different from
>       what I initially presumed.

Not through SQLite's API functions.  You can do it by reading the file format 
and multiplying the page size by the number of pages each table or index uses; 
the sqlite3_analyzer tool does exactly this.
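For the database as a whole, the page-size-times-page-count figure is available through two pragmas; a minimal sketch (per-table figures need the dbstat virtual table, which is only present if SQLite was compiled with it, or the separate sqlite3_analyzer tool):

```python
import sqlite3

def database_bytes(path):
    """Estimate the space a database file occupies:
    page size multiplied by the number of pages in the file."""
    con = sqlite3.connect(path)
    try:
        (page_size,) = con.execute("PRAGMA page_size").fetchone()
        (page_count,) = con.execute("PRAGMA page_count").fetchone()
        return page_size * page_count
    finally:
        con.close()
```

Note this counts the whole file, including free pages, so it is an upper bound rather than a per-table breakdown.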

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users