[EMAIL PROTECTED] uttered:

Mikey C <[EMAIL PROTECTED]> wrote:
Not sure what you mean there DRH, but I set compression on one of my database
files on NTFS and the file size shrank from 1,289,216 bytes to 696,320 bytes.

And of course the whole compression / decompression process is completely
transparent to SQLite, and if you decide that compression is a bad thing, you
just uncheck the box on that file and you are back to where you started.


After turning compression on, try making lots of updates to
the database.  Does the database stay the same size?  Is
there a significant I/O performance hit?  I'm guessing that
the answer in both cases will be "yes".  Please let me know.


Compression on NTFS and co. is done at the cluster-group level. If a cluster group does not compress, it is stored as is. I think NTFS compresses in groups of 16 clusters, which with the default 4KB cluster size would be 64KB chunks.
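
For what it's worth, a quick way to see how well those cluster groups are actually compressing is to compare the file's logical size with what NTFS stores on disk. A minimal Win32 sketch, assuming a database file called test.db (just a placeholder name):

/* Compare the logical file size with the NTFS on-disk (compressed) size. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const char *path = "test.db";        /* placeholder file name */
    WIN32_FILE_ATTRIBUTE_DATA attr;
    DWORD hi = 0, lo;

    if (!GetFileAttributesExA(path, GetFileExInfoStandard, &attr)) {
        fprintf(stderr, "GetFileAttributesEx failed: %lu\n", GetLastError());
        return 1;
    }

    lo = GetCompressedFileSizeA(path, &hi);
    if (lo == INVALID_FILE_SIZE && GetLastError() != NO_ERROR) {
        fprintf(stderr, "GetCompressedFileSize failed: %lu\n", GetLastError());
        return 1;
    }

    printf("logical size : %llu bytes\n",
           ((unsigned long long)attr.nFileSizeHigh << 32) | attr.nFileSizeLow);
    printf("on-disk size : %llu bytes\n",
           ((unsigned long long)hi << 32) | lo);
    return 0;
}

Running that before and after a batch of updates would answer the question above about whether the database stays small.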

My guess is that bigger page sizes will compress better on compressing filesystems, as similar keys end up close to each other. Ideally, match the page size to the compression group size, so 64KB in the case of NTFS.
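
A minimal sketch of bumping the page size (assuming the sqlite3 C API and the same placeholder file test.db; note that many SQLite builds cap the page size at 32768, so 64KB pages may not be available - query PRAGMA page_size afterwards to see what actually took effect):

/* Raise the page size; an existing database only picks it up after VACUUM. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = 0;

    if (sqlite3_open("test.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* PRAGMA page_size only takes effect on a new database or after a
       VACUUM rebuilds the file, so issue both together. */
    if (sqlite3_exec(db, "PRAGMA page_size = 65536; VACUUM;",
                     0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "pragma/vacuum failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}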

The performance hit should be negligible, if any, especially given modern processors' vast speed advantage over disk I/O. As has been said, the amount of data being read and written should be lower, so performance may even improve marginally. But seek latency should be similar in both cases, so overall performance is probably much the same. On a full or fragmented filesystem, writing less data may also reduce the number of seeks required, though a filesystem in that state will have other performance problems anyway.
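
To put some numbers behind that, here is a rough benchmark sketch (again the sqlite3 C API, with a placeholder scratch file bench.db): run a burst of updates once with compression on and once with it off, then compare the elapsed times and the resulting file sizes.

/* Hammer the database with updates so compressed vs. uncompressed runs can
   be compared. Timing is coarse (whole seconds) but enough to show a large
   I/O penalty if there is one. */
#include <stdio.h>
#include <time.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char sql[128];
    int i;
    time_t t0;

    if (sqlite3_open("bench.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS t(id INTEGER PRIMARY KEY, payload TEXT);",
        0, 0, 0);

    t0 = time(0);
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (i = 0; i < 100000; i++) {
        /* Overwrite the same 1000 rows repeatedly: lots of page rewrites. */
        sqlite3_snprintf((int)sizeof sql, sql,
            "INSERT OR REPLACE INTO t VALUES(%d, hex(randomblob(200)));",
            i % 1000);
        sqlite3_exec(db, sql, 0, 0, 0);
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    printf("elapsed: ~%ld s\n", (long)(time(0) - t0));

    sqlite3_close(db);
    return 0;
}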


--
D. Richard Hipp   <[EMAIL PROTECTED]>



Christian

--
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
     X                           - AGAINST MS ATTACHMENTS
    / \
