On Sun, Mar 25, 2012 at 1:48 PM, Tal Tabakman <tal.tabak...@gmail.com> wrote:

> Hi,
> I am writing an application that performs a lot of DB writes. I am already
> using many of the recommended optimizations (such as wrapping writes in
> transactions). I want to improve my recording time by reducing the amount
> of I/O. One way to do so is to compress the data before dumping it to disk.
> I am evaluating an SQLite extension called zipvfs. This VFS extension
> compresses pages before writing them to disk.
>

This seems like a misuse of ZIPVFS.  ZIPVFS is designed for read-mostly
workloads: it trades write performance in exchange for better compression and
better read performance.

ZIPVFS has many potential uses, but its design use-case is a multi-gigabyte
map database on a portable GPS navigation device.  The database needs to be
compressed in order to fit in available storage.  Yet, the database also
needs to be modifiable, since maps do sometimes change, though not often and
not by much.  In other words, ZIPVFS is designed to be written to about as
often as you need to change a map.

> I am using zlib compress/uncompress as my compression callback functions
> for this VFS. I assumed that database writing would be faster with this
> VFS, since compression means less I/O, but in reality I see no difference
> (although the data is indeed compressed)...
> Any idea why I don't see any improvement in recording time? Is there an
> overhead with zipvfs? Any other recommended compression callback
> functions?
> Cheers,
> Tal
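
For what it's worth, zlib-based callbacks in the rough shape a page-compressing
VFS consumes might look something like the sketch below. The callback names and
signatures here (the pCtx/aDest/pnDest/aSrc/nSrc shape and the zero-on-success
convention) are assumptions for illustration only, not the documented ZIPVFS
interface; only the zlib routines compressBound(), compress(), and uncompress()
are standard.

#include <zlib.h>

/* NOTE: the callback shape below is an assumption for illustration;
** check the zipvfs documentation for the exact interface it expects. */

/* Worst-case compressed size for an nSrc-byte page. */
static int xCompressBound(void *pCtx, int nSrc){
  (void)pCtx;
  return (int)compressBound((uLong)nSrc);
}

/* Compress nSrc bytes of aSrc into aDest.  *pnDest holds the output
** buffer size on entry and the compressed size on exit.  Returns 0 on
** success, non-zero on error. */
static int xCompress(void *pCtx, char *aDest, int *pnDest,
                     const char *aSrc, int nSrc){
  uLongf nOut = (uLongf)*pnDest;
  (void)pCtx;
  if( compress((Bytef*)aDest, &nOut, (const Bytef*)aSrc, (uLong)nSrc)!=Z_OK ){
    return 1;
  }
  *pnDest = (int)nOut;
  return 0;
}

/* Reverse of xCompress: expand a compressed page back into aDest. */
static int xUncompress(void *pCtx, char *aDest, int *pnDest,
                       const char *aSrc, int nSrc){
  uLongf nOut = (uLongf)*pnDest;
  (void)pCtx;
  if( uncompress((Bytef*)aDest, &nOut, (const Bytef*)aSrc, (uLong)nSrc)!=Z_OK ){
    return 1;
  }
  *pnDest = (int)nOut;
  return 0;
}

Even with cheap callbacks like these, every page written still pays the
compression cost on top of the normal I/O, so the write-performance trade-off
described above still applies.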



-- 
D. Richard Hipp
d...@sqlite.org
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
