> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies be as small as
> possible, without having to perform an external compression.
> VACUUM doesn't seem to perform compression (it only deals with
> fragmented data); is there any other way to do that?

If you can't use an external compression program (which would almost 
certainly help reduce the size of your archived database), then there 
are a couple of options I can think of:

1. When you create the copy of your database, you could drop all of the 
indices from the copy, then VACUUM.  Depending on your schema, this has 
the potential to remove some redundant information (at the expense of 
query speed, of course).  You could always re-create the indices, if 
needed, when reading the archive; see the sketch after this list.

2. If that doesn't help enough, run the sqlite3_analyzer utility (from 
http://sqlite.org/download.html) against the database file; it reports 
how much space each table and index is using.  Focus on the largest 
tables to see where you can save space: can you normalize the schema 
further to reduce repeated values?  Can some non-vital data be omitted 
from the archive entirely?  And so on.
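
To illustrate option 1, here is a minimal sketch using Python's 
built-in sqlite3 module.  The file names and the plain file copy are 
just assumptions; adapt them to however you actually produce your 
archive copy.

import shutil
import sqlite3

# Work on a copy so the live database keeps its indices.  A plain file
# copy assumes nothing is writing to live.db at the time; the file
# names here are placeholders.
shutil.copy("live.db", "archive.db")

# Autocommit mode, so each DROP INDEX takes effect immediately and
# VACUUM is not blocked by an open transaction.
con = sqlite3.connect("archive.db", isolation_level=None)

# Only user-created indices can be dropped; indices that back UNIQUE or
# PRIMARY KEY constraints have a NULL sql column and are skipped.
indices = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'index' AND sql IS NOT NULL")]

for name in indices:
    con.execute('DROP INDEX "%s"' % name)

# Rebuild the file without the pages the dropped indices occupied.
con.execute("VACUUM")
con.close()

Since the CREATE INDEX statements are still recorded in the original 
database's schema, they can be re-run against the archive copy later if 
you ever need indexed queries on it.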

If the above two options don't help enough, then I would reconsider the 
external compression tool.  zlib, for example, is a relatively 
lightweight, open-source compression library that may do well on your 
database.
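
If compressing from inside your own archiving code (rather than 
shelling out to a separate program) would be acceptable, here is a 
rough sketch using the zlib module from Python's standard library; 
again, the file names are only placeholders.

import zlib

# Read the vacuumed archive copy and compress it in one shot.  For very
# large files you would stream through zlib.compressobj() instead of
# holding the whole file in memory.
with open("archive.db", "rb") as f:
    raw = f.read()

packed = zlib.compress(raw, 9)   # level 9 = smallest output, slowest

with open("archive.db.z", "wb") as f:
    f.write(packed)

Restoring is just zlib.decompress() on the file contents, written back 
out as an ordinary SQLite database file.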

Hope this helps,
  Eric