Hello, and thank you to everyone for your answers.

We are not allowed to use external libraries because of industrial
certification constraints. We can use SQLite because we cannot do without
a database, but validating it against those constraints is a big task, so
we have to minimize the use of such third-party libraries as much as
possible.

Eric Minbiole's idea of dropping the indexes in the copy file and
investigating with sqlite3_analyzer is the best one so far!
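A minimal sketch of that approach, using Python's built-in sqlite3 module purely for illustration (the function name is made up; sqlite3_analyzer itself is a separate command-line tool for inspecting where the space goes):

```python
import sqlite3

def make_compact_copy(src_path, dst_path):
    """Copy a database, drop its indexes, and VACUUM to reclaim the pages."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    src.backup(dst)                      # consistent logical copy of the file
    src.close()
    # Drop every explicitly created index in the copy; the implicit
    # auto-indexes (sql IS NULL) cannot be dropped and are skipped.
    names = [row[0] for row in dst.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'index' AND sql IS NOT NULL")]
    for name in names:
        dst.execute('DROP INDEX "%s"' % name)
    dst.commit()
    dst.execute("VACUUM")                # rebuild the file without freed pages
    dst.close()
```

The same two steps (DROP INDEX, then VACUUM) apply unchanged through the C API; the indexes can always be recreated from sqlite_master's recorded SQL if the archive ever needs to be queried again.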

For now we are just exploring what SQLite's own facilities can do, and if
that is not sufficient, we may consider developing a tiny compression
algorithm ourselves. There is no requirement for on-the-fly compression /
decompression, because it is for archiving only (fortunately!).
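For the "tiny algorithm of our own" option, something as small as run-length encoding can already pay off on repetitive archive fields and is trivial to validate. A hedged sketch (naive, illustrative only, not a recommendation over a real compressor):

```python
def rle_encode(data: bytes) -> bytes:
    """Naive run-length encoding: a stream of (count, byte) pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(blob: bytes) -> bytes:
    """Inverse of rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for j in range(0, len(blob), 2):
        out += blob[j + 1:j + 2] * blob[j]
    return bytes(out)
```

Since decompression only happens when an archive is opened, even a simple scheme like this keeps the certification surface small: a few dozen lines of code instead of a whole third-party library.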

++

Vincent

> Hello!
>
> In a message of Monday 19 January 2009 20:22:33, vlema...@ausy.org
> wrote:
>> It is a requirement that the size of those copies being as small as
>> possible, without having to perform an external compression.
>
> You can use internal data compression. By compressing a few big fields
> you can greatly reduce the size of your database. zlib on-the-fly
> compression is good.
>
> Best regards, Alexey.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
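Alexey's per-field suggestion amounts to compressing only the large columns before insert and decompressing on read. A minimal sketch of what he describes, shown here with Python's bundled zlib purely for illustration (table and function names are invented; in Vincent's setting the compressor would have to be certifiable code rather than an external library):

```python
import sqlite3
import zlib

def insert_compressed(conn, doc_id, text):
    """Store a large text field as a zlib-compressed BLOB."""
    blob = zlib.compress(text.encode("utf-8"), 9)
    conn.execute("INSERT INTO docs(id, body) VALUES (?, ?)", (doc_id, blob))

def fetch_text(conn, doc_id):
    """Read the BLOB back and decompress it transparently."""
    (blob,) = conn.execute(
        "SELECT body FROM docs WHERE id = ?", (doc_id,)).fetchone()
    return zlib.decompress(blob).decode("utf-8")
```

The rest of the schema stays queryable as usual; only the compressed columns lose the ability to be searched or indexed directly, which is acceptable for archive-only data.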


