Just out of curiosity.
If I, for instance, have 1000 rows in a table with a lot of blobs, and many of them contain the same data, is there any way to make a plugin for SQLite that would just store a reference to another blob when the contents are identical? I guess this could save a lot of space without any fancy compression algorithm, and if the blob field is already indexed there would be no extra time needed to locate the identical blobs :)
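
Something like this can already be approximated at the application level with a content-hash table, no plugin required. A minimal sketch, assuming Python's sqlite3 module; the table and column names are just made up for illustration:

    # Rough sketch (not an actual SQLite plugin): de-duplicate identical blobs
    # by storing each distinct blob once and referencing it by a content hash.
    import hashlib
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE blob_store (
        hash TEXT PRIMARY KEY,   -- SHA-256 of the blob contents
        data BLOB NOT NULL
    );
    CREATE TABLE records (
        id        INTEGER PRIMARY KEY,
        name      TEXT,
        blob_hash TEXT REFERENCES blob_store(hash)
    );
    """)

    def insert_record(name, payload):
        """Store the blob only if an identical one is not already present."""
        digest = hashlib.sha256(payload).hexdigest()
        conn.execute(
            "INSERT OR IGNORE INTO blob_store (hash, data) VALUES (?, ?)",
            (digest, payload),
        )
        conn.execute(
            "INSERT INTO records (name, blob_hash) VALUES (?, ?)",
            (name, digest),
        )

    # Two rows, one shared blob: only one copy of b"same bytes" is stored.
    insert_record("row 1", b"same bytes")
    insert_record("row 2", b"same bytes")
    print(conn.execute("SELECT COUNT(*) FROM blob_store").fetchone()[0])  # -> 1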

Just a thought :)

John Stanton wrote:
What are you using for compression?

Have you checked that you get a useful degree of compression on that numeric data? You might find that it is not particularly amenable to compression.
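
One quick way to check is a measurement along these lines; this is only a sketch, with zlib standing in for whatever compressor is under consideration and a synthetic array of doubles in place of the real data:

    # Sanity-check whether compression buys anything on double data.
    import struct, time, zlib

    # Fake payload: 1 million doubles (~8 MB). Use representative data for
    # real measurements; noisy doubles compress very poorly.
    values = [i * 0.001 for i in range(1_000_000)]
    payload = struct.pack(f"{len(values)}d", *values)

    start = time.perf_counter()
    compressed = zlib.compress(payload, level=1)   # fastest setting
    elapsed = time.perf_counter() - start

    print(f"original:   {len(payload)} bytes")
    print(f"compressed: {len(compressed)} bytes "
          f"({len(compressed) / len(payload):.0%} of original)")
    print(f"compress time: {elapsed:.3f} s")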

Hickey, Larry wrote:
I have a blob structure which is primarily doubles. Does anyone have experience with data compression to make the blobs smaller? Tests I have run so far indicate that compression is too slow on blobs of a few MB to be practical. I currently get at least 20 to 40 inserts per second, but if a single compression takes over a second it's clearly not worth the trouble. Does anybody have experience with a compression scheme for blobs that consist mostly of arrays of doubles? Some schemes (ibsen) offer lightning-fast decompression, so if the database is primarily used for reads that would be a good choice, but the compression required to build it is very expensive.
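
For what it's worth, the insert-rate tradeoff is easy to measure directly. The sketch below uses zlib at its fastest level, an in-memory database, and an invented ~2 MB blob of doubles; it only shows the shape of such a test, not real numbers:

    # Measure insert throughput when blobs are compressed before storage
    # and decompressed on read. Schema and sizes are invented for illustration.
    import sqlite3, struct, time, zlib

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, data BLOB)")

    # One ~2 MB blob of doubles, reused for every insert.
    payload = struct.pack("250000d", *[i * 0.5 for i in range(250_000)])

    n_inserts = 50
    start = time.perf_counter()
    with conn:
        for _ in range(n_inserts):
            conn.execute("INSERT INTO samples (data) VALUES (?)",
                         (zlib.compress(payload, level=1),))
    elapsed = time.perf_counter() - start
    print(f"{n_inserts / elapsed:.1f} compressed inserts/second")

    # Read side: decompression is typically much cheaper than compression.
    row = conn.execute("SELECT data FROM samples LIMIT 1").fetchone()
    assert zlib.decompress(row[0]) == payload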

