Thanks for pointing out the obvious :)
Seriously though, there are times when probably all of us have made "just
a simple database" that was not normalized in the correct way and that
later turns out to be used a lot more than intended. Normalizing the
database at a later stage requires a lot of mo…
Your solution here is to normalize your database. Third normal form
will do it for you.
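For what it's worth, a minimal sketch of what that normalization could look like in practice: the blobs move into their own table and each row just stores an integer reference. (Table and column names here are made up for illustration.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Shared blobs live in their own table...
    CREATE TABLE blob_store (
        id   INTEGER PRIMARY KEY,
        data BLOB NOT NULL
    );
    -- ...and each record references one by id instead of duplicating it.
    CREATE TABLE record (
        id      INTEGER PRIMARY KEY,
        name    TEXT,
        blob_id INTEGER REFERENCES blob_store(id)
    );
""")

payload = b"\x00" * 1024
cur = con.execute("INSERT INTO blob_store (data) VALUES (?)", (payload,))
blob_id = cur.lastrowid

# 1000 rows end up sharing a single stored copy of the 1 KB payload.
con.executemany(
    "INSERT INTO record (name, blob_id) VALUES (?, ?)",
    [(f"row{i}", blob_id) for i in range(1000)],
)

(n_blobs,) = con.execute("SELECT COUNT(*) FROM blob_store").fetchone()
(n_rows,) = con.execute("SELECT COUNT(*) FROM record").fetchone()
print(n_blobs, n_rows)  # 1 1000
```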
Daniel Önnerby wrote:
Just out of curiosity.
If I for instance have 1000 rows in a table with a lot of blobs, and a
lot of them have the same data in them, is there any way to make a
plugin for sqlite that in this case would just save a reference to
another blob if it's identical? I guess this could save a lot of space…
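Short of a plugin, this can be done at the application level: key the blob table on a content hash and only insert a blob the first time it is seen. A hedged sketch, assuming SHA-256 is an acceptable identity check for the data (names are made up):

```python
import hashlib
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE blob_store (
        hash TEXT PRIMARY KEY,   -- SHA-256 of the content
        data BLOB NOT NULL
    )
""")

def store_blob(con, data: bytes) -> str:
    """Insert a blob only if an identical one isn't stored yet;
    return the key that rows should reference."""
    key = hashlib.sha256(data).hexdigest()
    con.execute(
        "INSERT OR IGNORE INTO blob_store (hash, data) VALUES (?, ?)",
        (key, data),
    )
    return key

# Storing the same payload 1000 times keeps only one physical copy.
keys = [store_blob(con, b"same payload") for _ in range(1000)]
(count,) = con.execute("SELECT COUNT(*) FROM blob_store").fetchone()
print(count)  # 1
```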
What are you using for compression?
Have you checked that you get a useful degree of compression on that
numeric data? You might find that it is not particularly amenable to
compression.
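The point is easy to check: a blob of noisy doubles has near-random mantissa bytes and barely shrinks, while repetitive numeric data shrinks dramatically. A quick sketch with the stdlib (zlib used purely as an example codec):

```python
import random
import struct
import zlib

random.seed(0)

# 1000 "noisy" doubles: high-entropy mantissas compress poorly.
noisy = struct.pack("1000d", *(random.random() for _ in range(1000)))
# 1000 identical doubles: highly redundant, compresses very well.
repeated = struct.pack("1000d", *([3.14159] * 1000))

print(len(noisy), len(zlib.compress(noisy)), len(zlib.compress(repeated)))
```

Both blobs are 8000 bytes raw; only the repetitive one gains much from compression, so it is worth measuring on the real data before committing to a compressed-blob design.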
Hickey, Larry wrote:
I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to make the blobs smaller?
Hickey, Larry uttered:
I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to make the blobs smaller?
No experience with compressing blobs...
Tests I have
run so far indicate that compression is too slow on blobs of a few meg
to be practical.
hi,
I've written field-based compression using bzip2.
My experience: the fields must be at least 50 bytes, or the compressed
data is bigger!
cu, gg
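The overhead is easy to demonstrate: bzip2's stream header and block structure cost a few dozen bytes, so tiny fields come out larger than they went in. A quick sketch with Python's stdlib bz2 binding (the exact break-even point depends on the data, so treat the 50-byte figure as a rule of thumb):

```python
import bz2

small = b"x" * 20              # a tiny field
large = b"hello world " * 100  # ~1.2 KB of redundant text

# For the tiny field, the fixed bzip2 overhead dominates:
print(len(small), len(bz2.compress(small)))  # compressed is bigger
# For the larger redundant field, compression pays off:
print(len(large), len(bz2.compress(large)))  # compressed is smaller
```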
Hickey, Larry wrote:
I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to make the blobs smaller?
I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to make the blobs smaller?
Tests I have
run so far indicate that compression is too slow on blobs of a few meg to
be practical.
I now get at least 20 to 40 inserts per second bu…