Hi

I'm putting lots of small (~10 KB) chunks of binary seismic data into large
objects in postgres 7.0.2. They're basically just arrays of 2500 or so ints,
each representing about a minute's worth of data. I'm putting data in at a
rate of about 1.5 MB per hour, but the disk usage of the database is growing
at about 6 MB per hour! A factor of 4 seems a bit excessive.
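
In case it helps (or in case I'm simply misusing the interface), here's
roughly what I do for each chunk. This is only a sketch against libpq's
large-object calls; store_chunk and the samples buffer (the ~2500 ints for
one minute) are stand-ins for my real code:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

/* Store one chunk as a new large object; returns its Oid, or 0 on error. */
Oid store_chunk(PGconn *conn, int *samples, size_t nsamples)
{
    Oid loid;
    int fd;
    PGresult *res;

    /* large-object operations have to happen inside a transaction */
    res = PQexec(conn, "BEGIN");
    PQclear(res);

    loid = lo_creat(conn, INV_READ | INV_WRITE);
    if (loid == 0) {
        fprintf(stderr, "lo_creat failed: %s", PQerrorMessage(conn));
        PQclear(PQexec(conn, "ROLLBACK"));
        return 0;
    }

    fd = lo_open(conn, loid, INV_WRITE);
    lo_write(conn, fd, (char *) samples, nsamples * sizeof(int));
    lo_close(conn, fd);

    res = PQexec(conn, "COMMIT");
    PQclear(res);

    /* the Oid then goes into an index table so I can find the chunk later */
    return loid;
}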

Is there significant overhead involved in using large objects that aren't
very large?

What might I be doing wrong?

Is there a better way to store these chunks?

thanks,
Philip
