On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> So a 216 billion row table is probably out of the question. I was
>> considering storing the 500 floats as bytea.
>
> What about a float array, float[]?
I guess that would be the obvious choice... Just a lot of storage space required, I imagine.

On Fri, Nov 28, 2008 at 4:03 PM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]> wrote:
> You seriously don't want to use bytea to store anything, especially when a
> matching datatype exists in your DB of choice.
> Also, consider partitioning it :)
>
> Try to follow the rules of normalization; with that sort of data, the less
> storage space used, the better :)

Any more normalized and I'd have 216 billion rows! Add an index and I'd have - well, a far bigger table than 432 million rows each containing a float array - I think?

Really I'm worried about reducing storage space and network overhead - so a nicely compressed chunk of binary would be perfect for the 500 values - wouldn't it?

Will

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
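For what it's worth, a rough back-of-envelope comparison of the two layouts being debated above (one row per value vs. one row per 500-element float array). The per-tuple, array, and key-column overheads are my own approximations of PostgreSQL's on-disk costs, not exact figures, so treat the totals as order-of-magnitude estimates only:

```python
# Back-of-envelope storage comparison: fully normalized (one float per row)
# vs. one row per 500-element float8[] array. Overhead constants below are
# rough PostgreSQL approximations (assumptions), not exact figures.

ROWS = 432_000_000        # rows in the array design
VALUES_PER_ROW = 500      # floats per row
FLOAT8 = 8                # bytes per double precision value
TUPLE_OVERHEAD = 28       # approx. per-row header + item pointer (assumption)
ARRAY_OVERHEAD = 24       # approx. varlena + array header for float8[] (assumption)
KEY_COLS = 12             # approx. id + element-index columns per normalized row (assumption)

# Design 1: 432M rows, each holding a float8[500]
array_row = TUPLE_OVERHEAD + ARRAY_OVERHEAD + VALUES_PER_ROW * FLOAT8
array_total = ROWS * array_row

# Design 2: fully normalized, one float per row
norm_rows = ROWS * VALUES_PER_ROW          # 216 billion rows
norm_row = TUPLE_OVERHEAD + KEY_COLS + FLOAT8
norm_total = norm_rows * norm_row

print(f"array design:      ~{array_total / 1e12:.1f} TB")
print(f"normalized design: ~{norm_total / 1e12:.1f} TB")
print(f"normalized is ~{norm_total / array_total:.1f}x larger, before any indexes")
```

Under these assumptions the array layout comes out several times smaller, mostly because the per-tuple overhead is paid 432 million times instead of 216 billion times; an index on the normalized table would widen the gap further.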