Re: Fwd: [GENERAL] arrays and block size

2009-02-02 Thread Sam Mason
On Mon, Feb 02, 2009 at 09:48:37AM -0800, Scara Maccai wrote:
  I need to store a lot of int8 columns (2000-2500) in a table.
  I was thinking about using int8[]

An array of ints sounds like the way to go here, as you wouldn't be able
to have that many columns (PostgreSQL allows at most 1600 columns per
table).  TOAST is one non-obvious implementation artifact that may cause
extra work to be done and may cause confusion.  A heap tuple normally has
to fit in an 8KB page, and 2000-2500 INT8 values come to 16-20KB, so an
array that size will be compressed and/or moved out of line by TOAST.
TOAST docs are available here:

  http://www.postgresql.org/docs/current/static/storage-toast.html
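One quick way to see whether TOAST is in play is to check whether the
table has a TOAST table at all; a minimal sketch (the table name
arrtest is just an example, matching the test further down):

```sql
-- reltoastrelid is non-zero when a TOAST table has been created for
-- the relation; the regclass cast shows its name.
SELECT relname, reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'arrtest';
```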

  , and I would like to know:
  1) is there a max size for arrays? I guess I could have 1 GB worth
  of values,

Not that I'm aware of, beyond the general 1GB cap on any single value;
as a quick test:

  CREATE TABLE arrtest ( v INT8[] );
  INSERT INTO arrtest SELECT ARRAY(SELECT generate_series(1,1000));

worked (very slowly) for me, but a larger series failed with an out of
memory error (32bit build).
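To check how big the stored array datum actually ends up after any
TOAST processing, something along these lines should work:

```sql
-- pg_column_size gives the on-disk size of the datum as stored
-- (possibly compressed); array_length counts its elements.
SELECT pg_column_size(v) AS stored_bytes,
       array_length(v, 1) AS elements
FROM arrtest;
```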

  but I would like a confirmation
  2) there won't be any updates, only inserts and selects; since I
  will be using ZFS would there be any benefit in changing the block
  size (for example to 32K) in postgresql?

No idea, maybe someone else?
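For what it's worth, the block size in PostgreSQL is a compile-time
option, so trying 32K means rebuilding the server and re-initialising
the cluster; a rough sketch (paths are illustrative):

```shell
# Build with 32KB blocks instead of the default 8KB;
# --with-blocksize takes kilobytes (1, 2, 4, 8, 16 or 32).
./configure --with-blocksize=32
make && make install
# A data directory only works with the block size it was initdb'd with:
initdb -D /path/to/data32k
```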

  Would it be better to use
  ZFS compression instead of postgresql's (TOAST)? That is: should I
  set EXTERNAL for that array column and set compression=on for ZFS? I
  tried ZFS compression=on with a 200-column table and it worked faster
  than with compression=off

Oh, I guess you know about TOAST.  Otherwise I can't help much, sorry!
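On the EXTERNAL part of the question: per-column storage strategy is
changed with ALTER TABLE ... SET STORAGE; EXTERNAL keeps the value out
of line but uncompressed, leaving compression to the filesystem.  Using
the arrtest example from above:

```sql
-- EXTERNAL = out-of-line storage allowed, but no TOAST compression,
-- so a compressing filesystem (e.g. ZFS) does the work instead.
ALTER TABLE arrtest ALTER COLUMN v SET STORAGE EXTERNAL;
```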


-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Fwd: [GENERAL] arrays and block size

2009-02-02 Thread Scara Maccai
Anyone?



----- Forwarded message -----
 From: Scara Maccai m_li...@yahoo.it
 To: pgsql-general pgsql-general@postgresql.org
 Sent: Friday, 30 January 2009, 13:59:09
 Subject: [GENERAL] arrays and block size
 
 Hi,
 
 I need to store a lot of int8 columns (2000-2500) in a table.
 
 I was thinking about using int8[], and I would like to know:
 
 1) is there a max size for arrays? I guess I could have 1 GB worth of
 values, but I would like a confirmation
 2) there won't be any updates, only inserts and selects; since I will be
 using ZFS would there be any benefit in changing the block size (for
 example to 32K) in postgresql? Would it be better to use ZFS compression
 instead of postgresql's (TOAST)? That is: should I set EXTERNAL for that
 array column and set compression=on for ZFS? I tried ZFS compression=on
 with a 200-column table and it worked faster than with compression=off