Hi all!

While playing around on a new box I noticed that PostgreSQL does not seem
to be able to cope with very large settings for maintenance_work_mem.

For a test I created a single table with 5 integer columns containing
about 1.8B rows (about 300M distinct values in the column I want to index):


foo=# select count(*) from testtable;
   count
------------
 1800201755
(1 row)
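
In case it is relevant, the table was populated roughly along these lines
(the column names other than a and the exact expressions are illustrative,
not the literal statements I ran):

CREATE TABLE testtable (a integer, b integer, c integer, d integer, e integer);

-- illustrative sketch: ~1.8B rows, with roughly 300M distinct values in column a
INSERT INTO testtable
SELECT (random() * 300000000)::integer,
       (random() * 1000000)::integer,
       (random() * 1000000)::integer,
       (random() * 1000000)::integer,
       (random() * 1000000)::integer
FROM generate_series(1, 1800000000);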


I tried to create an index on one of the columns:

foo=# SET maintenance_work_mem to 4000000;
SET
foo=# CREATE INDEX a_idx ON testtable(a);
ERROR:  invalid memory alloc request size 1073741824

foo=# SET maintenance_work_mem to 3000000;
SET
foo=# CREATE INDEX a_idx ON testtable(a);
ERROR:  invalid memory alloc request size 1073741824

The error is generated pretty quickly (a few seconds into the CREATE INDEX).

However:

foo=# SET maintenance_work_mem to 2000000;
SET
foo=# CREATE INDEX a_idx ON testtable(a);

has now been running for about 10 hours with nearly no I/O, but it is
pegging the CPU core it runs on at a constant 100%.

Watching the process while this happens suggests that the above error
occurs once the backend exceeds about 3.1GB in resident size.

The box in question is a dual Opteron 275 (4 cores @ 2.2GHz) with 16GB of
RAM and 24GB of swap. The OS is Debian Sarge/AMD64 with a pure 64-bit userland.


Stefan
