Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-24 Thread Simon Riggs
Hi Mark,

Good to see you producing results again.


On Sat, 2008-12-20 at 16:54 -0800, Mark Wong wrote:
 Here are links to how the throughput changes when increasing shared_buffers:
 
 http://pugs.postgresql.org/node/505

The only strange thing here is the result at 22528MB: it's the only
normal-looking one there. There seems to be a freeze occurring on most
tests around the 30 minute mark, which delays many backends and reduces
writes.

Reduction in performance as shared_buffers increases looks normal.

Increase wal_buffers, but look for something else as well. Try to get a
backtrace from when the lock-up happens; it may not even be Postgres.
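
For example, something along these lines usually does the trick (the PID is
illustrative; pick a backend that looks hung in ps or pg_stat_activity):

  ps auxww | grep postgres    # find the PID of a stalled backend
  gdb -p 12345                # attach to that backend
  (gdb) bt                    # grab the backtrace
  (gdb) detach
  (gdb) quit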

 And another series of tests to show how throughput changes when
 checkpoint_segments are increased:
 
 http://pugs.postgresql.org/node/503
 
 The links go to a graphical summary and raw data.  Note that the
 maximum theoretical throughput at this scale factor is approximately
 12000 notpm.
 
 My first glance tells me that the system performance is quite
 erratic when increasing the shared_buffers.  I'm also not sure what to
 gather from increasing the checkpoint_segments.  Is it simply that the
 more checkpoint segments you have, the more time the database spends
 fsyncing when at a checkpoint?

I would ignore the checkpoint_segments tests, because you aren't using a
realistic value of shared_buffers. I doubt any such effect will be
noticeable once you use a realistic value determined from the 505 set of
tests.
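
To make that concrete, the kind of starting point I have in mind looks
roughly like this (values are illustrative only; the right shared_buffers
should come out of the 505 sweep):

  # postgresql.conf (8.3) -- illustrative values only
  shared_buffers = 4096MB      # pick the sweet spot seen in the node/505 runs
  wal_buffers = 16MB           # up from the tiny 64kB default
  checkpoint_segments = 64     # then redo the node/503 sweep at that setting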

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




[PERFORM] Troubles dumping a very large table.

2008-12-24 Thread Ted Allen
(NOTE: I tried sending this email from my excite account and it appears 
to have been blocked for whatever reason.  So if the message does end up 
double-posted, sorry for the inconvenience.)


Hey all,

Merry Christmas Eve, Happy Holidays, and all that good stuff.  At my 
work, I'm trying to upgrade my system from 8.1 to 8.3, and I'm dumping 
a few large static tables ahead of time to limit the amount of downtime 
during the upgrade.  The trouble is, when I dump the largest table, 
which is 1.1 TB with indexes, I keep getting the following error at the 
same point in the dump.


pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  invalid string enlargement 
request size 1

pg_dump: The command was: COPY public.large_table (id, data) TO stdout;

As you can see, the table has two columns: one is an integer and the 
other is bytea.  Each cell in the data column can be as large as 
600 MB (we had bigger rows as well, but we thought they were the source of 
the trouble and moved them elsewhere to be dealt with separately).
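
The definition is essentially the following (a sketch reconstructed from the
COPY command in the error below, not our exact DDL):

  CREATE TABLE public.large_table (
      id   integer,
      data bytea
  );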


We are dumping the table using this command.

/var/lib/pgsql-8.3.5/bin/pg_dump -O -x -t large_table mydb | gzip -c > large_table.pgsql.gz


Originally we tried dumping the table with 
'/var/lib/pgsql-8.3.5/bin/pg_dump -O -x -t large_table -F c mydb > large_table.dump', 
but that was too CPU intensive and slowed down other db processes too much.  
It failed using that command as well, but I believe that is because we did 
not have enough postgres temp hard drive space.  We have since symlinked 
the postgres temp space to a much bigger file system.
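
For the record, the symlinking amounted to something like this (paths are 
illustrative, and the server was stopped first):

  mv /var/lib/pgsql-8.3.5/data/base/pgsql_tmp /big_array/pgsql_tmp
  ln -s /big_array/pgsql_tmp /var/lib/pgsql-8.3.5/data/base/pgsql_tmp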


The specs of the db server are as follows:

Processors:  4x Opteron 2.4 GHz cores
Memory: 16 GB 
Disks: 42x 15K SCSI 146 GB disks.


Also, the large table has been vacuumed recently.  Lastly, we are dumping the 
table over NFS to a very large SATA array.


Thanks again and Happy Holidays,
Ted




Re: [PERFORM] Troubles dumping a very large table.

2008-12-24 Thread Tom Lane
Ted Allen tal...@blackducksoftware.com writes:
 during the upgrade.  The trouble is, when I dump the largest table, 
 which is 1.1 TB with indexes, I keep getting the following error at the 
 same point in the dump.

 pg_dump: SQL command failed
 pg_dump: Error message from server: ERROR:  invalid string enlargement 
 request size 1
 pg_dump: The command was: COPY public.large_table (id, data) TO stdout;

 As you can see, the table has two columns: one is an integer and the 
 other is bytea.  Each cell in the data column can be as large as 
 600 MB (we had bigger rows as well, but we thought they were the source of 
 the trouble and moved them elsewhere to be dealt with separately).

600 MB measured how?  I have a feeling the problem is that the value
exceeds 1 GB when converted to text form...
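
The stored size of the biggest values can be checked with something like
this (column names taken from the COPY command above); keep in mind that
COPY emits bytea in escape form on 8.x, so each stored byte can expand to
several characters of output text:

  SELECT id, octet_length(data) AS bytes
  FROM public.large_table
  ORDER BY octet_length(data) DESC
  LIMIT 10;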

regards, tom lane



Re: [PERFORM] How to unique-ify HUGE table?

2008-12-24 Thread Kynn Jones
Thank you all for the very helpful advice.  Upping work_mem made it possible
for me to generate the table within this century without bringing the server
to a near standstill.  I have not yet experimented with GROUP BY, but I'll
do this next.
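
For the archives, the rewrite I'm planning to try looks roughly like this
(table and column names are made up):

  SET work_mem = '1GB';           -- big enough for a hash aggregate
  CREATE TABLE huge_table_uniq AS
    SELECT col_a, col_b
    FROM huge_table
    GROUP BY col_a, col_b;        -- instead of SELECT DISTINCT col_a, col_b,
                                  -- which forces a sort on older releases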

Cheers,

Kynn