Thank you all for the very helpful advice. Upping work_mem made it possible
for me to generate the table within this century without bringing the server
to a near standstill. I have not yet experimented with GROUP BY, but I'll
do this next.
Cheers,
Kynn
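For anyone following along, raising work_mem for just the session doing the big build is usually safer than changing it server-wide. A minimal sketch (the value and the table/column names are illustrative, not from the thread):

```sql
-- Raise work_mem for the current session only; a large sort or hash
-- aggregate can then run in memory instead of spilling to disk.
SET work_mem = '512MB';

-- Hypothetical aggregation: a GROUP BY lets the planner use a
-- HashAggregate, which benefits directly from the larger work_mem.
-- SELECT customer_id, count(*) FROM orders GROUP BY customer_id;

-- Drop back to the server default afterwards.
RESET work_mem;
```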
Ted Allen writes:
> during the upgrade. The trouble is, when I dump the largest table,
> which is 1.1 Tb with indexes, I keep getting the following error at the
> same point in the dump.
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: invalid string enlargement r
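One common workaround when pg_dump chokes on a single enormous table is to export it in ranged slices with COPY, so no one pass has to build the whole dump at once. A sketch only, assuming a hypothetical big_table with an integer id key (names, paths, and ranges are illustrative, not from the thread):

```sql
-- Hypothetical workaround: export a very large table in slices.
-- Each COPY writes one range of rows to its own server-side file.
COPY (SELECT * FROM big_table WHERE id <  50000000)
    TO '/backup/big_table_part1.csv' CSV;
COPY (SELECT * FROM big_table WHERE id >= 50000000)
    TO '/backup/big_table_part2.csv' CSV;
```

The slices can then be reloaded individually with COPY ... FROM, which also makes it easier to retry just the piece that failed.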
(NOTE: I tried sending this email from my excite account and it appears
to have been blocked, for whatever reason. If the message does get
double-posted, sorry for the inconvenience.)
Hey all,
Merry Christmas Eve, Happy Holidays, and all that good stuff. At my
work, I'm trying to upgrade
Hi Mark,
Good to see you producing results again.
On Sat, 2008-12-20 at 16:54 -0800, Mark Wong wrote:
> Here are links to how the throughput changes when increasing shared_buffers:
>
> http://pugs.postgresql.org/node/505
The only strange thing here is the result at 22528MB. It's the only normal
one
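For context, shared_buffers is set in postgresql.conf and a change takes effect only after a server restart; the value below simply mirrors the data point discussed and is not a recommendation:

```
# postgresql.conf (illustrative -- matches the benchmark data point above;
# requires a full server restart to take effect)
shared_buffers = 22528MB
```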