On May 2, 2008, at 2:43 PM, Kevin Grittner wrote:
Alexy Khrabrov wrote:
Since I don't index on that
new column, I'd assume my old indices would do -- do they change
because of row deletions/insertions, with the effective new row
addresses?
Every update is a delete and insert
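A quick way to see that behavior, sketched against a small hypothetical
table (names are illustrative): the ctid, a row's physical address,
changes on every UPDATE, which is why indexes pointing at the old
version need maintenance too.

CREATE TABLE t (id integer PRIMARY KEY, val real);
INSERT INTO t VALUES (1, 0.5);

SELECT ctid FROM t WHERE id = 1;   -- e.g. (0,1): first tuple on page 0
UPDATE t SET val = 0.7 WHERE id = 1;
SELECT ctid FROM t WHERE id = 1;   -- e.g. (0,2): a new row version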
On May 2, 2008, at 2:23 PM, Greg Smith wrote:
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I created several indices for the primary table, yes.
That may be part of your problem. All of the indexes are being
updated along with the main data in the row each time you touch a
record.
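A common workaround for a one-shot rewrite of every row, sketched here
with hypothetical table and index names: drop the secondary indexes,
run the bulk UPDATE without index maintenance, then rebuild them once
at the end.

DROP INDEX ratings_user_idx;
DROP INDEX ratings_movie_idx;

-- hypothetical bulk update; runs with no index upkeep at all
UPDATE ratings SET score = rating * 0.1;

CREATE INDEX ratings_user_idx ON ratings (user_id);
CREATE INDEX ratings_movie_idx ON ratings (movie_id);

Two index rebuilds over 100M rows are sequential passes, usually far
cheaper than 100 million incremental index updates.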
On May 2, 2008, at 2:02 PM, Craig James wrote:
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov
<[EMAIL PROTECTED]> wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it will
scroll through the table...
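For reference, the operation being described looks roughly like this
(table name and expression are made up); the ALTER itself is nearly
free, but the UPDATE writes a new version of all 100,000,000 rows:

ALTER TABLE ratings ADD COLUMN score real;   -- cheap: no table rewrite
UPDATE ratings SET score = rating * 0.1;     -- rewrites every row (and indexes)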
On May 2, 2008, at 1:40 PM, Scott Marlowe wrote:
Again, a database protects your data from getting scrambled should the
program updating it quit halfway through etc...
Right -- but this is data mining work: I add a derived column to a
row, and it's computed from that very row and a small s...
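Since the value is derived from the row itself, one alternative (a
sketch, hypothetical names again) avoids rewriting the wide table
entirely: compute it at query time, or build it once into a narrow
companion table keyed the same way.

-- Option 1: compute on the fly.
SELECT user_id, movie_id, rating * 0.1 AS score FROM ratings;

-- Option 2: one sequential pass into a narrow side table.
CREATE TABLE ratings_score AS
    SELECT user_id, movie_id, rating * 0.1 AS score FROM ratings;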
Interestingly, after shutting down the server with
shared_buffers=1500MB in the middle of that UPDATE, I see this:

bash-3.2$ /opt/bin/pg_ctl -D /data/pgsql/ stop
waiting for server to shut down... LOG:  received smart shutdown request
LOG:  autovacuum launcher shutting down
.
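A smart shutdown waits for active sessions to finish, so with the
UPDATE still running it can sit at that point indefinitely. A fast
shutdown aborts active transactions instead (the half-done UPDATE
rolls back) and stops cleanly; with the same paths as above:

bash-3.2$ /opt/bin/pg_ctl stop -D /data/pgsql/ -m fast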
On May 2, 2008, at 1:22 PM, Greg Smith wrote:
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I have an UPDATE query updating a 100 million row table, and
allocate enough memory via shared_buffers=1500MB.
In addition to reducing that as you've been advised, you'll probably
need t...
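One plausible direction, sketched as an assumption rather than the
elided advice: bulk rewrites of this size in the 8.x era typically
also needed checkpoint tuning in postgresql.conf (values illustrative):

shared_buffers = 256MB        # well below the 1500MB that caused paging
checkpoint_segments = 30      # default is 3; spreads out checkpoint I/O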
On May 2, 2008, at 1:13 PM, Tom Lane wrote:
I don't think you should figure on more than 1GB being
usefully available to Postgres, and you can't give all or even most of
that space to shared_buffers.
So how should I divide, say, 512 MB between shared_buffers and, um,
what else? (new to pg...)
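One illustrative split, assuming roughly 1GB usefully available as Tom
suggests (placeholder values, not a recommendation from the thread):

shared_buffers = 128MB           # PostgreSQL's own buffer cache
work_mem = 16MB                  # per sort/hash operation, per backend
maintenance_work_mem = 256MB     # index builds, VACUUM
effective_cache_size = 512MB     # planner hint: expected OS cache size

The remainder is best left to the operating system's filesystem cache,
which PostgreSQL leans on heavily.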
On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov
<[EMAIL PROTECTED]> wrote:
Greetings -- I have an UPDATE query updating a 100 million row
table, and
allocate enough memory via shared_buffers=1500MB. However, I see two
processes in top...
Greetings -- I have an UPDATE query updating a 100 million row table,
and allocate enough memory via shared_buffers=1500MB. However, I see
two processes in top, the UPDATE process eating about 850 MB and the
writer process eating about 750 MB. The box starts paging. Why is
there the writer process...
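For context on those two numbers: the backend running the UPDATE and
the background writer both attach the same shared_buffers segment, and
top typically counts shared pages against each process, so the 850 MB
and 750 MB largely overlap rather than add. One way to inspect that,
assuming the SysV shared memory an 8.x server uses:

bash-3.2$ ipcs -m    # the large segment is the shared buffer pool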