On Fri, May 2, 2008 at 4:51 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
>
> On May 2, 2008, at 1:40 PM, Scott Marlowe wrote:
>
> > Again, a database protects your data from getting scrambled should the
> > program updating it quit halfway through etc...
> >
>
> Right -- but this is data mining work: I add a derived column to a
> row, and it's computed from that very row and a small ...
On Fri, 2 May 2008, PFC wrote:
CREATE TABLE derived AS SELECT ... FROM ... (perform all your derived
calculations here)
Given what you have said (that you really want all the data in one table)
it may be best to proceed like this:
First, take your original table, create an index on the primary key ...
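A minimal sketch of that derived-table approach, assuming the big table
is called "ratings" with an integer primary key "id" (all names and the
score expression here are illustrative, not from the thread):

    -- Build the derived values into a fresh table instead of UPDATEing
    -- 100M rows in place: no dead row versions, no index maintenance.
    CREATE TABLE derived AS
        SELECT id, ln(1 + votes) AS score   -- hypothetical calculation
        FROM ratings;

    -- Index the key so joining back to "ratings" stays cheap.
    CREATE INDEX derived_id_idx ON derived (id);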
On May 2, 2008, at 2:43 PM, Kevin Grittner wrote:
Alexy Khrabrov wrote:
Since I don't index on that
new column, I'd assume my old indices would do -- do they change
because of row deletions/insertions, with the effective new row
addresses?
Every update is a delete and insert. The new version of the row must
be added to the table ...
>>> Alexy Khrabrov wrote:
> OK. I've cancelled all previous attempts at UPDATE and will now
> create some derived tables. I see no changes in the previous huge
> table -- the added column was completely empty. Dropped it. Should I
> vacuum just in case, or am I guaranteed not to have any dead rows
> left behind?
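For what it's worth, a cancelled UPDATE rolls back, but the row versions
it already wrote remain as dead tuples until vacuumed. A quick way to
check and reclaim the space (table name illustrative):

    -- Reports how many dead row versions were found and removed.
    VACUUM VERBOSE ratings;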
>>> Alexy Khrabrov wrote:
> Since I don't index on that
> new column, I'd assume my old indices would do -- do they change
> because of row deletions/insertions, with the effective new row
> addresses?
Every update is a delete and insert. The new version of the row must
be added to the table ...
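A small sketch of what that means in practice (table and column names
are illustrative): updating every row writes a complete new version of
each one, so the heap roughly doubles until VACUUM reclaims the old
versions.

    SELECT pg_size_pretty(pg_relation_size('ratings'));  -- size before
    UPDATE ratings SET score = score + 1.0;              -- rewrites every row
    SELECT pg_size_pretty(pg_relation_size('ratings'));  -- roughly doubled
    VACUUM ratings;                                      -- old versions become reusable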
On May 2, 2008, at 2:23 PM, Greg Smith wrote:
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I created several indices for the primary table, yes.
That may be part of your problem. All of the indexes are being
updated along with the main data in the row each time you touch a
record. There's some optimization there in 8.3, but it doesn't
eliminate the overhead ...
I created several indices for the primary table, yes. Sure I can do a
table for a volatile column, but then I'll have to create a new such
table for each derived column -- that's why I tried to add a column to
the existing table. Yet seeing this is really slow, and I need to do
many derived columns ...
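One sketch of the side-table variant being weighed here (names are
illustrative): keep each derived value in a slim table keyed by the big
table's primary key, and join it back when needed.

    CREATE TABLE ratings_score (
        id    integer PRIMARY KEY,    -- same key as ratings.id
        score real
    );
    INSERT INTO ratings_score
        SELECT id, 0.0 FROM ratings;  -- placeholder computation

    -- Read it back together with the base table:
    SELECT r.*, s.score
    FROM ratings r JOIN ratings_score s USING (id);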
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I created several indices for the primary table, yes.
That may be part of your problem. All of the indexes are being
updated along with the main data in the row each time you touch a record.
There's some optimization there in 8.3, but it doesn't eliminate the
overhead ...
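A common workaround for that index-maintenance cost on a one-off bulk
rewrite (a sketch, not something suggested verbatim in the thread; index
and column names are illustrative): drop the secondary indexes, run the
UPDATE, and rebuild them once at the end.

    DROP INDEX ratings_user_idx;
    DROP INDEX ratings_item_idx;
    UPDATE ratings SET score = score * 0.9;             -- the bulk rewrite
    CREATE INDEX ratings_user_idx ON ratings (user_id); -- rebuilt once
    CREATE INDEX ratings_item_idx ON ratings (item_id);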
On May 2, 2008, at 2:02 PM, Craig James wrote:
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov
<[EMAIL PROTECTED]> wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it will
scroll through the memory reasonably fast.
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it will scroll
through the memory reasonably fast.
In Postgres, an update is the same as a delete plus an insert ...
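Worth noting alongside that, if I recall the 8.x behavior correctly:
ALTER TABLE ... ADD COLUMN with no DEFAULT is a catalog-only change and
returns immediately; it is the UPDATE that fills the column in which
rewrites every row as a delete plus an insert. A sketch (names
illustrative):

    ALTER TABLE ratings ADD COLUMN score real;  -- instant, catalog change only
    UPDATE ratings SET score = 0.0;             -- rewrites all 100M rows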
On May 2, 2008, at 1:40 PM, Scott Marlowe wrote:
Again, a database protects your data from getting scrambled should the
program updating it quit halfway through etc...
Right -- but this is data mining work: I add a derived column to a
row, and it's computed from that very row and a small ...
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
>
> So how should I divide say a 512 MB between shared_buffers and, um, what
> else? (new to pg tuning :)
Don't worry so much about the rest of the settings. Maybe increase
sort_mem (aka work_mem) to something like 16MB or so.
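Since work_mem can be set per session, a one-off bulk job can get a
larger sort/hash budget without editing postgresql.conf at all, e.g.:

    SET work_mem = '16MB';  -- applies to the current session only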
Interestingly, after shutting down the server with
shared_buffers=1500MB in the middle of that UPDATE, I see this:
bash-3.2$ /opt/bin/pg_ctl -D /data/pgsql/ stop
waiting for server to shut down...
LOG:  received smart shutdown request
LOG:  autovacuum launcher shutting down
On May 2, 2008, at 1:22 PM, Greg Smith wrote:
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I have an UPDATE query updating a 100 million row table, and
allocate enough memory via shared_buffers=1500MB.
In addition to reducing that as you've been advised, you'll probably
need to increase checkpoint_segments significantly from the default (3) ...
On May 2, 2008, at 1:13 PM, Tom Lane wrote:
I don't think you should figure on more than 1GB being
usefully available to Postgres, and you can't give all or even most of
that space to shared_buffers.
So how should I divide say a 512 MB between shared_buffers and, um,
what else? (new to pg tuning :)
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I have an UPDATE query updating a 100 million row table, and
allocate enough memory via shared_buffers=1500MB.
In addition to reducing that as you've been advised, you'll probably need
to increase checkpoint_segments significantly from the default (3) ...
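For reference, that advice combined with the earlier shared_buffers
discussion might look something like this in postgresql.conf (values
are illustrative, not from the thread):

    shared_buffers = 512MB      # far less than 1500MB on a 2GB machine
    checkpoint_segments = 30    # up from the default of 3
    # shared_buffers needs a server restart to take effect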
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
>> I randomly increased values in postgresql.conf to
>>
>> shared_buffers = 1500MB
>> max_fsm_pages = 200
>> max_fsm_relations = 1
> On a laptop with 2G ram, 1.5Gig shared_buffers is too much ...
On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
>
>
> On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:
>
>
> > On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov <[EMAIL PROTECTED]>
> > wrote:
> >
> > > Greetings -- I have an UPDATE query updating a 100 million row table,
> > > and allocate enough memory via shared_buffers=1500MB ...
On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov
<[EMAIL PROTECTED]> wrote:
Greetings -- I have an UPDATE query updating a 100 million row
table, and
allocate enough memory via shared_buffers=1500MB. However, I see two
processes in top, the UPDATE process eating about 850 MB and the
writer process eating about 750 MB ...
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov <[EMAIL PROTECTED]> wrote:
> Greetings -- I have an UPDATE query updating a 100 million row table, and
> allocate enough memory via shared_buffers=1500MB. However, I see two
> processes in top, the UPDATE process eating about 850 MB and the writer
> process eating about 750 MB. The box starts paging ...
Greetings -- I have an UPDATE query updating a 100 million row table,
and allocate enough memory via shared_buffers=1500MB. However, I see
two processes in top, the UPDATE process eating about 850 MB and the
writer process eating about 750 MB. The box starts paging. Why is
there the writer process ...