"Benjamin Krajmalnik" writes:
> That is what I thought.
> The trigger calls a 3000-row stored procedure which does all of the
> calculations to aggregate data into 3 separate tables and then inserts the
> raw data point into a 4th table.
Youch. Seems like you might want to rethink the idea of d
That is what I thought.
The trigger calls a 3000-row stored procedure which does all of the
calculations to aggregate data into 3 separate tables and then inserts the
raw data point into a 4th table.
> -----Original Message-----
> From: Pierre C [mailto:li...@peufeu.com]
> Sent: Thursday, July 1
Essentially, we insert a set of columns into a table, and each row fires
a trigger function which calls a very large stored procedure
For inserting lots of rows, COPY is much faster than INSERT because it
parses data (a lot) faster and is more "data-stream-friendly". However the
actual inse
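The COPY-versus-INSERT point can be illustrated with a minimal sketch (the table and file names here are assumptions for illustration, not from the thread):

```sql
-- Row-at-a-time INSERTs: each statement is parsed and planned separately.
INSERT INTO datapoints (host_id, metric, value) VALUES (1, 'cpu', 0.42);
INSERT INTO datapoints (host_id, metric, value) VALUES (1, 'mem', 0.80);

-- COPY streams the whole data set through a single command, with much
-- cheaper per-row parsing. Note that per-row triggers still fire for
-- every row loaded by COPY, so a heavy trigger remains the bottleneck.
COPY datapoints (host_id, metric, value)
    FROM '/tmp/datapoints.csv' WITH CSV;
```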
On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:
> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
>
>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk
>> system. If there is no write-back caching on the RAID, it should still be
>> similar to the one disk setup.
>
> Many
On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
>> Many raid controllers are smart enough to always turn off write caching on
>> the drives, and also disable the feature on their own buffer without a BBU.
>> Add a BBU, and the cache on the controller starts getting used, but *not*
>> the cache
On Jul 15, 2010, at 12:40 PM, Ryan Wexler wrote:
> On Wed, Jul 14, 2010 at 7:50 PM, Ben Chobot wrote:
> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
>
> > But none of this explains why a 4-disk raid 10 is slower than a 1 disk
> > system. If there is no write-back caching on the RAID, it sh
Thanks everyone for the input so far; Merlin's comment about the network gave
me one of those duh moments, since I have been running these queries remotely
using pgAdmin. I will experiment with this more tomorrow/Monday along with the
other suggestions that have been posted to hopefully narrow it
First of all, a little background.
We have a table which is used as a trigger table for entering and
processing data for a network monitoring system.
Essentially, we insert a set of columns into a table, and each row fires
a trigger function which calls a very large stored procedure which
aggrega
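The setup being described reads like the following sketch (all object names are hypothetical; the real procedure is reduced to a stub):

```sql
-- Raw insert target; every inserted row fires the trigger.
CREATE TABLE staging (
    host_id  integer,
    metric   text,
    value    numeric,
    ts       timestamptz DEFAULT now()
);

-- Stand-in for the very large stored procedure.
CREATE FUNCTION process_datapoint() RETURNS trigger AS $$
BEGIN
    -- ...aggregate into the three summary tables...
    -- ...insert the raw data point into the fourth table...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER staging_row_trigger
    AFTER INSERT ON staging
    FOR EACH ROW EXECUTE PROCEDURE process_datapoint();
```

With this shape, a bulk load of N rows runs the procedure N times, which is why the per-row cost dominates regardless of how the rows arrive.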
On Thu, 2010-07-15 at 10:41 -0400, Patrick Donlin wrote:
> Results when running on the v8.3.7 server
> Total query runtime: 32185 ms.
> 700536 rows retrieved.
>
> Results when running on the v8.4.4 server
> Total query runtime: 164227 ms.
> 700536 rows retrieved.
>
>
> Anyone have any
On Thu, Jul 15, 2010 at 11:12 AM, Patrick Donlin wrote:
> I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE
> output assuming I did it correctly. I have run vacuumdb --full --analyze,
> it actually runs as a nightly cron job.
>
> 8.4.4 Server:
> "Unique (cost=202950.82..22752
Patrick Donlin wrote:
> I have run vacuumdb --full --analyze, it
> actually runs as a nightly cron job.
That's usually not wise -- VACUUM FULL can cause index bloat, and is
not normally necessary. If you have autovacuum turned on and run a
database vacuum each night, you can probably avoid
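In practice that means dropping `--full` from the nightly job, e.g. (a sketch):

```sql
-- Nightly maintenance: update statistics and reclaim space for reuse
-- without rewriting tables. Equivalent to: vacuumdb --analyze
VACUUM ANALYZE;

-- VACUUM FULL rewrites tables and, on 8.x, tends to bloat indexes;
-- reserve it for recovering from severe bloat, and follow with REINDEX.
```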
On Thu, Jul 15, 2010 at 9:41 AM, Patrick Donlin wrote:
> I have two servers with equal specs, one of them running 8.3.7 and the new
> server running 8.4.4. The only tweak I have made from the default install
> (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both
> servers are run
FULL is usually bad. Stick to "vacuum analyze" and drop the full.
Do you have indexes on:
test.tid, testresult.fk_tid, questionresult.fk_trid and testresult.trid
-Andy
On 7/15/2010 10:12 AM, Patrick Donlin wrote:
I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE
outp
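Andy's question maps to statements like the following (the index names are assumptions; the column names are the ones he lists):

```sql
-- Indexes on the join columns asked about above.
CREATE INDEX idx_test_tid              ON test (tid);
CREATE INDEX idx_testresult_fk_tid     ON testresult (fk_tid);
CREATE INDEX idx_testresult_trid       ON testresult (trid);
CREATE INDEX idx_questionresult_fk_trid ON questionresult (fk_trid);
```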
Excerpts from Patrick Donlin's message of Thu Jul 15 11:12:53 -0400 2010:
> I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE
> output assuming I did it correctly. I have run vacuumdb --full --analyze, it
> actually runs as a nightly cron job.
These plans seem identical (t
I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE output
assuming I did it correctly. I have run vacuumdb --full --analyze, it actually
runs as a nightly cron job.
8.4.4 Server:
"Unique (cost=202950.82..227521.59 rows=702022 width=86) (actual
time=21273.371..22429.511 row
Patrick Donlin wrote:
> Anyone have any ideas on where I should start looking to figure
> this out?
You're going to want to run EXPLAIN ANALYZE for the slow query on
both servers. If you want the rest of us to be able to contribute
ideas, we'll need a little more information -- please read th
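That is, run the same statement on each server and compare the plans (a sketch; the actual query is the poster's):

```sql
-- Run on both the 8.3.7 and the 8.4.4 server and diff the output.
-- "actual time" on each node shows where the extra 130+ seconds go.
EXPLAIN ANALYZE
SELECT ...;  -- the slow query
```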
On 15 July 2010 15:41, Patrick Donlin wrote:
> I have two servers with equal specs, one of them running 8.3.7 and the new
> server running 8.4.4. The only tweak I have made from the default install
> (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both
> servers are running 64-bi
I have two servers with equal specs, one of them running 8.3.7 and the new
server running 8.4.4. The only tweak I have made from the default install (from
Ubuntu repositories) is increasing shared_buffers to 768MB. Both servers are
running 64-bit, but are different releases of Ubuntu.
This is
Hello Zotov,
Somehow the equivalence d2.basedon=d1.id is not used in the slow query,
probably because the equivalence constant value would be used inside a
not-base expression (the OR). You can see that the equivalence values
*are* used by changing the or to an and and compare both queries. Th
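A minimal illustration of the effect described (the table name and constant are assumptions; `d1`/`d2`, `basedon`, and `id` are from the message):

```sql
-- Slow form: the equivalence d2.basedon = d1.id sits under an OR,
-- so the planner cannot propagate the constant through it.
SELECT *
FROM doc d1
JOIN doc d2 ON d2.basedon = d1.id
WHERE d1.id = 42 OR d2.basedon = 42;

-- With AND, the planner can use the equivalence on both sides:
SELECT *
FROM doc d1
JOIN doc d2 ON d2.basedon = d1.id
WHERE d1.id = 42 AND d2.basedon = 42;
```

Comparing the two plans shows the constant being applied to both relations only in the AND form.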