Harry Mantheakis wrote:
The mystery remains, for me: why updating 100,000 records could
complete in as little as 5 seconds, whereas an attempt to update a
million records was still running after 25 minutes before we killed it?
The way you were doing this originally, it was joining every record
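The kind of all-in-one-go statement under discussion might look like this; the table and column names are invented for illustration, not taken from the thread:

    -- A single join-update that touches every matching row in one
    -- pass, and in one transaction.
    UPDATE target t
    SET    val = s.val
    FROM   source s
    WHERE  t.id = s.id;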
- Forwarded Message -
>From: Denis de Bernardy
>To: Jens Hoffrichter
>Sent: Tuesday, June 28, 2011 12:59 AM
>Subject: Re: [PERFORM] Getting rid of a seq scan in query on a large table
>
>
>> Hash Cond: (posts.poster_id = posters.poster_id)
>
>> -> Seq Scan on posts
Jenish wrote:
> This server is the dedicated database server.
>
> And I am testing the limit for concurrent active users. When I
> am running my test for 400 concurrent users, i.e. active connections,
> I am getting good performance, but when I am running the same
> test for 950 concurrent users
On 27.6.2011 22:14, Jenish wrote:
> And I am testing the limit for concurrent active users. When I am
> running my test for 400 concurrent users, i.e. active connections, I am
> getting good performance, but when I am running the same test
> for 950 concurrent users I am getting very
Hi,
This server is the dedicated database server.
And I am testing the limit for concurrent active users. When I am
running my test for 400 concurrent users, i.e. active connections, I am getting
good performance, but when I am running the same test for 950
concurrent users I am getting
On 27.6.2011 17:01, Jenish wrote:
> Hi,
>
> DB: PostgreSQL 8.4.8
> OS: Debian
> HD: SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
How much data are we talking about? Does that fit into the shared
buffers or is it significantly larger? Do the trigger
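A quick way to answer that, using the size functions available in 8.4 (the table name is a placeholder):

    SELECT pg_size_pretty(pg_database_size(current_database()));
    SELECT pg_size_pretty(pg_total_relation_size('my_table'));  -- heap + indexes + TOAST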
On 27.6.2011 17:58, Jenish wrote:
>
> Hi,
>
> I have already checked all the statements present in the trigger; none
> of them is taking more than 20 ms.
>
> I am using an 8-processor, quad-core server; CPU utilization is more than
> 90-95% for all cores (htop result).
So all cores are 95% utilized? Tha
On Mon, Jun 27, 2011 at 5:37 PM, wrote:
>> The mystery remains, for me: why updating 100,000 records could complete
>> in as little as 5 seconds, whereas an attempt to update a million
>> records was still running after 25 minutes before we killed it?
>
> Hi, there are a lot of possible causes. Us
On Mon, Jun 27, 2011 at 9:22 AM, Jenish wrote:
> Hi All,
>
> I am facing a performance issue with inserts into a table.
>
> I am using postgres 8.4.x
>
> The table has 3 before-insert triggers and one after-insert trigger.
>
> With all triggers enabled it is inserting only 4-5 records per second.
Hi,
I have already checked all the statements present in the trigger; none of
them is taking more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is more than
90-95% for all cores (htop result).
DB has 960 concurrent users.
I/O: writing 3-4 MB per second or less (iotop result).
Scen
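For the record, one way to double-check those per-statement timings inside triggers is the auto_explain contrib module that ships with 8.4; a session-level sketch:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 20;       -- log anything slower than 20 ms
    SET auto_explain.log_nested_statements = on;  -- include statements run inside triggers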
> The mystery remains, for me: why updating 100,000 records could complete
> in as little as 5 seconds, whereas an attempt to update a million
> records was still running after 25 minutes before we killed it?
Hi, there are a lot of possible causes. Usually this is caused by a plan
change - imagine
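One way to spot such a plan change before committing to a long-running statement is to compare EXPLAIN output at the two scales; the tables here are hypothetical stand-ins:

    -- Plan for a bounded slice of rows:
    EXPLAIN
    UPDATE target t
    SET    val = s.val
    FROM   source s
    WHERE  t.id = s.id
    AND    t.id <= 100000;

    -- Plan for the whole table; once the row estimate gets large the
    -- planner may switch strategies, e.g. from a nested loop with index
    -- scans to a hash join over full sequential scans:
    EXPLAIN
    UPDATE target t
    SET    val = s.val
    FROM   source s
    WHERE  t.id = s.id;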
> Hi,
>
> DB: PostgreSQL 8.4.8
> OS: Debian
> HD: SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
>
> The after-insert trigger in turn calls 2 more triggers and inserts records
> into another table depending on a condition.
>
> with all triggers enabled there are 8 inserts
Harry Mantheakis wrote:
> I am glad to report that the 'salami-slice' approach worked nicely
> - all done in about 2.5 hours.
Glad to hear it!
> The mystery remains, for me: why updating 100,000 records could
> complete in as little as 5 seconds, whereas an attempt to update
> a million rec
I am glad to report that the 'salami-slice' approach worked nicely - all
done in about 2.5 hours.
Instead of using an all-in-one-go statement, we executed 800 statements,
each updating 100,000 records. On average it took about 10 seconds for
each statement to return.
This is "thinking out of
Hi,
DB: PostgreSQL 8.4.8
OS: Debian
HD: SAS 10k rpm
shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
The after-insert trigger in turn calls 2 more triggers and inserts records
into another table depending on a condition.
with all triggers enabled there are 8 inserts and 32 updates (app
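A side note on that 4096: if it is set without a unit in postgresql.conf, 8.4 interprets shared_buffers in 8 kB pages, i.e. only 32 MB. A sketch with explicit units, assuming 4 GB was intended:

    shared_buffers = 4096MB        # a bare 4096 would mean 4096 x 8 kB = 32 MB
    effective_cache_size = 8GB     # planner estimate of OS cache + shared buffers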
Jenish wrote:
> I am using postgres 8.4.x
With x being what? On what OS and hardware?
> The table has 3 before-insert triggers and one after-insert
> trigger.
>
> With all triggers enabled it is inserting only 4-5 records per
> second.
>
> But if I disable the after-insert trigger it is able to
Jens Hoffrichter wrote:
> I'm having trouble getting rid of a sequential scan on a table
> with roughly 120k entries in it.
Please post your configuration information and some information
about your hardware and OS.
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Since the table scan went
Hi All,
I am facing a performance issue with inserts into a table.
I am using postgres 8.4.x
The table has 3 before-insert triggers and one after-insert trigger.
With all triggers enabled it is inserting only 4-5 records per second.
But if I disable the after-insert trigger it is able to insert
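To isolate the cost of that trigger, it can be toggled per table; the names below are hypothetical, and DISABLE TRIGGER skips the trigger's work entirely, so use it only on data you can discard:

    ALTER TABLE my_table DISABLE TRIGGER my_after_insert_trg;
    -- run the insert benchmark here
    ALTER TABLE my_table ENABLE TRIGGER my_after_insert_trg;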
Hi everyone,
I'm having trouble getting rid of a sequential scan on a table with roughly
120k entries in it. Creating an index on the particular column that triggers
the sequential scan doesn't do anything, and VACUUM and ANALYZE have been
done on the table.
The table in question has the following d
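A quick diagnostic for that case is to penalize sequential scans for one session and see whether the planner will touch the index at all; the query shape is guessed from the plan fragment quoted earlier (posts.poster_id), with a made-up value:

    SET enable_seqscan = off;   -- diagnosis only, never in production
    EXPLAIN ANALYZE
    SELECT * FROM posts WHERE poster_id = 42;
    RESET enable_seqscan;

If the query actually returns a large fraction of the 120k rows, the sequential scan may simply be the cheaper plan, and the index will be ignored regardless.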