Hi,
I am performing DB inserts and updates for 3000+ records, and while doing so
CPU utilization goes to 100%, with 67% of the CPU used by postgres.
I have also optimized the queries.
Is there any way to optimize CPU utilization for postgres?
I am currently using postgres
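A common first step for this kind of load (a sketch only, not from the original thread; the `accounts` table and its columns are hypothetical) is to batch the rows into a single transaction and use multi-row INSERTs or COPY, so the server does far less per-statement work:

```sql
-- Hypothetical table; batching 3000+ rows in one transaction avoids
-- per-statement commit overhead.
BEGIN;

-- Multi-row INSERT: far fewer round trips than 3000 single-row INSERTs.
INSERT INTO accounts (id, balance)
VALUES (1, 100.00),
       (2, 250.50),
       (3, 75.25);   -- ...continue in batches of a few hundred rows

-- For updates driven by incoming data, one set-based UPDATE usually
-- beats a loop of per-row UPDATEs.
UPDATE accounts a
SET balance = v.balance
FROM (VALUES (1, 110.00), (2, 260.50)) AS v(id, balance)
WHERE a.id = v.id;

COMMIT;

-- For the largest loads, COPY is cheaper still:
-- COPY accounts (id, balance) FROM '/path/to/data.csv' WITH CSV;
```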
I am reposting as my original query was mangled
The link to the explain plan is here as it does not paste well into
the email body.
http://explain.depesz.com/s/kHa
The machine is a 2-CPU quad-core 5430 with 32G RAM and a single
6x450G 15K RAID-10 array
1G work_mem
default_statistics_target=1000
2010/6/1 Torsten Zühlsdorff :
> Hello,
>
> I have a set of unique data of about 150,000,000 rows. Regularly I get a
> list of data which contains multiple times as many rows as the already stored
> ones, often around 2,000,000,000 rows. Within these rows are many duplicates
> and often the set of al
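A common pattern for merging such a batch while dropping duplicates (a sketch under assumptions: `big_table`, `staging`, and the `key` column are hypothetical names, and the 8.x releases discussed in this thread have no ON CONFLICT) is to load into an unindexed staging table and insert with an anti-join:

```sql
-- Load the raw batch (duplicates included) into an unindexed staging table.
CREATE TEMP TABLE staging (key text);
COPY staging (key) FROM '/path/to/batch.txt';

-- Insert only keys not already present; DISTINCT collapses in-batch
-- duplicates, NOT EXISTS skips rows already stored in the target.
INSERT INTO big_table (key)
SELECT DISTINCT s.key
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM big_table b WHERE b.key = s.key
);
```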
Link to plan
http://explain.depesz.com/s/kHa
On Thu, Jun 3, 2010 at 11:43 AM, Andy Colson wrote:
> On 6/3/2010 12:47 PM, Anj Adu wrote:
>>
>> I can't seem to pinpoint why this query is slow. No full table scans
>> are being done. The hash join is taking maximum time. The table
>> dev4_act_action
On Thu, Jun 3, 2010 at 11:19 AM, Torsten Zühlsdorff
wrote:
> Scott Marlowe wrote:
>>
>> On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff
>> wrote:
>>>
>>> Hello,
>>>
>>> I have a set of unique data of about 150,000,000 rows. Regularly I get a
>>> list of data, which contains multiple t
Greg Smith wrote:
> I think this can only be resolved usefully for all of us at the
> RAID firmware level. If the controller had some logic that said
> "it's OK to not flush the cache when that call comes in if my
> battery is working fine", that would make this whole problem go
> away.
That
On Thu, Jun 3, 2010 at 1:31 PM, Greg Smith wrote:
> Scott Marlowe wrote:
>>
>> I think it's a case of the quickest, simplest answer to semi-new tech.
>> Not sure what to do with barriers? Just flush the whole cache.
>>
>
> Well, that really is the only useful thing you can do with regular SATA
> drives; the ATA command set isn't any finer grained than that
Scott Marlowe wrote:
I think it's a case of the quickest, simplest answer to semi-new tech.
Not sure what to do with barriers? Just flush the whole cache.
Well, that really is the only useful thing you can do with regular SATA
drives; the ATA command set isn't any finer grained than that
Scott Marlowe wrote:
> I think it's a case of the quickest, simplest answer to semi-new
> tech. Not sure what to do with barriers? Just flush the whole
> cache.
>
> I'm guessing that this will get optimized in the future.
Let's hope so.
That reminds me, the write barrier concept is at lea
On Thu, Jun 3, 2010 at 12:40 PM, Kevin Grittner
wrote:
>
> Yeah, I read that long ago and I've disabled write barriers because
> of it; however, it still seems wrong that the RAID controller
> insists on flushing to the drives in write-back mode. Here are my
> reasons for wishing it was otherwise
On 6/3/2010 12:47 PM, Anj Adu wrote:
I can't seem to pinpoint why this query is slow. No full table scans
are being done. The hash join is taking maximum time. The table
dev4_act_action has only 3 rows.
box is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0
1G work_mem
20G effective_cache_size
Greg Smith wrote:
> Kevin Grittner wrote:
>> I've seen this, too (with xfs). Our RAID controller, in spite of
>> having BBU cache configured for writeback, waits for actual
>> persistence on disk for write barriers (unlike for fsync). This
>> does strike me as surprising to the point of bordering on qualifying
>> as a bug.
Craig James wrote:
This is really hard to believe, because the bonnie++ numbers and dd(1)
numbers look good (see my original post). But it's totally
repeatable. It must be some really unfortunate "just missed the next
sector going by the write head" problem.
Commit performance is a separate
Kevin Grittner wrote:
I've seen this, too (with xfs). Our RAID controller, in spite of
having BBU cache configured for writeback, waits for actual
persistence on disk for write barriers (unlike for fsync). This
does strike me as surprising to the point of bordering on qualifying
as a bug.
Compl
I can't seem to pinpoint why this query is slow. No full table scans
are being done. The hash join is taking maximum time. The table
dev4_act_action has only 3 rows.
box is a 2 cpu quad core intel 5430 with 32G RAM... Postgres 8.4.0
1G work_mem
20G effective_cache_size
random_page_cost=1
default_statistics_target
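One way to narrow down a slow hash join like this (a sketch; only `dev4_act_action` comes from the thread, the other table and all column names are guesses) is to compare the planner's estimated row counts against the actual counts in EXPLAIN ANALYZE output, since a bad estimate on the large side of the join often explains the plan:

```sql
-- Hypothetical query shape; only dev4_act_action is from the thread.
EXPLAIN ANALYZE
SELECT a.action_name, count(*)
FROM dev4_act_log l
JOIN dev4_act_action a ON a.action_id = l.action_id
WHERE l.log_date >= '2010-06-01'
GROUP BY a.action_name;

-- If estimated rows differ wildly from actual rows in the output,
-- refresh statistics on the joined tables and re-test.
ANALYZE dev4_act_log;
ANALYZE dev4_act_action;
```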
Matthew Wakeling wrote:
> On Thu, 3 Jun 2010, Craig James wrote:
>>> Also, are barriers *on* on the RAID1 mount and off on the RAID10
one?
>>
>> It was the barriers. "barrier=1" isn't just a bad idea on ext4,
>> it's a disaster.
>
> This worries me a little. Does your array have a battery-backed cache?
On Thu, 3 Jun 2010, Craig James wrote:
Also, are barriers *on* on the RAID1 mount and off on the RAID10 one?
It was the barriers. "barrier=1" isn't just a bad idea on ext4, it's a
disaster.
This worries me a little. Does your array have a battery-backed cache? If
so, then it should be fast
On 6/2/10 4:40 PM, Mark Kirkwood wrote:
On 03/06/10 11:30, Craig James wrote:
I'm testing/tuning a new midsize server and ran into an inexplicable
problem. With an RAID10 drive, when I move the WAL to a separate RAID1
drive, TPS drops from over 1200 to less than 90! I've checked
everything and c
Craig James wrote:
I'm testing/tuning a new midsize server and ran into an inexplicable
problem. With an RAID10 drive, when I move the WAL to a separate
RAID1 drive, TPS drops from over 1200 to less than 90!
Normally <100 TPS means that the write cache on the WAL drive volume is
disabled (or
Hi,
I'm sorry for not posting this first.
The server is the following and is being used exclusively for this
PostgreSQL instance:
PostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real (GCC)
4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit
Amazon EC2 Large Instance, 7.5GB memory, 64-bit
Thi
On Wed, Jun 2, 2010 at 7:30 PM, Craig James wrote:
> I'm testing/tuning a new midsize server and ran into an inexplicable
> problem. With an RAID10 drive, when I move the WAL to a separate RAID1
> drive, TPS drops from over 1200 to less than 90! I've checked everything
> and can't find a reason
On Wed, 2 Jun 2010, Jori Jovanovich wrote:
(2) Making the query faster by making the string match LESS specific (odd,
seems like it should be MORE)
No, that's the way round it should be. The LIMIT changes it all. Consider
if you have a huge table, and half of the entries match your WHERE clause
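The intuition above can be sketched with two hypothetical queries (the `events` table, its `status` column, and the row counts are all invented for illustration): with `LIMIT 10`, a sequential scan can stop as soon as it finds 10 matches, so a *less* selective predicate finishes sooner:

```sql
-- Suppose events has 10,000,000 rows.

-- Predicate matches ~50% of rows: a scan stops after reading
-- roughly 20 rows to collect 10 matches.
SELECT * FROM events WHERE status = 'ok' LIMIT 10;

-- Predicate matches ~0.01% of rows: a scan must read roughly
-- 100,000 rows on average to find 10 matches, unless an index helps.
SELECT * FROM events WHERE status = 'very_rare' LIMIT 10;
```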