Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-30 Thread Markus Bertheau
Turns out the man page of vmstat in procps was changed on Oct 8 2002:

http://cvs.sourceforge.net/viewcvs.py/procps/procps/vmstat.8?r1=1.1&r2=1.2

in reaction to a debian bug report:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=157935

-- 
Markus Bertheau [EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-27 Thread Matt Clark

and certainly anyone who's been around a computer more than a week or
two knows which direction in and out are customarily seen from.
regards, tom lane

Apparently not whoever wrote the man page that everyone copied ;-)
Interesting. I checked this on several machines. They actually say 
different things.

Redhat 9- bi: Blocks sent to a block device (blocks/s).
Latest Cygwin- bi: Blocks sent to a block device (blocks/s).
Redhat 7.x- bi: Blocks sent to a block device (blocks/s).
Redhat AS3- bi: blocks sent out to a block device (in blocks/s)
I would say that I probably agree, things should be relative to the 
cpu. However, it doesn't seem to be something that was universally 
agreed upon. Or maybe the man-pages were all wrong, and only got 
updated recently.

Looks like the man pages are wrong, for RH7.3 at least.  It says bi is 
'blocks written', but an actual test like 'dd if=/dev/zero of=/tmp/test 
bs=1024 count=16384' on an otherwise nearly idle RH7.3 box gives:
   procs                      memory      swap          io     system       cpu
 r  b  w   swpd   free   buff   cache   si  so    bi    bo    in    cs  us sy  id
 0  0  0  75936 474704 230452  953580    0   0     0     0   106  2527   0  0  99
 0  0  0  75936 474704 230452  953580    0   0     0 16512   376  2572   0  2  98
 0  0  0  75936 474704 230452  953580    0   0     0     0   105  2537   0  0 100

Which is in line with bo being 'blocks written'.
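As a sanity check on the dd experiment, the numbers line up if vmstat's bi/bo unit is 1 KiB blocks; a quick back-of-envelope (treating the small excess as filesystem metadata, which is an assumption):

```python
# dd wrote 16384 blocks of 1 KiB; the 'bo' burst above showed 16512 blocks.
# If one vmstat block == 1 KiB, the two figures should nearly match.
written_kib = 16384            # dd bs=1024 count=16384
observed_bo = 16512            # burst in the 'bo' column above
overhead = observed_bo - written_kib
print(overhead)                # 128 leftover blocks, presumably metadata/journal
```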
M


FW: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Rod Dutton
Eliminate that contention point, and you will have solved your problem.

I agree. If your updates are slow then you will get a queue building up.

Make sure that:
1) all your indexing is optimised.
2) you are doing regular vacuuming (bloated tables will cause a slow down
due to swapping).
3) your max_fsm_pages setting is large enough - it needs to be big enough to
hold all the transactions between vacuums (+ some spare for good measure).
4) do a full vacuum - do one to start and then do one after you have had 2
and 3 (above) in place for a while - if the full vacuum handles lots of dead
tuples then your max_fsm_pages setting is too low.
5) Also try reindexing or drop/recreate the indexes in question as...
PostgreSQL is unable to reuse B-tree index pages in certain cases. The
problem is that if indexed rows are deleted, those index pages can only be
reused by rows with similar values. For example, if indexed rows are deleted
and newly inserted/updated rows have much higher values, the new rows can't
use the index space made available by the deleted rows. Instead, such new
rows must be placed on new index pages. In such cases, disk space used by
the index will grow indefinitely, even if VACUUM is run frequently. 

Are your updates directly executed or do you use stored procs?  We had a
recent problem with stored procs as they store a "one size fits all" query
plan when compiled - this can be less than optimum in some cases.

We have a similar sounding app to yours and if tackled correctly then all
the above will make a massive difference in performance.
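As a rough illustration of point 3 in the list above, max_fsm_pages has to cover the heap pages with reusable space that accumulate between vacuum runs; every number below is hypothetical, purely for the shape of the calculation:

```python
# Hypothetical sizing sketch for max_fsm_pages: it must be able to track
# the pages freed up between vacuum runs, or VACUUM cannot record them all.
updates_per_sec = 150            # assumed sustained UPDATE/DELETE rate
rows_per_page = 60               # assumed average rows per 8KB heap page
vacuum_interval_sec = 6 * 3600   # assumed: vacuum every 6 hours

dead_rows = updates_per_sec * vacuum_interval_sec
# assuming dead rows are spread evenly across pages:
pages_with_free_space = dead_rows // rows_per_page
print(dead_rows, pages_with_free_space)
```

If pages_with_free_space comes out larger than the configured max_fsm_pages, the setting is too low for that vacuum interval.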

Rod

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rod Taylor
Sent: 25 October 2004 22:19
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
 Hi,
 
  
 
 I am dealing with an app here that uses pg to handle a few thousand
 concurrent web users. It seems that under heavy load, the INSERT and
 UPDATE statements to one or two specific tables keep queuing up, to
 the count of 150+ (one table has about 432K rows, other has about
2.6Million rows), resulting in waits for other queries, and then

This isn't an index issue, it's a locking issue. Sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.





Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
It probably is locking issue. I got a long list of locks held when we ran select * 
from pg_locks during a peak time.

relation | database | transaction |  pid  |   mode   | granted 
--+--+-+---+--+-
17239 |17142 | |  3856 | AccessShareLock  | t
  |  |21196323 |  3875 | ExclusiveLock| t
16390 |17142 | |  3911 | AccessShareLock  | t
16595 |17142 | |  3782 | AccessShareLock  | t
17227 |17142 | |  3840 | AccessShareLock  | t
17227 |17142 | |  3840 | RowExclusiveLock | t
...
...


Vmstat would show a lot of disk IO at the same time.

Is this pointing towards a disk IO issue? (to that end, other than a higher CPU speed, 
and disabling HT, only thing changed is that it's RAID5 volume now, instead of a 
RAID10)
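Parenthetically, the RAID5-for-RAID10 swap is itself a plausible suspect for a write-heavy load: each small random write on RAID5 costs read-data + read-parity + write-data + write-parity (4 I/Os), versus 2 for a RAID10 mirror pair. A back-of-envelope comparison (disk count and per-disk IOPS are invented for illustration, not the actual hardware):

```python
# Small-random-write capacity, RAID10 vs RAID5, same spindles.
disks = 6
iops_per_disk = 150              # assumed ~10K RPM drive

raid10_write_iops = disks * iops_per_disk / 2   # 2 physical I/Os per write
raid5_write_iops = disks * iops_per_disk / 4    # 4 physical I/Os per write
print(raid10_write_iops, raid5_write_iops)
```

On those made-up numbers the same disks deliver half the random-write throughput under RAID5.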

-anjan


-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 25, 2004 5:19 PM
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
 Hi,
 
  
 
 I am dealing with an app here that uses pg to handle a few thousand
 concurrent web users. It seems that under heavy load, the INSERT and
 UPDATE statements to one or two specific tables keep queuing up, to
 the count of 150+ (one table has about 432K rows, other has about
 2.6Million rows), resulting in waits for other queries, and then

This isn't an index issue, it's a locking issue. Sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.






Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Rod Taylor
On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:
 It probably is locking issue. I got a long list of locks held when we ran select * 
 from pg_locks during a peak time.
 
 relation | database | transaction |  pid  |   mode   | granted 
 --+--+-+---+--+-
 17239 |17142 | |  3856 | AccessShareLock  | t

How many have granted = false?

 Vmstat would show a lot of disk IO at the same time.
 
 Is this pointing towards a disk IO issue?

Not necessarily. Is your IO reaching the limit or is it just heavy?




Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
None of the locks are in state false actually.

I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:

procs                        memory      swap          io     system        cpu
 r  b   swpd    free   buff   cache   si   so    bi    bo    in    cs us sy wa id
 1  2      0 3857568 292936 2791876    0    0     0 44460  1264  2997 23 13 22 41
 2  2      0 3824668 292936 2791884    0    0     0 25262  1113  4797 28 12 29 31
 2  3      0 3784772 292936 2791896    0    0     0 38988  1468  6677 28 12 48 12
 2  4      0 3736256 292936 2791904    0    0     0 50970  1530  5217 19 12 49 20
 4  2      0 3698056 292936 2791908    0    0     0 43576  1369  7316 20 15 35 30
 2  1      0 3667124 292936 2791920    0    0     0 39174  1444  4659 25 16 35 24
 6  1      0 3617652 292936 2791928    0    0     0 52430  1347  4681 25 19 20 37
 1  3      0     352 292936 2790868    0    0     0 40156  1439  4394 20 14 29 37
 6  0      0 3797488 292936 2568648    0    0     0 17706  2272 21534 28 23 19 30
 0  0      0 3785396 292936 2568736    0    0     0  1156  1237 14057 33  8  0 59
 0  0      0 3783568 292936 2568736    0    0     0   704   512  1537  5  2  1 92
 1  0      0 3783188 292936 2568752    0    0     0   842   613  1919  6  1  1 92

-anjan

-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 26, 2004 1:49 PM
To: Anjan Dave
Cc: Postgresql Performance
Subject: RE: [PERFORM] can't handle large number of INSERT/UPDATEs

On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:
 It probably is locking issue. I got a long list of locks held when we
ran select * from pg_locks during a peak time.
 
 relation | database | transaction |  pid  |   mode   | granted


--+--+-+---+--+-
 17239 |17142 | |  3856 | AccessShareLock  | t

How many have granted = false?

 Vmstat would show a lot of disk IO at the same time.
 
 Is this pointing towards a disk IO issue?

Not necessarily. Is your IO reaching the limit or is it just heavy?





Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Josh Berkus
Anjan,

 It probably is locking issue. I got a long list of locks held when we ran
 select * from pg_locks during a peak time.

Do the back-loaded tables have FKs on them? This would be a likely cause 
of lock contention, and thus serializing inserts/updates to the tables.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Matt Clark

I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:
 6  1      0 3617652 292936 2791928    0    0     0 52430  1347  4681 25 19 20 37
 

Assuming that's the output of 'vmstat 1' and not some other delay, 
50MB/second of sustained writes is usually considered 'a lot'. 
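The 50MB/s figure comes straight from the peak 'bo' sample, assuming vmstat's unit here is 1 KiB blocks at a 1-second interval:

```python
# Peak 'bo' sample from the vmstat output quoted above, converted to MB/s
# assuming 1 KiB blocks reported once per second.
bo_blocks_per_sec = 52430
mb_per_sec = bo_blocks_per_sec / 1024
print(round(mb_per_sec, 1))    # 51.2
```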



Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
That is 1 or maybe 2 second interval.

One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
it's the 'bo' column that shows high numbers (reads from disk). With so
many INSERT/UPDATEs, I would expect it the other way around...

-anjan



-Original Message-
From: Matt Clark [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 26, 2004 2:29 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs


I don't have iostat on that machine, but vmstat shows a lot of writes
to
the drives, and the runnable processes are more than 1:

 6  1      0 3617652 292936 2791928    0    0     0 52430  1347  4681 25 19 20 37
  

Assuming that's the output of 'vmstat 1' and not some other delay, 
50MB/second of sustained writes is usually considered 'a lot'. 




Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
It just seems that the more activity there is (that is when there's a
lot of disk activity) the checkpoints happen quicker too.

Here's a snapshot from the /var/log/messages - 

Oct 26 17:21:22 vl-pe6650-003 postgres[13978]: [2-1] LOG:  recycled transaction log file 000B007E
Oct 26 17:21:22 vl-pe6650-003 postgres[13978]: [3-1] LOG:  recycled transaction log file 000B007F
...
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [2-1] LOG:  recycled transaction log file 000B0080
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [3-1] LOG:  recycled transaction log file 000B0081
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled transaction log file 000B0082
...
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled transaction log file 000B0083
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled transaction log file 000B0084
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled transaction log file 000B0085
...

I have increased them from the default 3 to 15. Haven't altered the
frequency, though.
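Reading the WAL volume off those log timestamps gives a rough throughput estimate (this assumes the standard 16MB WAL segment size; the "checkpoint every ~5 minutes" reading is inferred from the log above):

```python
from datetime import datetime

# Three segments were recycled at 17:26:25, roughly five minutes after the
# previous recycle burst at 17:21:22 -- estimate WAL MB written per minute.
t_prev = datetime(2004, 10, 26, 17, 21, 22)
t_next = datetime(2004, 10, 26, 17, 26, 25)
interval_min = (t_next - t_prev).total_seconds() / 60   # ~5.05 minutes
segment_mb = 16                                         # standard WAL segment size
wal_mb_per_min = 3 * segment_mb / interval_min
print(round(interval_min, 2), round(wal_mb_per_min, 1))
```

Roughly 9-10 MB of WAL per minute on that reading, sustained, which is consistent with the write-saturation theory.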

Thanks,
Anjan 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 26, 2004 5:53 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs 

Anjan Dave [EMAIL PROTECTED] writes:
 None of the locks are in state false actually.

In that case you don't have a locking problem.

 I don't have iostat on that machine, but vmstat shows a lot of writes
to
 the drives, and the runnable processes are more than 1:

I get the impression that you are just saturating the write bandwidth of
your disk :-(

It's fairly likely that this happens during checkpoints.  Look to see if
the postmaster has a child that shows itself as a checkpointer in ps
when the saturation is occurring.  You might be able to improve matters
by altering the checkpoint frequency parameters (though beware that
either too small or too large will likely make matters even worse).

regards, tom lane




Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
Ok, i was thinking from the disk perspective. Thanks!

-Original Message- 
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tue 10/26/2004 6:37 PM 
To: Anjan Dave 
Cc: Matt Clark; Rod Taylor; Postgresql Performance 
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs 



Anjan Dave [EMAIL PROTECTED] writes: 
 One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, 
 it's the 'bo' column that shows high numbers (reads from disk). With so 
 many INSERT/UPDATEs, I would expect it the other way around... 

Er ... it *is* the other way around.  bi is blocks in (to the CPU), 
bo is blocks out (from the CPU). 

regards, tom lane 




Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Curtis Zinzilieta

On Tue, 26 Oct 2004, Tom Lane wrote:

 Anjan Dave [EMAIL PROTECTED] writes:
  One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
  it's the 'bo' column that shows high numbers (reads from disk). With so
  many INSERT/UPDATEs, I would expect it the other way around...
 
 Er ... it *is* the other way around.  bi is blocks in (to the CPU),
 bo is blocks out (from the CPU).
 
   regards, tom lane

Ummm.

[EMAIL PROTECTED] T2]$ man vmstat
snip
FIELD DESCRIPTIONS
snip
   IO
   bi: Blocks sent to a block device (blocks/s).
   bo: Blocks received from a block device (blocks/s).

And on my read-heavy 7.4.2 system (running on rh8 at the moment)
(truncated for readability...)

[EMAIL PROTECTED] T2]# vmstat 1
   procs                      memory    swap          io    system      cpu
 r  b  w   swpd   free   buff  cache   si  so    bi    bo    in    cs  us
 0  0  0 127592  56832 365496 2013788   0   1 3 64 0   4   
 2  0  0 127592  56868 365496 2013788   0   0 0 0  363   611   1   
 1  0  0 127592  57444 365508 2013788   0   0 8   972 1556  3616  11  
 0  0  1 127592  57408 365512 2013800   0   0 0   448  614  1216   5   
 0  0  0 127592  56660 365512 2013800   0   0 0 0  666  1150   6   
 0  3  1 127592  56680 365512 2013816   0   016   180 1280  2050   2   
 0  0  0 127592  56864 365516 2013852   0   020   728 2111  4360  11   
 0  0  0 127592  57952 365544 2013824   0   0 0   552 1153  2002  10   
 0  0  0 127592  57276 365544 2013824   0   0 0   504  718     5   
 1  0  0 127592  57244 365544 2013824   0   0 0   436 1495  2366   7   
 0  0  0 127592  57252 365544 2013824   0   0 0 0  618  1380   5   
 0  0  0 127592  57276 365556 2014192   0   0   360  1240 2418  5056  14   
 2  0  0 127592  56664 365564 2014176   0   0 0   156  658  1349   5   
 1  0  0 127592  55864 365568 2014184   0   0 0  1572 1388  3598   9   
 2  0  0 127592  56160 365572 2014184   0   0 0   536 4860  6621  13   

Which seems appropriate for both the database and the man page

-Curtis





Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Josh Berkus
Anjan,

 Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled transaction log file 000B0082
 ...
 Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled transaction log file 000B0083
 Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:  recycled transaction log file 000B0084
 Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [4-1] LOG:  recycled transaction log file 000B0085

Looks like you're running out of disk space for pending transactions.  Can you 
afford more checkpoint_segments?   Have you considered checkpoint_siblings?

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
Josh,
 
I have increased them to 30, will see if that helps. Space is not a concern. Slightly
longer recovery time could be fine too. Wonder what people use (examples) for this
value for high volume databases (except for dump/restore)...?

I don't know what checkpoint_sibling is. I'll read about it if there's some info on it
somewhere.
 
Thanks,
Anjan
 




Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Tom Lane
Curtis Zinzilieta [EMAIL PROTECTED] writes:
 On Tue, 26 Oct 2004, Tom Lane wrote:
 Er ... it *is* the other way around.  bi is blocks in (to the CPU),
 bo is blocks out (from the CPU).

 Ummm.
 [EMAIL PROTECTED] T2]$ man vmstat
bi: Blocks sent to a block device (blocks/s).
bo: Blocks received from a block device (blocks/s).

You might want to have a word with your OS vendor.  My vmstat
man page says

   IO
   bi: Blocks received from a block device (blocks/s).
   bo: Blocks sent to a block device (blocks/s).

and certainly anyone who's been around a computer more than a week or
two knows which direction in and out are customarily seen from.

regards, tom lane



Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Iain
Turbo Linux 7 seems to be agreeing with Curtis:

bi: Blocks sent to a block device (blocks/s).
bo: Blocks received from a block device (blocks/s).

(The original is in Japanese; bi says "blocks sent to a block device" and bo is
"blocks received from a block device".)

I don't know that much about it, but the actual output seems to suggest that
the man page is wrong. I find it just the slightest bit amusing that such
errors in the docs should be translated faithfully, when translating
invariably introduces errors of its own ;)

Regards
Iain

[PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-25 Thread Anjan Dave
Hi,

I am dealing with an app here that uses pg to handle a few
thousand concurrent web users. It seems that under heavy load, the INSERT and
UPDATE statements to one or two specific tables keep queuing up, to the count
of 150+ (one table has about 432K rows, other has about 2.6Million rows), resulting
in waits for other queries, and then everything piles up, with the
load average shooting up to 10+.

We (development) have gone through the queries/explain
analyzes and made sure the appropriate indexes exist among other efforts put
in.

I would like to know if there is anything that can be
changed for better from the systems perspective. Here's what I have done and
some recent changes from the system side:

-Upgraded from 7.4.0 to 7.4.1 sometime ago
-Upgraded from RH8 to RHEL 3.0
-The settings from postgresql.conf (carried over, basically) are:

    shared_buffers = 10240 (80MB)
    max_connections = 400
    sort_memory = 1024
    effective_cache_size = 262144 (2GB)
    checkpoint_segments = 15
    stats_start_collector = true
    stats_command_string = true

Rest everything is at default.

In /etc/sysctl.conf (512MB shared mem):

    kernel.shmall = 536870912
    kernel.shmmax = 536870912

-This is a new Dell 6650 (quad XEON 2.2GHz, 8GB RAM,
Internal HW RAID10), RHEL 3.0 (2.4.21-20.ELsmp), PG 7.4.1
-Vacuum Full run every day
-contrib/Reindex run every day
-Disabled HT in BIOS

I would greatly appreciate any helpful ideas.

Thanks in advance,

Anjan








Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-25 Thread Rod Taylor
On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
 Hi,
 
  
 
 I am dealing with an app here that uses pg to handle a few thousand
 concurrent web users. It seems that under heavy load, the INSERT and
 UPDATE statements to one or two specific tables keep queuing up, to
 the count of 150+ (one table has about 432K rows, other has about
 2.6Million rows), resulting in waits for other queries, and then

This isn't an index issue, it's a locking issue. Sounds like you have a
bunch of inserts and updates hitting the same rows over and over again.

Eliminate that contention point, and you will have solved your problem.

Feel free to describe the processes involved, and we can help you do
that.





Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-25 Thread Dustin Sallings
On Oct 25, 2004, at 13:53, Anjan Dave wrote:
I am dealing with an app here that uses pg to handle a few thousand 
concurrent web users. It seems that under heavy load, the INSERT and 
UPDATE statements to one or two specific tables keep queuing up, to 
the count of 150+ (one table has about 432K rows, other has about 
2.6Million rows), resulting in waits for other queries, and then 
everything piles up, with the load average shooting up to 10+.
	Depending on your requirements and all that, but I had a similar issue 
in one of my applications and made the problem disappear entirely by 
serializing the transactions into a separate thread (actually, a thread 
pool) responsible for performing these transactions.  This reduced the 
load on both the application server and the DB server.

	Not a direct answer to your question, but I've found that a lot of 
times when someone has trouble scaling a database application, much of 
the performance win can be in trying to be a little smarter about how 
and when the database is accessed.
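	A miniature sketch of that serializing pattern: all writes funnel through a queue drained by a single worker, so hot rows see one writer at a time instead of hundreds of contending backends (execute() here is a placeholder, not a real database driver call):

```python
import queue
import threading

write_q = queue.Queue()
results = []

def execute(stmt):
    # Placeholder for a real DB write; records the statement instead.
    results.append(stmt)

def writer():
    # Single worker: drains the queue, applying writes strictly in order.
    while True:
        stmt = write_q.get()
        if stmt is None:          # sentinel: shut down
            break
        execute(stmt)
        write_q.task_done()

t = threading.Thread(target=writer)
t.start()
# Many request handlers would enqueue here instead of writing directly.
for i in range(5):
    write_q.put(f"UPDATE counters SET n = n + 1 -- request {i}")
write_q.put(None)
t.join()
print(len(results))
```

In a real deployment the worker would hold one (or a small pool of) persistent connections, which also keeps connection count down.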

--
SPY  My girlfriend asked me which one I like better.
pub  1024/3CAE01D5 1994/11/03 Dustin Sallings [EMAIL PROTECTED]
|Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE
L___ I hope the answer won't upset her. 