Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Matt Clark
and certainly anyone who's been around a computer more than a week or two knows which direction "in" and "out" are customarily seen from. regards, tom lane Apparently not whoever wrote the man page that everyone copied ;-) Interesting. I checked this on several machines. They actually

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Iain
Turbo Linux 7 seems to be agreeing with Curtis. bi: ブロックデバイスに送られたブロック (blocks/s)。 bo: ブロックデバイスから… Sorry it's in Japanese, but bi says "blocks sent to block device" and bo is "blocks received from block device". I don't know that mu

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread John Meinel
Tom Lane wrote: Curtis Zinzilieta <[EMAIL PROTECTED]> writes: On Tue, 26 Oct 2004, Tom Lane wrote: Er ... it *is* the other way around. bi is blocks in (to the CPU), bo is blocks out (from the CPU). Ummm. [EMAIL PROTECTED] T2]$ man vmstat bi: Blocks sent to a block device (blocks/s).

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Tom Lane
Curtis Zinzilieta <[EMAIL PROTECTED]> writes: > On Tue, 26 Oct 2004, Tom Lane wrote: >> Er ... it *is* the other way around. bi is blocks in (to the CPU), >> bo is blocks out (from the CPU). > Ummm. > [EMAIL PROTECTED] T2]$ man vmstat >bi: Blocks sent to a block device (blocks/s). >

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
Josh, I have increased them to 30, will see if that helps. Space is not a concern. A slightly longer recovery time could be fine too. Wonder what people use (examples) for this value for high volume databases (except for dump/restore)...? I don't know what checkpoint_sibling is. I'll read about
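For anyone following along, the setting under discussion looks roughly like this in postgresql.conf (7.4-era parameter names; the values below are illustrative, not recommendations from this thread):

```ini
# postgresql.conf -- checkpoint tuning (illustrative values)
checkpoint_segments = 30     # more WAL segments => fewer, larger checkpoints
checkpoint_timeout  = 300    # seconds between forced checkpoints
# trade-off: larger values mean more WAL on disk and a longer
# crash-recovery replay, in exchange for less checkpoint I/O churn
```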

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Josh Berkus
Anjan, > Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG:  recycled > transaction > log file "000B0082" > ... > Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG:  recycled > transaction > log file "000B0083" > Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LO

Re: [PERFORM] [PATCHES] [HACKERS] ARC Memory Usage analysis

2004-10-26 Thread Josh Berkus
Thomas, > As a result, I was intending to inflate the value of > effective_cache_size to closer to the amount of unused RAM on some of > the machines I admin (once I've verified that they all have a unified > buffer cache). Is that correct? Currently, yes. Right now, e_c_s is used just to inform
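As a concrete sketch of what "inflating effective_cache_size toward unused RAM" looks like (the value below is made up for an 8GB box, not taken from the thread):

```ini
# postgresql.conf -- effective_cache_size is a planner hint only;
# it allocates nothing. In 7.4 the unit is 8KB disk pages.
# e.g. assuming ~6GB of OS buffer cache: 6GB / 8KB = 786432 pages
effective_cache_size = 786432
```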

Re: [PERFORM] [PATCHES] [HACKERS] ARC Memory Usage analysis

2004-10-26 Thread Thomas F.O'Connell
Simon, As a postgres DBA, I find your comments about how not to use effective_cache_size instructive, but I'm still not sure how I should arrive at a target value for it. On most of the machines on which I admin postgres, I generally set shared_buffers to 10,000 (using what seems to have been t

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Curtis Zinzilieta
On Tue, 26 Oct 2004, Tom Lane wrote: > "Anjan Dave" <[EMAIL PROTECTED]> writes: > > One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, > > it's the 'bo' column that shows high numbers (reads from disk). With so > > many INSERT/UPDATEs, I would expect it the other way around... >

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
Ok, I was thinking from the disk perspective. Thanks! -Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED] Sent: Tue 10/26/2004 6:37 PM To: Anjan Dave Cc: Matt Clark; Rod Taylor; Postgresql Performance Subject: Re: [PERFORM] can't

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Tom Lane
"Anjan Dave" <[EMAIL PROTECTED]> writes: > One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, > it's the 'bo' column that shows high numbers (reads from disk). With so > many INSERT/UPDATEs, I would expect it the other way around... Er ... it *is* the other way around. bi is blo

Re: [PERFORM] Sequential Scan with LIMIT

2004-10-26 Thread John Meinel
Jaime Casanova wrote: [...] I'm not sure. They all return the same information. of course, both queries will return the same but that's just because you forced it. LIMIT and DISTINCT are different things so they behave and are planned differently. What's also weird is stuff like: SELECT DISTINCT(NU

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
It just seems that the more activity there is (that is when there's a lot of disk activity) the checkpoints happen quicker too. Here's a snapshot from the /var/log/messages - Oct 26 17:21:22 vl-pe6650-003 postgres[13978]: [2-1] LOG: recycled transaction log file "000B007E" Oct 26 17:21:

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Tom Lane
"Anjan Dave" <[EMAIL PROTECTED]> writes: > None of the locks are in state false actually. In that case you don't have a locking problem. > I don't have iostat on that machine, but vmstat shows a lot of writes to > the drives, and the runnable processes are more than 1: I get the impression that

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
That is a 1 or maybe 2 second interval. One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly, it's the 'bo' column that shows high numbers (reads from disk). With so many INSERT/UPDATEs, I would expect it the other way around... -anjan -Original Message- From: Matt Clark [

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
Andrew/Josh, Josh also suggested to check for any FK/referential integrity checks, but I am told that we don't have any foreign key constraints. Thanks, anjan -Original Message- From: Andrew McMillan [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 26, 2004 4:51 PM To: Anjan Dave Cc: [E

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Andrew McMillan
On Mon, 2004-10-25 at 16:53 -0400, Anjan Dave wrote: > Hi, > > > > I am dealing with an app here that uses pg to handle a few thousand > concurrent web users. It seems that under heavy load, the INSERT and > UPDATE statements to one or two specific tables keep queuing up, to > the count of 150+

Re: [PERFORM] Sequential Scan with LIMIT

2004-10-26 Thread Jaime Casanova
--- John Meinel <[EMAIL PROTECTED]> wrote: > Curt Sampson wrote: > > On Sun, 24 Oct 2004, John Meinel wrote: > > > > > >>I was looking into another problem, and I found > something that surprised > >>me. If I'm doing "SELECT * FROM mytable WHERE col > = 'myval' LIMIT 1.". > >>Now "col" is in

Re: [PERFORM] Measuring server performance with psql and pgAdmin

2004-10-26 Thread Josh Berkus
Joost, > 1. The manual says about "explain analyze" : "The ANALYZE option causes the > statement to be actually executed, not only planned. The total elapsed time > expended within each plan node (in milliseconds) and total number of rows > it actually returned are added to the display." Does this
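The behavior Joost quotes from the manual is easy to see directly; a minimal sketch (table and column names are made up):

```sql
-- EXPLAIN alone only plans the query; adding ANALYZE actually runs it
-- and annotates each plan node with elapsed time (ms) and actual rows.
EXPLAIN ANALYZE SELECT * FROM mytable WHERE col = 'myval';
```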

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Matt Clark
I don't have iostat on that machine, but vmstat shows a lot of writes to the drives, and the runnable processes are more than 1: 6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25 19 20 37 Assuming that's the output of 'vmstat 1' and not some other delay, 50MB/second of sus
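If the column order is ever in doubt, pulling bi/bo out by position removes the ambiguity. A sketch using the vmstat sample from this message (field positions assume the standard procps layout: bi is field 9, bo is field 10):

```shell
# Parse one 'vmstat 1' sample line; in procps layout bi = field 9, bo = field 10
line="6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25 19 20 37"
bi=$(echo "$line" | awk '{print $9}')
bo=$(echo "$line" | awk '{print $10}')
echo "bi=$bi bo=$bo"
# -> bi=0 bo=52430 : all the block traffic is in the 'bo' column,
# which at ~1KB blocks is the ~50MB/s of sustained writes noted above
```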

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Josh Berkus
Anjan, > It probably is locking issue. I got a long list of locks held when we ran > select * from pg_locks during a peak time. Do the back-loaded tables have FKs on them? This would be a likely cause of lock contention, and thus serializing inserts/updates to the tables. -- --Josh Josh B

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
None of the locks are in state false actually. I don't have iostat on that machine, but vmstat shows a lot of writes to the drives, and the runnable processes are more than 1: procs memory swap io system cpu r b swpd free buff cache si so bi

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Rod Taylor
On Tue, 2004-10-26 at 13:42, Anjan Dave wrote: > It probably is locking issue. I got a long list of locks held when we ran select * > from pg_locks during a peak time. > > relation | database | transaction | pid | mode | granted > --+--+-+---+---

Re: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Anjan Dave
It probably is locking issue. I got a long list of locks held when we ran select * from pg_locks during a peak time. relation | database | transaction | pid | mode | granted --+--+-+---+--+- 17239 |17142 |
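A raw pg_locks listing is hard to read because it shows relation OIDs; joining to pg_class turns them into names. A sketch against the 7.4-era catalogs (not taken from the thread):

```sql
-- Ungranted rows are the waiters; 'mode' shows what they are queued behind
SELECT l.pid, c.relname, l.mode, l.granted
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
ORDER BY l.granted, c.relname;
```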

Re: [PERFORM] Large Database Performance suggestions

2004-10-26 Thread Tom Lane
Joshua Marsh <[EMAIL PROTECTED]> writes: > shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each This is on the small side for an 8G machine. I'd try 1 or so. > sort_mem = 4096000 Yikes. You do realize you just said that *each sort operation* can use 4G? (Actually,
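The arithmetic behind Tom's "Yikes": sort_mem is measured in KB and applies *per sort operation*, and one query (or many concurrent ones) can run several sorts at once. A hedged sketch of saner 7.4-era values (illustrative, not from the thread):

```ini
# postgresql.conf -- sort_mem is KB per sort, not per server:
# sort_mem = 4096000 permits ~4GB for *each* concurrent sort.
sort_mem = 16384            # 16MB per sort operation
shared_buffers = 10000      # 8KB pages (~80MB); modest next to 8GB RAM
```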

Re: [PERFORM] Large Database Performance suggestions

2004-10-26 Thread Joshua Marsh
Thanks for all of your help so far. Here is some of the information you guys were asking for: Test System: 2x AMD Opteron 244 (1.8Ghz) 8GB RAM 7x 72GB SCSI HDD (Raid 5) postrgesql.conf information: #--- # RESOURCE USAGE (exc

[PERFORM] Measuring server performance with psql and pgAdmin

2004-10-26 Thread Joost Kraaijeveld
Hi all, I am (stilll) converting a database from a Clarion Topspeed database to Postgresql 7.4.5 on Debian Linux 2.6.6-1. The program that uses the database uses a query like "select * from table" to show the user the contents of a table. This query cannot be changed (it is generated by Clarion

FW: [PERFORM] can't handle large number of INSERT/UPDATEs

2004-10-26 Thread Rod Dutton
>>Eliminate that contention point, and you will have solved your problem. I agree, if your updates are slow then you will get a queue building up. Make sure that: 1) all your indexing is optimised. 2) you are doing regular vacuuming (bloated tables will cause a slowdown due to swapping). 3)
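Rod's first two checklist items map onto routine commands; a minimal sketch (table and column names are made up):

```sql
-- 2) regular vacuuming: reclaim dead tuples and refresh planner statistics
VACUUM ANALYZE mytable;

-- 1) check that the hot UPDATE path actually uses an index rather
-- than a sequential scan (read the plan EXPLAIN ANALYZE prints)
EXPLAIN ANALYZE UPDATE mytable SET val = val + 1 WHERE id = 42;
```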