Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-11-15 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Jignesh K. Shah wrote:
> 
> I  changed  CLOG Buffers to 16
> 
> Running the test again:
> # ./read.d
> dtrace: script './read.d' matched 2 probes
> CPU     ID                    FUNCTION:NAME
>   0   1027                       :tick-5sec
> 
>   /export/home0/igen/pgdata/pg_clog/0024        -2753028219296    1
>   /export/home0/igen/pgdata/pg_clog/0025        -2753028211104    1
> # ./read.d
> dtrace: script './read.d' matched 2 probes
> CPU     ID                    FUNCTION:NAME
>   1   1027                       :tick-5sec
> 
> # ./read.d
> dtrace: script './read.d' matched 2 probes
> CPU     ID                    FUNCTION:NAME
>   1   1027                       :tick-5sec
> 
> # ./read.d
> dtrace: script './read.d' matched 2 probes
> CPU     ID                    FUNCTION:NAME
>   0   1027                       :tick-5sec
> 
>   /export/home0/igen/pgdata/pg_clog/0025        -2753028194720    1
> 
> 
> So Tom seems to be correct that it is a case of CLOG buffer thrashing. 
> And since I saw the same problem with two different workloads, I think 
> the likelihood of people hitting this problem is pretty high.
> 
> Also, I am a bit surprised that CLogControlFile did not show up as being 
> hot. Maybe that's because not many writes are going on, or maybe because 
> I did not trace all 500 users to see their hot lock status.
> 
> 
> Dmitri has another workload to test; I might try that out later on to 
> see whether it causes a similar impact or not.
> 
> Of course I haven't seen my throughput go up yet since I am already CPU 
> bound... But this is still good, since the number of IOPS to the disk 
> (and hence system calls) is reduced.
> 
> 
> If I take this as my baseline number, I can then proceed to hunt down 
> other bottlenecks.
> 
> 
> What's the view of the community?
> 
> Hunt down CPU utilization or lock waits next?
> 
> Your votes are crucial on where I put my focus.
> 
> Another thing Josh B told me to check out was the wal_writer_delay setting:
> 
> I have tried two settings with almost equal performance (with the CLOG 16 
> change): one with 100ms and the other at the default of 200ms. Based on 
> the runs, 100ms seemed slightly better than the default. (Plus the window 
> for losing data is reduced from 600ms to 300ms.)
> 
> Thanks.
> 
> Regards,
> Jignesh
> 
> 
> 
> 
> Tom Lane wrote:
> > "Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> >   
> >> So the ratio of reads vs writes to clog files is pretty huge..
> >> 
> >
> > It looks to me that the issue is simply one of not having quite enough
> > CLOG buffers.  Your first run shows 8 different pages being fetched and
> > the second shows 10.  Bearing in mind that we "pin" the latest CLOG page
> > into buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for
> > older pages, so what we've got here is thrashing for the available
> > slots.
> >
> > Try increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.
> >
> > regards, tom lane
> >

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Jignesh K. Shah


I  changed  CLOG Buffers to 16

Running the test again:
# ./read.d
dtrace: script './read.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  0   1027                       :tick-5sec

  /export/home0/igen/pgdata/pg_clog/0024        -2753028219296    1
  /export/home0/igen/pgdata/pg_clog/0025        -2753028211104    1

# ./read.d
dtrace: script './read.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  1   1027                       :tick-5sec

# ./read.d
dtrace: script './read.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  1   1027                       :tick-5sec

# ./read.d
dtrace: script './read.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  0   1027                       :tick-5sec

  /export/home0/igen/pgdata/pg_clog/0025        -2753028194720    1



So Tom seems to be correct that it is a case of CLOG buffer thrashing. 
And since I saw the same problem with two different workloads, I think 
the likelihood of people hitting this problem is pretty high.


Also, I am a bit surprised that CLogControlFile did not show up as being 
hot. Maybe that's because not many writes are going on, or maybe because 
I did not trace all 500 users to see their hot lock status.



Dmitri has another workload to test; I might try that out later on to 
see whether it causes a similar impact or not.


Of course I haven't seen my throughput go up yet since I am already CPU 
bound... But this is still good, since the number of IOPS to the disk 
(and hence system calls) is reduced.



If I take this as my baseline number, I can then proceed to hunt down 
other bottlenecks.



What's the view of the community?

Hunt down CPU utilization or lock waits next?

Your votes are crucial on where I put my focus.

Another thing Josh B told me to check out was the wal_writer_delay setting:

I have tried two settings with almost equal performance (with the CLOG 16 
change): one with 100ms and the other at the default of 200ms. Based on 
the runs, 100ms seemed slightly better than the default. (Plus the window 
for losing data is reduced from 600ms to 300ms.)


Thanks.

Regards,
Jignesh




Tom Lane wrote:

"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
  

So the ratio of reads vs writes to clog files is pretty huge..



It looks to me that the issue is simply one of not having quite enough
CLOG buffers.  Your first run shows 8 different pages being fetched and
the second shows 10.  Bearing in mind that we "pin" the latest CLOG page
into buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for
older pages, so what we've got here is thrashing for the available
slots.

Try increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.

regards, tom lane



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Tom Lane
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> So the ratio of reads vs writes to clog files is pretty huge..

It looks to me that the issue is simply one of not having quite enough
CLOG buffers.  Your first run shows 8 different pages being fetched and
the second shows 10.  Bearing in mind that we "pin" the latest CLOG page
into buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for
older pages, so what we've got here is thrashing for the available
slots.

Try increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.

regards, tom lane



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Jignesh K. Shah

Also, to give perspective on the equivalent writes to CLOG:

I used the following script, which runs for 10 seconds, to track all 
writes to the clog directory, and here is what it came up with (this is 
with 500 users running):


# cat write.d
#!/usr/sbin/dtrace -s

/* Count write(2) calls issued by postgres backends against files in the
   pg_clog directory, keyed by file name and the user buffer address (arg1). */
syscall::write:entry
/execname == "postgres" &&
 dirname(fds[arg0].fi_pathname) == "/export/home0/igen/pgdata/pg_clog"/
{
        @write[fds[arg0].fi_pathname, arg1] = count();
}

/* Stop after 10 seconds and print the aggregation. */
tick-10sec
{
        exit(0);
}

# ./write.d
dtrace: script './write.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  3   1026                      :tick-10sec

  /export/home0/igen/pgdata/pg_clog/001E        -2753028277088    1

#
I modified read.d to run for a 5-second interval:
# ./read.d
dtrace: script './read.d' matched 3 probes
CPU     ID                    FUNCTION:NAME
  0      1                           :BEGIN
  0   1027                       :tick-5sec

  /export/home0/igen/pgdata/pg_clog/001F        -2753028268896    1
  /export/home0/igen/pgdata/pg_clog/001F        -2753028252512    1
  /export/home0/igen/pgdata/pg_clog/001F        -2753028285280    2
  /export/home0/igen/pgdata/pg_clog/001F        -2753028277088    3
  /export/home0/igen/pgdata/pg_clog/001F        -2753028236128    3
  /export/home0/igen/pgdata/pg_clog/001E        -2753028285280    5
  /export/home0/igen/pgdata/pg_clog/001E        -2753028236128    9
  /export/home0/igen/pgdata/pg_clog/001E        -2753028277088   13
  /export/home0/igen/pgdata/pg_clog/001E        -2753028268896   15
  /export/home0/igen/pgdata/pg_clog/001E        -2753028252512   27

#

So the ratio of reads vs writes to clog files is pretty huge..
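
For reference, the modified 5-second read.d itself is not reproduced 
above. A minimal sketch in the same style -- reusing the dirname()/fds[] 
predicate from write.d and counting both reads and writes in one interval 
so the ratio can be taken from a single run -- would look roughly like 
this (the script name is made up):

# cat clog_rw.d
#!/usr/sbin/dtrace -s

/* Sketch only -- not the exact script used for the runs above.
   Count read(2) and write(2) calls made by postgres backends against
   files under pg_clog, keyed by syscall and file, over 5 seconds. */
syscall::read:entry,
syscall::write:entry
/execname == "postgres" &&
 dirname(fds[arg0].fi_pathname) == "/export/home0/igen/pgdata/pg_clog"/
{
        @calls[probefunc, fds[arg0].fi_pathname] = count();
}

tick-5sec
{
        exit(0);
}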


-Jignesh



Jignesh K. Shah wrote:

Tom,

Here is what I did:

I started aggregating all read information:

First I had also grouped by pid (arg0, arg1, pid), and the counts 
were all coming back as 1.


Then I grouped just by filename and location (arg0, arg1 of the reads), 
and the counts came back as:


# cat read.d
#!/usr/sbin/dtrace -s
syscall::read:entry
/execname=="postgres"/
{
   @read[fds[arg0].fi_pathname, arg1] = count();
}


# ./read.d
dtrace: script './read.d' matched 1 probe
^C

  /export/home0/igen/pgdata/pg_clog/0014        -2753028293472    1
  /export/home0/igen/pgdata/pg_clog/0014        -2753028277088    1
  /export/home0/igen/pgdata/pg_clog/0015        -2753028244320    2
  /export/home0/igen/pgdata/pg_clog/0015        -2753028268896   14
  /export/home0/igen/pgdata/pg_clog/0015        -2753028260704   25
  /export/home0/igen/pgdata/pg_clog/0015        -2753028252512   27
  /export/home0/igen/pgdata/pg_clog/0015        -2753028277088   28
  /export/home0/igen/pgdata/pg_clog/0015        -2753028293472   37



FYI, I pressed Ctrl-C within less than a second.

So to me it seems that multiple processes (different pids) are reading 
the same page. (This was with about 600 users active.)


Apparently we do have a problem in that we are reading the same buffer 
address again (the same as it not being cached anywhere, or not being 
found in any cache).


I reran the lock wait script on a couple of processes and did not see 
CLogControlFileLock as a problem:


# ./83_lwlock_wait.d 14341

Lock Id                    Mode            Count
  WALInsertLock            Exclusive           1
  ProcArrayLock            Exclusive          16

Lock Id                    Combined Time (ns)
  WALInsertLock                        383109
  ProcArrayLock                     198866236

# ./83_lwlock_wait.d 14607

Lock Id                    Mode            Count
  WALInsertLock            Exclusive           2
  ProcArrayLock            Exclusive          15

Lock Id                    Combined Time (ns)
  WALInsertLock                         55243
  ProcArrayLock                      69700140

#

What information would help you find out why it is reading the same page again?


-Jignesh



Jignesh K. Shah wrote:
I agree with Tom... somehow I think increasing NUM_CLOG_BUFFERS just 
pushes the symptom out to a later point. I promise to look into it more 
before making any recommendations to increase NUM_CLOG_BUFFERS.


Though "iGen" showed improvements in that area from increasing 
num_clog_buffers, EAStress showed no improvement. Another reason I think 
this is not the problem in 8.3beta1 is that the lock output clearly does 
not show CLOGControlFile as the issue, which I had seen in an earlier 
case. So I don't think that increasing NUM_CLOG_BUFFERS will change 
things here.


Now, I don't understand the code very well yet, but I see three hotspots 
and am not sure whether they are related to each other:
* ProcArrayLock waits - causing waits, as reported by the 
83_lockwait.d script

Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Jignesh K. Shah

Tom,

Here is what I did:

I started aggregating all read information:

First I had also grouped by pid (arg0, arg1, pid), and the counts 
were all coming back as 1.
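
That per-pid variant isn't shown verbatim in this mail; it differs from 
the read.d listed below only by adding pid to the aggregation key, 
roughly:

syscall::read:entry
/execname == "postgres"/
{
        /* keyed by (file, buffer address, pid), so each backend's
           reads are counted separately */
        @read[fds[arg0].fi_pathname, arg1, pid] = count();
}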


Then I grouped just by filename and location (arg0, arg1 of the reads), 
and the counts came back as:


# cat read.d
#!/usr/sbin/dtrace -s

/* Count read(2) calls issued by postgres backends, keyed by the file
   being read and the user buffer address (arg1) the data is read into. */
syscall::read:entry
/execname == "postgres"/
{
        @read[fds[arg0].fi_pathname, arg1] = count();
}


# ./read.d
dtrace: script './read.d' matched 1 probe
^C

  /export/home0/igen/pgdata/pg_clog/0014        -2753028293472    1
  /export/home0/igen/pgdata/pg_clog/0014        -2753028277088    1
  /export/home0/igen/pgdata/pg_clog/0015        -2753028244320    2
  /export/home0/igen/pgdata/pg_clog/0015        -2753028268896   14
  /export/home0/igen/pgdata/pg_clog/0015        -2753028260704   25
  /export/home0/igen/pgdata/pg_clog/0015        -2753028252512   27
  /export/home0/igen/pgdata/pg_clog/0015        -2753028277088   28
  /export/home0/igen/pgdata/pg_clog/0015        -2753028293472   37



FYI, I pressed Ctrl-C within less than a second.

So to me it seems that multiple processes (different pids) are reading 
the same page. (This was with about 600 users active.)


Apparently we do have a problem in that we are reading the same buffer 
address again (the same as it not being cached anywhere, or not being 
found in any cache).


I reran the lock wait script on a couple of processes and did not see 
CLogControlFileLock as a problem:


# ./83_lwlock_wait.d 14341

Lock Id                    Mode            Count
  WALInsertLock            Exclusive           1
  ProcArrayLock            Exclusive          16

Lock Id                    Combined Time (ns)
  WALInsertLock                        383109
  ProcArrayLock                     198866236

# ./83_lwlock_wait.d 14607

Lock Id                    Mode            Count
  WALInsertLock            Exclusive           2
  ProcArrayLock            Exclusive          15

Lock Id                    Combined Time (ns)
  WALInsertLock                         55243
  ProcArrayLock                      69700140

#
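
For completeness: 83_lwlock_wait.d itself is not included in this mail. 
A rough sketch of the idea -- assuming the 8.3 userland probes 
postgresql$1:::lwlock-startwait and lwlock-endwait with (lock id, mode) 
arguments, which should be checked against the installed probes, and 
omitting the translation of lock ids to names such as WALInsertLock that 
the real script does -- would be:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Rough sketch of an lwlock wait profiler for one backend ($1 = pid).
   Assumed probes: lwlock-startwait(lockid, mode), lwlock-endwait(lockid, mode).
   The real 83_lwlock_wait.d also maps lock ids to lock names. */
postgresql$1:::lwlock-startwait
{
        self->ts[arg0] = timestamp;
}

postgresql$1:::lwlock-endwait
/self->ts[arg0]/
{
        @cnt[arg0, arg1] = count();
        @wait[arg0] = sum(timestamp - self->ts[arg0]);
        self->ts[arg0] = 0;
}

tick-10sec
{
        printa("Lock id %d, mode %d: %@d waits\n", @cnt);
        printa("Lock id %d: %@d ns waited\n", @wait);
        exit(0);
}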

What information would help you find out why it is reading the same page again?


-Jignesh



Jignesh K. Shah wrote:
I agree with Tom... somehow I think increasing NUM_CLOG_BUFFERS just 
pushes the symptom out to a later point. I promise to look into it more 
before making any recommendations to increase NUM_CLOG_BUFFERS.



Though "iGen" showed improvements in that area from increasing 
num_clog_buffers, EAStress showed no improvement. Another reason I think 
this is not the problem in 8.3beta1 is that the lock output clearly does 
not show CLOGControlFile as the issue, which I had seen in an earlier 
case. So I don't think that increasing NUM_CLOG_BUFFERS will change 
things here.


Now, I don't understand the code very well yet, but I see three hotspots 
and am not sure whether they are related to each other:
* ProcArrayLock waits - causing waits, as reported by the 
83_lockwait.d script
* SimpleLruReadPage - causing read I/Os, as reported by 
iostat/rsnoop.d

* GetSnapshotData - causing CPU utilization, as reported by hotuser

But I will shut up and do more testing.

Regards,
Jignesh



Tom Lane wrote:

Josh Berkus <[EMAIL PROTECTED]> writes:
 
Actually, 32 made a significant difference as I recall ... do you 
still have the figures for that, Jignesh?



I'd want to see a new set of test runs backing up any call for a change
in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that
benchmarks using code from a few months back shouldn't carry a lot of
weight.

regards, tom lane
  




Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Jignesh K. Shah
I agree with Tom... somehow I think increasing NUM_CLOG_BUFFERS just 
pushes the symptom out to a later point. I promise to look into it more 
before making any recommendations to increase NUM_CLOG_BUFFERS.



Though "iGen" showed improvements in that area from increasing 
num_clog_buffers, EAStress showed no improvement. Another reason I think 
this is not the problem in 8.3beta1 is that the lock output clearly does 
not show CLOGControlFile as the issue, which I had seen in an earlier 
case. So I don't think that increasing NUM_CLOG_BUFFERS will change 
things here.


Now, I don't understand the code very well yet, but I see three hotspots 
and am not sure whether they are related to each other:
* ProcArrayLock waits - causing waits, as reported by the 
83_lockwait.d script
* SimpleLruReadPage - causing read I/Os, as reported by 
iostat/rsnoop.d

* GetSnapshotData - causing CPU utilization, as reported by hotuser

But I will shut up and do more testing.
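
(For reference, hotuser is the DTraceToolkit script of that name. A 
minimal stand-in that samples the on-CPU user functions of one backend 
would look roughly like the following; the 1001 Hz rate and 10-second 
window here are arbitrary choices, not what was used for the 
observations above.)

#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Sample the user-level PC of the traced process (run with -p <pid>)
   at ~1001 Hz and aggregate by function, so hot functions such as
   GetSnapshotData float to the top. */
profile-1001
/pid == $target && arg1/
{
        @samples[ufunc(arg1)] = count();
}

tick-10sec
{
        trunc(@samples, 20);    /* keep only the 20 hottest functions */
        printa(@samples);
        exit(0);
}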

Regards,
Jignesh



Tom Lane wrote:

Josh Berkus <[EMAIL PROTECTED]> writes:
  
Actually, 32 made a significant difference as I recall ... do you still have 
the figures for that, Jignesh?



I'd want to see a new set of test runs backing up any call for a change
in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that
benchmarks using code from a few months back shouldn't carry a lot of
weight.

regards, tom lane
  




Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Jignesh K. Shah
The problem I saw was first highlighted by EAStress runs with PostgreSQL 
on Solaris with 120-150 users. I just replicated it via the smaller 
internal benchmark that we use here to recreate that problem.


EAStress should be just fine to highlight it; just put pg_clog on 
O_DIRECT or something similar so that all I/Os go to disk, making it 
easier to observe.


In the meantime I will try to get more information.


Regards,
Jignesh


Tom Lane wrote:

Gregory Stark <[EMAIL PROTECTED]> writes:
  

Didn't we already go through this? He and Simon were pushing to bump up
NUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and
some other clog.c would have to be reengineered to scale well to larger
values. 



AFAIR we never did get any clear explanation of what the test case is.
I guess it must be write-mostly, else lazy XID assignment would have
helped this by reducing the rate of XID consumption.

It's still true that I'm leery of a large increase in the number of
buffers without reengineering slru.c.  That code was written on the
assumption that there were few enough buffers that a linear search
would be fine.  I'd hold still for 16, or maybe even 32, but I dunno
how much impact that will have for such a test case.

regards, tom lane



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-26 Thread Gregory Stark
"Josh Berkus" <[EMAIL PROTECTED]> writes:

> Actually, 32 made a significant difference as I recall ... do you still have 
> the figures for that, Jignesh?

Well, it made a difference, but it didn't remove the bottleneck; it just moved
it. IIRC, under that benchmark Jignesh was able to run efficiently with x
sessions with 8 clog buffers, x + 100 or so sessions with 16 clog buffers,
and x + 200 or so sessions with 32 clog buffers.

It happened that x + 200 was greater than the number of sessions he wanted to
run the benchmark at, so it helped the benchmark results quite a bit. But that
was just an artifact of how many sessions the benchmark needed. A user who
needs 1200 sessions, or who has a different transaction load, might find he
needs more clog buffers to alleviate the bottleneck. And of course most (all?)
normal users use far fewer sessions and won't run into this bottleneck at all.

Raising NUM_CLOG_BUFFERS just moves the arbitrary bottleneck around. This
benchmark is useful in that it gives us an idea of where the bottleneck lies
for various values of NUM_CLOG_BUFFERS, but it doesn't tell us what value
realistic users are likely to bump into.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-25 Thread Tom Lane
Josh Berkus <[EMAIL PROTECTED]> writes:
> Actually, 32 made a significant difference as I recall ... do you still have 
> the figures for that, Jignesh?

I'd want to see a new set of test runs backing up any call for a change
in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that
benchmarks using code from a few months back shouldn't carry a lot of
weight.

regards, tom lane



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-25 Thread Josh Berkus
Tom,

> It's still true that I'm leery of a large increase in the number of
> buffers without reengineering slru.c.  That code was written on the
> assumption that there were few enough buffers that a linear search
> would be fine.  I'd hold still for 16, or maybe even 32, but I dunno
> how much impact that will have for such a test case.

Actually, 32 made a significant difference as I recall ... do you still have 
the figures for that, Jignesh?

The test case is a workload called "iGen", which is a "fixed" TPC-C-like 
workload.  I've been trying to talk Sun into open-sourcing it, but no dice so 
far.  It is heavy on writes, and (like TPC-C) consists mostly of one-line 
transactions.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [PERFORM] [HACKERS] 8.3beta1 testing on Solaris

2007-10-25 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> Didn't we already go through this? He and Simon were pushing to bump up
> NUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and
> some other clog.c would have to be reengineered to scale well to larger
> values. 

AFAIR we never did get any clear explanation of what the test case is.
I guess it must be write-mostly, else lazy XID assignment would have
helped this by reducing the rate of XID consumption.

It's still true that I'm leery of a large increase in the number of
buffers without reengineering slru.c.  That code was written on the
assumption that there were few enough buffers that a linear search
would be fine.  I'd hold still for 16, or maybe even 32, but I dunno
how much impact that will have for such a test case.

regards, tom lane
