Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Mark Wong
On Sun, Dec 21, 2008 at 10:56 PM, Gregory Stark  wrote:
> Mark Wong  writes:
>
>> On Dec 20, 2008, at 5:33 PM, Gregory Stark wrote:
>>
>>> "Mark Wong"  writes:
>>>
 To recap, dbt2 is a fair-use derivative of the TPC-C benchmark.  We
 are using a 1000 warehouse database, which amounts to about 100GB of
 raw text data.
>>>
>>> Really? Do you get conforming results with 1,000 warehouses? What's the
>>> 95th percentile response time?
>>
>> No, the results are not conforming.  You and others have pointed that out
>> already.  The 95th percentile response times are calculated on each page
>> of the previous links.
>
> Where exactly? Maybe I'm blind but I don't see them.

Here's an example:

http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/report/

The links on the blog entries should be pointing to their respective
reports.  I spot checked a few and it seems I got some right.  I
probably didn't make it clear you needed to click on the results to
see the reports.
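
For what it's worth, a 95th percentile like the ones in those reports
could be reproduced by hand with something like this (an untested
nearest-rank sketch, assuming the per-transaction data were loaded into
a hypothetical table response_times(txn_type, response_ms)):

  -- count the New Order transactions first
  SELECT count(*) FROM response_times WHERE txn_type = 'new_order';
  -- with N rows, the 95th percentile is the value at row ceil(0.95 * N);
  -- e.g. for N = 1000 that is the 950th row, i.e. OFFSET 949
  SELECT response_ms
  FROM response_times
  WHERE txn_type = 'new_order'
  ORDER BY response_ms
  OFFSET 949 LIMIT 1;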

>> I find your questions a little odd for the input I'm asking for.  Are you
>> under the impression we are trying to publish benchmarking results?
>> Perhaps this is a simple misunderstanding?
>
> Hm, perhaps. The "conventional" way to run TPC-C is to run it with larger and
> larger scale factors until you find out the largest scale factor you can get a
> conformant result at. In other words the scale factor is an output, not an
> input variable.
>
> You're using TPC-C just as an example workload and looking to see how to
> maximize the TPM for a given scale factor. I guess there's nothing wrong with
> that as long as everyone realizes it's not a TPC-C benchmark.

Perhaps, but we're not trying to run a TPC-C benchmark.  We're trying
to illustrate how performance changes with an understood OLTP
workload.  The purpose is to show how the system behaves more so than
what the maximum throughput is.  We try to advertise the kit and the
work as being for self-learning; we never try to pass dbt-2 off as a
benchmarking kit.

> Except that if the 95th percentile response times are well above a second I
> have to wonder whether the situation reflects an actual production OLTP system
> well. It implies there are so many concurrent sessions that any given query is
> being context switched out for seconds at a time.
>
> I have to imagine that a real production system would consider the system
> overloaded as soon as queries start taking significantly longer than they take
> on an unloaded system. People monitor the service wait times and queue depths
> for i/o systems closely and having several seconds of wait time is a highly
> abnormal situation.

We attempt to illustrate the response times on the reports.  For
example, there is a histogram (drawn as a scatter plot) illustrating
the number of transactions vs. the response time for each transaction.
This is for the New Order transaction:

http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/report/dist_n.png

We also plot the response time for a transaction vs the elapsed time
(also as a scatter plot).  Again, this is for the New Order
transaction:

http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/report/rt_n.png

> I'm not sure how bad that is for the benchmarks. The only effect that comes to
> mind is that it might exaggerate the effects of some i/o intensive operations
> that under normal conditions might not cause any noticeable impact like wal
> log file switches or even checkpoints.

I'm not sure I'm following.  Is this something that can be shown by
any stats collection or profiling?  This vaguely reminds me of the
significant spikes in system time (and dips everywhere else) when the
operating system is fsyncing during a checkpoint that we've always
observed when running this in the past.

> If you have a good i/o controller it might confuse your results a bit when
> you're comparing random and sequential i/o because the controller might be
> able to sort requests by physical position better than in a typical oltp
> environment where the wait queues are too short to effectively do that.

Thanks for the input.

Regards,
Mark

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Greg Smith

On Sat, 20 Dec 2008, Mark Wong wrote:

> Here are links to how the throughput changes when increasing
> shared_buffers: http://pugs.postgresql.org/node/505 My first glance
> tells me that the system performance is quite erratic when
> increasing the shared_buffers.


If you smooth that curve out a bit, you have to throw out the 22528MB 
figure as meaningless--particularly since it's way too close to the cliff 
where performance dives hard.  The sweet spot looks to me like 11264MB to 
17408MB.  I'd say 14336MB is the best performing setting that's in the 
middle of a stable area.


> And another series of tests to show how throughput changes when
> checkpoint_segments are increased: http://pugs.postgresql.org/node/503
> I'm also not sure what to gather from increasing the checkpoint_segments.


What was shared_buffers set to here?  Those two settings are not 
completely independent, for example at a tiny buffer size it's not as 
obvious there's a win in spreading the checkpoints out more.  It's 
actually a 3-D graph, with shared_buffers and checkpoint_segments as two 
axes and the throughput as the Z value.


Since that's quite time consuming to map out in its entirety, the way I'd 
suggest navigating the territory more efficiently is to ignore the 
defaults altogether.  Start with a configuration that someone familiar 
with tuning the database would pick for this hardware:  8192MB for 
shared_buffers and 100 checkpoint segments would be a reasonable base 
point.  Run the same tests you did here, but with the value you're not 
changing set to those much larger values rather than the database 
defaults, and then I think you'd end up with something more interesting. 
Also, I think the checkpoint_segments values >500 are a bit much, given 
what level of recovery time would come with a crash at that setting. 
Smaller steps from a smaller range would be better there I think.
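
Expressed as postgresql.conf lines, that base point is simply:

  shared_buffers = 8192MB
  checkpoint_segments = 100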


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Gregory Stark
"Mark Wong"  writes:

>> I'm not sure how bad that is for the benchmarks. The only effect that comes 
>> to
>> mind is that it might exaggerate the effects of some i/o intensive operations
>> that under normal conditions might not cause any noticeable impact like wal
>> log file switches or even checkpoints.
>
> I'm not sure I'm following.  

All I'm saying is that the performance characteristics won't be the same when
the service wait times are 1-10 seconds rather than the 20-30ms at which alarm
bells would start to ring on a real production system.

I'm not exactly sure what changes it might make though.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Gregory Stark
"Mark Wong"  writes:

> Thanks for the input.

In a more constructive vein:

1) autovacuum doesn't seem to be properly tracked. It looks like you're just
   tracking the autovacuum process and not the actual vacuum subprocesses
   which it spawns.

2) The response time graphs would be more informative if you excluded the
   ramp-up portion of the test. As it is there are big spikes at the low end
   but it's not clear whether they're really part of the curve or due to
   ramp-up. This is especially visible in the stock-level graph where it
   throws off the whole y scale.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Slow table update

2008-12-22 Thread Laszlo Nagy

SQL:

update product set sz_category_id=null where am_style_kw1 is not null 
and sz_category_id is not null


query plan:

"Seq Scan on product  (cost=0.00..647053.30 rows=580224 width=1609)"
"  Filter: ((am_style_kw1 IS NOT NULL) AND (sz_category_id IS NOT NULL))"

Information on the table:

row count ~ 2 million
table size: 4841 MB
toast table size: 277 MB
indexes size: 4434 MB

Computer: FreeBSD 7.0 stable, dual quad-core Xeon 5420 2.5GHz, 8GB 
memory, 6 ES SATA disks in hw RAID 6 (+2GB write back cache) for the 
database.


Autovacuum is enabled. We also perform "vacuum analyze" on the database 
each day.


Here are some non-default values from postgresql.conf:

shared_buffers=400MB
maintenance_work_mem = 256MB
max_fsm_pages = 100

There was almost no load on the machine (CPU: mostly idle, IO: approx. 
5% total) when we started this update.


Maybe I'm wrong with this, but here is a quick calculation: the RAID 
array should do at least 100MB/sec. Reading the whole table should not 
take more than 1 min. I think about 20% of the rows should have been 
updated. Writing out all changes should not take too much time. I 
believe that this update should have been completed within 2-3 minutes.
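(4841 MB at 100 MB/sec is about 48 seconds of sequential reading, hence
the estimate.)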


In reality, after 2600 seconds I have cancelled the query. We monitored 
disk I/O and it was near 100% all the time.


What is wrong?

Thank you,

  Laszlo


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow table update

2008-12-22 Thread Laszlo Nagy

Laszlo Nagy wrote:
> SQL:
>
> update product set sz_category_id=null where am_style_kw1 is not null
> and sz_category_id is not null

Hmm, this query:

select count(*) from product where am_style_kw1 is not null and 
sz_category_id is not null and sz_category_id<>4809


opens in 10 seconds. The update would not finish in 2600 seconds. I 
don't understand.


L


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow table update

2008-12-22 Thread Gregory Williamson
Laszlo Nagy wrote:

> 
> Laszlo Nagy wrote:
> > SQL:
> >
> > update product set sz_category_id=null where am_style_kw1 is not null 
> > and sz_category_id is not null
> Hmm, this query:
> 
> select count(*) from product where am_style_kw1 is not null and 
> sz_category_id is not null and sz_category_id<>4809
> 
> opens in 10 seconds. The update would not finish in 2600 seconds. I 
> don't understand.

If the table has some sort of FK relations it might be being slowed by the need 
to check whether a row meant to be deleted has any children.

Perhaps triggers ? 

If the table is very bloated with lots of dead rows (but you did say you vacuum 
frequently and check the results to make sure they are effective?) that would 
slow it down.

A long running transaction elsewhere that is blocking the delete ? Did you 
check the locks ?
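
Something like this (an untested sketch against the 8.x catalogs) will
show any process stuck waiting on a lock and what it is trying to run:

  SELECT l.pid, l.locktype, l.mode, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE NOT l.granted;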

HTH,

Greg Williamson
Senior DBA
DigitalGlobe

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)




Re: [PERFORM] Slow table update

2008-12-22 Thread Laszlo Nagy




> If the table has some sort of FK relations it might be being slowed by
> the need to check whether a row meant to be deleted has any children.


If you look at my SQL, there is only one column to be updated. That 
column has no foreign key constraint. (It should have, but we did not 
want to add that constraint in order to speed up updates.)



> Perhaps triggers ?


Table "product" has no triggers.



> If the table is very bloated with lots of dead rows (but you did say
> you vacuum frequently and check the results to make sure they are
> effective?) that would slow it down.


I'm not sure how to check if the vacuum was effective. But we have 
max_fsm_pages=100 in postgresql.conf, and I do not get any errors 
from the daily vacuum script, so I presume that the table hasn't got too 
many dead rows.
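
One thing I could try, if we are on 8.3, is the per-table dead row
counters (an untested sketch; on older versions "vacuum verbose product"
reports similar numbers):

  SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'product';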


Anyway, the table size is only 4GB. Even if half of the rows are dead, 
the update should run quite quickly. Another argument is that when I 
"select count(*)" instead of "UPDATE", then I get the result in 10 
seconds. I don't think that dead rows can make such a big difference 
between reading and writing.


My other idea was that there are so many indexes on this table, maybe 
the update is slow because of the indexes? The column being updated has 
only one index on it, and that is 200MB. But I have heard somewhere that 
because of PostgreSQL's multi version system, sometimes the system needs 
to update indexes with columns that are not being updated. I'm not sure. 
Might this be the problem?
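
For reference, this is how I can list them with sizes (an untested
sketch):

  SELECT indexrelname,
         pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
  FROM pg_stat_user_indexes
  WHERE relname = 'product';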



> A long running transaction elsewhere that is blocking the delete ? Did
> you check the locks ?


Sorry, this was an update. A blocking transaction would never explain 
why the disk I/O went up to 100% for 2600 seconds.


  L


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow table update

2008-12-22 Thread Laszlo Nagy
I just tested the same on a test machine. It only has one processor, 1GB 
memory, and one SATA disk. The same "select count(*)" took 58 seconds. I 
started the same UPDATE with EXPLAIN ANALYZE. It has been running for 
1000 seconds now. I'm now 100% sure that the problem is with the 
database, because this machine has nothing but a postgresql server 
running on it. I'll post the output of explain analyze later.



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow table update

2008-12-22 Thread Tom Lane
Laszlo Nagy  writes:
>> If the table has some sort of FK relations it might be being slowed by 
>> the need to check whether a row meant to be deleted has any children.
>> 
> If you look at my SQL, there is only one column to be updated. That 
> column has no foreign key constraint.

That was not the question that was asked.

> My other idea was that there are so many indexes on this table, maybe 
> the update is slow because of the indexes?

Updating indexes is certainly very far from being free: each new row
version created by the UPDATE needs an entry in every index on the
table, not only the indexes on the column you changed.  How many is
"many"?

regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] [ADMIN] rebellious pg stats collector (reopened case)

2008-12-22 Thread Laszlo Nagy



> and see if its output changes when you start to trace it.

%cat test.c
#include <stdio.h>      /* printf */
#include <unistd.h>     /* sleep, getppid */

int main() {
    /* report the parent pid every five seconds, forever */
    while (1) {
        sleep(5);
        printf("ppid = %d\n", getppid());
    }
}

%gcc -o test test.c
%./test
ppid = 47653
ppid = 47653
ppid = 47653 # Started "truss -p 48864" here!
ppid = 49073
ppid = 49073
ppid = 49073


> Agreed, but we need to understand what the tools being used to
> investigate the problem are doing ...

Unfortunately, I'm not able to install strace:

# pwd
/usr/ports/devel/strace
# make
===>  strace-4.5.7 is only for i386, while you are running amd64.
*** Error code 1

Stop in /usr/ports/devel/strace.

I'll happily install any trace tool, but have no clue which one would help.




--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] [ADMIN] rebellious pg stats collector (reopened case)

2008-12-22 Thread Laszlo Nagy

Posted to the wrong list by mistake. Sorry.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] [ADMIN] rebellious pg stats collector (reopened case)

2008-12-22 Thread Alvaro Herrera
Laszlo Nagy wrote:

> %gcc -o test test.c
> %./test
> ppid = 47653
> ppid = 47653
> ppid = 47653 # Started "truss -p 48864" here!
> ppid = 49073
> ppid = 49073
> ppid = 49073

I think you should report that as a bug to Sun.

-- 
Alvaro Herrera                        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] temp_tablespaces and RAID

2008-12-22 Thread Marc Mamin

Hello,

To improve performance, I would like to try moving the temp_tablespaces
location outside of our RAID system.
Is that good practice?


Thanks,

Marc Mamin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Kevin Grittner
>>> "Mark Wong"  wrote: 
 
> The DL380 G5 is an 8 core Xeon E5405 with 32GB of
> memory.  The MSA70 is a 25-disk 15,000 RPM SAS array, currently
> configured as a 25-disk RAID-0 array.
 
> number of connections (250):
 
> Moving forward, what other parameters (or combinations of) do people
> feel would be valuable to illustrate with this workload?
 
To configure PostgreSQL for OLTP on that hardware, I would strongly
recommend the use of a connection pool which queues requests above
some limit on concurrent queries.  My guess is that you'll see best
results with a limit somewhere around 40, based on my tests indicating
that performance drops off above (cpucount * 2) + spindlecount.
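(With the hardware described above that works out to (8 * 2) + 25 = 41.)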
 
I wouldn't consider tests of the other parameters as being very useful
before tuning this.  This is more or less equivalent to the "engines"
configuration in Sybase, for example.
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] temp_tablespaces and RAID

2008-12-22 Thread Scott Marlowe
On Mon, Dec 22, 2008 at 7:40 AM, Marc Mamin  wrote:
>
> Hello,
>
> To improve performance, I would like to try moving the temp_tablespaces
> location outside of our RAID system.
> Is that good practice?

Maybe yes, maybe no.  If you move it to a single slow drive, then it
could well slow things down a fair bit when the system needs temp
space. OTOH, if the queries that would need temp space are few and far
between, and they're slowing down the rest of the system in weird
ways, it might be the right thing to do.

I'm afraid we don't have enough information to say if it's the right
thing to do right now, but there are reasons to do it (and not).
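
If you do decide to try it, the mechanics are simple (a sketch -- the
path is made up, and the directory must already exist, be empty, and be
owned by the postgres user):

  CREATE TABLESPACE tempspace LOCATION '/mnt/tempdisk/pg_temp';
  -- then point temp files at it, per session or in postgresql.conf:
  SET temp_tablespaces = 'tempspace';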

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Mark Wong
On Mon, Dec 22, 2008 at 12:59 AM, Greg Smith  wrote:
> On Sat, 20 Dec 2008, Mark Wong wrote:
>
>> Here are links to how the throughput changes when increasing
>> shared_buffers: http://pugs.postgresql.org/node/505 My first glance
>> tells me that the system performance is quite erratic when increasing the
>> shared_buffers.
>
> If you smooth that curve out a bit, you have to throw out the 22528MB figure
> as meaningless--particularly since it's way too close to the cliff where
> performance dives hard.  The sweet spot looks to me like 11264MB to 17408MB.
>  I'd say 14336MB is the best performing setting that's in the middle of a
> stable area.
>
>> And another series of tests to show how throughput changes when
>> checkpoint_segments are increased: http://pugs.postgresql.org/node/503 I'm
>> also not sure what to gather from increasing the checkpoint_segments.
>
> What was shared_buffers set to here?  Those two settings are not completely
> independent, for example at a tiny buffer size it's not as obvious there's a
> win in spreading the checkpoints out more.  It's actually a 3-D graph, with
> shared_buffers and checkpoint_segments as two axes and the throughput as the
> Z value.

The shared_buffers are the default, 24MB.  The database parameters are
saved, probably unclearly; here's an example link:

http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/db/param.out

> Since that's quite time consuming to map out in its entirety, the way I'd
> suggest navigating the territory more efficiently is to ignore the defaults
> altogether.  Start with a configuration that someone familiar with tuning
> the database would pick for this hardware:  8192MB for shared_buffers and
> 100 checkpoint segments would be a reasonable base point.  Run the same
> tests you did here, but with the value you're not changing set to those much
> larger values rather than the database defaults, and then I think you'd end
> up with something more interesting. Also, I think the checkpoint_segments
> values >500 are a bit much, given what level of recovery time would come
> with a crash at that setting. Smaller steps from a smaller range would be
> better there I think.

I should probably run your pgtune script, huh?

Regards,
Mark

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Mark Wong
On Mon, Dec 22, 2008 at 2:56 AM, Gregory Stark  wrote:
> "Mark Wong"  writes:
>
>> Thanks for the input.
>
> In a more constructive vein:
>
> 1) autovacuum doesn't seem to be properly tracked. It looks like you're just
>   tracking the autovacuum process and not the actual vacuum subprocesses
>   which it spawns.

Hrm, tracking just the launcher process certainly doesn't help.  Are
the spawned processes short lived?  I take a snapshot of
/proc/<pid>/io data every 60 seconds.  The only thing I see named
autovacuum is the launcher process.  Or perhaps I can't read?  Here is
the raw data of the /proc/<pid>/io captures:

http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/db/iopp.out
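
Maybe I can also sample pg_stat_activity alongside the /proc/<pid>/io
captures to catch the worker pids while they run -- an untested sketch:

  SELECT procpid, current_query, query_start
  FROM pg_stat_activity
  WHERE current_query LIKE 'autovacuum:%';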

> 2) The response time graphs would be more informative if you excluded the
>   ramp-up portion of the test. As it is there are big spikes at the low end
>   but it's not clear whether they're really part of the curve or due to
>   ramp-up. This is especially visible in the stock-level graph where it
>   throws off the whole y scale.

Ok, we'll take note and see what we can do.

Regards,
Mark

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Mark Wong
On Mon, Dec 22, 2008 at 7:27 AM, Kevin Grittner  wrote:
> >>> "Mark Wong"  wrote:
>
>> The DL380 G5 is an 8 core Xeon E5405 with 32GB of
>> memory.  The MSA70 is a 25-disk 15,000 RPM SAS array, currently
>> configured as a 25-disk RAID-0 array.
>
>> number of connections (250):
>
>> Moving forward, what other parameters (or combinations of) do people
>> feel would be valuable to illustrate with this workload?
>
> To configure PostgreSQL for OLTP on that hardware, I would strongly
> recommend the use of a connection pool which queues requests above
> some limit on concurrent queries.  My guess is that you'll see best
> results with a limit somewhere aound 40, based on my tests indicating
> that performance drops off above (cpucount * 2) + spindlecount.

Yeah, we are using a homegrown connection concentrator as part of the
test kit, but it's not very intelligent.

> I wouldn't consider tests of the other parameters as being very useful
> before tuning this.  This is more or less equivalent to the "engines"
> configuration in Sybase, for example.

Right, I have the database configured for 250 connections but I'm
using 200 of them.  I'm pretty sure for this scale factor 200 is more
than enough.  Nevertheless I should go through the exercise.

Regards,
Mark

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] dbt-2 tuning results with postgresql-8.3.5

2008-12-22 Thread Greg Smith

On Mon, 22 Dec 2008, Mark Wong wrote:


> The shared_buffers are the default, 24MB.  The database parameters are
> saved, probably unclearly; here's an example link:
>
> http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/db/param.out


That's a bit painful to slog through to find what was changed from the 
defaults.  How about saving the output from this query instead, or in 
addition to the version sorted by name:


select name,setting,source,short_desc from pg_settings order by 
source,name;


Makes it easier to ignore everything that isn't set.


> I should probably run your pgtune script, huh?


That's basically where the suggestions for center points I made came from. 
The only other thing it does that might be interesting to examine is 
that it bumps up checkpoint_completion_target to 0.9 once you've got a 
large number of checkpoint_segments.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance