Re: [PERFORM] performance on new linux box

2010-07-16 Thread Ben Chobot
On Jul 15, 2010, at 2:40 PM, Ryan Wexler wrote:

> On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot  wrote:
> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
> 
> >> Many raid controllers are smart enough to always turn off write caching on 
> >> the drives, and also disable the feature on their own buffer without a 
> >> BBU. Add a BBU, and the cache on the controller starts getting used, but 
> >> *not* the cache on the drives.
> >
> > This does not make sense.
> > Write caching on all hard drives in the last decade is safe because they 
> > support a write cache flush command properly.  If the card is "smart" it 
> > would issue the drive's write cache flush command to fulfill an fsync() or 
> > barrier request with no BBU.
> 
> You're missing the point. If the power dies suddenly, there's no time to 
> flush any cache anywhere. That's the entire point of the BBU - it keeps the 
> RAM powered up on the raid card. It doesn't keep the disks spinning long 
> enough to flush caches.
> 
> So you are saying write caching is a dangerous proposition on a raid card 
> with or without BBU?


Er, no, sorry, I am not being very clear it seems. 


Using a cache for write caching is dangerous, unless you protect it with a 
battery. Caches on a raid card can be protected by a BBU, so, when you use a 
BBU, write caching on the raid card is safe. (Just don't read the firmware 
changelog for your raid card or you will always be paranoid.) If you don't have 
a BBU, many raid cards default to disabling caching. You can still enable it, 
but the card will often tell you it's a bad idea.

There are also caches on all your disk drives. Write caching there is always 
dangerous, which is why almost all raid cards always disable the hard drive 
write caching, with or without a BBU. I'm not even sure how many raid cards let 
you enable the write cache on a drive... hopefully, not many.
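
For drives the OS can address directly, the drive's own write cache can be 
inspected and toggled with hdparm on Linux. A sketch (note that many raid 
cards hide their member disks from the OS entirely, in which case this knob 
isn't reachable from the host):

    # report the drive's write-cache setting
    hdparm -W /dev/sda
    # disable it / re-enable it
    hdparm -W0 /dev/sda
    hdparm -W1 /dev/sda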

Re: [PERFORM] performance on new linux box

2010-07-16 Thread Ryan Wexler
On Thu, Jul 15, 2010 at 12:35 PM, Ben Chobot  wrote:

> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
>
> >> Many raid controllers are smart enough to always turn off write caching
> on the drives, and also disable the feature on their own buffer without a
> BBU. Add a BBU, and the cache on the controller starts getting used, but
> *not* the cache on the drives.
> >
> > This does not make sense.
> > Write caching on all hard drives in the last decade is safe because they
> support a write cache flush command properly.  If the card is "smart" it
> would issue the drive's write cache flush command to fulfill an fsync() or
> barrier request with no BBU.
>
> You're missing the point. If the power dies suddenly, there's no time to
> flush any cache anywhere. That's the entire point of the BBU - it keeps the
> RAM powered up on the raid card. It doesn't keep the disks spinning long
> enough to flush caches.
>

So you are saying write caching is a dangerous proposition on a raid card
with or without BBU?


Re: [PERFORM] performance on new linux box

2010-07-16 Thread Pierre C


> Most (all?) hard drives have cache built into them. Many raid cards have
> cache built into them. When the power dies, all the data in any cache is
> lost, which is why it's dangerous to use it for write caching. For that
> reason, you can attach a BBU to a raid card which keeps the cache alive
> until the power is restored (hopefully). But no hard drive I am aware of
> lets you attach a battery, so using a hard drive's cache for write
> caching will always be dangerous.
>
> That's why many raid cards will always disable write caching on the hard
> drives themselves, and only enable write caching using their own memory
> when a BBU is installed.
>
> Does that make more sense?



Actually, a write cache is only dangerous if the OS and postgres think some
stuff is written to the disk when in fact it is only in the cache and not
written yet. When power is lost, cache contents are SUPPOSED to be lost.
In a normal situation, postgres and the OS assume nothing is written to
the disk (i.e., it may be in cache, not on disk) until a proper cache flush
is issued and responded to by the hardware. That's what the xlog and journals
are for. If the hardware doesn't lie, and the kernel/FS doesn't have any
bugs, no problem. You can't get decent write performance on rotating media
without a write cache somewhere...
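
To make that ordering concrete, here is a minimal C sketch of the contract
(illustrative only: the file name and commit record are made up, and a real
xlog writer does far more):

    /* Sketch: data is not durable until fsync() returns success. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char rec[] = "commit record\n";
        int fd = open("xlog_segment", O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* After write() the data may live only in OS RAM or a disk cache. */
        if (write(fd, rec, strlen(rec)) != (ssize_t) strlen(rec)) {
            perror("write");
            return 1;
        }

        /* fsync() must push the data through the OS and, on a non-lying
         * FS + drive combination, flush the drive's write cache.  Only
         * after this returns may the application report "committed". */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }

        puts("durable, as far as the kernel and hardware claim");
        return close(fd) == 0 ? 0 : 1;
    }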





Re: [PERFORM] performance on new linux box

2010-07-16 Thread Scott Marlowe
On Thu, Jul 15, 2010 at 10:30 AM, Scott Carey  wrote:
>
> On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:
>
>> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
>>
>>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk 
>>> system.  If there is no write-back caching on the RAID, it should still be 
>>> similar to the one disk setup.
>>
>> Many raid controllers are smart enough to always turn off write caching on 
>> the drives, and also disable the feature on their own buffer without a BBU. 
>> Add a BBU, and the cache on the controller starts getting used, but *not* 
>> the cache on the drives.
>
> This does not make sense.

Basically, you can have cheap, fast and dangerous (a drive with write
cache enabled, which responds positively to fsync even when it hasn't
actually fsynced the data).  You can have cheap, slow and safe with a
drive that has a cache, but since it'll be fsyncing all the time the
write cache won't actually get used.  Or you can have fast, expensive,
and safe, which is what a BBU RAID card gets by saying the data is
fsynced when it's actually just in cache - but a safe cache that won't
get lost on power down.

I don't find it that complicated.



Re: [PERFORM] performance on new linux box

2010-07-16 Thread Scott Carey

On Jul 15, 2010, at 12:35 PM, Ben Chobot wrote:

> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
> 
>>> Many raid controllers are smart enough to always turn off write caching on 
>>> the drives, and also disable the feature on their own buffer without a BBU. 
>>> Add a BBU, and the cache on the controller starts getting used, but *not* 
>>> the cache on the drives.
>> 
>> This does not make sense.
>> Write caching on all hard drives in the last decade is safe because they 
>> support a write cache flush command properly.  If the card is "smart" it 
>> would issue the drive's write cache flush command to fulfill an fsync() or 
>> barrier request with no BBU.
> 
> You're missing the point. If the power dies suddenly, there's no time to 
> flush any cache anywhere. That's the entire point of the BBU - it keeps the 
> RAM powered up on the raid card. It doesn't keep the disks spinning long 
> enough to flush caches.

If the power dies suddenly, then the data that is in the OS RAM will also be 
lost.  What about that? 

Well it doesn't matter because the DB is only relying on data being persisted 
to disk that it thinks has been persisted to disk via fsync().

The data in the disk cache is the same thing as RAM.  As long as fsync() works
_properly_, which is true for any file system + disk combination worth a damn
(not HFS+ on OSX, not FAT, nor a few other things), then it will tell the drive
to flush its cache _before_ fsync() returns.  There is NO REASON for a raid
card to turn off a drive cache unless it does not trust the drive cache.  In
write-through mode, it should not return to the OS from an fsync, direct write,
or other "the OS thinks this data is persisted now" call until it has flushed
the disk cache.  That does not mean it has to turn off the disk cache.


Re: [PERFORM] performance on new linux box

2010-07-16 Thread Scott Carey

On Jul 15, 2010, at 6:22 PM, Scott Marlowe wrote:

> On Thu, Jul 15, 2010 at 10:30 AM, Scott Carey  wrote:
>> 
>> On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:
>> 
>>> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
>>> 
>>>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk 
>>>> system.  If there is no write-back caching on the RAID, it should still be 
>>>> similar to the one disk setup.
>>> 
>>> Many raid controllers are smart enough to always turn off write caching on 
>>> the drives, and also disable the feature on their own buffer without a BBU. 
>>> Add a BBU, and the cache on the controller starts getting used, but *not* 
>>> the cache on the drives.
>> 
>> This does not make sense.
> 
> Basically, you can have cheap, fast and dangerous (a drive with write
> cache enabled, which responds positively to fsync even when it hasn't
> actually fsynced the data).  You can have cheap, slow and safe with a
> drive that has a cache, but since it'll be fsyncing all the time the
> write cache won't actually get used.  Or you can have fast, expensive,
> and safe, which is what a BBU RAID card gets by saying the data is
> fsynced when it's actually just in cache - but a safe cache that won't
> get lost on power down.
> 
> I don't find it that complicated.

It doesn't make sense that a raid 10 will be slower than a 1-disk setup unless 
the former respects fsync() and the latter does not.  Individual drive write 
cache does not explain the situation.  That is what does not make sense.

When in _write-through_ mode, there is no reason to turn off the drive's write 
cache unless the drive does not properly respect its cache-flush command, or 
the RAID card is too dumb to issue cache-flush commands.  The RAID card simply 
has to issue its writes, then issue the flush commands, then return to the OS 
when those complete.  With drive write caches on, this is perfectly safe.  The 
only way it is unsafe is if the drive lies and returns from a cache flush 
before the data from its cache is actually flushed.

Some SSDs on the market currently lie.  A handful of the thousands of hard
drive models sold in the server, desktop, and laptop space in the last decade
did not respect the cache flush command properly, and none in the SAS/SCSI
or 'enterprise SATA' space lie, to my knowledge.  Information on this topic
has come across this list several times.

The explanation for why one setup respects fsync() and another does not almost
always lies in the FS + OS combination.  HFS+ on OSX does not respect fsync.
ext3 until recently only did fdatasync() when you told it to fsync() (which is 
fine for postgres' transaction log anyway).

A raid card, especially with SAS/SCSI drives, has no reason to turn off the
drive's write cache unless it _wants_ to return to the OS before the data is on
the drive.  That condition occurs in write-back cache mode, when the RAID card's
cache is made safe via a battery or some other mechanism.  In that case, it
should turn off the drive's write cache so that it can be sure that data is on
disk when power fails, without having to issue the cache-flush command on every
write.  That way, it can remove data from its RAM as soon as the drive returns
from the write.

In write-through mode, it should turn the drive caches back on and rely on the
flush command to pass through direct writes, cache flush demands, and barrier
requests.  It could optionally turn the caches off, but that won't improve data
safety unless the drive cannot faithfully flush its cache.





Re: [PERFORM] performance on new linux box

2010-07-16 Thread Ben Chobot
On Jul 15, 2010, at 8:16 PM, Scott Carey wrote:

> On Jul 15, 2010, at 12:35 PM, Ben Chobot wrote:
> 
>> On Jul 15, 2010, at 9:30 AM, Scott Carey wrote:
>> 
>>>> Many raid controllers are smart enough to always turn off write caching on 
>>>> the drives, and also disable the feature on their own buffer without a 
>>>> BBU. Add a BBU, and the cache on the controller starts getting used, but 
>>>> *not* the cache on the drives.
>>> 
>>> This does not make sense.
>>> Write caching on all hard drives in the last decade is safe because they 
>>> support a write cache flush command properly.  If the card is "smart" it 
>>> would issue the drive's write cache flush command to fulfill an fsync() or 
>>> barrier request with no BBU.
>> 
>> You're missing the point. If the power dies suddenly, there's no time to 
>> flush any cache anywhere. That's the entire point of the BBU - it keeps the 
>> RAM powered up on the raid card. It doesn't keep the disks spinning long 
>> enough to flush caches.
> 
> If the power dies suddenly, then the data that is in the OS RAM will also be 
> lost.  What about that? 
> 
> Well it doesn't matter because the DB is only relying on data being persisted 
> to disk that it thinks has been persisted to disk via fsync().

Right, we agree that only what has been fsync()'d has a chance to be safe.

> The data in the disk cache is the same thing as RAM.  As long as fsync()
> works _properly_, which is true for any file system + disk combination worth
> a damn (not HFS+ on OSX, not FAT, nor a few other things), then it will tell
> the drive to flush its cache _before_ fsync() returns.  There is NO REASON
> for a raid card to turn off a drive cache unless it does not trust the drive
> cache.  In write-through mode, it should not return to the OS from an fsync,
> direct write, or other "the OS thinks this data is persisted now" call until
> it has flushed the disk cache.  That does not mean it has to turn off the
> disk cache.

...and here you are also right in that a write-through write cache is safe, 
with or without a battery. A write-through cache is a win for things that don't 
often fsync, but my understanding is that with a database, you end up fsyncing 
all the time, which makes a write-through cache not worth very much. The only
way to get good *database* performance out of spinning media is with a
write-back cache, and the only way to make that safe is to hook up a BBU.




Re: [PERFORM] performance on new linux box

2010-07-16 Thread Craig Ringer
On 16/07/10 06:18, Ben Chobot wrote:

> There are also caches on all your disk drives. Write caching there is always 
> dangerous, which is why almost all raid cards always disable the hard drive 
> write caching, with or without a BBU. I'm not even sure how many raid cards 
> let you enable the write cache on a drive... hopefully, not many.

AFAIK, disk drive caches can be safe to leave in write-back mode (i.e.
write cache enabled) *IF* the OS uses write barriers (properly) and the
drive understands them.

Big if.
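
One way to poke at that on Linux is a sketch like the following, for ext3,
which shipped with barriers off by default at the time (the mount point is
illustrative, and option names vary by filesystem and kernel):

    # request barrier support explicitly
    mount -o remount,barrier=1 /var/lib/pgsql
    # check what the filesystem actually ended up mounted with
    grep pgsql /proc/mounts

Whether the barriers then work end-to-end, through the controller and down
to the platters, is exactly the "big if" above.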

--
Craig Ringer



Re: [PERFORM] performance on new linux box

2010-07-16 Thread Craig Ringer
On 16/07/10 09:22, Scott Marlowe wrote:
> On Thu, Jul 15, 2010 at 10:30 AM, Scott Carey  wrote:
>>
>> On Jul 14, 2010, at 7:50 PM, Ben Chobot wrote:
>>
>>> On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
>>>
>>>> But none of this explains why a 4-disk raid 10 is slower than a 1 disk 
>>>> system.  If there is no write-back caching on the RAID, it should still be 
>>>> similar to the one disk setup.
>>>
>>> Many raid controllers are smart enough to always turn off write caching on 
>>> the drives, and also disable the feature on their own buffer without a BBU. 
>>> Add a BBU, and the cache on the controller starts getting used, but *not* 
>>> the cache on the drives.
>>
>> This does not make sense.
> 
> Basically, you can have cheap, fast and dangerous (a drive with write
> cache enabled, which responds positively to fsync even when it hasn't
> actually fsynced the data).  You can have cheap, slow and safe with a
> drive that has a cache, but since it'll be fsyncing all the time the
> write cache won't actually get used.  Or you can have fast, expensive,
> and safe, which is what a BBU RAID card gets by saying the data is
> fsynced when it's actually just in cache - but a safe cache that won't
> get lost on power down.

Speaking of BBUs... do you ever find yourself wishing you could use
software RAID with battery backup?

I tend to use software RAID quite heavily on non-database servers, as
it's cheap, fast, portable from machine to machine, and (in the case of
Linux 'md' raid) reliable. Alas, I can't really use it for DB servers
due to the need for write-back caching.

There's no technical reason I know of why sw raid couldn't write-cache
to some non-volatile memory on the host. A dedicated, battery-backed
pair of DIMMs on a PCI-E card mapped into memory would be ideal. Failing
that, a PCI-E card with onboard RAM+BATT or fast flash that presents an
AHCI interface so it can be used as a virtual HDD would do pretty well.
Even one of those SATA "RAM Drive" units would do the job, though
forcing everything through the SATA2 bus would be a performance downside.

The only issue I see with sw raid write caching is that it probably
couldn't be done safely on the root file system. The OS would have to
come up, init software raid, and find the caches before it'd be safe to
read or write volumes with s/w raid write caching enabled. It's not the
sort of thing that'd be practical to implement in GRUB's raid support.

--
Craig Ringer



Re: [PERFORM] Identical query slower on 8.4 vs 8.3

2010-07-16 Thread Igor Neyman
 

> -Original Message-
> From: Patrick Donlin [mailto:pdon...@oaisd.org] 
> Sent: Thursday, July 15, 2010 11:13 AM
> To: Kevin Grittner; pgsql-performance@postgresql.org
> Subject: Re: Identical query slower on 8.4 vs 8.3
> 
> I'll read over that wiki entry, but for now here is the 
> EXPLAIN ANALYZE output, assuming I did it correctly. I have 
> run vacuumdb --full --analyze; it actually runs as a nightly 
> cron job.
> 
> 8.4.4 Server:
> "Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual time=21273.371..22429.511 rows=700536 loops=1)"
> "  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual time=21273.368..22015.948 rows=700536 loops=1)"
> "        Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart, testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid"
> "        Sort Method:  external merge  Disk: 71768kB"
> "        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86) (actual time=64.388..1177.468 rows=700536 loops=1)"
> "              Hash Cond: (qr.fk_trid = testresult.trid)"
> "              ->  Seq Scan on questionresult qr  (cost=0.00..12182.22 rows=702022 width=16) (actual time=0.090..275.518 rows=702022 loops=1)"
> "              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) (actual time=63.042..63.042 rows=29515 loops=1)"
> "                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 width=74) (actual time=0.227..39.111 rows=29515 loops=1)"
> "                          Hash Cond: (testresult.fk_tid = test.tid)"
> "                          ->  Seq Scan on testresult  (cost=0.00..1141.68 rows=29668 width=53) (actual time=0.019..15.622 rows=29668 loops=1)"
> "                          ->  Hash  (cost=2.60..2.60 rows=60 width=21) (actual time=0.088..0.088 rows=60 loops=1)"
> "                                ->  Seq Scan on test  (cost=0.00..2.60 rows=60 width=21) (actual time=0.015..0.044 rows=60 loops=1)"
> "Total runtime: 22528.820 ms"
> 
> 8.3.7 Server:
> "Unique  (cost=202950.82..227521.59 rows=702022 width=86) (actual time=22157.714..23343.461 rows=700536 loops=1)"
> "  ->  Sort  (cost=202950.82..204705.87 rows=702022 width=86) (actual time=22157.706..22942.018 rows=700536 loops=1)"
> "        Sort Key: test.tid, testresult.trscore, testresult.trpossiblescore, testresult.trstart, testresult.trfinish, testresult.trscorebreakdown, testresult.fk_sid, test.tname, qr.qrscore, qr.qrtotalscore, testresult.trid, qr.qrid"
> "        Sort Method:  external merge  Disk: 75864kB"
> "        ->  Hash Join  (cost=2300.82..34001.42 rows=702022 width=86) (actual time=72.842..1276.634 rows=700536 loops=1)"
> "              Hash Cond: (qr.fk_trid = testresult.trid)"
> "              ->  Seq Scan on questionresult qr  (cost=0.00..12182.22 rows=702022 width=16) (actual time=0.112..229.987 rows=702022 loops=1)"
> "              ->  Hash  (cost=1552.97..1552.97 rows=29668 width=74) (actual time=71.421..71.421 rows=29515 loops=1)"
> "                    ->  Hash Join  (cost=3.35..1552.97 rows=29668 width=74) (actual time=0.398..44.524 rows=29515 loops=1)"
> "                          Hash Cond: (testresult.fk_tid = test.tid)"
> "                          ->  Seq Scan on testresult  (cost=0.00..1141.68 rows=29668 width=53) (actual time=0.117..20.890 rows=29668 loops=1)"
> "                          ->  Hash  (cost=2.60..2.60 rows=60 width=21) (actual time=0.112..0.112 rows=60 loops=1)"
> "                                ->  Seq Scan on test  (cost=0.00..2.60 rows=60 width=21) (actual time=0.035..0.069 rows=60 loops=1)"
> "Total runtime: 23462.639 ms"
> 
> 
> Thanks for the quick responses and being patient with me not 
> providing enough information.
> -Patrick
> 

Well, now that you've got similar runtime on both 8.4.4 and 8.3.7, here
is a suggestion to improve performance of this query, based on the EXPLAIN
ANALYZE you provided (you should have provided it in your first e-mail).

EXPLAIN ANALYZE shows that most of the time (22015 ms on 8.4.4) is spent
sorting your result set.
And according to this: "Sort Method:  external merge  Disk: 71768kB" -
sorting is done on disk, meaning your work_mem setting is not
sufficient to do this sort in memory (I didn't go back through this
thread far enough to see if you provided info on how it is set).

I'd suggest increasing the value up to ~80MB, if not for the whole system,
then just for the session running this query.
Then see if performance improves.

And, with query performance issues, always start with EXPLAIN ANALYZE.

Regards,
Igor Neyman 



Re: [PERFORM] Identical query slower on 8.4 vs 8.3

2010-07-16 Thread tv

> I'd suggest increasing the value up to ~80MB, if not for the whole system,
> then just for the session running this query.
> Then see if performance improves.

Don't forget you can do this for the given query without affecting the
other queries - just do something like

SET work_mem = '128MB';

and then run the query - it should work fine. This is great for testing
and for setting up the environment for special users (batch processes etc.).
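
If you'd rather not worry about resetting it afterwards, SET LOCAL confines
the change to a single transaction. A sketch (the SELECT is a placeholder
for the query under test):

    BEGIN;
    SET LOCAL work_mem = '128MB';  -- reverts automatically at COMMIT/ROLLBACK
    EXPLAIN ANALYZE SELECT ...;    -- the original query goes here
    COMMIT;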

regards
Tomas




Re: [PERFORM] performance on new linux box

2010-07-16 Thread Greg Smith

Scott Carey wrote:

> As long as fsync() works _properly_, which is true for any file system + disk
> combination worth a damn (not HFS+ on OSX, not FAT, nor a few other things),
> then it will tell the drive to flush its cache _before_ fsync() returns.
> There is NO REASON for a raid card to turn off a drive cache unless it does
> not trust the drive cache.  In write-through mode, it should not return to
> the OS from an fsync, direct write, or other "the OS thinks this data is
> persisted now" call until it has flushed the disk cache.  That does not mean
> it has to turn off the disk cache.


Assuming that the operating system will pass through fsync calls to 
flush data all the way to drive level in all situations is an extremely 
dangerous assumption.  Most RAID controllers don't know how to force 
things out of the individual drive caches; that's why they turn off 
write caching on them.  Few filesystems get the details right to handle 
individual drive cache flushing correctly.  On Linux, XFS and ext4 are 
the only two with any expectation that it will happen, and of those two 
ext4 is still pretty new and therefore should still be presumed to be buggy.


Please don't advise people about what is safe based on theoretical
grounds here; in practice there are way too many bugs in the
implementation of things like drive barriers to trust them most of the
time.  There is no substitute for a pull-the-plug test using something
that looks for bad cache flushes, i.e. diskchecker.pl:
http://brad.livejournal.com/2116715.html  If you do that you'll discover
you must turn off the individual drive caches when using a
battery-backed RAID controller, and that you can't ever trust barriers on
ext3 because of bugs that were only fixed in ext4.
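
For reference, the pull-the-plug procedure with diskchecker.pl goes roughly
like this (syntax recalled from the tool's instructions, so treat the exact
arguments as an assumption and verify against the page above):

    # on a second machine that will keep its power:
    diskchecker.pl -l
    # on the machine under test; cut its power partway through the run:
    diskchecker.pl -s <server> create test_file 500
    # after the machine under test reboots:
    diskchecker.pl -s <server> verify test_file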


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us

