Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Greg Smith

Tory M Blue wrote:

2010-02-25 22:10:41 PSTLOG:  checkpoint complete: wrote 44503
buffers (23.2%); 0 transaction log file(s) added, 0 removed, 20
recycled; write=148.539 s, sync=0.000 s, total=148.540 s
  
This one is typical of your list, so I'll only comment on it.  This is 
writing out 350MB spread over 148 seconds, which means your background 
checkpoint I/O is about a 2.4MB/s stream.  That's a moderate amount that 
could be tough for some systems, but note that your sync time is 
nothing.  Generally, when someone sees a long pause that's caused by a 
checkpoint, the sync number is really high.  Your disks seem to be 
keeping up with the checkpoint overhead moderately well.  (Mind you, the 
zero sync time is because you have 'fsync = off', which will eventually 
result in your database being corrupted after a badly timed outage one 
day; you really don't want to do that.)
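
For anyone checking their own install, a quick way to confirm those settings 
from psql is shown below (just a sketch; the comments give the stock 8.4 
defaults, your values will differ):

  SHOW fsync;                -- 'off' here is what produces the zero sync times
  SHOW checkpoint_timeout;   -- 5min unless overridden
  SHOW checkpoint_segments;  -- 3 unless overridden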


My guess is your connections are doing some sort of DNS operation that 
periodically stalls waiting for a 5-second timeout.  There's nothing in 
your checkpoint data suggesting it's a likely cause of a delay that 
long--and it would be a lot more random if that were the case, too.  Bad 
checkpoint spikes will be seconds sometimes, no time at all others; a 
heavy grouping at 5 seconds doesn't match the pattern they have at all.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[PERFORM] Multiple data base on same server

2010-02-26 Thread elias ghanem
Hi,

I'm using postgresql 8.4

I need to install multiple postgresql dbs on one server but I have some
questions:

-Are there any problems (performance-wise or otherwise) if I have 10 to 15 DBs on
the same server?

-Each DB needs 10 tablespaces, so if I create 10 different tablespaces for
each DB I will have 100 to 150 tablespaces on the same server. So can this
also cause any problems?

Thanks 



Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread Richard Huxton

On 26/02/10 09:37, elias ghanem wrote:

Hi,

I'm using postgresql 8.4

I need to install multiple postgresql dbs on one server but I have some
questions:

-Are there any problems (performance-wise or otherwise) if I have 10 to 15 DBs on
the same server?


Clearly that's going to depend on what they're all doing and how big a 
server you have. There's no limitation in PostgreSQL that stops you though.



-Each DB needs 10 tablespaces, so if I create 10 different tablespaces for
each DB I will have 100 to 150 tablespaces on the same server. So can this
also cause any problems?


Do you have 200-300+ disks to put these tablespaces on? If not, I'm not 
clear what you are trying to do. Why does each DB need 10 tablespaces?
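
For reference, a tablespace is just a directory the server stores files in, 
so it only buys you something if that directory sits on separate spindles. 
A minimal sketch, with a made-up mount point and names:

  CREATE TABLESPACE fastdisk LOCATION '/mnt/disk2/pg_tblspc';
  CREATE TABLE orders (
      id        serial PRIMARY KEY,
      placed_at timestamptz
  ) TABLESPACE fastdisk;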


--
  Richard Huxton
  Archonet Ltd



Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread Richard Huxton

On 26/02/10 12:45, elias ghanem wrote:

Hi,
Thanks for your answer,
Concerning the second point, each DB has different tables that are logically
related (for ex., tables for configuration, tables for business...), plus I'm
planning to put the indexes in their own tablespaces.
Concerning the disks, they will maybe be stored on multiple disks (but surely
not 200-300). So I'm just wondering if this big number of tablespaces on the
same DB server may cause problems,


If the tablespaces aren't on different disks, I'm not sure what the 
point is.


Do you perhaps mean schemas? So you have e.g. a system schema with 
tables users, activity_log etc? There's no problem with 20-30 
schemas per database.
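
A minimal sketch of that layout (schema and table names are only examples):

  CREATE SCHEMA config;
  CREATE SCHEMA business;
  CREATE TABLE config.users (id serial PRIMARY KEY, name text);
  CREATE TABLE config.activity_log (id serial PRIMARY KEY,
                                    logged_at timestamptz, msg text);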


--
  Richard Huxton
  Archonet Ltd



Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Kevin Grittner
Tory M Blue  wrote:
 
 2010-02-25 22:53:13 PST LOG: checkpoint starting: time
 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected
 EOF on client connection
 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155
 buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15
 recycled; write=150.045 s, sync=0.000 s, total=150.046 s
 
Did that unexpected EOF correspond to a connection attempt that gave
up based on time?
 
-Kevin




Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread elias ghanem
Hi,
Thanks for your answer,
Concerning the second point, each DB has different tables that are logically
related (for ex., tables for configuration, tables for business...), plus I'm
planning to put the indexes in their own tablespaces.
Concerning the disks, they will maybe be stored on multiple disks (but surely
not 200-300). So I'm just wondering if this big number of tablespaces on the
same DB server may cause problems,
Thanks again.

-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Richard Huxton
Sent: Friday, February 26, 2010 1:44 PM
To: elias ghanem
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Multiple data base on same server

On 26/02/10 09:37, elias ghanem wrote:
 Hi,

 I'm using postgresql 8.4

 I need to install multiple postgresql dbs on one server but I have some
 questions:

 -Are there any problems (performance-wise or otherwise) if I have 10 to 15 DBs
 on the same server?

Clearly that's going to depend on what they're all doing and how big a 
server you have. There's no limitation in PostgreSQL that stops you though.

 -Each DB needs 10 tablespaces, so if I create 10 different tablespaces for
 each DB I will have 100 to 150 tablespaces on the same server. So can this
 also cause any problems?

Do you have 200-300+ disks to put these tablespaces on? If not, I'm not 
clear what you are trying to do. Why does each DB need 10 tablespaces?

-- 
   Richard Huxton
   Archonet Ltd



Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread Craig James

Richard Huxton wrote:

On 26/02/10 12:45, elias ghanem wrote:

Hi,
Thanks for your answer,
Concerning the second point, each DB has different tables that are logically
related (for ex., tables for configuration, tables for business...), plus I'm
planning to put the indexes in their own tablespaces.
Concerning the disks, they will maybe be stored on multiple disks (but surely
not 200-300). So I'm just wondering if this big number of tablespaces on the
same DB server may cause problems,


If the tablespaces aren't on different disks, I'm not sure what the 
point is.


Our policy is that *every* database has its own tablespace.  It doesn't cost 
you anything, and it gives you great flexibility if you add new disks.  You can 
easily move an entire database, or a bunch of databases, by just moving the 
data and pointing to the new location with symlinks.  Once you put a bunch of 
databases into a single tablespace, moving subsets of them becomes very 
difficult.
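
A hedged sketch of that policy (paths and database names are invented):

  CREATE TABLESPACE app1_ts LOCATION '/srv/pg_tblspc/app1';
  CREATE DATABASE app1 TABLESPACE = app1_ts;

Moving app1 to a new disk later is then mostly a matter of stopping the 
server, copying the tablespace directory, and re-pointing the symlink under 
pg_tblspc, as described above.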

It also makes it really easy to find who is using resources.

We operate about 450 databases spread across several servers.  Postgres has no 
trouble at all managing hundreds of databases.

Craig



Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread elias ghanem
Ok thanks guys for your time

-Original Message-
From: Craig James [mailto:craig_ja...@emolecules.com] 
Sent: Friday, February 26, 2010 4:34 PM
To: Richard Huxton
Cc: elias ghanem; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Multiple data base on same server

Richard Huxton wrote:
 On 26/02/10 12:45, elias ghanem wrote:
 Hi,
 Thanks for your answer,
 Concerning the second point, each DB has different tables that are logically
 related (for ex., tables for configuration, tables for business...), plus I'm
 planning to put the indexes in their own tablespaces.
 Concerning the disks, they will maybe be stored on multiple disks (but surely
 not 200-300). So I'm just wondering if this big number of tablespaces on the
 same DB server may cause problems,
 
 If the tablespaces aren't on different disks, I'm not sure what the 
 point is.

Our policy is that *every* database has its own tablespace.  It doesn't cost
you anything, and it gives you great flexibility if you add new disks.  You
can easily move an entire database, or a bunch of databases, by just moving
the data and pointing to the new location with symlinks.  Once you put a bunch
of databases into a single tablespace, moving subsets of them becomes very
difficult.

It also makes it really easy to find who is using resources.

We operate about 450 databases spread across several servers.  Postgres has
no trouble at all managing hundreds of databases.

Craig



Re: [PERFORM] Multiple data base on same server

2010-02-26 Thread Ing. Marcos Ortiz Valmaseda

elias ghanem wrote:


Hi,

I’m using postgresql 8.4

I need to install multiple postgresql dbs on one server but I have 
some questions:


-Is there any problems (performance wise or other) if I have 10 to 15 
DBs on the same server?


-Each DB needs 10 tablespaces, so if I create 10 different tablespaces 
for each DB I will have 100 to 150 table space on the same server. So 
can this also cause any problems?


Thanks

It depends on the features of the server. If it is a good server, for 
example with 16 GB to 32 GB of RAM and 8 to 16 processors, with a good SAN 
with RAID-1 for the pg_xlog directory and RAID-10 for the $PG_DATA 
(using ZFS if you are on Solaris or FreeBSD, and xfs or ext3 on 
Linux), on a 64-bit operating system, I think that this load can be 
supported.


There are installations of PostgreSQL with more than 400 DBs, but the 
environment is distributed across several servers.
About the tablespaces: is it really necessary to have 10 tablespaces on each 
database? Normally, you can move the table or tables with the most 
activity to a fast disk array (I'm thinking of an SSD array), use another 
tablespace for the indexes if you have many, and for example with 
pl/proxy you could handle the partitioning of your data.
There is no need to have 100 or 150 tablespaces on the same 
server. You can spread this across a SAN, and you can have two or more main 
PostgreSQL servers and several slaves with the data replicated in 
case of data corruption on the main servers.
Remember to review the performance configuration of the PostgreSQL 
servers: work_mem, shared_buffers, etc.
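
One quick way to review those from psql (a sketch; add whatever settings you 
care about to the list):

  SELECT name, setting, unit
    FROM pg_settings
   WHERE name IN ('shared_buffers', 'work_mem',
                  'checkpoint_segments', 'effective_cache_size');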

Regards, and I hope these comments help you.




Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Tory M Blue
On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Tory M Blue  wrote:

 2010-02-25 22:53:13 PST LOG: checkpoint starting: time
 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected
 EOF on client connection
 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155
 buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15
 recycled; write=150.045 s, sync=0.000 s, total=150.046 s

 Did that unexpected EOF correspond to a connection attempt that gave
 up based on time?

 -Kevin

Kevin

Good question; I'm unclear what that was. I mean, it's a LOG, so not a
client connection; that really kind of confused me. I don't normally
see an EOF on a client connection, and an EOF on a client connection
from [local] is really, really weird.

Tory



Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Ben Chobot
On Feb 26, 2010, at 11:23 AM, Tory M Blue wrote:

 On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner
 kevin.gritt...@wicourts.gov wrote:
 Tory M Blue  wrote:
 
 2010-02-25 22:53:13 PST LOG: checkpoint starting: time
 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected
 EOF on client connection
 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155
 buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15
 recycled; write=150.045 s, sync=0.000 s, total=150.046 s
 
 Did that unexpected EOF correspond to a connection attempt that gave
 up based on time?
 
 -Kevin
 
 Kevin
 
 Good question; I'm unclear what that was. I mean, it's a LOG, so not a
 client connection; that really kind of confused me. I don't normally
 see an EOF on a client connection, and an EOF on a client connection
 from [local] is really, really weird.

We see that from our monitoring software testing port 5432.



Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Tory M Blue
2010/2/25 Devrim GÜNDÜZ dev...@gunduz.org:
 On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:
 shared_buffers = 1500MB

 Some people tend to increase this to 2.2 GB (32-bit) or 4-6 GB (64-bit),
 if needed. Please note that more shared_buffers will lead to more
 pressure on the bgwriter, but it has lots of benefits, too.

 work_mem = 100MB

 This is too much. Since you have 300 connections, you will probably swap
 because of this setting, since each connection may use this much
 work_mem. The rule of thumb is to set this to a lower general value
 (say, 1-2 MB), and set it per-query when needed.

I'm slightly confused. Most things I've read, including running
pg_tune for grins, put this around 100MB (98MB for pgtune). 1-2MB just
seems really low to me. And ignore the 300 connections, that's an upper
limit; I usually run a max of 40-45 connections, typically around 20
connections per sec.


Also, is there a way to log whether any deadlocks are happening? (I'm
not seeing them in the logs.)
deadlock_timeout = 5s

Tory



Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Tory M Blue
2010/2/25 Devrim GÜNDÜZ dev...@gunduz.org:
 On Thu, 2010-02-25 at 23:01 -0800, Tory M Blue wrote:

 Checkpoint_timeout is at the default, which looks like 5 mins (300
 seconds). And that is obviously why I have such a discrepancy between time
 reached and requested.

 If you have a high load, you may want to start tuning with 15 minutes,
 and bump it to 30 mins if needed. Also, you may want to decrease the
 segments value based on your findings, since increasing only one of them won't
 help you a lot.

 As I wrote before, pg_stat_bgwriter is your friend here.
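
For reference, a minimal query against that view is sketched below (column 
names as of 8.4; interpreting the ratios is the interesting part):

  SELECT checkpoints_timed, checkpoints_req,
         buffers_checkpoint, buffers_clean,
         maxwritten_clean, buffers_backend, buffers_alloc
    FROM pg_stat_bgwriter;

Lots of checkpoints_req relative to checkpoints_timed, or a large
buffers_backend share, are the usual signs that the checkpoint/bgwriter
settings need attention.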

Actually these servers have almost no load. Really, they run very cool.

Load Average (for the day):
Cur: 0.16 Load
Avg: 0.22 Load

So I don't think it's load. No network issues (that I've found), and
while the server will eventually eat all the memory, it's currently
sitting with 4 GB free.

Mem Total (for the day):
Cur: 25.55 GBytes
Avg: 25.55 GBytes
Max: 25.55 GBytes
Mem Available (for the day):
Cur: 4.72 GBytes
Avg: 5.15 GBytes
Max: 5.71 GBytes
Bytes Used (for the day):
Cur: 20.83 GBytes
Avg: 20.40 GBytes
Max: 21.20 GBytes


Thanks for your pointers; I'm continuing to look and will do some
tests today. I also hear you about fsync and will do some testing here
to see why this was set (it's been running for 4-6 years); that setting
was probably set way back in the day and has survived each
upgrade/hardware/storage update.

Tory



Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Jorge Montero


 Tory M Blue tmb...@gmail.com 02/26/10 12:52 PM 

 This is too much. Since you have 300 connections, you will probably swap
 because of this setting, since each connection may use this much
 work_mem. The rule of thumb is to set this to a lower general value
 (say, 1-2 MB), and set it per-query when needed.

 I'm slightly confused. Most things I've read, including running
 pg_tune for grins, put this around 100MB (98MB for pgtune). 1-2MB just
 seems really low to me. And ignore the 300 connections, that's an upper
 limit; I usually run a max of 40-45 connections, typically around 20
 connections per sec.

It has been said on the list before that pg_tune is extremely aggressive when 
it comes to work_mem.

100MB is just a whole lot of memory for something that is dedicated mostly to 
sorting. Some of my relatively heavy duty queries, which end up manipulating 
hundreds of thousands of rows in subqueries, do just fine with quite a bit less.

1-2MB is good enough for many families of queries, but it's hard to say what 
the right default should be for you. The right number can be estimated by 
running explain analyze on your most common queries, with parameters that are 
representative of regular use, and seeing how much memory they actually claim to 
use. In my case, for example, most of my queries do just fine with 10 MB, while 
the reporting queries that accumulate quite a bit of data request up to 60MB.
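
A hedged example of what to look for (table and column names are made up; the
Sort Method line is the part that matters):

  SET work_mem = '4MB';
  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY created_at;
  -- "Sort Method:  external merge  Disk: 52000kB"  -> spilled to disk, wants more work_mem
  -- "Sort Method:  quicksort  Memory: 3100kB"      -> fits comfortably in work_mem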

If your average query needs 100 MB, it'd still mean that 40 connections take 4 
gigs worth of work memory, which might be better spent caching the database.

Now, if your system is so over-specced that wasting a few gigs of RAM doesn't 
impact your performance one bit, then you might not have to worry about this at 
all.





Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Fernando Hevia
 

 -Mensaje original-
 De: Tory M Blue
 
 2010/2/25 Devrim GÜNDÜZ dev...@gunduz.org:
  On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:
  shared_buffers = 1500MB
 
  Some people tend to increase this to 2.2 GB (32-bit) or 4-6 GB (64-bit), 
  if needed. Please note that more shared_buffers will lead to more 
  pressure on the bgwriter, but it has lots of benefits, too.
 
  work_mem = 100MB
 
  This is too much. Since you have 300 connections, you will probably 
  swap because of this setting, since each connection may use this much 
  work_mem. The rule of thumb is to set this to a lower general 
  value (say, 1-2 MB), and set it per-query when needed.
 
 I'm slightly confused. Most things I've read, including 
 running pg_tune for grins, put this around 100MB (98MB for 
 pgtune). 1-2MB just seems really low to me. And ignore the 
 300 connections, that's an upper limit; I usually run a max of 
 40-45 connections, typically around 20 connections per sec.
 

If you have a maximum of 45 users simultaneously running a rather complex query
that requires... say, 3 sorts, that's 45 x 100MB x 3 = 13.5 GB of RAM used up
because of this particular work_mem setting. That doesn't mean it will happen,
just that your settings make this scenario possible.

So, to avoid this scenario, the suggestion is to keep work_mem low and
adjust it per query if required. I find 1-2 MB too low for my particular
requirements, so I have it at 8 MB. Anyway, given that your server has a lot
of RAM, your setting might make sense. But all that memory would probably be
better used if it were available for caching.
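
For what it's worth, a minimal sketch of the per-query approach Devrim
mentioned (the 64MB value is purely illustrative):

  SET work_mem = '64MB';   -- raise it only for this session before the heavy query
  -- run the big reporting query here
  RESET work_mem;          -- drop back to the server-wide default

Inside a transaction you can use SET LOCAL work_mem instead, so the override
ends at COMMIT.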

 
 Also, is there a way to log whether any deadlocks are 
 happening? (I'm not seeing them in the logs.) deadlock_timeout = 5s
 

In postgresql.conf:
log_lock_waits = on   # log lock waits >= deadlock_timeout


Regards,
Fernando.




Re: [PERFORM] bgwriter, checkpoints, curious (seeing delays)

2010-02-26 Thread Tory M Blue
On Fri, Feb 26, 2010 at 11:49 AM, Jorge Montero
jorge_mont...@homedecorators.com wrote:


 Tory M Blue tmb...@gmail.com 02/26/10 12:52 PM 

 This is too much. Since you have 300 connections, you will probably swap
 because of this setting, since each connection may use this much
 work_mem. The rule of thumb is to set this to a lower general value
 (say, 1-2 MB), and set it per-query when needed.


 1-2MB is good enough for many families of queries, but it's hard to say what 
 the right default should be for you. The right number can be estimated by 
 running explain analyze on your most common queries, with parameters that are 
 representative of regular use, and seeing how much memory they actually claim to 
 use. In my case, for example, most of my queries do just fine with 10 MB, 
 while the reporting queries that accumulate quite a bit of data request up to 
 60MB.

 If your average query needs 100 MB, it'd still mean that 40 connections take 
 4 gigs worth of work memory, which might be better spent caching the database.

Ya, my boxes are pretty well stacked, but a question: how does one get
the memory usage of a query? You say to look at explain analyze, but
that gives timing and costs; is one of those numbers memory, or do I
have to take values and do some math?

--
 Function Scan on listings_search  (cost=0.00..260.00 rows=1000
width=108) (actual time=904.374..904.383 rows=10 loops=1)
 Total runtime: 904.411 ms

Thanks
Tory

Also, I don't think this 5-second thing is the DB. It sure is not checkpoints.



Re: [PERFORM] SSD + RAID

2010-02-26 Thread Bruce Momjian

I have added documentation about the ATAPI drive flush command, and the
typical SSD behavior.

---

Greg Smith wrote:
 Ron Mayer wrote:
  Bruce Momjian wrote:

  Agreed, though I thought the problem was that SSDs lie about their
  cache flush like SATA drives do, or is there something I am missing?
  
 
  There's exactly one case I can find[1] where this century's IDE
  drives lied more than any other drive with a cache:
 
 Ron is correct that the problem of mainstream SATA drives accepting the 
 cache flush command but not actually doing anything with it is long gone 
 at this point.  If you have a regular SATA drive, it almost certainly 
 supports proper cache flushing.  And if your whole software/storage 
 stack understands all that, you should not end up with corrupted data 
 just because there's a volatile write cache in there.
 
 But the point of this whole testing exercise coming back into vogue 
 again is that SSDs have returned this negligent behavior to the 
 mainstream.  See 
 http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion 
 of this in a ZFS context just last month.  There are many documented 
 cases of Intel SSDs that will fake a cache flush, such that the only way 
 to get good reliable writes is to totally disable their write 
 caches--at which point performance is so bad you might as well have 
 gotten a RAID10 setup instead (and longevity is toast too).
 
 This whole area remains a disaster area and extreme distrust of all the 
 SSD storage vendors is advisable at this point.  Basically, if I don't 
 see the capacitor responsible for flushing outstanding writes, and get a 
 clear description from the manufacturer of how the cached writes are going 
 to be handled in the event of a power failure, at this point I have to 
 assume the answer is badly and your data will be eaten.  And the 
 prices for SSDs that meet that requirement are still quite steep.  I 
 keep hoping somebody will address this market at something lower than 
 the standard enterprise prices.  The upcoming SandForce designs seem 
 to have thought this through correctly:  
 http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=6  But the 
 product's not out to the general public yet (just like the Seagate units 
 that claim to have capacitor backups--I heard a rumor those are also 
 Sandforce designs actually, so they may be the only ones doing this 
 right and aiming at a lower price).
 
 -- 
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us
 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/wal.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/wal.sgml,v
retrieving revision 1.62
diff -c -c -r1.62 wal.sgml
*** doc/src/sgml/wal.sgml	20 Feb 2010 18:28:37 -	1.62
--- doc/src/sgml/wal.sgml	27 Feb 2010 01:37:03 -
***************
*** 59,66 ****
     same concerns about data loss exist for write-back drive caches as
     exist for disk controller caches.  Consumer-grade IDE and SATA drives are
     particularly likely to have write-back caches that will not survive a
!    power failure.  Many solid-state drives also have volatile write-back
!    caches.  To check write caching on <productname>Linux</> use
     <command>hdparm -I</>;  it is enabled if there is a <literal>*</> next
     to <literal>Write cache</>; <command>hdparm -W</> to turn off
     write caching.  On <productname>FreeBSD</> use
--- 59,69 ----
     same concerns about data loss exist for write-back drive caches as
     exist for disk controller caches.  Consumer-grade IDE and SATA drives are
     particularly likely to have write-back caches that will not survive a
!    power failure, though <acronym>ATAPI-6</> introduced a drive cache
!    flush command that some file systems use, e.g. <acronym>ZFS</>.
!    Many solid-state drives also have volatile write-back
!    caches, and many do not honor cache flush commands by default.
!    To check write caching on <productname>Linux</> use
     <command>hdparm -I</>;  it is enabled if there is a <literal>*</> next
     to <literal>Write cache</>; <command>hdparm -W</> to turn off
     write caching.  On <productname>FreeBSD</> use
