On Thu, Jun 26, 2008 at 9:49 AM, Peter T. Breuer <[EMAIL PROTECTED]> wrote:
> "Also sprach Merlin Moncure:"
>> As discussed down thread, software raid still gets benefits of
>> write-back caching on the raid controller...but there are a couple of
>
> (I wish I knew what write-back caching was!)
hardware raid controllers generally have some dedicated memory for
caching.  the controllers can be configured in one of two modes (the
jargon is so common it's almost standard):

write back: the raid controller can lie to the host o/s.  when the o/s
asks the controller to sync, the controller may hold the data in cache
(for a time) and acknowledge immediately.

write through: the raid controller cannot lie.  all sync requests must
pass through to disk.

The thinking is, the bbu (battery backup unit) on the controller can
hold scheduled writes in memory (for a time) and replay them to disk
when the server restarts after a power failure.  This is a reasonable
compromise between data integrity and performance.  'write back'
caching provides insane burst IOPS (because you are writing to
controller cache) and somewhat improved sustained IOPS because the
controller is reorganizing writes on the fly in (hopefully) optimal
fashion.

> This imposes a considerable extra resource burden. It's a mystery to me
> However the lack of extra buffering is really deliberate (double
> buffering is a horrible thing in many ways, not least because of the
<snip>

completely unconvincing.  the overhead of the various cache layers is
minuscule compared to a full fault to disk, which requires a seek that
is several orders of magnitude slower.  The linux software raid
algorithms are highly optimized, and run on a presumably much faster
cpu than what the controller has.  However, there is still some extra
oomph you can get out of letting the raid controller do what software
raid can't...namely delay the sync for a time.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
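[Editor's note: the write-back vs write-through distinction above can be sketched as a toy simulation. This is not real controller firmware; the class names and the 5 ms seek cost are invented for illustration only.]

```python
import time


class Disk:
    """Simulated spinning disk: every write pays a seek penalty."""
    SEEK_COST = 0.005  # 5 ms per write -- an assumed, illustrative figure

    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        time.sleep(self.SEEK_COST)  # the expensive part: a physical seek
        self.blocks[block] = data


class Controller:
    """Toy RAID controller cache showing the two sync modes."""

    def __init__(self, disk, write_back):
        self.disk = disk
        self.write_back = write_back
        self.cache = {}  # dirty blocks held in (battery-backed) memory

    def sync(self, block, data):
        if self.write_back:
            # 'lie' to the host o/s: ack as soon as data hits cache
            self.cache[block] = data
        else:
            # write through: the sync must reach the platters
            self.disk.write(block, data)

    def flush(self):
        # replay cached writes to disk, e.g. after power is restored;
        # this is the role the bbu makes safe
        for block, data in self.cache.items():
            self.disk.write(block, data)
        self.cache.clear()


if __name__ == "__main__":
    # burst of 100 syncs: write-back acks from cache, write-through seeks
    for wb in (True, False):
        ctrl = Controller(Disk(), write_back=wb)
        start = time.time()
        for i in range(100):
            ctrl.sync(i, b"x")
        ctrl.flush()
        mode = "write back " if wb else "write through"
        print(mode, round(time.time() - start, 3), "s")
```

The burst itself completes almost instantly in write-back mode because the cost of the seeks is deferred to `flush()`; in write-through mode every sync pays the seek up front, which is the "insane burst IOPS" difference described above.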