On Tuesday 17 August 2010 10:29:10 Greg Smith wrote:
> Andres Freund wrote:
> > An fsync() equals a barrier so it has the effect of stopping
> > reordering around it - especially on systems with larger multi-disk
> > arrays that's pretty expensive.
> > You can achieve surprising speedups, at least i
Bruce Momjian wrote:
> Scott Carey wrote:
> > Don't ever have WAL and data on the same OS volume as ext3.
> > ...
> > One partition for WAL, one for data. If using ext3 this is essentially
> > a performance requirement no matter how your array is set up underneath.
> Do we need to document this?
No
Andres Freund wrote:
> An fsync() equals a barrier so it has the effect of stopping
> reordering around it - especially on systems with larger multi-disk
> arrays that's pretty expensive.
> You can achieve surprising speedups, at least in my experience, by
> forcing the kernel to start writing out pages *wi
Scott Carey wrote:
> Don't ever have WAL and data on the same OS volume as ext3.
>
> If data=writeback, performance will be fine, data integrity will be ok
> for WAL, but data integrity will not be sufficient for the data
> partition. If data=ordered, performance will be very bad, but data
> inte
On Mon, Aug 16, 2010 at 04:54:19PM -0400, Greg Smith wrote:
> Andres Freund wrote:
> >A new checkpointing logic + a new syncing logic
> >(prepare_fsync() earlier and then fsync() later) would be a nice
> >thing. Do you plan to work on that?
> The background writer already caches fsync calls into a
Andres Freund wrote:
> A new checkpointing logic + a new syncing logic
> (prepare_fsync() earlier and then fsync() later) would be a nice
> thing. Do you plan to work on that?
The background writer already caches fsync calls into a queue, so the
prepare step you're thinking needs to be there is a
On Mon, Aug 16, 2010 at 04:13:22PM -0400, Greg Smith wrote:
> Andres Freund wrote:
> >Or use -o sync. Or configure a ridiculously low dirty_memory amount
> >(which has a problem on large systems because 1% can still be too
> >much. Argh.)...
>
> -o sync completely trashes performance, and trying to
Andres Freund wrote:
> Or use -o sync. Or configure a ridiculously low dirty_memory amount
> (which has a problem on large systems because 1% can still be too
> much. Argh.)...
-o sync completely trashes performance, and trying to set the
dirty_ratio values to even 1% doesn't really work due to th
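One way around the "1% can still be too much" problem: kernels from 2.6.29 on also accept byte-valued limits (vm.dirty_bytes and vm.dirty_background_bytes), which can be set far below what the percentage knobs allow. The values below are purely illustrative, not a recommendation:

```
# /etc/sysctl.conf -- illustrative values only
# Byte-based limits avoid the "1% of RAM is still gigabytes"
# problem with vm.dirty_ratio on large-memory machines.
vm.dirty_background_bytes = 67108864   # start background writeback at 64MB
vm.dirty_bytes = 268435456             # block writers at 256MB dirty
```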
On Mon, Aug 16, 2010 at 01:46:21PM -0400, Greg Smith wrote:
> Scott Carey wrote:
> >This is because an fsync on ext3 flushes _all dirty pages in the file
> >system_ to disk, not just those for the file being fsync'd.
> >One partition for WAL, one for data. If using ext3 this is
> >essentially a p
Scott Carey wrote:
> This is because an fsync on ext3 flushes _all dirty pages in the file system_
> to disk, not just those for the file being fsync'd.
> One partition for WAL, one for data. If using ext3 this is essentially a performance requirement no matter how your array is set up underneath.
> Don't ever have WAL and data on the same OS volume as ext3.
> If data=writeback, performance will be fine, data integrity will be ok for WAL,
> but data integrity will not be sufficient for the data partition.
> If data=ordered, performance will be very bad, but data integrity will be OK.
> This is beca
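Put concretely, the layout Scott describes might look like this in /etc/fstab. The device names and mount points below are made up; the idea is that the WAL can tolerate data=writeback (WAL records carry their own CRCs), while the data partition keeps data=ordered:

```
# /etc/fstab -- hypothetical layout following the advice above
# WAL on its own ext3 volume: data=writeback is acceptable here.
/dev/sdb1  /var/lib/pgsql/pg_xlog  ext3  noatime,data=writeback  0 2
# Data files on a separate volume: keep the default data=ordered.
/dev/sdc1  /var/lib/pgsql/data     ext3  noatime,data=ordered    0 2
```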
Bruce Momjian wrote:
> We recommend 'data=writeback' for ext3 in our docs
Only for the WAL though, which is fine, and I think spelled out clearly
enough in the doc section you quoted. Ken's system has one big RAID
volume, which means he'd be mounting the data files with 'writeback'
too; th
Greg Smith wrote:
> > 2) Should I configure the ext3 file system with noatime and/or
> > data=writeback or data=ordered? My controller has a battery, the
> > logical drive has write cache enabled (write-back), and the physical
> > devices have write cache disabled (write-through).
>
> data=ord
>>> As others said, RAID6 is RAID5 + a hot spare.
>>
>> No. RAID6 is NOT RAID5 plus a hot spare.
>
> The original phrase was that RAID 6 was like RAID 5 with a hot spare
> ALREADY BUILT IN.
Built-in, or not - it is neither. It is more than that, actually. RAID
6 is like RAID 5 in that it uses pari
On Sun, Aug 8, 2010 at 12:46 AM, Scott Carey wrote:
>
> On Aug 5, 2010, at 4:09 PM, Scott Marlowe wrote:
>
>> On Thu, Aug 5, 2010 at 4:27 PM, Pierre C wrote:
>>>
>>>> 1) Should I switch to RAID 10 for performance? I see things like "RAID 5
>>>> is bad for a DB" and "RAID 5 is slow with <= 6 driv
On Aug 5, 2010, at 4:09 PM, Scott Marlowe wrote:
> On Thu, Aug 5, 2010 at 4:27 PM, Pierre C wrote:
>>
>>> 1) Should I switch to RAID 10 for performance? I see things like "RAID 5
>>> is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I see little on
>>> RAID 6.
>>
>> As others said, R
> Yes, I know that. I am very familiar with how RAID6 works. RAID5
> with the hot spare already rebuilt / built in is a good enough answer
> for management where big words like parity might scare some PHBs.
>
>> In terms of storage cost, it IS like paying for RAID5 + a hot spare,
>> but the prote
On Fri, Aug 6, 2010 at 11:32 AM, Justin Pitts wrote:
>>>> As others said, RAID6 is RAID5 + a hot spare.
>>>
>>> No. RAID6 is NOT RAID5 plus a hot spare.
>>
>> The original phrase was that RAID 6 was like RAID 5 with a hot spare
>> ALREADY BUILT IN.
>
> Built-in, or not - it is neither. It is more
On Fri, Aug 6, 2010 at 3:17 AM, Matthew Wakeling wrote:
> On Thu, 5 Aug 2010, Scott Marlowe wrote:
>>
>> RAID6 is basically RAID5 with a hot spare already built into the
>> array.
>
> On Fri, 6 Aug 2010, Pierre C wrote:
>>
>> As others said, RAID6 is RAID5 + a hot spare.
>
> No. RAID6 is NOT RAID5
On Thu, 5 Aug 2010, Scott Marlowe wrote:
> RAID6 is basically RAID5 with a hot spare already built into the
> array.
On Fri, 6 Aug 2010, Pierre C wrote:
> As others said, RAID6 is RAID5 + a hot spare.
No. RAID6 is NOT RAID5 plus a hot spare.
RAID5 uses a single parity datum (XOR) to ensure protec
On 06/08/10 12:31, Mark Kirkwood wrote:
> On 06/08/10 11:58, Alan Hodgson wrote:
>> On Thursday, August 05, 2010, Mark Kirkwood wrote:
>>> Normally I'd agree with the others and recommend RAID10 - but you say
>>> you have an OLAP workload - if it is *heavily* read biased you may get
>>> better performance with
On 06/08/10 11:58, Alan Hodgson wrote:
> On Thursday, August 05, 2010, Mark Kirkwood wrote:
>> Normally I'd agree with the others and recommend RAID10 - but you say
>> you have an OLAP workload - if it is *heavily* read biased you may get
>> better performance with RAID5 (more effective disks to read f
On Thursday, August 05, 2010, Mark Kirkwood wrote:
> Normally I'd agree with the others and recommend RAID10 - but you say
> you have an OLAP workload - if it is *heavily* read biased you may get
> better performance with RAID5 (more effective disks to read from).
> Having said that, your sequent
On 06/08/10 06:28, Kenneth Cox wrote:
> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM
> running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6
> SATA 7500RPM disks in RAID 6, and for the OLAP workload it feels*
> slow. I have 6 more disks to add, and the RAID has to be
On Thu, Aug 5, 2010 at 5:13 PM, Dave Crooke wrote:
> Definitely switch to RAID-10 - it's not merely that it's a fair bit
> faster on normal operations (less seek contention), it's **WAY** faster than
> any parity based RAID (RAID-2 through RAID-6) in degraded mode when you lose
> a disk and hav
Definitely switch to RAID-10 - it's not merely that it's a fair bit
faster on normal operations (less seek contention), it's **WAY** faster than
any parity based RAID (RAID-2 through RAID-6) in degraded mode when you lose
a disk and have to rebuild it. This is something many people don't test fo
On Thu, Aug 5, 2010 at 4:27 PM, Pierre C wrote:
>
>> 1) Should I switch to RAID 10 for performance? I see things like "RAID 5
>> is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I see little on
>> RAID 6.
>
> As others said, RAID6 is RAID5 + a hot spare.
>
> Basically when you UPDATE a
On 8/5/10 11:28 AM, Kenneth Cox wrote:
> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM
> running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA
> 7500RPM disks in RAID 6, and for the OLAP workload it feels* slow
> My current performance is 85MB/s write, 151 MB/s
> 1) Should I switch to RAID 10 for performance? I see things like "RAID
> 5 is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I see
> little on RAID 6.
As others said, RAID6 is RAID5 + a hot spare.
Basically when you UPDATE a row, at some point postgres will write the
page which con
Kenneth Cox wrote:
1) Should I switch to RAID 10 for performance? I see things like
"RAID 5 is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I
see little on RAID 6. RAID 6 was the original choice for more usable
space with good redundancy. My current performance is 85MB/s write,
1
On Thu, Aug 5, 2010 at 12:28 PM, Kenneth Cox wrote:
> I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM running
> CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA 7500RPM
> disks in RAID 6, and for the OLAP workload it feels* slow. I have 6 more
> disks to add, and
On Thursday, August 05, 2010, "Kenneth Cox" wrote:
> 1) Should I switch to RAID 10 for performance? I see things like "RAID 5
> is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I see little
> on RAID 6. RAID 6 was the original choice for more usable space with
> good redundancy. My cu
I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM running
CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA 7500RPM
disks in RAID 6, and for the OLAP workload it feels* slow. I have 6 more
disks to add, and the RAID has to be rebuilt in any case, but first I