Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-31 Thread Steinar H. Gunderson
On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:
> Yeah, I've never seen a way to RAID-1 more than 2 drives either.

pannekake:~ grep -A 1 md0 /proc/mdstat 
md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]
  64128 blocks [3/3] [UUU]

It's not a big device, but I can assure you it exists :-)
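
(For reference: a three-way md mirror like the one above can be created with
a single mdadm command. A minimal sketch, with illustrative device paths not
taken from the original post:)

  # build a 3-way RAID1: every member holds a full copy of the data
  mdadm --create /dev/md0 --level=1 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1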

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-31 Thread Sander Steffann

Hi,

On 1-Jun-2007, at 1:39, Steinar H. Gunderson wrote:

> On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:
>> Yeah, I've never seen a way to RAID-1 more than 2 drives either.
>
> pannekake:~ grep -A 1 md0 /proc/mdstat
> md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]
>       64128 blocks [3/3] [UUU]
>
> It's not a big device, but I can assure you it exists :-)


I talked to someone yesterday who did a 10- or 11-way RAID1 with Linux
MD for high-performance video streaming. Seemed to work very well.


- Sander




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Peter Childs

On 30/05/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> On Wed, 30 May 2007, Jonah H. Harris wrote:
>
>> On 5/29/07, Luke Lonergan [EMAIL PROTECTED] wrote:
>>> AFAIK you can't RAID1 more than two drives, so the above doesn't make
>>> sense to me.
>>
>> Yeah, I've never seen a way to RAID-1 more than 2 drives either.  It
>> would have to be his first one:
>>
>> D1 + D2 = MD0 (RAID 1)
>> D3 + D4 = MD1 ...
>> D5 + D6 = MD2 ...
>> MD0 + MD1 + MD2 = MDF (RAID 0)
>
> I don't know what the failure mode ends up being, but on Linux I had no
> problems creating what appears to be a massively redundant (but small)
> array:
>
> md0 : active raid1 sdo1[10](S) sdn1[8] sdm1[7] sdl1[6] sdk1[5] sdj1[4]
> sdi1[3] sdh1[2] sdg1[9] sdf1[1] sde1[11](S) sdd1[0]
>       896 blocks [10/10] [UUUUUUUUUU]
>
> David Lang



Good point. Also, if you had RAID 1 with 3 drives with some bit errors, at
least you can take a vote on what's right. Whereas if you only have 2 and
they disagree, how do you know which is right, other than pick one and
hope? But whatever the case, it will be slower to keep in sync on a
heavy-write system.

Peter.


Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Stephen Frost
* Peter Childs ([EMAIL PROTECTED]) wrote:
> Good point. Also, if you had RAID 1 with 3 drives with some bit errors, at
> least you can take a vote on what's right. Whereas if you only have 2 and
> they disagree, how do you know which is right, other than pick one and
> hope? But whatever the case, it will be slower to keep in sync on a
> heavy-write system.

I'm not sure, but I don't think most RAID1 systems do reads against all
drives and compare the results before returning it to the caller...  I'd
be curious if I'm wrong.

Thanks,

Stephen




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Gregory Stark

Jonah H. Harris [EMAIL PROTECTED] writes:

> On 5/29/07, Luke Lonergan [EMAIL PROTECTED] wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make
>> sense to me.

Sure you can. In fact it's a very common backup strategy. You build a
three-way mirror, and when it comes time to back it up you break it into a
two-way mirror and back up the orphaned array at your leisure. When it's done
you re-add it and rebuild the degraded array. Good RAID controllers can
rebuild the array at low priority, squeezing in the reads during idle cycles.
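
(For concreteness, a sketch of that break-off cycle with Linux md, assuming a
three-way mirror on md0 and that /dev/sdc1 is the leg being split off; device
names are illustrative:)

  mdadm /dev/md0 --fail /dev/sdc1    # detach the third mirror leg
  mdadm /dev/md0 --remove /dev/sdc1
  # ... back up /dev/sdc1 at leisure, then return it to the array:
  mdadm /dev/md0 --add /dev/sdc1     # kicks off a background rebuild
  # the rebuild rate can be throttled so it squeezes into idle cycles:
  echo 5000 > /proc/sys/dev/raid/speed_limit_max   # KB/s per device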

I don't think you normally do it for performance though, since there's more to
be gained by using wider stripes. In theory you should get the same boost on
reads as widening your stripes, but of course you get no benefit on writes. And
I'm not sure RAID controllers optimize RAID1 accesses well in practice either.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Luke Lonergan
Hi Peter,

On 5/30/07 12:29 AM, Peter Childs [EMAIL PROTECTED] wrote:

> Good point. Also, if you had RAID 1 with 3 drives with some bit errors, at
> least you can take a vote on what's right. Whereas if you only have 2 and
> they disagree, how do you know which is right, other than pick one and
> hope? But whatever the case, it will be slower to keep in sync on a
> heavy-write system.

Much better to get a RAID system that checksums blocks so that the good copy
is known.  Solaris ZFS does that, as do high-end systems from EMC and HDS.

- Luke





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Michael Stone

On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:

> On 5/30/07 12:29 AM, Peter Childs [EMAIL PROTECTED] wrote:
>
>> Good point. Also, if you had RAID 1 with 3 drives with some bit errors, at
>> least you can take a vote on what's right. Whereas if you only have 2 and
>> they disagree, how do you know which is right, other than pick one and
>> hope? But whatever the case, it will be slower to keep in sync on a
>> heavy-write system.
>
> Much better to get a RAID system that checksums blocks so that the good copy
> is known.  Solaris ZFS does that, as do high-end systems from EMC and HDS.


I don't see how that's better at all; in fact, it reduces to exactly the
same problem: given two pieces of data which disagree, which is right?
The ZFS hashes do a better job of error detection, but that's still not
the same thing as a voting system (3 copies, 2 of 3 is the correct answer)
to resolve inconsistencies.


Mike Stone



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Luke Lonergan
> I don't see how that's better at all; in fact, it reduces to
> exactly the same problem: given two pieces of data which
> disagree, which is right?

The one that matches the checksum.

- Luke




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Michael Stone

On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:
>> I don't see how that's better at all; in fact, it reduces to
>> exactly the same problem: given two pieces of data which
>> disagree, which is right?
>
> The one that matches the checksum.


And you know the checksum is good, how?

Mike Stone



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Luke Lonergan
It's created when the data is written to both drives.

This is standard stuff, very well proven: try googling 'self healing zfs'.

- Luke

Msg is shrt cuz m on ma treo

 -Original Message-
From:   Michael Stone [mailto:[EMAIL PROTECTED]
Sent:   Wednesday, May 30, 2007 11:11 AM Eastern Standard Time
To: pgsql-performance@postgresql.org
Subject:Re: [PERFORM] setting up raid10 with more than 4 drives

On Wed, May 30, 2007 at 10:36:48AM -0400, Luke Lonergan wrote:
>> I don't see how that's better at all; in fact, it reduces to
>> exactly the same problem: given two pieces of data which
>> disagree, which is right?
>
> The one that matches the checksum.

And you know the checksum is good, how?

Mike Stone




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread PFC
On Wed, 30 May 2007 16:36:48 +0200, Luke Lonergan  
[EMAIL PROTECTED] wrote:



>> I don't see how that's better at all; in fact, it reduces to
>> exactly the same problem: given two pieces of data which
>> disagree, which is right?
>
> The one that matches the checksum.


- postgres tells the OS to write this block
- OS sends block to drives A and B
- drive A happens to be lucky and seeks faster, writes data
- student intern carrying pizzas for senior IT staff trips over power cord*
- boom
- drive B still has old block

Both blocks have a correct checksum, so only a version counter/timestamp
could tell them apart.
Fortunately, if fsync() is honored correctly (did you check?), postgres
will zap such errors in recovery.


Smart RAID1 or 0+1 controllers (including software RAID) will distribute
random reads across both disks (but not writes, obviously).


* = this happened at my old job; yes, they had a very frightening server
room, or more precisely a cave; I never went there, I didn't want to be
the one fired for tripping over the wire...



From the Linux Software RAID HOWTO:

- benchmarking (quite brief!):

http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-9.html#ss9.5

- read "Data Scrubbing" here:
http://gentoo-wiki.com/HOWTO_Install_on_Software_RAID

- yeah, but does it work? (scary)
http://bugs.donarmstrong.com/cgi-bin/bugreport.cgi?bug=405919

 md/sync_action
  This can be used to monitor and control the resync/recovery
  process of MD. In particular, writing "check" here will cause
  the array to read all data blocks and check that they are
  consistent (e.g. parity is correct, or all mirror replicas are
  the same). Any discrepancies found are NOT corrected.

  A count of problems found will be stored in md/mismatch_count.

  Alternately, "repair" can be written, which will cause the same
  check to be performed, but any errors will be corrected.
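
(In practice this is driven through sysfs; a minimal sketch, assuming the
array is md0. Note the count attribute is actually spelled mismatch_cnt,
although the text above calls it mismatch_count:)

  echo check > /sys/block/md0/md/sync_action    # read-only consistency scan
  cat /sys/block/md0/md/mismatch_cnt            # number of discrepancies found
  echo repair > /sys/block/md0/md/sync_action   # same scan, but corrects errors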



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread PFC


Oh, by the way, I saw a nifty patch in the queue:

Find a way to reduce rotational delay when repeatedly writing last WAL page
Currently fsync of WAL requires the disk platter to perform a full  
rotation to fsync again.
One idea is to write the WAL to different offsets that might reduce the  
rotational delay.


	This will not work if the WAL is on RAID1, because two disks never spin  
exactly at the same speed...




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Luke Lonergan

> This is standard stuff, very well proven: try googling 'self healing zfs'.

The first hit on this search is a demo of ZFS detecting corruption of one of
the mirror pair using checksums, very cool:
  
http://www.opensolaris.org/os/community/zfs/demos/selfheal/

The bad drive is pointed out directly using the checksum and the data
integrity is preserved.
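
(The demo boils down to a few zpool commands; a sketch with assumed pool and
device names:)

  zpool create tank mirror c1t0d0 c1t1d0   # checksummed two-way mirror
  zpool scrub tank                         # verify every block's checksum
  zpool status -v tank                     # reports which device held bad copies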

- Luke



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread mark
On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:
>> This is standard stuff, very well proven: try googling 'self healing zfs'.
> The first hit on this search is a demo of ZFS detecting corruption of one of
> the mirror pair using checksums, very cool:
>
> http://www.opensolaris.org/os/community/zfs/demos/selfheal/
>
> The bad drive is pointed out directly using the checksum and the data
> integrity is preserved.

One part is corruption. Another is ordering and consistency. ZFS represents
both RAID-style storage *and* a journal-style file system. I imagine
consistency and ordering is handled through journalling.

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Gregory Stark
Michael Stone [EMAIL PROTECTED] writes:

> On Wed, May 30, 2007 at 07:06:54AM -0700, Luke Lonergan wrote:
>
>> Much better to get a RAID system that checksums blocks so that the good
>> copy is known.  Solaris ZFS does that, as do high-end systems from EMC
>> and HDS.
>
> I don't see how that's better at all; in fact, it reduces to exactly the
> same problem: given two pieces of data which disagree, which is right?

Well, the one where the checksum is correct.

In practice I've never seen a RAID failure due to outright bad data. In my
experience when a drive goes bad it goes really bad and you can't read the
block at all without i/o errors.

In every case where I've seen bad data it was due to bad memory (in one case
bad memory in the RAID controller cache -- that was hell to track down).
Checksums aren't even enough in that case as you'll happily generate a
checksum for the bad data before storing it...

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Rajesh Kumar Mallah

Sorry for posting and disappearing.

i am still not clear what is the best way of throwing more
disks into the system.
does more stripes mean more performance (mostly)?
also is there any rule of thumb about the best stripe size? (8k,16k,32k...)

regds
mallah



On 5/30/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:
>>> This is standard stuff, very well proven: try googling 'self healing zfs'.
>> The first hit on this search is a demo of ZFS detecting corruption of one of
>> the mirror pair using checksums, very cool:
>>
>> http://www.opensolaris.org/os/community/zfs/demos/selfheal/
>>
>> The bad drive is pointed out directly using the checksum and the data
>> integrity is preserved.
>
> One part is corruption. Another is ordering and consistency. ZFS represents
> both RAID-style storage *and* a journal-style file system. I imagine
> consistency and ordering is handled through journalling.
>
> Cheers,
> mark





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Luke Lonergan
Mark,

On 5/30/07 8:57 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> One part is corruption. Another is ordering and consistency. ZFS represents
> both RAID-style storage *and* a journal-style file system. I imagine
> consistency and ordering is handled through journalling.

Yep, and versioning, which answers PFC's scenario.

Short answer: ZFS has a very reliable model that uses checksumming and
journaling along with block versioning to implement self-healing.  There
are others that do some similar things with checksumming on the SAN HW and
cooperation with the filesystem.

- Luke





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread mark
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what is the best way of throwing more
> disks into the system.
> does more stripes mean more performance (mostly)?
> also is there any rule of thumb about the best stripe size? (8k,16k,32k...)

It isn't that simple. RAID1 should theoretically give you the best read
performance. If all you care about is read, then best performance would
be to add more mirrors to your array.

For write performance, RAID0 is the best. I think this is what you mean
by more stripes.

This is where RAID 1+0/0+1 comes in, to reconcile the above. Your question
seems to be: I have a RAID 1+0/0+1 system. Should I add disks onto the 0
part of the array? Or the 1 part of the array?

My conclusion to you would be: both, unless you are certain that your load
is scaled heavily towards read, in which case the 1, or if scaled heavily
towards write, then 0.

Then comes the other factors. Do you want redundancy? Then you want 1.
Do you want capacity? Then you want 0.

There is no single answer for most people.

For me, stripe size is the last decision to make, and may be heavily
sensitive to load patterns. This suggests a trial and error / benchmarking
requirement to determine the optimal stripe size for your use.
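
(With Linux md, the stripe size is the --chunk parameter and is fixed at
creation time, so benchmarking different sizes means recreating the array; a
sketch with assumed device names:)

  # native md raid10: 4 disks, 2 copies of each chunk, 64 KB chunk size
  mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=64 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1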

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-30 Thread Rajesh Kumar Mallah

On 5/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
>> i am still not clear what is the best way of throwing more
>> disks into the system.
>> does more stripes mean more performance (mostly)?
>> also is there any rule of thumb about the best stripe size? (8k,16k,32k...)
>
> It isn't that simple. RAID1 should theoretically give you the best read
> performance. If all you care about is read, then best performance would
> be to add more mirrors to your array.
>
> For write performance, RAID0 is the best. I think this is what you mean
> by more stripes.
>
> This is where RAID 1+0/0+1 comes in, to reconcile the above. Your question
> seems to be: I have a RAID 1+0/0+1 system. Should I add disks onto the 0
> part of the array? Or the 1 part of the array?
>
> My conclusion to you would be: both, unless you are certain that your load
> is scaled heavily towards read, in which case the 1, or if scaled heavily
> towards write, then 0.


thanks, this answers my query. all this time i was thinking of the 1+0
only, failing to observe the 0+1 part in it.



> Then comes the other factors. Do you want redundancy? Then you want 1.
> Do you want capacity? Then you want 0.


Ok.



> There is no single answer for most people.
>
> For me, stripe size is the last decision to make, and may be heavily
> sensitive to load patterns. This suggests a trial and error / benchmarking
> requirement to determine the optimal stripe size for your use.


thanks.
mallah.



> Cheers,
> mark






Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Luke Lonergan
Stripe of mirrors is preferred to mirror of stripes for the best balance of
protection and performance.

In the stripe of mirrors you can lose up to half of the disks and still be
operational.  In the mirror of stripes, the most you could lose is two
drives.  The performance of the two should be similar - perhaps the seek
performance would be different for high concurrent use in PG.

- Luke


On 5/29/07 2:14 PM, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote:

> hi,
>
> this is not really postgresql specific, but any help is appreciated.
> i have read that the more spindles, the better it is for IO performance.
>
> suppose i have 8 drives, should a stripe (raid0) be created on
> 2 mirrors (raid1) of 4 drives each, OR should a stripe on 4 mirrors
> of 2 drives each be created?
>
> also, do single channel or dual channel controllers make a lot
> of difference in raid10 performance?
>
> regds
> mallah.
 
 





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Rajesh Kumar Mallah

On 5/30/07, Luke Lonergan [EMAIL PROTECTED] wrote:

> Stripe of mirrors is preferred to mirror of stripes for the best balance of
> protection and performance.


nooo! i am not asking raid10 vs raid01. i am considering stripe of
mirrors only.
the question is how a larger number of disks is supposed to be
BEST utilized in terms of IO performance:

1. for adding more mirrors to the stripe, OR
2. for adding more hard drives to the mirrors.

say i had 4 drives in raid10 format

D1  raid1  D2 -- MD0
D3  raid1  D4 -- MD1
MD0  raid0 MD1  -- MDF (final)

now if i get 2 more drives, D5 and D6, then i have 2 options

1.  create a new mirror
D5 raid1 D6 -- MD2
MD0 raid0 MD1 raid0 MD2  -- MDF final


OR

D1 raid1 D2 raid1 D5  -- MD0
D3 raid1 D4 raid1 D6  -- MD1
MD0 raid0 MD1  -- MDF (final)

thanks, hope my question is clear now.
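
(For what it's worth, option 2 maps onto mdadm's grow mode; a sketch with
assumed device names, widening one mirror from 2-way to 3-way:)

  mdadm /dev/md0 --add /dev/sde            # new disk joins as a spare
  mdadm --grow /dev/md0 --raid-devices=3   # spare is promoted, resync starts

(Option 1 means building a new mirror pair, but note that md's raid0 layer
could not be widened in place at the time, so adding MD2 to the stripe
implies recreating MDF.)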


Regds
mallah.






> In the stripe of mirrors you can lose up to half of the disks and still be
> operational.  In the mirror of stripes, the most you could lose is two
> drives.  The performance of the two should be similar - perhaps the seek
> performance would be different for high concurrent use in PG.
>
> - Luke
>
> On 5/29/07 2:14 PM, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote:
>
>> hi,
>>
>> this is not really postgresql specific, but any help is appreciated.
>> i have read that the more spindles, the better it is for IO performance.
>>
>> suppose i have 8 drives, should a stripe (raid0) be created on
>> 2 mirrors (raid1) of 4 drives each, OR should a stripe on 4 mirrors
>> of 2 drives each be created?
>>
>> also, do single channel or dual channel controllers make a lot
>> of difference in raid10 performance?
>>
>> regds
>> mallah.









Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Luke Lonergan
Hi Rajesh,

On 5/29/07 7:18 PM, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote:

> D1 raid1 D2 raid1 D5  -- MD0
> D3 raid1 D4 raid1 D6  -- MD1
> MD0 raid0 MD1  -- MDF (final)

AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
to me.

- Luke





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Stephen Frost
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> Hi Rajesh,
>
> On 5/29/07 7:18 PM, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote:
>
>> D1 raid1 D2 raid1 D5  -- MD0
>> D3 raid1 D4 raid1 D6  -- MD1
>> MD0 raid0 MD1  -- MDF (final)
>
> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
> to me.

It's just more copies of the same data, if it's really a RAID1, for the
extra, extra paranoid.  Basically, in the example above, I'd read it as:
D1, D2, and D5 have identical data on them.

Thanks,

Stephen




Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Luke Lonergan
Stephen,

On 5/29/07 8:31 PM, Stephen Frost [EMAIL PROTECTED] wrote:

> It's just more copies of the same data, if it's really a RAID1, for the
> extra, extra paranoid.  Basically, in the example above, I'd read it as:
> D1, D2, and D5 have identical data on them.

In that case, I'd say it's a waste of disk to add 1+2 redundancy to the
mirrors.

- Luke 





Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread Jonah H. Harris

On 5/29/07, Luke Lonergan [EMAIL PROTECTED] wrote:

> AFAIK you can't RAID1 more than two drives, so the above doesn't make sense
> to me.


Yeah, I've never seen a way to RAID-1 more than 2 drives either.  It
would have to be his first one:

D1 + D2 = MD0 (RAID 1)
D3 + D4 = MD1 ...
D5 + D6 = MD2 ...
MD0 + MD1 + MD2 = MDF (RAID 0)
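
(A sketch of that layout with mdadm, device names assumed:)

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
  # stripe across the three mirrors
  mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2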

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
33 Wood Ave S, 3rd Floor| [EMAIL PROTECTED]
Iselin, New Jersey 08830| http://www.enterprisedb.com/



Re: [PERFORM] setting up raid10 with more than 4 drives

2007-05-29 Thread david

On Wed, 30 May 2007, Jonah H. Harris wrote:


> On 5/29/07, Luke Lonergan [EMAIL PROTECTED] wrote:
>> AFAIK you can't RAID1 more than two drives, so the above doesn't make
>> sense to me.
>
> Yeah, I've never seen a way to RAID-1 more than 2 drives either.  It
> would have to be his first one:
>
> D1 + D2 = MD0 (RAID 1)
> D3 + D4 = MD1 ...
> D5 + D6 = MD2 ...
> MD0 + MD1 + MD2 = MDF (RAID 0)



I don't know what the failure mode ends up being, but on Linux I had no
problems creating what appears to be a massively redundant (but small) array:

md0 : active raid1 sdo1[10](S) sdn1[8] sdm1[7] sdl1[6] sdk1[5] sdj1[4] sdi1[3]
sdh1[2] sdg1[9] sdf1[1] sde1[11](S) sdd1[0]
      896 blocks [10/10] [UUUUUUUUUU]
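
(A sketch of how such an array is created: ten active mirror legs plus two
hot spares. The device set matches the mdstat output above, though which two
end up marked (S) depends on ordering:)

  mdadm --create /dev/md0 --level=1 --raid-devices=10 --spare-devices=2 \
        /dev/sd[d-o]1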

David Lang
