On Sun, 06 May 2012 15:40:50 +0000, Camaleón wrote:

> On Sun, 06 May 2012 14:52:19 +0000, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 13:47:59 +0000, Camaleón wrote:
> 
>>>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
>>> 
>>> Then you can pair/mix the partitions as you prefer (when using mdadm/
>>> linux raid, I mean). The "layout" (number of disks) and the "raid
>>> level" are up to you; I don't know what your main goal is.
>> 
>> The machine should be a NAS to store backups and serve multimedia
>> content.
> 
> Okay. And how much space are you planning to handle? Do you prefer a big
> pool to store data, or do you prefer using smaller chunks? And what about
> the future? Have you thought about expanding the storage capabilities in
> the near future? If so, how will it be done?

My initial plan was to use 16 slots as raid5 with four disks per array. 
Then I wanted to use four slots as mythtv storage groups, so those disks 
won't be in an array.
But now I'm quite fascinated with the 500 GB partitions raid6. It's very 
flexible. Maybe I'll have a harder time setting it up, and I won't be able 
to use hw raid, which both you and Stan advise me to use...
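Just to make the partition idea a bit more concrete, this is roughly what 
I have in mind (only a sketch; the device names and the md number are 
placeholders, the real ones will differ):

  # cut a 1.5 TB disk into three equal 500 GB partitions
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart primary 1MB 500GB
  parted -s /dev/sdb mkpart primary 500GB 1000GB
  parted -s /dev/sdb mkpart primary 1000GB 1500GB

  # same for the other data disks (four partitions on the 2 TB ones),
  # then build a raid6 from one partition of each of the 8 disks
  mdadm --create /dev/md2 --level=6 --raid-devices=8 /dev/sd[b-i]1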

<snipped>

>> Since I'm already starting with 2x 500 GB disks for the OS, 4x 1.5 TB
>> and 4x 2 TB, I think this could be a good solution. I'll probably add
>> 3 TB disks if I need more space or a disk fails: creating md5 and md6
>> :-)
>> 
>> Or is there something I'm missing?
> 
> I can't really tell, my head is baffled by all those parities,
> partitions and raid volumes 8-)

Yes, sorry. I even confused myself :-o


> What you can do, should you finally decide to go for a linux raid, is
> create a virtual machine that simulates what will be your NAS environment
> and start testing the raid layout from there. This way, any error can be
> easily reverted without any annoying side-effects :-)

That's a good point. I played with KVM some time ago. This will be 
interesting :-)
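
Something along these lines is what I'd try for the test setup (just a 
sketch; the paths and sizes are made up for the example):

  # create eight small sparse disks for the test VM
  for i in 1 2 3 4 5 6 7 8; do
      qemu-img create -f qcow2 /var/lib/libvirt/images/nas-disk$i.qcow2 5G
  done

  # attach them to the guest, then partition them and try the raid6
  # layout with mdadm inside the VM before touching the real disks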

<snipped>

>>> What I usually do is have a RAID 1 level for holding the operating
>>> system installation and a RAID 5 level (my raid controller does not
>>> support raid 6) for holding data. But my numbers are very conservative
>>> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and
>>> 4x SATA disks of 400 GiB, which gives a 1.2 TiB volume).
>> 
>> You have drives of the same size in your raid.
> 
> Yes, that's a limitation coming from the hardware raid controller.

Doesn't this limitation come from the raid idea itself?
You can't use disks of different sizes in a linux raid either, can you? 
Only if you divide them into same-sized partitions?
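
If I understand it right, linux raid would accept members of different 
sizes but only use the capacity of the smallest one, which is why the 
equal-sized partitions would help. I guess something like this would show 
what an array really uses (the md name is just an example):

  # "Used Dev Size" shows how much of each member is actually used;
  # anything beyond the smallest member is simply wasted
  mdadm --detail /dev/md0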

(...)

>> When I had the false positive I wanted to replace a Samsung disk with
>> one of the same Samsung type, but it had a few sectors less. I used JFS
>> on the md, so I was very happy that I could use the original drive and
>> didn't have to magically shrink the JFS :-)
> 
> I never bothered replacing the drive. I knew the drive was in good shape
> because otherwise the rebuild couldn't have completed.

So you simply let the array rebuild to see if the disk is still ok?
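
For reference, this is roughly what I'd do to put a suspect disk back and 
watch the rebuild (device names are only examples):

  # if the disk is still listed as faulty, remove it first
  mdadm /dev/md0 --remove /dev/sdc1

  # then add it back and let the array rebuild onto it
  mdadm /dev/md0 --add /dev/sdc1

  # and follow the rebuild progress
  cat /proc/mdstat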


>>> Yet, despite the ridiculous size of the RAID 5 volume, when the array
>>> goes down it takes up to *half of a business day* to rebuild; that's
>>> why I wanted to note that managing big raid volumes can make things
>>> worse :-/
>> 
>> My 6 TB raid takes more than a day :-/
> 
> That's something to consider. A software raid will use your CPU cycles
> and your RAM, so you need a fairly powerful computer if you want smooth
> results. OTOH, a hardware raid controller does the RAID I/O logical
> operations on its own, so you rely completely on the card's capabilities.
> In both cases, the hard disk bus will be the "real" bottleneck.

I have an i3 in that machine and 4 GB RAM. I'll see if this is enough 
when I have to rebuild all the arrays :-)
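
If the rebuild turns out to be too slow (or eats too much I/O while the 
machine is in use), I understand the md resync speed can be tuned via 
sysctl; the value below is just an example:

  # current limits in KB/s per device
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

  # let the resync use more bandwidth while nothing else is running
  sysctl -w dev.raid.speed_limit_min=50000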

