Re: Supermicro SAS controller

2012-05-08 Thread Ramon Hofer
On Tue, 08 May 2012 15:27:26 +, Camaleón wrote:

> On Tue, 08 May 2012 09:22:56 +, Ramon Hofer wrote:
> 
>> On Mon, 07 May 2012 14:35:31 +, Camaleón wrote:
> 
> (...)
> 
>>> Those "green" disks can be good as stand-alone devices for user
>>> backup/archiving, but not for 24/365 operation, a NAS, or anything
>>> that requires quick access and fast speeds, such as a RAID.
>> 
>> I haven't thought about that. So the controller must be a bit more
>> patient ;-)
> 
> "Must" is the key here. But don't expect such collaboration from mdraid
> nor the hardware controller :-)
> 
>> I will stay away from the green drives in future.
> 
> I try to keep away from any computer device that is tagged as "eco-
> friendly" (e.g., switches) because they usually cause more trouble than
> normal (watt-hungry) devices.
> 
> And the same goes for computer "power-saving" options (hibernation and
> suspension), I never use them for servers... what the hell, when the
> computer is "on" I want it to be "on", not sleepy; I need a quick
> response to whatever event. When I don't use the computer I just
> power it off and not a single watt is wasted.

This may be true for business use, but at home I like silent and 
energy-efficient devices. They don't have to be as responsive as 
professional equipment, I think.


>>> For RAM you never-ever get enough :-)
>>> 
>>>> Ok, RAM is quite cheap and it shouldn't affect power consumption in
>>>> comparison to >20 hard disks.
>>> 
>>> Exactly, your system will be happier and you won't have to worry
>>> about increasing it in the near future (~5 years). My motto is
>>> "always fill your system with the maximum amount of RAM, as much as
>>> you can afford"; you won't regret it.
>> 
>> Ok this sounds reasonable. But for 16 GB RAM I can get a 2 TB disk. So
>> I will have to sleep on it :-)
> 
> I think 4 GB RAM modules can still be affordable (<150€ per module?); if
> so, you can add two 4 GB modules to get 8 GB from the start and
> afterwards -should you need it- add two more modules to get 16 GB
> (although the board allows up to 32 GB of RAM by using 8 GB modules, but
> that can be costly).

This sounds good.
And I can keep the existing RAM, giving me 12 GB after adding 2x 4 GB :-)


Best regards
Ramon


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/jocrpp$hvl$1...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-08 Thread Camaleón
On Tue, 08 May 2012 09:22:56 +, Ramon Hofer wrote:

> On Mon, 07 May 2012 14:35:31 +, Camaleón wrote:

(...)

>> Those "green" disks can be good as stand-alone devices for user
>> backup/archiving, but not for 24/365 operation, a NAS, or anything
>> that requires quick access and fast speeds, such as a RAID.
> 
> I haven't thought about that. So the controller must be a bit more
> patient ;-)

"Must" is the key here. But don't expect such collaboration from mdraid 
nor the hardware controller :-)

> I will stay away from the green drives in future.

I try to keep away from any computer device that is tagged as "eco-
friendly" (e.g., switches) because they usually cause more trouble than 
normal (watt-hungry) devices. 

And the same goes for computer "power-saving" options (hibernation and 
suspension), I never use them for servers... what the hell, when the 
computer is "on" I want it to be "on", not sleepy; I need a quick response 
to whatever event. When I don't use the computer I just power it off and 
not a single watt is wasted.

>> For RAM you never-ever get enough :-)
>>  
>>> Ok, RAM is quite cheap and it shouldn't affect power consumption in
>>> comparison to >20 hard disks.
>> 
>> Exactly, your system will be happier and you won't have to worry about
>> increasing it in the near future (~5 years). My motto is "always fill
>> your system with the maximum amount of RAM, as much as you can afford";
>> you won't regret it.
> 
> Ok this sounds reasonable. But for 16 GB RAM I can get a 2 TB disk. So I
> will have to sleep on it :-)

I think 4 GB RAM modules can still be affordable (<150€ per module?); if 
so, you can add two 4 GB modules to get 8 GB from the start and afterwards 
-should you need it- add two more modules to get 16 GB (although 
the board allows up to 32 GB of RAM by using 8 GB modules, but that can 
be costly).

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-08 Thread Ramon Hofer
On Mon, 07 May 2012 14:35:31 +, Camaleón wrote:

(...)

> For the raided space, yes, but still you can "redistribute" the disk
> better.

Ah yes, this is true. 


(...)

>> I'd like to use green drives for this system, so power consumption is
>> something I try to keep low. And until now they have worked well (one
>> false positive in two years is OK)
> 
> Remember that a raided system is more demanding than a non-raided one. If
> one of those "green" disks that is part of a raid level is put in
> stand-by/sleep mode and does not respond as quickly as mdadm expects, the
> raid manager can think the disk is lost/missing and will mark that disk as
> "failed" (or will give I/O errors...), forcing a rebuild, etc... :-/
> 
> Those "green" disks can be good as stand-alone devices for user
> backup/archiving, but not for 24/365 operation, a NAS, or anything
> that requires quick access and fast speeds, such as a RAID.

I haven't thought about that. So the controller must be a bit more 
patient ;-)
I will stay away from the green drives in future.


>>>> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
>>>> when I have to rebuild all the arrays :-)
>>> 
>>> Mmm... I'd consider adding more RAM (at least 8 GB, though I would
>>> prefer 16-32 GB); you have to feed your little "big monster" :-)
>> 
>> That much :-O
> 
> For RAM you never-ever get enough :-)
>  
>> Ok, RAM is quite cheap and it shouldn't affect power consumption in
>> comparison to >20 hard disks.
> 
> Exactly, your system will be happier and you won't have to worry about
> increasing it in the near future (~5 years). My motto is "always fill
> your system with the maximum amount of RAM, as much as you can afford";
> you won't regret it.

Ok this sounds reasonable. But for 16 GB RAM I can get a 2 TB disk. So I 
will have to sleep on it :-)





Re: Supermicro SAS controller

2012-05-07 Thread Camaleón
On Sun, 06 May 2012 19:27:59 +, Ramon Hofer wrote:

> On Sun, 06 May 2012 18:10:41 +, Camaleón wrote:

>>>>> You have drives of the same size in your raid.
>>>> 
>>>> Yes, that's a limitation coming from the hardware raid controller.
>>> 
>>> Isn't this limitation coming from the raid idea itself?
>> 
>> Well, no, software raid does not impose such limit because you can work
>> with partitions instead.
>> 
>> In hardware raid I can use, for example, a 120 GiB disk with a 200 GiB
>> disk and make a RAID 1 level, but the volume will be of just 120 GiB (I
>> lose 80 GiB of space in addition to the 50% for the RAID 1 :-/).
> 
> But you can't build a linux software raid with a 100 GB and a 200 GB
> disk and then have 150 GB?

Of course. But still you can use the remaining (non-raided) space for 
other non-vital uses (a small secondary backup/data partition, a boot 
partition, swap...). Although this is not recommended, it can be 
useful in some scenarios.

>>> You can't use disks with different sizes in a linux raid either? Only
>>> if you divide them into same-sized partitions?
>> 
>> Yes, you can! In both hardware raid and software raid. Linux raid even
>> allows mixing different kinds of disks (SATA+PATA), though I don't
>> think it's recommended because of the differing bus speeds.
> 
> What I meant was: the space difference is lost either way?

For the raided space, yes, but still you can "redistribute" the disk 
better.
 
>>> So you directly let the array rebuild to see if the disk is still ok?
>> 
>> Exactly, rebuilding starts automatically (that's a default setting; it
>> is configurable). And rebuilding always ends with no problem with the
>> same disk that went down. In my case this happens (→ the array going
>> down) because of poor-quality hard disks that were not tagged as
>> "enterprise" nor meant to be used in RAID layouts (they were "plain"
>> Seagate Barracuda). I did not build the system, so I'll have to take
>> care of that next time.
> 
> I'd like to use green drives for this system, so power consumption is
> something I try to keep low. And until now they have worked well (one
> false positive in two years is OK)

Remember that a raided system is more demanding than a non-raided one. If 
one of those "green" disks that is part of a raid level is put in 
stand-by/sleep mode and does not respond as quickly as mdadm expects, the 
raid manager can think the disk is lost/missing and will mark that disk as 
"failed" (or will give I/O errors...), forcing a rebuild, etc... :-/

Those "green" disks can be good as stand-alone devices for user 
backup/archiving, but not for 24/365 operation, a NAS, or anything that 
requires quick access and fast speeds, such as a RAID.

>>> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
>>> when I have to rebuild all the arrays :-)
>> 
>> Mmm... I'd consider adding more RAM (at least 8 GB, though I would
>> prefer 16-32 GB); you have to feed your little "big monster" :-)
> 
> That much :-O

For RAM you never-ever get enough :-)
 
> Ok, RAM is quite cheap and it shouldn't affect power consumption in
> comparison to >20 hard disks.

Exactly, your system will be happier and you won't have to worry about 
increasing it in the near future (~5 years). My motto is "always fill your 
system with the maximum amount of RAM, as much as you can afford"; you 
won't regret it.

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sun, 06 May 2012 18:10:41 +, Camaleón wrote:

> On Sun, 06 May 2012 17:44:54 +, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 15:40:50 +, Camaleón wrote:
> 
>>> Okay. And how much space are you planning to handle? Do you prefer a
>>> big pool to store data, or do you prefer using small chunks? And what
>>> about the future? Have you thought about expanding the storage
>>> capabilities in the near future? If yes, how will it be done?
>> 
>> My initial plan was to use 16 slots as raid5 with four disks per array.
>> Then I wanted to use four slots as mythtv storage groups so the disks
>> won't be in an array.
>> But now I'm quite fascinated with the 500 GB partitions raid6. It's very
>> flexible. Maybe I'll have a harder time setting it up and won't be able
>> to use hw raid, which both you and Stan advise me to use...
> 
> It's always nice to have many options, and it's true that linux software
> raid is very popular (the usual reasons for not using it are when high
> performance is needed or when dual-booting with Windows) :-)

Yes and I need neither of those things :-)


>>>> You have drives of the same size in your raid.
>>> 
>>> Yes, that's a limitation coming from the hardware raid controller.
>> 
>> Isn't this limitation coming from the raid idea itself?
> 
> Well, no, software raid does not impose such limit because you can work
> with partitions instead.
> 
> In hardware raid I can use, for example, a 120 GiB disk with a 200 GiB
> disk and make a RAID 1 level, but the volume will be of just 120 GiB (I
> lose 80 GiB of space in addition to the 50% for the RAID 1 :-/).

But you can't build a linux software raid with a 100 GB and a 200 GB disk 
and then have 150 GB?


>> You can't use disks with different sizes in a linux raid either? Only
>> if you divide them into same-sized partitions?
> 
> Yes, you can! In both hardware raid and software raid. Linux raid even
> allows mixing different kinds of disks (SATA+PATA), though I don't
> think it's recommended because of the differing bus speeds.

What I meant was: the space difference is lost either way?


>>> I never bothered about replacing the drive. I knew the drive was in a
>>> good shape because otherwise the rebuilding operation couldn't have
>>> been done.
>> 
>> So you directly let the array rebuild to see if the disk is still ok?
> 
> Exactly, rebuilding starts automatically (that's a default setting; it
> is configurable). And rebuilding always ends with no problem with the
> same disk that went down. In my case this happens (→ the array going
> down) because of poor-quality hard disks that were not tagged as
> "enterprise" nor meant to be used in RAID layouts (they were "plain"
> Seagate Barracuda). I did not build the system, so I'll have to take
> care of that next time.

I'd like to use green drives for this system, so power consumption is 
something I try to keep low. And until now they have worked well (one false 
positive in two years is OK)


>>>> My 6 TB raid takes more than a day :-/
>>> 
>>> That's something to consider. A software raid will use your CPU cycles
>>> and your RAM so you have to use a quite powerful computer if you want
>>> to get smooth results. OTOH, a hardware raid controller does the RAID
>>> I/O logical operations by its own so you completely rely on the card
>>> capabilities. In both cases, the hard disk bus will be the "real"
>>> bottleneck.
>> 
>> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
>> when I have to rebuild all the arrays :-)
> 
> Mmm... I'd consider adding more RAM (at least 8 GB, though I would
> prefer 16-32 GB); you have to feed your little "big monster" :-)

That much :-O

Ok, RAM is quite cheap and it shouldn't affect power consumption in 
comparison to >20 hard disks.


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-06 Thread Camaleón
On Sun, 06 May 2012 17:44:54 +, Ramon Hofer wrote:

> On Sun, 06 May 2012 15:40:50 +, Camaleón wrote:

>> Okay. And how much space are you planning to handle? Do you prefer a
>> big pool to store data, or do you prefer using small chunks? And what
>> about the future? Have you thought about expanding the storage
>> capabilities in the near future? If yes, how will it be done?
> 
> My initial plan was to use 16 slots as raid5 with four disks per array.
> Then I wanted to use four slots as mythtv storage groups so the disks
> won't be in an array.
> But now I'm quite fascinated with the 500 GB partitions raid6. It's very
> flexible. Maybe I'll have a harder time setting it up and won't be able
> to use hw raid, which both you and Stan advise me to use...

It's always nice to have many options, and it's true that linux software 
raid is very popular (the usual reasons for not using it are when high 
performance is needed or when dual-booting with Windows) :-)

>>> You have drives of the same size in your raid.
>> 
>> Yes, that's a limitation coming from the hardware raid controller.
> 
> Isn't this limitation coming from the raid idea itself?

Well, no, software raid does not impose such limit because you can work 
with partitions instead.

In hardware raid I can use, for example, a 120 GiB disk with a 200 GiB disk 
and make a RAID 1 level, but the volume will be of just 120 GiB (I lose 
80 GiB of space in addition to the 50% for the RAID 1 :-/).

> You can't use disks with different sizes in a linux raid either? Only
> if you divide them into same-sized partitions?

Yes, you can! In both hardware raid and software raid. Linux raid even 
allows mixing different kinds of disks (SATA+PATA), though I don't think 
it's recommended because of the differing bus speeds.

>> I never bothered about replacing the drive. I knew the drive was in a
>> good shape because otherwise the rebuilding operation couldn't have
>> been done.
> 
> So you directly let the array rebuild to see if the disk is still ok?

Exactly, rebuilding starts automatically (that's a default setting; it is 
configurable). And rebuilding always ends with no problem with the same 
disk that went down. In my case this happens (→ the array going down) 
because of poor-quality hard disks that were not tagged as 
"enterprise" nor meant to be used in RAID layouts (they were "plain" Seagate 
Barracuda). I did not build the system, so I'll have to take care of that 
next time.

>>> My 6 TB raid takes more than a day :-/
>> 
>> That's something to consider. A software raid will use your CPU cycles
>> and your RAM so you have to use a quite powerful computer if you want
>> to get smooth results. OTOH, a hardware raid controller does the RAID
>> I/O logical operations by its own so you completely rely on the card
>> capabilities. In both cases, the hard disk bus will be the "real"
>> bottleneck.
> 
> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
> when I have to rebuild all the arrays :-)

Mmm... I'd consider adding more RAM (at least 8 GB, though I would prefer 
16-32 GB); you have to feed your little "big monster" :-)

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sun, 06 May 2012 15:40:50 +, Camaleón wrote:

> On Sun, 06 May 2012 14:52:19 +, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 13:47:59 +, Camaleón wrote:
> 
>>>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
>>> 
>>> Then you can pair/mix the partitions as you prefer (when using mdadm/
>>> linux raid, I mean). The "layout" (number of disks) and the "raid
>>> level" is up to you, I don't know what's your main goal.
>> 
>> The machine should be a NAS to store backups and serve multimedia
>> content.
> 
> Okay. And how much space are you planning to handle? Do you prefer a big
> pool to store data, or do you prefer using small chunks? And what about
> the future? Have you thought about expanding the storage capabilities in
> the near future? If yes, how will it be done?

My initial plan was to use 16 slots as raid5 with four disks per array. 
Then I wanted to use four slots as mythtv storage groups so the disks 
won't be in an array.
But now I'm quite fascinated with the 500 GB partitions raid6. It's very 
flexible. Maybe I'll have a harder time setting it up and won't be able to 
use hw raid, which both you and Stan advise me to use...



>> Since I already start with 2x 500 GB disks for the OS, 4x 1.5 TB and 4x
>> 2 TB I think this could be a good solution. I probably will add 3 TB
>> disks if I will need more space or one disk fails: creating md5 and md6
>> :-)
>> 
>> Or is there something I'm missing?
> 
> I can't really tell, my head is baffled by all those parities,
> partitions and raid volumes 8-)

Yes sorry. I even confused myself :-o


> What you can do, should you finally decide to go for a linux raid, is
> creating a virtual machine to simulate what will be your NAS environment
> and start testing the raid layout from there. This way, any error can be
> easily reverted with no other annoying side-effects :-)

That's a good point. I have played with KVM some time ago. This will be 
interesting :-)



>>> What I usually do is have a RAID 1 level holding the operating
>>> system installation and a RAID 5 level (my raid controller does not
>>> support raid 6) holding data. But my numbers are very conservative
>>> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and
>>> x4 SATA disks of 400 GiB, which gives a 1.2 TiB volume).
>> 
>> You have drives of the same size in your raid.
> 
> Yes, that's a limitation coming from the hardware raid controller.

Isn't this limitation coming from the raid idea itself?
You can't use disks with different sizes in a linux raid either? Only if 
you divide them into same-sized partitions?

(...)

>> When I had the false positive I wanted to replace a Samsung disk with
>> one of the same type, and it had some sectors less. I used JFS
>> on the md, so I was very happy that I could use the original drive and
>> not have to magically scale the JFS down :-)
> 
> I never bothered about replacing the drive. I knew the drive was in a
> good shape because otherwise the rebuilding operation couldn't have been
> done.

So you directly let the array rebuild to see if the disk is still ok?


>>> Yet, despite the ridiculous size of the RAID 5 volume, when the array
>>> goes down it takes up to *half of a business day* to rebuild, that's
>>> why I wanted to note that managing big raid volumes can make things
>>> worse :-/
>> 
>> My 6 TB raid takes more than a day :-/
> 
> That's something to consider. A software raid will use your CPU cycles
> and your RAM so you have to use a quite powerful computer if you want to
> get smooth results. OTOH, a hardware raid controller does the RAID I/O
> logical operations by its own so you completely rely on the card
> capabilities. In both cases, the hard disk bus will be the "real"
> bottleneck.

I have an i3 in that machine and 4 GB RAM. I'll see if this is enough 
when I have to rebuild all the arrays :-)





Re: Supermicro SAS controller

2012-05-06 Thread Camaleón
On Sun, 06 May 2012 14:52:19 +, Ramon Hofer wrote:

> On Sun, 06 May 2012 13:47:59 +, Camaleón wrote:

>>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
>> 
>> Then you can pair/mix the partitions as you prefer (when using mdadm/
>> linux raid, I mean). The "layout" (number of disks) and the "raid
>> level" is up to you, I don't know what's your main goal.
> 
> The machine should be a NAS to store backups and serve multimedia
> content.

Okay. And how much space are you planning to handle? Do you prefer a big 
pool to store data, or do you prefer using small chunks? And what about the 
future? Have you thought about expanding the storage capabilities in the 
near future? If yes, how will it be done?
 
> My point was that it doesn't make sense to me to put several partitions
> from the same hdd into the same raid.

Of course, you have to pair the partitions between different hard disks 
and raid levels.

> Let's assume one of the 1.5 TB disks fails. Then I'd lose three
> partitions, but the raid6 is only able to stand two (I hope it's
> understandable what I mean).

If one of the disks which is used in a raid 6 fails, you won't lose 
anything; that's what a raid layout is for. To start worrying, 3 different 
disks which are part of a raid 6 array must fail.

> So maybe I would have to make 500 GB partitions on each disk and put one
> partition per disk into a separate raid6s. E.g.:
> md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
> md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
> md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
> md4: sde4, sdf4

Yes, that's a possible setup. md1, md2 and md3 will be 3 raid 6 volumes 
with (3 TiB - 1 TiB) = 2 TiB of available space.
 
(...)

> Since I already start with 2x 500 GB disks for the OS, 4x 1.5 TB and 4x
> 2 TB I think this could be a good solution. I probably will add 3 TB
> disks if I will need more space or one disk fails: creating md5 and md6
> :-)
> 
> Or is there something I'm missing?

I can't really tell, my head is baffled by all those parities, 
partitions and raid volumes 8-)

What you can do, should you finally decide to go for a linux raid, is 
creating a virtual machine to simulate what will be your NAS environment 
and start testing the raid layout from there. This way, any error can be 
easily reverted with no other annoying side-effects :-)
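A lighter-weight alternative to a full virtual machine, if the goal is only to exercise mdadm itself, is to build a throwaway array on loop devices backed by sparse files. A sketch (requires root; the file paths, sizes and the /dev/md100 name are arbitrary examples, not from the thread):

```shell
# Create six small sparse backing files and attach them as loop devices.
for i in 1 2 3 4 5 6; do
    truncate -s 256M /tmp/disk$i.img
done
DEVS=$(for i in 1 2 3 4 5 6; do losetup --find --show /tmp/disk$i.img; done)

# Build a six-member RAID 6 from the loop devices, exactly as one would
# with real partitions.
mdadm --create /dev/md100 --level=6 --raid-devices=6 $DEVS

cat /proc/mdstat            # watch the initial sync
mdadm --stop /dev/md100     # tear everything down when done experimenting
for d in $DEVS; do losetup -d "$d"; done
```

Mistakes here cost nothing, and disk failures can be simulated with `mdadm --fail` before trying anything on real hardware.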

> And when I put all these arrays into a single LVM and one array goes down
> I will lose *all* the data?

You only lose data if 3 disks of the same raid 6 array die at the same 
time. Period. LVM just adds flexibility in how the space in the arrays is 
allocated.
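Pooling the md arrays with LVM might look roughly like this (a sketch to be run as root; the volume-group name `nas` and LV name `media` are made up for illustration):

```shell
# Register the raid arrays as LVM physical volumes and pool them
# into one volume group.
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate nas /dev/md1 /dev/md2 /dev/md3

# Carve one big logical volume out of all the pooled space and format it.
lvcreate -l 100%FREE -n media nas
mkfs.ext4 /dev/nas/media

# Later, a newly created array grows the same pool without reformatting:
#   vgextend nas /dev/md5
#   lvextend -r -l +100%FREE /dev/nas/media   # -r also resizes the fs
```

This is what makes the "many small raid6 volumes" layout practical: the filesystem sees one device however many arrays sit underneath.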

> But this won't like to happen because of the double parity, right?

Exactly.

>> What I usually do is have a RAID 1 level holding the operating
>> system installation and a RAID 5 level (my raid controller does not
>> support raid 6) holding data. But my numbers are very conservative
>> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and x4
>> SATA disks of 400 GiB, which gives a 1.2 TiB volume).
> 
> You have drives of the same size in your raid.

Yes, that's a limitation coming from the hardware raid controller.

> Btw: Have you ever had to replace a disk? 

Never. In ~7 years. All were false positives.

> When I had the false positive I wanted to replace a Samsung disk with
> one of the same type, and it had some sectors less. I used JFS
> on the md, so I was very happy that I could use the original drive and
> not have to magically scale the JFS down :-)

I never bothered about replacing the drive. I knew the drive was in a 
good shape because otherwise the rebuilding operation couldn't have been 
done.
 
>> Yet, despite the ridiculous size of the RAID 5 volume, when the array
>> goes down it takes up to *half of a business day* to rebuild, that's
>> why I wanted to note that managing big raid volumes can make things
>> worse :-/
> 
> My 6 TB raid takes more than a day :-/

That's something to consider. A software raid will use your CPU cycles 
and your RAM so you have to use a quite powerful computer if you want to 
get smooth results. OTOH, a hardware raid controller does the RAID I/O 
logical operations by its own so you completely rely on the card 
capabilities. In both cases, the hard disk bus will be the "real" 
bottleneck.

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sun, 06 May 2012 13:47:59 +, Camaleón wrote:

> On Sun, 06 May 2012 12:35:40 +, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 12:18:33 +, Camaleón wrote:
> 
>>> If your hard disk capacity is ~1.5 TiB then you can get 3 partitions
>>> from there of ~500 GiB of size (e.g., sda1, sda2 and sda3). For a
>>> second disk, the same (e.g., sdb1, sdb2 and sdb3) and so on... or you
>>> can make smaller partitions. I would just care about the whole RAID
>>> volume size.
>> 
>> Sorry I don't get it.
>> 
>> Let's assume I have 4x 1.5 TB and 4x 2 TB.
> 
> x4 1.5 TiB → sda, sdb, sdc, sdd
> x2 2 TiB → sde, sdf
> 
>> I divide each drive into 500 GB partitions. So three per 1.5 TB and
>> four per 2 TB disk.
> 
> 1.5 TiB hard disks:
> 
> sda1, sda2, sda3
> sdb1, sdb2, sdb3
> sdc1, sdc2, sdc3
> sdd1, sdd2, sdd3
> 
> 2 TiB hard disks:
> 
> sde1, sde2, sde3, sde4
> sdf1, sdf2, sdf3, sdf4
> 
>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
> 
> Then you can pair/mix the partitions as you prefer (when using mdadm/
> linux raid, I mean). The "layout" (number of disks) and the "raid level"
> is up to you, I don't know what's your main goal.

The machine should be a NAS to store backups and serve multimedia content.

My point was that it doesn't make sense to me to put several partitions 
from the same hdd into the same raid.
Let's assume one of the 1.5 TB disks fails. Then I'd lose three 
partitions, but the raid6 is only able to stand two (I hope it's 
understandable what I mean).

So maybe I would have to make 500 GB partitions on each disk and put one 
partition per disk into a separate raid6s. E.g.:
md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
md4: sde4, sdf4

md4 could only be used if I add another partition from another disk. I 
could use 2/3 of the space in md1-md3 and nothing from md4. In total 8/20 
of the space is used for parity.

When I now add another 1.5 TB disk the available space from md1-md3 
increases by 500 GB each. So now only 8/23 of the space is used for 
parity.

If I add a 2 TB disk *instead* of the 1.5 TB from before I can add a 
partition to each of the four of them and md4 becomes useful. This means 
I have even less parity than before (8/24).

If instead I even add a 3 TB disk, I will create md5 and md6. This lets 
the relative parity increase to 10/26. But when I then add another 3 TB 
disk the parity decreases to 10/32 which is already less than 8/20.
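The parity bookkeeping above can be checked with a short script. This is a sketch that follows the accounting used here (2 partitions of parity per array, everything lost in arrays too narrow to hold data); note that mdadm itself refuses to create a RAID 6 with fewer than four members, so the 3-member md4 case is an accounting convention, not a buildable array:

```python
def raid6_parity_share(widths):
    """Given the member count of each array, return (partitions lost to
    parity or sitting idle, total partitions). Arrays with fewer than
    3 members hold no data, so all their partitions count as lost."""
    total = sum(widths)
    lost = sum(2 if w >= 3 else w for w in widths)
    return lost, total

# Initial layout: md1-md3 are 6 partitions wide, md4 has only 2 (unusable)
print(raid6_parity_share([6, 6, 6, 2]))        # (8, 20)
# After adding another 1.5 TB disk (one more partition for md1-md3):
print(raid6_parity_share([7, 7, 7, 2]))        # (8, 23)
# After adding a 2 TB disk instead (md4 gains a member too):
print(raid6_parity_share([7, 7, 7, 3]))        # (8, 24)
# A 3 TB disk instead creates md5 and md6 with a single partition each:
print(raid6_parity_share([7, 7, 7, 3, 1, 1]))  # (10, 26)
```

The fractions match the ones worked out in the text, so the scheme does get relatively cheaper as wider disks join.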


If in the initial setup one 1.5 TB disk fails (e.g. sda) and I replace it 
with a 2 TB disk I will get this:
md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
md4: sda4, sde4, sdf4
Which means that I now have 8/21 parity.


Since I already start with 2x 500 GB disks for the OS, 4x 1.5 TB and 4x 2 
TB I think this could be a good solution. I probably will add 3 TB disks 
if I will need more space or one disk fails: creating md5 and md6 :-)

Or is there something I'm missing?


And when I put all these arrays into a single LVM and one array goes down 
I will lose *all* the data?
But this isn't likely to happen because of the double parity, right?
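For illustration, the three six-member arrays in the layout above would be created with mdadm commands along these lines. The function below only prints the commands for review rather than running them; the device names follow the example in this thread and nothing is written to disk:

```shell
# Print (not execute) the mdadm invocations for md1-md3, each a RAID 6
# over the n-th 500 GB partition of the six data disks.
mdadm_plan() {
    for n in 1 2 3; do
        echo "mdadm --create /dev/md$n --level=6 --raid-devices=6" \
             "/dev/sda$n /dev/sdb$n /dev/sdc$n /dev/sdd$n /dev/sde$n /dev/sdf$n"
    done
}
mdadm_plan
```

md4 is left out on purpose: with only two members (sde4, sdf4) it cannot be created as RAID 6, which needs at least four devices.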


> What I usually do is have a RAID 1 level holding the operating
> system installation and a RAID 5 level (my raid controller does not
> support raid 6) holding data. But my numbers are very conservative
> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and x4
> SATA disks of 400 GiB, which gives a 1.2 TiB volume).

You have drives of the same size in your raid.

Btw: Have you ever had to replace a disk? When I had the false positive I 
wanted to replace a Samsung disk with one of the same type, and 
it had some sectors less. I used JFS on the md, so I was very happy that I 
could use the original drive and not have to magically scale the JFS 
down :-)


> Yet, despite the ridiculous size of the RAID 5 volume, when the array
> goes down it takes up to *half of a business day* to rebuild, that's why
> I wanted to note that managing big raid volumes can make things worse
> :-/

My 6 TB raid takes more than a day :-/


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-06 Thread Camaleón
On Sun, 06 May 2012 12:35:40 +, Ramon Hofer wrote:

> On Sun, 06 May 2012 12:18:33 +, Camaleón wrote:

>> If your hard disk capacity is ~1.5 TiB then you can get 3 partitions
>> from there of ~500 GiB of size (e.g., sda1, sda2 and sda3). For a
>> second disk, the same (e.g., sdb1, sdb2 and sdb3) and so on... or you
>> can make smaller partitions. I would just care about the whole RAID
>> volume size.
> 
> Sorry I don't get it.
> 
> Let's assume I have 4x 1.5 TB and 4x 2 TB. 

x4 1.5 TiB → sda, sdb, sdc, sdd
x2 2 TiB → sde, sdf

> I divide each drive into 500 GB partitions. So three per 1.5 TB and
> four per 2 TB disk. 

1.5 TiB hard disks:

sda1, sda2, sda3
sdb1, sdb2, sdb3
sdc1, sdc2, sdc3
sdd1, sdd2, sdd3

2 TiB hard disks:

sde1, sde2, sde3, sde4
sdf1, sdf2, sdf3, sdf4

> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?

Then you can pair/mix the partitions as you prefer (when using mdadm/
linux raid, I mean). The "layout" (number of disks) and the "raid level" 
are up to you; I don't know what your main goal is.
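
A sketch of what that pairing could look like with mdadm (device names 
follow Ramon's example of x4 1.5 TB disks sda-sdd and x4 2 TB disks 
sde-sdh, each cut into ~500 GB partitions; the commands need root and 
existing partitions, so this is illustrative only):

```shell
# The first three 500 GB slices exist on all eight disks -> 8-device RAID 6:
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[a-h]2
mdadm --create /dev/md2 --level=6 --raid-devices=8 /dev/sd[a-h]3

# The fourth slice only exists on the 2 TB disks -> 4-device RAID 6:
mdadm --create /dev/md3 --level=6 --raid-devices=4 /dev/sd[e-h]4
```

Note that each array takes at most one partition from any given disk; 
putting two partitions of the same disk into one array would defeat the 
redundancy.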

What I usually do is have a RAID 1 volume for holding the operating 
system installation and a RAID 5 volume (my raid controller does not 
support raid 6) for holding data. But my numbers are very conservative 
(this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and x4 
SATA disks of 400 GiB, which gives a 1.2 TiB volume).

Yet, despite the ridiculous size of the RAID 5 volume, when the array 
goes down it takes up to *half of a business day* to rebuild, that's why 
I wanted to note that managing big raid volumes can make things worse :-/

>> When using the whole hard disk capacity for the array:
>> 
>> - A RAID 5 volume with x4 1.5 TiB disks will give you an available
>> space of 4.5 TiB (the sum of the number of the disks minus 1 drive).
>> 
>> - A RAID 6 volume with x4 1.5 TiB disks will give you an available
>> space of 3 TiB (the sum of the number of the disks minus 2 drives).
>> 
>> That's the price for the added data security. If you are constrained
>> about hard disk space, remember that you can add LVM and your spacing
>> problems will be solved >;-)
> 
> My problem is that I don't have much experience with raid: only about
> two years, during which I had a single drive failure that turned out to
> be a false alarm. I could put the drive right back in.

Yeah, false positives are also a PITA :-), that's why a good (and 
usually "costly") hardware raid controller is a must if you want to have 
a peaceful journey with your RAID setup.
 
> So I think I'll have to burn my fingers myself to understand the little
> (or maybe even misleading in some sense) security of raid5...

We all gain experience by doing things and deciding based on our own 
criteria; there's no other way to learn (errors and successes included).

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sun, 06 May 2012 12:18:33 +, Camaleón wrote:

> On Sun, 06 May 2012 11:46:21 +, Ramon Hofer wrote:
> 
>> On Sat, 05 May 2012 11:15:22 +, Camaleón wrote:
> 
 On the other hand it isn't possible to have different disk sizes in a
 raid 6 either.
>>> 
>>> I think yes, that you can, but only the lowest of the disk capacities
>>> will be used (this applies to all of the RAID levels). Software RAID
>>> has no such limitation because you can mirror partitions, instead.
>> 
>> Ok so with sw raid I can use partitions as devices. This means I could
>> divide my drives into 500 GB pieces and like this use the whole size of
>> the disks?
> 
> If your hard disk capacity is ~1.5 TiB then you can get 3 partitions
> from there of ~500 GiB of size (e.g., sda1, sda2 and sda3). For a second
> disk, the same (e.g., sdb1, sdb2 and sdb3) and so on... or you can make
> smaller partitions. I would just care about the whole RAID volume size.

Sorry I don't get it.

Let's assume I have 4x 1.5 TB and 4x 2 TB. I divide each drive into 500 
GB partitions. So three per 1.5 TB and four per 2 TB disk. Then I put the 
28 partitions (4x3 + 4x4) in a raid 6?


>> Wouldn't it be easier to have e.g. four of each size and put them into
>> raid5?
> 
> With software raid you have more choices because you can partition as
> you like: you can use the whole disk capacity or make small chunks and
> use them to be part of a RAID volume.
> 
> Again, RAID 5 is not really advisable, and mdadm also supports RAID 6
> :-)
> 
 So my plan seems still reasonable to me to have several 4 disks raid
 5 arrays. Like that I'm flexible to add bigger disks in future as
 they become cheaper and still can keep my old 1.5 TB disks. And if I
 would go for raid 6 with the 4 disk array I would lose a third of
 the capacity.
>>> 
>>> (...)
>>> 
>>> You've been warned :-)
>> 
>> Yes and I appreciate that!
>> But I can't see any other solution without losing 500 GB of the two TB
>> disks :-?
> 
> When using the whole hard disk capacity for the array:
> 
> - A RAID 5 volume with x4 1.5 TiB disks will give you an available space
> of 4.5 TiB (the sum of the number of the disks minus 1 drive).
> 
> - A RAID 6 volume with x4 1.5 TiB disks will give you an available space
> of 3 TiB (the sum of the number of the disks minus 2 drives).
> 
> That's the price for the added data security. If you are constrained
> about hard disk space, remember that you can add LVM and your spacing
> problems will be solved >;-)

My problem is that I don't have much experience with raid: only about 
two years, during which I had a single drive failure that turned out to 
be a false alarm. I could put the drive right back in.

So I think I'll have to burn my fingers myself to understand the little 
(or maybe even misleading in some sense) security of raid5...





Re: Supermicro SAS controller

2012-05-06 Thread Camaleón
On Sun, 06 May 2012 11:46:21 +, Ramon Hofer wrote:

> On Sat, 05 May 2012 11:15:22 +, Camaleón wrote:

>>> On the other hand it isn't possible to have different disk sizes in a
>>> raid 6 either.
>> 
>> I think yes, that you can, but only the lowest of the disk capacities
>> will be used (this applies to all of the RAID levels). Software RAID
>> has no such limitation because you can mirror partitions, instead.
> 
> Ok so with sw raid I can use partitions as devices. This means I could
> divide my drives into 500 GB pieces and like this use the whole size of
> the disks?

If your hard disk capacity is ~1.5 TiB then you can get 3 partitions from 
there of ~500 GiB of size (e.g., sda1, sda2 and sda3). For a second disk, 
the same (e.g., sdb1, sdb2 and sdb3) and so on... or you can make smaller 
partitions. I would just care about the whole RAID volume size.

> Wouldn't it be easier to have e.g. four of each size and put them into
> raid5?

With software raid you have more choices because you can partition as you 
like: you can use the whole disk capacity or make small chunks and use 
them to be part of a RAID volume.

Again, RAID 5 is not really advisable, and mdadm also supports RAID 6 :-)

>>> So my plan seems still reasonable to me to have several 4 disks raid 5
>>> arrays. Like that I'm flexible to add bigger disks in future as they
>>> become cheaper and still can keep my old 1.5 TB disks. And if I would
>>> go for raid 6 with the 4 disk array I would lose a third of the
>>> capacity.
>> 
>> (...)
>> 
>> You've been warned :-)
> 
> Yes and I appreciate that!
> But I can't see any other solution without losing 500 GB of the two TB
> disks :-?

When using the whole hard disk capacity for the array:

- A RAID 5 volume with x4 1.5 TiB disks will give you an available space 
of 4.5 TiB (the sum of the number of the disks minus 1 drive).

- A RAID 6 volume with x4 1.5 TiB disks will give you an available space 
of 3 TiB (the sum of the number of the disks minus 2 drives). 

That's the price for the added data security. If you are constrained 
about hard disk space, remember that you can add LVM and your spacing 
problems will be solved >;-)
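
The arithmetic generalizes to any member count: usable space is the number 
of members minus the parity disks, times the member size. A quick sketch:

```shell
# Usable space = (number of disks - parity disks) * member size.
# RAID 5: parity = 1 disk's worth; RAID 6: parity = 2 disks' worth.
awk 'BEGIN { printf "raid5: %.1f TiB\n", (4-1)*1.5 }'   # prints "raid5: 4.5 TiB"
awk 'BEGIN { printf "raid6: %.1f TiB\n", (4-2)*1.5 }'   # prints "raid6: 3.0 TiB"
```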

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sat, 05 May 2012 11:15:22 +, Camaleón wrote:



>>> Just a note of caution here.
>>> 
>>> RAID 5 with big hard disks can be a real pain and a real problem. If
>>> one of the arrays goes down, the rebuilding operation can take up to
>>> "days" (depending on the controller's capacity) and if while the RAID
>>> is rebuilding a second disk of the array also goes down for whatever
>>> reason (it can be a false positive) you can't recover your data, at
>>> least not that easily. That's why most people are switching from raid 5
>>> to raid 6, it adds an extra layer of security with no remarkable drawbacks.
>> 
>> That's true.
>> 
>> On the other hand it isn't possible to have different disk sizes in a
>> raid 6 either.
> 
> I think yes, that you can, but only the lowest of the disk capacities
> will be used (this applies to all of the RAID levels). Software RAID has
> no such limitation because you can mirror partitions, instead.

Ok so with sw raid I can use partitions as devices. This means I could 
divide my drives into 500 GB pieces and like this use the whole size of 
the disks?

Wouldn't it be easier to have e.g. four of each size and put them into 
raid5?


>> So my plan seems still reasonable to me to have several 4 disks raid 5
>> arrays. Like that I'm flexible to add bigger disks in future as they
>> become cheaper and still can keep my old 1.5 TB disks. And if I would
>> go for raid 6 with the 4 disk array I would lose a third of the
>> capacity.
> 
> (...)
> 
> You've been warned :-)

Yes and I appreciate that!
But I can't see any other solution without losing 500 GB of the two TB 
disks :-?


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-06 Thread Ramon Hofer
On Sun, 06 May 2012 04:00:46 -0500, Stan Hoeppner wrote:

> On 5/3/2012 1:27 PM, Ramon Hofer wrote:



> mdraid is quite tolerant with drive errors before it finally kicks them
> offline.  Using the firmware RAID on this LSI card, any drive showing
> flaky behavior will be kicked very quickly from the array, and if you
> configure everything properly, you'll receive an email, sms text, or
> page telling you a drive is offline and why.  If you have a spare
> configured, the HBA will automatically start rebuilding the failed
> drive's contents to the spare drive.  If not, you simply pop the dead
> drive out, pop the new one in, and it starts rebuilding automatically.

I also had a failure where mdadm told me a drive had failed. When I wanted 
to replace it, the replacement drive was some sectors smaller than the 
array needed. So I took a closer look at the "broken" one. It seemed OK 
and I put it back into the array. It's been working now for about a year :-)


> In a nutshell, (good) hardware RAID typically has every advantage over
> linux mdraid but two:
> 
> 1.  Flexibility--mdraid can span disks across many HBAs 2.  Absolute
> performance**
> 
> **Host CPUs are much faster than HBA RAID ASICs, 3-4GHz vs 500-800MHz,
> especially with parity calculations (RAID5/6), and have many more cores,
> typically 4-24 in a single or dual socket machine, vs 1-2 cores in an
> HBA RAID ASIC.  Thus they have an enormous raw horsepower advantage.  A
> good hardware RAID HBA will have RAID5/6 performance similar to mdraid
> w/up to ~16-24 SAS 15K drives.  At some drive count beyond that the RAID
> ASIC will hit its performance ceiling.
> 
> Many people use hybrid setups, where hardware RAID is used at the
> controller level and mdraid is used to span the HW RAID devices into a
> single Linux disk device, allowing for a single filesystem across dozens
> of drives connected to 2, 4, or more RAID HBAs.  With some application
> workloads multiple RAID groups are created per controller and these are
> then spanned or striped with md, for example high IOPS maildir servers.
> 
> You mentioned previously you don't have a high performance requirement
> in which case I'd recommend hardware RAID.  That said, if you want to
> use RAID5/6 instead of RAID10, md RAID may be more attractive, as the
> parity RAID performance of the SAS9240 is less than stellar.  Depending
> on your workloads, it may perform great.  Just remember that the LSI
> SAS9240 is a high end JBOD HBA with RAID firmware.  It can act just like
> a HW RAID card from the viewpoint of the OS and the user, but it is not a
> HW RAID card.  It's not even an entry level RAID controller--note the
> lack of cache and BBU option.
> 
> Given what you've told us so far, I'd say you'd likely be very happy
> using the HW RAID mode.

I don't know if I understand correctly what you wrote: You say that the 
LSI SAS9240 is slower for raid 5 than mdadm? And less flexible?

How about CLI management possibilities? Will I have to reboot when I want 
to set up a new array? Can I check the progress of the rebuilding process?
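
For the mdadm side at least, all of this can be done online; a sketch 
(assumes an existing array /dev/md0 and root privileges):

```shell
# Rebuild/resync progress shows up here; no reboot or vendor tool needed:
cat /proc/mdstat

# Detailed state of one array (shows "Rebuild Status : N% complete"
# while a resync is running):
mdadm --detail /dev/md0

# New arrays are created online as well:
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[e-h]1
```

For the LSI firmware RAID the equivalent would be the vendor's megacli 
tool, whose exact syntax I won't guess at here.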

Even though I know raid 5 is risky with 2 TB drives I think it's probably 
the best solution for me. I don't want to lose too much storage space by 
limiting the 2 TB drives to 1.5 TB to fit the smaller drive size. And I 
still would like to have a little parity safety.

But I haven't thought about putting the raid 5 arrays into another raid 0 
array. I was thinking about lvm...
But this shouldn't make much difference? Only that I'm already a bit 
familiar with mdadm, not with lvm...

 
>> Or if I ever want to exchange the mainboard and use one with a SAS
>> controller onboard?
> 
> If your new mobo has an onboard SAS controller it will be an LSI
> controller, so this is a non issue.  But why would you switch to that
> and toss the 9240 anyway?  They use the same ASIC:  the SAS2008.  Look
> at SuperMicro, Tyan, Intel, and Asus  boards.  They all use the SAS2008,
> or an older or newer LSI SAS chip.
> 
> LSI is the only viable motherboard-down SAS solution on the market, at
> least on quality server mobos.  There may be junk floating around the
> Asian markets with different onboard SAS chips.  If so, I'm unaware of
> them, and I'd recommend to anyone to stay away from them as the chips
> will be Marvell, JMicron, etc--cheap and unreliable.

I don't want to do that in the near future. But I don't know what will be 
in 5 or even 10 years.
What I know is that I hopefully still can use the case and the drives (at 
least most of them :-) ).


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-06 Thread Stan Hoeppner
On 5/3/2012 1:27 PM, Ramon Hofer wrote:

> Btw: Wouldn't it be better to use software raid? 

There are many factors involved in this decision, and just as many
opinions available from users who prefer one method over the other.

> In case of failure of 
> the controller I would need to get exactly the same card again?

This is not a primary reason to choose software over hardware RAID,
though it does seem to be the first one mentioned by people with very
little RAID experience.

Something to consider is that mdraid newbies tend to lose their entire
arrays for various reasons.  Whether they're using cheap hardware, or
not paying attention to logs, etc. is unclear.  In the past 2 weeks alone
no fewer than 4 people have posted on the linux-raid list that
they'd lost an md RAID5 array.  Some were recoverable, some were not.
It's difficult to find many similar stories via Google of HW RAID HBA
failures causing the loss of entire arrays.

mdraid is quite tolerant with drive errors before it finally kicks them
offline.  Using the firmware RAID on this LSI card, any drive showing
flaky behavior will be kicked very quickly from the array, and if you
configure everything properly, you'll receive an email, sms text, or
page telling you a drive is offline and why.  If you have a spare
configured, the HBA will automatically start rebuilding the failed
drive's contents to the spare drive.  If not, you simply pop the dead
drive out, pop the new one in, and it starts rebuilding automatically.

In a nutshell, (good) hardware RAID typically has every advantage over
linux mdraid but two:

1.  Flexibility--mdraid can span disks across many HBAs
2.  Absolute performance**

**Host CPUs are much faster than HBA RAID ASICs, 3-4GHz vs 500-800MHz,
especially with parity calculations (RAID5/6), and have many more cores,
typically 4-24 in a single or dual socket machine, vs 1-2 cores in an
HBA RAID ASIC.  Thus they have an enormous raw horsepower advantage.  A
good hardware RAID HBA will have RAID5/6 performance similar to mdraid
w/up to ~16-24 SAS 15K drives.  At some drive count beyond that the RAID
ASIC will hit its performance ceiling.

Many people use hybrid setups, where hardware RAID is used at the
controller level and mdraid is used to span the HW RAID devices into a
single Linux disk device, allowing for a single filesystem across dozens
of drives connected to 2, 4, or more RAID HBAs.  With some application
workloads multiple RAID groups are created per controller and these are
then spanned or striped with md, for example high IOPS maildir servers.
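
Such a hybrid span is a one-liner with md's linear (or striped) 
personality; a sketch, assuming two hardware RAID volumes that the HBA 
exposes as plain disks (device names are illustrative, commands need root):

```shell
# Concatenate two hardware RAID volumes into one Linux block device:
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sda /dev/sdb

# Or stripe them (RAID 0) when throughput matters more:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# One filesystem across all the drives behind both volumes:
mkfs -t xfs /dev/md0
```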

You mentioned previously you don't have a high performance requirement
in which case I'd recommend hardware RAID.  That said, if you want to
use RAID5/6 instead of RAID10, md RAID may be more attractive, as the
parity RAID performance of the SAS9240 is less than stellar.  Depending
on your workloads, it may perform great.  Just remember that the LSI
SAS9240 is a high end JBOD HBA with RAID firmware.  It can act just like
a HW RAID card from the viewpoint of the OS and the user, but it is not a
HW RAID card.  It's not even an entry level RAID controller--note the
lack of cache and BBU option.

Given what you've told us so far, I'd say you'd likely be very happy
using the HW RAID mode.

> Or if I ever want to exchange the mainboard and use one with a SAS 
> controller onboard?

If your new mobo has an onboard SAS controller it will be an LSI
controller, so this is a non issue.  But why would you switch to that
and toss the 9240 anyway?  They use the same ASIC:  the SAS2008.  Look
at SuperMicro, Tyan, Intel, and Asus  boards.  They all use the SAS2008,
or an older or newer LSI SAS chip.

LSI is the only viable motherboard-down SAS solution on the market, at
least on quality server mobos.  There may be junk floating around the
Asian markets with different onboard SAS chips.  If so, I'm unaware of
them, and I'd recommend to anyone to stay away from them as the chips
will be Marvell, JMicron, etc--cheap and unreliable.

-- 
Stan






Re: Supermicro SAS controller

2012-05-05 Thread Camaleón
On Sat, 05 May 2012 10:44:02 +, Ramon Hofer wrote:

> On Fri, 04 May 2012 15:38:10 +, Camaleón wrote:

(...)

>> http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS
>> 
>> Ufff, I was not aware of this:
>> 
>> ***
>> There is currently no known opensource tool for theses cards. 
>> ***
>> 
>> Oh, how bad... in contrast, 3ware seems to fully support open source,
>> or at least that's what can be read here:
>> 
>> http://hwraid.le-vert.net/wiki/3Ware
>> 
>> ***
>> 3Ware supports Linux and provide an opensource kernel driver which has
>> been part of Linux for ages
>> ***
>> 
>> This is something to reconsider.
> 
> Yes, this is really not what I wanted to read :-o

It was also unknown to me... I thought this manufacturer was providing 
more support to linux other than having a bunch of binaries at their 
site :-/
 
> So I think I'll just go for the LSI card and use mdadm. The 3Ware card I
> found at my dealer was twice the price of the LSI...

Well, even though the card manufacturer does not provide an open source 
driver, you can still use the binary utilities built for Debian, 
available at the mentioned site. You can always reconsider moving to 
mdadm if something goes wrong or if you get into some trouble while 
setting up the hardware RAID card.

Still, my 0.2 cents go for a hardware raid :-)

>> Just a note of caution here.
>> 
>> RAID 5 with big hard disks can be a real pain and a real problem. If
>> one of the arrays goes down, the rebuilding operation can take up to
>> "days" (depending on the controller's capacity) and if while the RAID
>> is rebuilding a second disk of the array also goes down for whatever
>> reason (it can be a false positive) you can't recover your data, at
>> least not that easily. That's why most people are switching from raid 5
>> to raid 6, it adds an extra layer of security with no remarkable drawbacks.
> 
> That's true.
> 
> On the other hand it isn't possible to have different disk sizes in a
> raid 6 either.

I think yes, that you can, but only the lowest of the disk capacities 
will be used (this applies to all of the RAID levels). Software RAID has 
no such limitation because you can mirror partitions, instead.

> So my plan seems still reasonable to me to have several 4 disks raid 5
> arrays. Like that I'm flexible to add bigger disks in future as they
> become cheaper and still can keep my old 1.5 TB disks. And if I would go
> for raid 6 with the 4 disk array I would lose a third of the capacity.

(...)

You've been warned :-)

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-05 Thread Ramon Hofer
On Fri, 04 May 2012 15:38:10 +, Camaleón wrote:

> On Fri, 04 May 2012 10:48:36 +, Ramon Hofer wrote:
> 
>> On Thu, 03 May 2012 20:27:02 +, Camaleón wrote:
> 
>>> There's some useful information in one of the links I sent before:
>>> 
>>> http://wiki.debian.org/LinuxRaidForAdmins
>> 
>> Maybe I'm missing something but the page doesn't say anything about cli
>> tools of the megaraid cards :-?
> 
> Yes... no info is also info after all. Not the one you'd like to read
> but that's how it is :-). Anyway, the page can be simply outdated or
> lacking that specific information.
> 
> Also, the expanded information on the "megaraid_sas" driver points to
> the page you sent before:
> 
> http://hwraid.le-vert.net/wiki/DebianPackages
> 
> Where you can find a set of tools for your driver ("megaclisas-status"
> and "megacli") as well as more information about the LSI controllers and
> the driver status:
> 
> http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS
> 
> Ufff, I was not aware of this:
> 
> ***
> There is currently no known opensource tool for theses cards.
> ***
> 
> Oh, how bad... in contrast, 3ware seems to fully support open source,
> or at least that's what can be read here:
> 
> http://hwraid.le-vert.net/wiki/3Ware
> 
> ***
> 3Ware supports Linux and provide an opensource kernel driver which has
> been part of Linux for ages
> ***
> 
> This is something to reconsider.

Yes, this is really not what I wanted to read :-o

So I think I'll just go for the LSI card and use mdadm. The 3Ware card I 
found at my dealer was twice the price of the LSI...




>>> Just an additional note. By reading the chosen card specs it seems it
>>> does not support a RAID 6 level (which is better than RAID 5 because
>>> it allows the failure of 2 disks) so that can be a handicap.
>> 
>> This should be no problem. I plan to use four slots without raid for
>> mythtv.
>> I already have a 4x 1.5 TB disks raid 5 and another 4x 2 TB disks raid
>> 5. When I want to add more disks I can e.g. go for 3 TB disks and set 4
>> of them up as another raid5.
>> Like this I can use disks with different sizes.
> 
> Just a note of caution here.
> 
> RAID 5 with big hard disks can be a real pain and a real problem. If one
> of the arrays goes down, the rebuilding operation can take up to "days"
> (depending on the controller's capacity) and if while the RAID is
> rebuilding a second disk of the array also goes down for whatever reason
> (it can be a false positive) you can't recover your data, at least not
> that easily. That's why most people are switching from raid 5 to raid 6,
> it adds an extra layer of security with no remarkable drawbacks.

That's true.

On the other hand it isn't possible to have different disk sizes in a 
raid 6 either.
So my plan still seems reasonable to me: several 4-disk raid 5 
arrays. Like that I'm flexible to add bigger disks in future as they 
become cheaper and can still keep my old 1.5 TB disks.
And if I went for raid 6 with the 4-disk arrays I would lose a third 
of the capacity.


>> I'm thinking of combining the arrays then to a lvm... But I don't know
>> if this is a good idea as it adds more complexity :-?
> 
> Yes, it will be a good idea (it will allow you to manage your volumes in
> a more flexible manner) and yes, it will add an extra layer of
> complexity (RAID+LVM) :-)

Ok, hope it won't be too complicated :-)


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-04 Thread Camaleón
On Fri, 04 May 2012 10:48:36 +, Ramon Hofer wrote:

> On Thu, 03 May 2012 20:27:02 +, Camaleón wrote:

>> There's some useful information in one of the links I sent before:
>> 
>> http://wiki.debian.org/LinuxRaidForAdmins
> 
Maybe I'm missing something but the page doesn't say anything about cli tools 
> of the megaraid cards :-?

Yes... no info is also info after all. Not the one you'd like to read but 
that's how it is :-). Anyway, the page can be simply outdated or lacking 
that specific information. 

Also, the expanded information on the "megaraid_sas" driver points to the 
page you sent before:

http://hwraid.le-vert.net/wiki/DebianPackages

Where you can find a set of tools for your driver ("megaclisas-status" 
and "megacli") as well as more information about the LSI controllers and 
the driver status:

http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS

Ufff, I was not aware of this:

***
There is currently no known opensource tool for theses cards. 
***

Oh, how bad... in contrast, 3ware seems to fully support open source, or 
at least that's what can be read here:

http://hwraid.le-vert.net/wiki/3Ware

***
3Ware supports Linux and provide an opensource kernel driver which has 
been part of Linux for ages 
***

This is something to reconsider.

>> In case of disastrous raid failure you depend completely on the
>> manufacturer and what options they can provide (although data
>> recovery can usually be done at professional labs).
> 
> My data isn't so important that it would justify restoring it with
> professional help.
> 
> Still I have to do backups of the really important stuff to dvd or a
separate external drive...

Yes, and very good point. A RAID system can never substitute for backups; 
they have to be set up in parallel (RAID cares about hardware issues while 
backups cover software/logical/user errors).
 
>> Just an additional note. By reading the chosen card specs it seems it
>> does not support a RAID 6 level (which is better than RAID 5 because it
>> allows the failure of 2 disks) so that can be a handicap.
> 
> This should be no problem. I plan to use four slots without raid for
> mythtv.
> I already have a 4x 1.5 TB disks raid 5 and another 4x 2 TB disks raid
> 5. When I want to add more disks I can e.g. go for 3 TB disks and set 4
> of them up as another raid5.
> Like this I can use disks with different sizes.

Just a note of caution here. 

RAID 5 with big hard disks can be a real pain and a real problem. If one 
of the arrays goes down, the rebuilding operation can take up to 
"days" (depending on the controller's capacity), and if, while the RAID is 
rebuilding, a second disk of the array also goes down for whatever reason 
(it can be a false positive), you can't recover your data, at least not 
that easily. That's why most people are switching from raid 5 to raid 6; 
it adds an extra layer of security with no remarkable drawbacks.

> I'm thinking of combining the arrays then to a lvm... But I don't know
> if this is a good idea as it adds more complexity :-?

Yes, it will be a good idea (it will allow you to manage your volumes in 
a more flexible manner) and yes, it will add an extra layer of complexity 
(RAID+LVM) :-)
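
Combining md arrays under LVM takes only a few commands; a sketch (the 
volume group and logical volume names are hypothetical, the filesystem 
choice is an assumption, and everything needs root):

```shell
# Turn each raid 5 array into an LVM physical volume:
pvcreate /dev/md0 /dev/md1

# Pool them into one volume group...
vgcreate storage /dev/md0 /dev/md1

# ...and carve out a single logical volume spanning both arrays:
lvcreate --name media --extents 100%FREE storage
mkfs -t xfs /dev/storage/media
```

The extra layer pays off later: a new md array can be added to the volume 
group with vgextend and the logical volume grown online.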

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-04 Thread Ramon Hofer
On Thu, 03 May 2012 20:27:02 +, Camaleón wrote:

> On Thu, 03 May 2012 18:27:55 +, Ramon Hofer wrote:
> 
>> On Thu, 03 May 2012 16:30:00 +, Camaleón wrote:
>> 
>> 
>>
>>> 2/ The card's manufacturer provides a set of CLI tools (also GUI/web
>>> based) to control all of the aspects of the RAID volume (from array
>>> creation/modification/reconstruction/rebuilding/deletion/on-the-fly
>>> volume expansion/current array status... up to firmware update, if
>>> possible)
>> 
>> Didn't find any infos about that :-?
> 
> There's some useful information in one of the links I sent before:
> 
> http://wiki.debian.org/LinuxRaidForAdmins

Maybe I'm missing something but the page doesn't say anything about cli tools 
of the megaraid cards :-?


>>> 3/ The manufacturer is enough linux-friendly so that in the event of a
>>> problem you can contact them with no regrets :-)
>> 
>> Hope I don't have to find out about that ;-)
> 
> Bugs exist even in good hardware, so it's better to have someone on the
> other end of the line who at least understands what you are talking about :-)

That's absolutely true!

 
>> Btw: Wouldn't it be better to use software raid? In case of failure of
>> the controller I would need to get exactly the same card again? Or if I
>> ever want to exchange the mainboard and use one with a SAS controller
>> onboard?
> 
Some people prefer "mdadm" (linux raid) instead of using a hardware raid
card (it can be more flexible, yes) but IMO, a good raid card provides
better performance and is easier to manage than a software raid
system.
> 
In case of disastrous raid failure you depend completely on the
manufacturer and what options they can provide (although data
recovery can usually be done at professional labs).

My data isn't so important that it would justify restoring it with 
professional help.

Still I have to do backups of the really important stuff to dvd or a 
separate external drive...


>> Thanks for all your help and advice! Ramon
> 
> Just an additional note. By reading the chosen card specs it seems it
> does not support a RAID 6 level (which is better than RAID 5 because it
allows the failure of 2 disks), so that can be a handicap.

This should be no problem. I plan to use four slots without raid for 
mythtv.
I already have a 4x 1.5 TB disks raid 5 and another 4x 2 TB disks raid 5. 
When I want to add more disks I can e.g. go for 3 TB disks and set 4 of 
them up as another raid5.
Like this I can use disks with different sizes.

I'm thinking of then combining the arrays into an lvm...
But I don't know if this is a good idea as it adds more complexity :-?


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-03 Thread Camaleón
On Thu, 03 May 2012 18:27:55 +, Ramon Hofer wrote:

> On Thu, 03 May 2012 16:30:00 +, Camaleón wrote:
> 
> 
> 
>> In brief, yes, that card seems one of those you can consider to be
>> "safe" enough to not give you many problems :-P
> 
> This sounds very good :-)

Sounds good, but only your own experience will confirm it :-)

My servers are running over an adaptec sata raid controller which is 
supposedly one of those that put your system on the "safe side", but I'll 
reconsider setting up a hardware RAID for the next time should I have the 
chance. And also the manufacturer (I'm a bit dissatisfied with the 
overall result).
 
>> This is a list I made of things one should take into account
>> for hardware RAID cards:
>> 
>> 1/ The driver is included in the kernel (you will avoid many problems)
> 
> This seems to be the case.

If the card uses the "megaraid_sas" kernel module, yes.

>> 2/ The card's manufacturer provides a set of CLI tools (also GUI/web
>> based) to control all of the aspects of the RAID volume (from array
>> creation/modification/reconstruction/rebuilding/deletion/on-the-fly
>> volume expansion/current array status... up to firmware update, if
>> possible)
> 
> Didn't find any info about that :-?

There's some useful information in one of the links I sent before:

http://wiki.debian.org/LinuxRaidForAdmins

>> 3/ The manufacturer is Linux-friendly enough that in the event of a
>> problem you can contact them with no regrets :-)
> 
> Hope I don't have to find out about that ;-)

Bugs exist even in good hardware, so it's better to have a person on the 
wire who at least understands what you are talking about :-)

> Btw: Wouldn't it be better to use software raid? In case of a failure
> of the controller, wouldn't I need to get exactly the same card again? Or
> what if I ever want to exchange the mainboard and use one with an onboard
> SAS controller?

Some people prefer "mdadm" (Linux software raid) instead of using a 
hardware raid card (it can be more flexible, yes) but IMO, a good raid 
card provides better performance and is easier to manage than a software 
raid system.

In case of a disastrous raid failure you depend completely on the 
manufacturer and what options they can provide (although data 
recovery can usually be done at professional labs).

> Thanks for all your help and advice! Ramon

Just an additional note. Reading the chosen card's specs, it seems it 
does not support RAID 6 (which is better than RAID 5 because it 
tolerates the failure of 2 disks), so that can be a handicap.

Greetings,

-- 
Camaleón


Archive: http://lists.debian.org/jnupml$78r$1...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-03 Thread Ramon Hofer
On Thu, 03 May 2012 13:05:55 -0500, Stan Hoeppner wrote:



> His disks are fine.  The Marvell SAS driver is the problem.  This is
> thoroughly documented.  The mvsas driver is simply crap.

Something that the Supermicro support told me just came back into my mind 
and worries me: With the Norco RPC-4220 case I should use Enterprise 
grade disks.

Here's what they wrote:
> If the harddrives are Desktop grade then they cannot cope with the
> vibrations of a storage Server chassis nor 19" rack vibrations

But since I replaced all the fans with quiet ones that are mounted with 
some kind of rubber pins instead of screws, there shouldn't be much 
vibration. Or is the vibration of the disks themselves a problem?

Can I somehow observe how healthy they still are? Maybe with smartctl or 
something similar?
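I guess something like this would be a starting point (just a sketch; smartctl is in the smartmontools package, and /dev/sdb is only an example device name):

```shell
# Overall health verdict from the drive's own SMART data
smartctl -H /dev/sdb

# Detailed attributes: reallocated sectors, pending sectors, temperature...
smartctl -A /dev/sdb

# Kick off a long self-test in the background, read the result later
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb
```

The smartd daemon could presumably also watch the disks continuously and mail on degradation.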



Thanks again for all the information!
Ramon



Archive: http://lists.debian.org/jnukop$rq8$3...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-03 Thread Ramon Hofer
On Thu, 03 May 2012 12:21:17 -0500, Stan Hoeppner wrote:

> On 5/2/2012 11:30 AM, Ramon Hofer wrote:
>> On Tue, 01 May 2012 15:43:13 -0500, Stan Hoeppner wrote:
>> 
>>> On 5/1/2012 12:37 PM, Ramon Hofer wrote:
>>>
>>>> I have the RPC-4220 case with 20 hot-swap slots.
>>>
>>> You should have mentioned this sooner, as there is a better solution
>>> than buying 3 of the 9211-8i, which is $239*3= $717.  And you end up
>>> with one SFF8087 port wasted.
>>>
>>> Instead, get a 24 port Intel 6Gb SAS expander:
>>> http://www.provantage.com/intel-res2sv240~7ITSP0V8.htm $238.24
>>>
>>> and the LSI 9240-4i, same LSISAS2008 chip as the 9211-8i:
>>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118129
>>> $189.99
>>>
>>> Total:  $429
>>>
>>> W/4 extra SFF8087 cables (assuming you already have 2):
>>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116093 $60
>>>
>>> Total:  $489
>>>
>>> This solution connects all 20 drives on all 5 backplanes to the HBA,
>>> and will give you ~1.5GB/s read throughput with 20 7.2k RPM drives
>>> using md RAID 5/6, and ~800MB/s with hardware or md RAID10.
>>>
>>> You connect the SFF8087 of the LSI card to port 0 of the SAS
>>> expander. You then connect the remaining 5 ports to the 5 SFF8087
>>> ports on the 5 backplanes.
>> 
>> Thanks a lot for the suggestions. I have found a shop where I live and
>> will order them tomorrow. Do you have experience with these cards?
> 
> Hi Ramon,
> 
> Yes.  Note that the Intel SAS expander has a PCIe x4 edge connector on
> the PCB and it also has a standard 4 pin Molex connector.  The PCB has
> mounting holes to allow mounting it directly to your chassis via
> motherboard style brass or plastic stand-offs.  This method may likely
> require drilling holes in your chassis.  I often use this method to
> avoid wasting a PCIe slot.  If you have plenty of free PCIe x4/8/16
> slots mount it in one as it's much easier.  Note only power is drawn
> from the PCIe slot.  There is no data xfer.  Data xfer occurs only via
> the SFF8087 ports.  Here's the manual:
> http://download.intel.com/support/motherboards/server/sb/
e93121003_res2sv240_hwug.pdf
> Nice picture and info:
> http://www.intel.com/content/www/us/en/servers/raid/raid-controller-
res2sv240.html
> 
> Using the LSI 9240-4i HBA will be very similar to using the SuperMicro
> Marvell based SAS card, but better.  Simply enter the BIOS at boot and
> configure the drives as you wish.  This card is a real hardware RAID
> controller, not fakeraid, so you can use it as such.  It simply lacks
> cache memory and the more advanced RAID features of LSI's higher end
> RAID cards.  You can even install Debian onto and boot directly from a
> RAID volume on this card.  If you wish to use mdraid instead, configure
> the drives as JBOD so md can see the individual drives.
> 
> If you choose to use the hardware RAID feature, note that you can have a
> maximum of 16 drives per RAID volume.  Thus, if you have 20 drives in
> that chassis, you'd want to create two RAID5 or two RAID10 volumes.  If
> you use a separate boot/OS drive, you can do two hardware RAID5 arrays,
> then create an md linear or RAID0 array of these two hardware volumes so
> you have a single file system across all the drives.  Lots of
> possibilities.  All the info you could want/need for the 9240 is here:
> http://www.lsi.com/products/storagecomponents/Pages/
MegaRAIDSAS9240-4i.aspx


Thank you very much for all the information and the suggestion of the 
cards.
I've just ordered them :-)

What would you suggest: using the hardware raid functionality or mdadm?
I have already used mdadm for some time now and like the ability to 
change the arrays while the system is running. Can I do this with the 
hardware raid too?
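By changing arrays at runtime I mean things like growing an md RAID5 online — a rough sketch with hypothetical device names:

```shell
# Grow a running RAID5 from 4 to 5 disks while the filesystem stays mounted
mdadm --add /dev/md0 /dev/sdf           # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=5  # reshape to use it as a data disk
cat /proc/mdstat                        # follow the reshape progress

# Afterwards, enlarge the filesystem online (ext3/ext4)
resize2fs /dev/md0
```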


Thanks again
Ramon


Archive: http://lists.debian.org/jnujnn$rq8$2...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-03 Thread Ramon Hofer
On Thu, 03 May 2012 16:30:00 +, Camaleón wrote:



> In brief, yes, that card seems one of those you can consider to be
> "safe" enough not to have many problems :-P

This sounds very good :-)


>>> Mmm, yes. I can't tell for that specific model but LSI is a good
>>> manufacturer for HBA solutions and also linux-friendly, at least
>>> that's what I've heard :-)
>> 
>> Yes, I hope I won't have any problems with them. Especially because
>> they too promise SuSE and Red Hat support but only have a Debian 5
>> driver on their homepage.
>> 
>> But since the hwraid page shows good support for MegaRAID cards I'm
>> optimistic :-)
> 
> At this point, let me share my own experience with hardware RAID cards
> because "not all that glitters is gold" :-)
> 
> This is a list I made of things one should take into account
> for hardware RAID cards:
> 
> 1/ The driver is included in the kernel (you will avoid many problems)

This seems to be the case.


> 2/ The card's manufacturer provides a set of CLI tools (also GUI/web
> based) to control all of the aspects of the RAID volume (from array
> creation/modification/reconstruction/rebuilding/deletion/on-the-fly
> volume expansion/current array status... up to firmware update, if
> possible)

Didn't find any info about that :-?


> 3/ The manufacturer is Linux-friendly enough that in the event of a
> problem you can contact them with no regrets :-)

Hope I don't have to find out about that ;-)

Btw: Wouldn't it be better to use software raid? In case of a failure of 
the controller, wouldn't I need to get exactly the same card again?
Or what if I ever want to exchange the mainboard and use one with an 
onboard SAS controller?
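Part of why I ask: as far as I understand, mdraid stores its metadata on the disks themselves, so after moving the disks to any controller something like this should bring the array back (a sketch, device names hypothetical):

```shell
# md superblocks live on the member disks, not in the controller,
# so after moving the disks to a different controller or board:
mdadm --examine /dev/sd[b-e]   # inspect the RAID metadata on each disk
mdadm --assemble --scan        # reassemble any arrays found on attached disks
cat /proc/mdstat               # check that the array is back online
```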


>>> Mmm, yes, there's something strange there. Ah, I think I got it :-)
>>>
>>>> $ sudo lspci | grep Marvel
>>>> 01:00.0 RAID bus controller: Marvell Technology Group Ltd.
>>>> MV64460/64461/64462 System Controller, Revision B (rev 01)
>>> 
>>> This can be the motherboard SATA 2 controller.
>>> 
>>>> 02:00.0 RAID bus controller: Marvell Technology Group Ltd.
>>>> MV64460/64461/64462 System Controller, Revision B (rev 01)
>>> 
>>> This can be the SAS add-on card.
>> 
>> I think they probably are the two SAS cards
> 
> I also thought so, but it cannot be that way :-)
> 
> (note the add-on card is SATA 2 -and thus 3 Gbps- while one of the
> embedded ports is rated at 6 Gbps and there's only one port listed that
> features the 6 Gbps speed)
> 
>>> Does this make more sense? Yes, exact numbers do not match but this
>>> can be due to a simple identification problem ("update-pciids" could
>>> solve this).
>> 
>> I did update-pciids but the numbers didn't change. But anyhow they are
>> the same as on the debian wiki pci database. Or what numbers don't
>> match?
> 
> I wouldn't bother about that. Maybe it's just that the chipsets are
> still not listed at the upstream PCI ID database.

Ok, I already forgot :-)


Thanks for all your help and advice!
Ramon


Archive: http://lists.debian.org/jnuinb$rq8$1...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-03 Thread Stan Hoeppner
On 5/3/2012 8:48 AM, Ramon Hofer wrote:

> Thanks for the warning. I will carefully check about the LSI 9240-4i and 
> the Intel 6Gb SAS expander.

SAS expanders are transparent.  They have no bearing on supported drive
sizes.  That is entirely up to the HBA firmware.  And in this case the
9240 supports >2TB drives, as listed in the docs I linked.

> I was just googling for the LSI SAS 9240-4i. It seems as it uses the same 
> chipset as the Intel expansion card (see post #5 in [1]).

No, they're different chips.  LSI manufactures a few thousand different
ICs.  Both of these products have LSI chips, but they are not the same
chip.  The Intel board has an LSISAS2x24 SAS expander ASIC.  The LSI
9240-4i RAID HBA has an LSISAS2008 storage controller ASIC.  These are
entirely different ICs.

> They should be supported by the hwraid packages [2].
> 
> [1] http://hardforum.com/showthread.php?p=1037845618
> [2] http://hwraid.le-vert.net/wiki/DebianPackages
> 
> So I think this looks promising for the controller and expansion cards?

The default Debian kernel should automatically load the correct megaraid
driver if using JBOD mode.  If using RAID mode you may have to do some
manual driver intervention.  That 2nd link seems to show all the
packages/binaries you'll need for management.
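Once the card is in, a quick sanity check of which driver actually bound to it might look like this (a sketch):

```shell
# Show storage controllers together with the kernel driver each one uses
lspci -nn -k | grep -iA3 raid

# Confirm a megaraid module is actually loaded
lsmod | grep -i megaraid
```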

>>>> Glad it's more stable now with an updated kernel but I'd keep
>>>> monitoring the array for some days... and if you experience another
>>>> issue with the disks, I would reconsider replacing the hard disk
>>>> controller or moving to SATA disks instead.

His disks are fine.  The Marvell SAS driver is the problem.  This is
thoroughly documented.  The mvsas driver is simply crap.

>>> Thanks.
>>> I think I'll go with the solution Stan posted (LSI 9240-4i and Intel
>>> SAS expander).
>>
>> Mmm, yes. I can't tell for that specific model but LSI is a good
>> manufacturer for HBA solutions and also linux-friendly, at least that's
>> what I've heard :-)

The overall best quality, performance, and best supported SAS/SATA HBAs
on the planet, period.  Which is why so many tier 1 server vendors sell
OEM LSI cards, the biggest two being Dell (PERC) and IBM (ServeRAID).
Their boards are rebadged LSI boards with slightly tweaked firmware
(branding only, not function).  This goes all the way back to the days
well before LSI acquired AMI and Mylex.  Dell OEM'd AMI's MegaRAID
boards and IBM OEM'd Mylex boards.  IBM acquired Mylex in 2001, then
sold them to LSI a few years later.  Then LSI acquired the RAID division
of AMI, and ended up with the two premier RAID card manufacturers on the
planet.  Which is why they are the RAID powerhouse in the market today.
 Oh, forgot to mention they also acquired 3ware a few years ago.  That
acquisition was more to kill the competition than steal their IP.  All
the current 3ware boards are rebadged LSI models, or slightly different
configurations of same.  And they still have a 3ware-esque firmware.

> Yes, I hope I won't have any problems with them. Especially because they 
> too promise SuSE and Red Hat support but only have a Debian 5 driver on 
> their homepage.

Never use the drivers from the LSI site.  They're always out of date.
Use what came with your kernel.  If it doesn't work right, upgrade your
kernel from backports (which you should do anyway).  Mainline always has
more recent LSI drivers.

> But since the hwraid page shows good support for MegaRAID cards I'm 
> optimistic :-)

I already told you all that. ;)  But I don't blame you for
verifying--always smart to do.

-- 
Stan


Archive: http://lists.debian.org/4fa2c903.20...@hardwarefreak.com



Re: Supermicro SAS controller

2012-05-03 Thread Stan Hoeppner
On 5/2/2012 11:30 AM, Ramon Hofer wrote:
> On Tue, 01 May 2012 15:43:13 -0500, Stan Hoeppner wrote:
> 
>> On 5/1/2012 12:37 PM, Ramon Hofer wrote:
>>
>>> I have the RPC-4220 case with 20 hot-swap slots.
>>
>> You should have mentioned this sooner, as there is a better solution
>> than buying 3 of the 9211-8i, which is $239*3= $717.  And you end up
>> with one SFF8087 port wasted.
>>
>> Instead, get a 24 port Intel 6Gb SAS expander:
>> http://www.provantage.com/intel-res2sv240~7ITSP0V8.htm $238.24
>>
>> and the LSI 9240-4i, same LSISAS2008 chip as the 9211-8i:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118129 $189.99
>>
>> Total:  $429
>>
>> W/4 extra SFF8087 cables (assuming you already have 2):
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116093 $60
>>
>> Total:  $489
>>
>> This solution connects all 20 drives on all 5 backplanes to the HBA, and
>> will give you ~1.5GB/s read throughput with 20 7.2k RPM drives using md
>> RAID 5/6, and ~800MB/s with hardware or md RAID10.
>>
>> You connect the SFF8087 of the LSI card to port 0 of the SAS expander.
>> You then connect the remaining 5 ports to the 5 SFF8087 ports on the 5
>> backplanes.
> 
> Thanks a lot for the suggestions. I have found a shop where I live and 
> will order them tomorrow. Do you have experience with these cards?

Hi Ramon,

Yes.  Note that the Intel SAS expander has a PCIe x4 edge connector on
the PCB and it also has a standard 4 pin Molex connector.  The PCB has
mounting holes to allow mounting it directly to your chassis via
motherboard style brass or plastic stand-offs.  This method may likely
require drilling holes in your chassis.  I often use this method to
avoid wasting a PCIe slot.  If you have plenty of free PCIe x4/8/16
slots mount it in one as it's much easier.  Note only power is drawn
from the PCIe slot.  There is no data xfer.  Data xfer occurs only via
the SFF8087 ports.  Here's the manual:
http://download.intel.com/support/motherboards/server/sb/e93121003_res2sv240_hwug.pdf
Nice picture and info:
http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html

Using the LSI 9240-4i HBA will be very similar to using the SuperMicro
Marvell based SAS card, but better.  Simply enter the BIOS at boot and
configure the drives as you wish.  This card is a real hardware RAID
controller, not fakeraid, so you can use it as such.  It simply lacks
cache memory and the more advanced RAID features of LSI's higher end
RAID cards.  You can even install Debian onto and boot directly from a
RAID volume on this card.  If you wish to use mdraid instead, configure
the drives as JBOD so md can see the individual drives.

If you choose to use the hardware RAID feature, note that you can have a
maximum of 16 drives per RAID volume.  Thus, if you have 20 drives in
that chassis, you'd want to create two RAID5 or two RAID10 volumes.  If
you use a separate boot/OS drive, you can do two hardware RAID5 arrays,
then create an md linear or RAID0 array of these two hardware volumes so
you have a single file system across all the drives.  Lots of
possibilities.  All the info you could want/need for the 9240 is here:
http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9240-4i.aspx
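The "two hardware RAID5 volumes joined by md" idea, sketched (device names are examples only; the controller exposes each hardware RAID volume as a single block device):

```shell
# Suppose the 9240 presents the two hardware RAID5 volumes
# as /dev/sdb and /dev/sdc; concatenate them with md linear:
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc

# One filesystem then spans all the drives
mkfs.xfs /dev/md0
```

Use `--level=0` instead of `--level=linear` if you prefer striping the two volumes for throughput rather than simple concatenation.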

-- 
Stan


Archive: http://lists.debian.org/4fa2be8d.7030...@hardwarefreak.com



Re: Supermicro SAS controller

2012-05-03 Thread Camaleón
On Thu, 03 May 2012 13:48:33 +, Ramon Hofer wrote:

> On Wed, 02 May 2012 17:49:53 +, Camaleón wrote:

(removing some stuff)

>> Just let me add a note of warning here: whatever SAS/SATA card you
>> finally choose, ensure it has support for big hard disks (>2-3 TiB)
>> just in case, because this information is not usually displayed in the
>> specs.
> 
> Thanks for the warning. I will carefully check about the LSI 9240-4i and
> the Intel 6Gb SAS expander.
> 
> I was just googling for the LSI SAS 9240-4i. It seems as it uses the
> same chipset as the Intel expansion card (see post #5 in [1]).
> 
> They should be supported by the hwraid packages [2].
> 
> [1] http://hardforum.com/showthread.php?p=1037845618 
> [2] http://hwraid.le-vert.net/wiki/DebianPackages
> 
> So I think this looks promising for the controller and expansion cards?

The first time I had to buy a hardware raid controller I looked into 
the following links, which helped me a lot to "separate the sheep from 
the goats":

http://linuxmafia.com/faq/Hardware/sata.html
https://ata.wiki.kernel.org/index.php/SATA_RAID_FAQ
http://wiki.debian.org/LinuxRaidForAdmins

(note that although these are SCSI/SAS/SATA adapter controllers, they can 
all share the same set of drivers)

Although the first link is not very up-to-date, it's still very helpful 
when it comes to distinguishing between the drivers/chipsets of the 
controllers.

To keep yourself at the "safe-side", I would recommend sticking to the 
set of drivers listed at the beginning of the page, that is:

***
Hardware RAID cards have drivers outside these two collections (e.g., 
3w-xxxx, 3w-9xxx, aacraid, cciss, dac960, dpt_i2o, gdth, ips, megaraid, 
megaraid2, megaraid_mbox aka megaraid-newgen, mpt*).
***

These drivers are usually open source and so can be included in the 
kernel, so you don't even need to install anything to get the card and 
the array levels detected at install time.

In brief, yes, that card seems one of those you can consider to be "safe" 
enough not to have many problems :-P

>> Mmm, yes. I can't tell for that specific model but LSI is a good
>> manufacturer for HBA solutions and also linux-friendly, at least that's
>> what I've heard :-)
> 
> Yes, I hope I won't have any problems with them. Especially because they
> too promise SuSE and Red Hat support but only have a Debian 5 driver on
> their homepage.
> 
> But since the hwraid page shows good support for MegaRAID cards I'm
> optimistic :-)

At this point, let me share my own experience with hardware RAID cards 
because "not all that glitters is gold" :-)

This is a list I made of things one should take into account for 
hardware RAID cards:

1/ The driver is included in the kernel (you will avoid many problems)

2/ The card's manufacturer provides a set of CLI tools (also GUI/web 
based) to control all of the aspects of the RAID volume (from array 
creation/modification/reconstruction/rebuilding/deletion/on-the-fly 
volume expansion/current array status... up to firmware update, if 
possible)

3/ The manufacturer is Linux-friendly enough that in the event of a 
problem you can contact them with no regrets :-)

>> Mmm, yes, there's something strange there. Ah, I think I got it :-)
>>
>>> $ sudo lspci | grep Marvel
>>> 01:00.0 RAID bus controller: Marvell Technology Group Ltd.
>>> MV64460/64461/64462 System Controller, Revision B (rev 01)
>> 
>> This can be the motherboard SATA 2 controller.
>> 
>>> 02:00.0 RAID bus controller: Marvell Technology Group Ltd.
>>> MV64460/64461/64462 System Controller, Revision B (rev 01)
>> 
>> This can be the SAS add-on card.
> 
> I think they probably are the two SAS cards

I also thought so, but it cannot be that way :-)

(note the add-on card is SATA 2 -and thus 3 Gbps- while one of the 
embedded ports is rated at 6 Gbps and there's only one port listed that 
features the 6 Gbps speed)

>> Does this make more sense? Yes, exact numbers do not match but this can
>> be due to a simple identification problem ("update-pciids" could solve
>> this).
> 
> I did update-pciids but the numbers didn't change. But anyhow they are
> the same as on the debian wiki pci database. Or what numbers don't
> match?

I wouldn't bother about that. Maybe it's just that the chipsets are still 
not listed at the upstream PCI ID database.

Greetings,

-- 
Camaleón


Archive: http://lists.debian.org/jnubq8$78r$6...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-03 Thread Ramon Hofer
On Wed, 02 May 2012 17:49:53 +, Camaleón wrote:

> On Wed, 02 May 2012 16:19:40 +, Ramon Hofer wrote:
> 
>> On Wed, 02 May 2012 14:21:36 +, Camaleón wrote:
> 
>>> Ah, okay. This one:
>>> 
>>> http://www.supermicro.com/products/motherboard/Core/P67/C7P67.cfm
>>> 
>>> The board has no SAS ports but it features 8 SATA ports (4 SATA2 and
>>> 4 SATA3), aren't those enough for your purpose? :-?
>> 
>> Yes, that's the mainboard I got.
>> 
>> The case has two places to add OS drives, one for a CD-ROM and 20 hot-
>> swappable disks.
>> It was available with either SAS or SATA connectors. But I would have
>> needed 23 SATA connectors on the mainboard or addon cards. The case
>> with 5 SAS connectors was available and the SATA one had much later
>> delivery date so I went for the SAS case.
> 
> But you are still physically limited to the eight ports provided by the
> add-on card, right? :-?

Well, I have two cards and eight SATA ports on the mainboard. With a 
4x SATA-to-SAS cable I can connect another four hot-swap drives and the 
OS drives plus CD-ROM.
Until now I have four 2 TB and four 1.5 TB disks. But I wanted to be able 
to expand when I need more space.


>>> Well, I wonder why you chose to go with SAS drives instead of
>>> using SATA given that the motherboard only has SATA ports. When
>>> someone adds a SAS controller it is usually because he/she wants to
>>> build a mainstream server or expects more performance/reliability
>>> than the average :-)
>> 
>> Since I couldn't find any mainboards with more than 20 SATA ports and
>> enough slots for addon cards (1x PCI, 2x PCI-Ex1 only for the tv
>> cards).
> 
> Okay, I didn't realize you were planning to use all of the available
> hard disk trays of the case :-)
> 
> But then, you will need a SAS controller with expansion capabilities,
> won't you? I may have overlooked it, but the SuperMicro SAS controller
> you first pointed out does not seem to support more than 8 devices.

I have two of these cards. This makes 16 drives which can be attached to 
the controllers.


>>>> Now I have one 500 GB disk as system drive but I'm thinking of adding
>>>> another one as RAID1.
>>> 
>>> This leads me to another question. Why RAID 1 for a media server?
>> 
>> Just because the case has two places for OS disks. But on the other
>> hand it seems interesting to set up a bootable RAID1. And
>> because it's calming to have the safety of the raid as it serves all
>> the media I have: MythTV, LogitechMediaServer, etc. So my family relies
>> on it and isn't amused when the system is down ;-)
> 
> Okay :-)
> 
> Just let me add a note of warning here: whatever SAS/SATA card you
> finally choose, ensure it has support for big hard disks (>2-3 TiB)
> just in case, because this information is not usually displayed in the
> specs.

Thanks for the warning. I will carefully check about the LSI 9240-4i and 
the Intel 6Gb SAS expander.

I was just googling for the LSI SAS 9240-4i. It seems as it uses the same 
chipset as the Intel expansion card (see post #5 in [1]).

They should be supported by the hwraid packages [2].

[1] http://hardforum.com/showthread.php?p=1037845618
[2] http://hwraid.le-vert.net/wiki/DebianPackages

So I think this looks promising for the controller and expansion cards?


>>> Okay, let's see what we have for now:
>>> 
>>> - A motherboard with 8 SATA ports
>>> - A 4U case with up to 20 hot-swap drive bays for the disks (SATA/SAS)
>>> 
>>> I wonder why you have not considered using SATA hard disks :-)
>> 
>> Besides the longer delivery time, I couldn't find a cheaper solution
>> than the two Supermicro SAS cards. The rest goes to the OS disks and
>> optical drive.
> 
> Ah, so your plan was adding two of these eight-port SAS add-on cards to
> get a total of 16 hard disks.

Yes exactly :-)
But I should have searched for info more carefully :-?


>>>> But in the meantime I have installed the bpo kernel and it seems to
>>>> be working now...
>>>> At least it never ran the disk check for so long, the raid is
>>>> rebuilding and I can see the details as much as I want...
>>> 
>>> Glad it's more stable now with an updated kernel but I'd keep
>>> monitoring the array for some days... and if you experience another
>>> issue with the disks, I would reconsider replacing the hard disk
>>> controller or moving to SATA disks instead.
>> 
>> Thanks.
>> I think I'll go with the solution Stan posted (LSI 9240-4i and Intel
>> SAS expander).
> 
> Mmm, yes. I can't tell for that specific model but LSI is a good
> manufacturer for HBA solutions and also linux-friendly, at least that's
> what I've heard :-)

Yes, I hope I won't have any problems with them. Especially because they 
too promise SuSE and Red Hat support but only have a Debian 5 driver on 
their homepage.

But since the hwraid page shows good support for MegaRAID cards I'm 
optimistic :-)


>>>> But I'm confused about the two different versions too. lspci shows:
>>> 
>>> (I'm copying the rest of the me

Re: Supermicro SAS controller

2012-05-02 Thread Camaleón
On Wed, 02 May 2012 16:19:40 +, Ramon Hofer wrote:

> On Wed, 02 May 2012 14:21:36 +, Camaleón wrote:

>> Ah, okay. This one:
>> 
>> http://www.supermicro.com/products/motherboard/Core/P67/C7P67.cfm
>> 
>> The board has no SAS ports but it features 8 SATA ports (4 SATA2 and 4
>> SATA3), aren't those enough for your purpose? :-?
> 
> Yes, that's the mainboard I got.
> 
> The case has two places to add OS drives, one for a CD-ROM and 20 hot-
> swappable disks.
> It was available with either SAS or SATA connectors. But I would have
> needed 23 SATA connectors on the mainboard or addon cards. The case with
> 5 SAS connectors was available and the SATA one had much later delivery
> date so I went for the SAS case.

But you are still physically limited to the eight ports provided by the 
add-on card, right? :-?

>> Well, I wonder why you chose to go with SAS drives instead of
>> using SATA given that the motherboard only has SATA ports. When someone
>> adds a SAS controller it is usually because he/she wants to build a
>> mainstream server or expects more performance/reliability than the
>> average :-)
> 
> Since I couldn't find any mainboards with more than 20 SATA ports and
> enough slots for addon cards (1x PCI, 2x PCI-Ex1 only for the tv cards).

Okay, I didn't realize you were planning to use all of the available hard 
disk trays of the case :-)

But then, you will need a SAS controller with expansion capabilities, 
won't you? I may have overlooked it, but the SuperMicro SAS controller 
you first pointed out does not seem to support more than 8 devices.

>>> Now I have one 500 GB disk as system drive but I'm thinking of adding
>>> another one as RAID1.
>> 
>> This leads me to another question. Why RAID 1 for a media server?
> 
> Just because the case has two places for OS disks. But on the other hand
> it seems interesting to set up a bootable RAID1. And because
> it's calming to have the safety of the raid as it serves all the media I
> have: MythTV, LogitechMediaServer, etc. So my family relies on it and
> isn't amused when the system is down ;-)

Okay :-)

Just let me add a note of warning here: whatever SAS/SATA card you 
finally choose, ensure it has support for big hard disks (>2-3 TiB) just 
in case, because this information is not usually displayed in the specs.

>> Okay, let's see what we have for now:
>> 
>> - A motherboard with 8 SATA ports
>> - A 4U case with up to 20 hot-swap drive bays for the disks (SATA/SAS)
>> 
>> I wonder why you have not considered using SATA hard disks :-)
> 
> Besides the longer delivery time, I couldn't find a cheaper solution
> than the two Supermicro SAS cards. The rest goes to the OS disks and
> optical drive.

Ah, so your plan was adding two of these eight-port SAS add-on cards to 
get a total of 16 hard disks.
 
>>> But in the meantime I have installed the bpo kernel and it seems to be
>>> working now...
>>> At least it never ran the disk check for so long, the raid is
>>> rebuilding and I can see the details as much as I want...
>> 
>> Glad it's more stable now with an updated kernel but I'd keep
>> monitoring the array for some days... and if you experience another
>> issue with the disks, I would reconsider replacing the hard disk
>> controller or moving to SATA disks instead.
> 
> Thanks.
> I think I'll go with the solution Stan posted (LSI 9240-4i and Intel SAS
> expander).

Mmm, yes. I can't tell for that specific model but LSI is a good 
manufacturer for HBA solutions and also linux-friendly, at least that's 
what I've heard :-)

>>> You're about an hour too late :-o
>>> But I already had the newest firmware on the card.
>> 
>> Oh. Hope all went well O:-)
> 
> Yes, I hope to be able to sell them to Windows users :-)

He, he.. good move :-)

>>> But I'm confused about the two different versions too. lspci shows:
>> 
>> (I'm copying the rest of the message here)
>> 
>>> 01:00.0 RAID bus controller: Marvell Technology Group Ltd.
>>> MV64460/64461/64462 System Controller, Revision B (rev 01)
>> 
>> Well, lspci should display two different sets for the hard disk
>> controller: the SAS adapter (Marvell 88SE6480) and the motherboard
>> embedded chipset (Marvell 88SE9128) but none of these two matches with
>> the lspci output :-?
> 
> You're right:
> http://pastebin.com/raw.php?i=JQtrS5J2
> 
> Why don't they match :-?

Mmm, yes, there's something strange there. Ah, I think I got it :-)

> $ sudo lspci | grep Marvel
> 01:00.0 RAID bus controller: Marvell Technology Group Ltd. 
> MV64460/64461/64462 System Controller, Revision B (rev 01)

This can be the motherboard SATA 2 controller.

> 02:00.0 RAID bus controller: Marvell Technology Group Ltd. 
> MV64460/64461/64462 System Controller, Revision B (rev 01)

This can be the SAS add-on card.

> 03:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 
> Gb/s controller (rev 11)

This is the motherboard SATA 3 controller.

> 03:00.1 IDE interface: Marvell Technology Group
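When the human-readable lspci names don't match the chipsets you expect, as in this case, the numeric PCI IDs are more reliable: every "lspci -nn" line ends with a [vendor:device] pair that identifies the exact chip regardless of how the name database labels it. A minimal sketch of extracting that ID; the sample line and the [11ab:6485] value are illustrative assumptions, not verified output from this board:

```shell
# Sample line in the format "lspci -nn" prints; the device name and the
# [11ab:6485] ID below are illustrative assumptions, not verified values.
line='02:00.0 RAID bus controller [0104]: Marvell Technology Group Ltd. 88SE6480 [11ab:6485] (rev 01)'

# The last [xxxx:xxxx] pair is the vendor:device ID; the class code
# ([0104]) has no colon, so the pattern below skips it.
id=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tail -n 1)
printf '%s\n' "$id"
```

The extracted ID can then be searched in the kernel sources or the PCI ID database to see which driver claims the device.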

Re: Supermicro SAS controller

2012-05-02 Thread Ramon Hofer
On Tue, 01 May 2012 15:43:13 -0500, Stan Hoeppner wrote:

> On 5/1/2012 12:37 PM, Ramon Hofer wrote:
> 
>> I have the RPC-4220 case with 20 hot-swap slots.
> 
> You should have mentioned this sooner, as there is a better solution
> than buying 3 of the 9211-8i, which is $239*3= $717.  And you end up
> with one SFF8087 port wasted.
> 
> Instead, get a 24 port Intel 6Gb SAS expander:
> http://www.provantage.com/intel-res2sv240~7ITSP0V8.htm $238.24
> 
> and the LSI 9240-4i, same LSISAS2008 chip as the 9211-8i:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118129 $189.99
> 
> Total:  $429
> 
> W/4 extra SFF8087 cables (assuming you already have 2):
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116093 $60
> 
> Total:  $489
> 
> This solution connects all 20 drives on all 5 backplanes to the HBA, and
> will give you ~1.5GB/s read throughput with 20 7.2k RPM drives using md
> RAID 5/6, and ~800MB/s with hardware or md RAID10.
> 
> You connect the SFF8087 of the LSI card to port 0 of the SAS expander.
>  You then connect the remaining 5 ports to the 5 SFF8087 ports on the 5
> backplanes.

Thanks a lot for the suggestions. I have found a shop where I live and 
will order them tomorrow. Do you have experience with these cards?


Best regards
Ramon


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/jnrned$9rb$2...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-02 Thread Ramon Hofer
On Wed, 02 May 2012 14:21:36 +, Camaleón wrote:

> On Tue, 01 May 2012 17:29:17 +, Ramon Hofer wrote:
> 
>> On Tue, 01 May 2012 16:16:07 +, Camaleón wrote:
>> 
>>> What kind of hardware do you have (motherboard brand and model) and
>>> what kind of hard disk controller do you need, what are your
>>> expectations?
>>> 
>>> SuperMicro boards (I'm also a SuperMicro user) are usually good enough
>>> to use their embedded SAS/SATA ports, at least if you want to use a
>>> software raid solution :-?
>> 
>> I have a Supermicro C7P67 board. But there aren't any SAS connectors
>> there.
> 
> Ah, okay. This one:
> 
> http://www.supermicro.com/products/motherboard/Core/P67/C7P67.cfm
> 
> The board has no SAS ports but it features 8 SATA ports (4 SATA2 and 4
> SATA3), aren't those enough for your purpose? :-?

Yes, that's the mainboard I got.

The case has two bays for OS drives, one for a CD-ROM, and 20 
hot-swappable disks.
It was available with either SAS or SATA connectors, but I would have 
needed 23 SATA connectors on the mainboard or add-on cards.
The case with 5 SAS connectors was in stock while the SATA one had a much 
later delivery date, so I went for the SAS case.



>> This is a home media server. Earlier I used a debian box with a raid
>> and a disk for mythtv recordings. But I ran out of space and
>> resurrected a ReadyNAS NV+. But this was so slow and I wanted to have
>> everything centralized. So I was looking for something else and found
>> this case:
>> 
>> http://cybershop.ri-vier.nl/4u-rackmnt-server-case-w20-hotswap-satasas-drv-bays-rpc4220-p-18.html
>> 
>> They also had that SAS controller and on the Supermicro website they
>> wrote it would be SUSE and Red Hat compatible. So I thought it would
>> run under Debian too.
> 
> Well, the driver status for most of the hardware out there can be
> "misleading" many times. This is like a double-edged sword: you have to
> carefully read the technical specs of the device to find out the chipset
> it uses and then search for its status in the kernel. If you rely on
> the hardware manufacturer's driver you are stuck: they can drop it at any
> time or not compile it for your Linux distribution version, which seems
> to be the case here :-(

Sounds very true :-(

 
>> So performance isn't very important. But I don't know what exactly you
>> mean by expectations.
> 
> Well, I wonder why it is that you chose to go with SAS drives instead of
> using SATA, given that the motherboard only has SATA ports. When someone
> adds a SAS controller it is usually because he/she wants to build a
> mainstream server or expects more performance/reliability than the
> average :-)

Because I couldn't find any mainboards with more than 20 SATA ports and 
enough slots for the add-on cards (1x PCI, 2x PCI-Ex1 only for the TV 
cards).


>> The controller should give access to the disks. They will mostly be
>> slow green drives. It's not even a very big problem if it's limited to
>> 3 TB but of course it would be nice if I could also go bigger in some
>> years when I run out of space again and want to add another raid.
> 
> Okay... I'll ask you again: why a SAS controller instead of using the
> embedded SATA ports?

To be honest, just because the case was ready at the dealer...


>> So the media server contains one analogue PCI tuner card (PVR-500) and
>> one (maybe in future a second one will be added) TeVii (S480) sat tuner
>> card.
>> 
>> Now I have one 500 GB disk as system drive but I'm thinking of adding
>> another one as RAID1.
> 
> This leads me to another question. Why RAID 1 for a media server?

Just because the case has two bays for OS disks. But on the other hand 
it seems interesting to set up a bootable RAID1. And because it's 
calming to have the safety of the raid, as it serves all the media I have: 
MythTV, LogitechMediaServer, etc. So my family relies on it and isn't 
amused when the system is down ;-)


>> With the 20 hot swap slots in the case, the two system drives and an
>> optical drive I need 23 sata connectors. Or better four SAS connectors
>> and the eight SATA ports on the mainboard.
>> 
>> I think software raid will cause me less cost and fewer problems
>> because when the controller fails I can replace it with anything that
>> can talk SAS?
> 
> Okay, let's see what we have for now:
> 
> - A motherboard with 8 SATA ports
> - A 4U case with up to 20 hot-swap drive bays for the disks (SATA/SAS)
> 
> I wonder why it is that you have not considered using SATA hard disks :-)

Besides the longer delivery time: I couldn't find a cheaper solution 
than the two Supermicro SAS cards for the rest of the disks and the 
optical drive.



>>> Well, I'm not familiar with MD (I use hardware raid) but "md1 stopped"
>>> and raid 5 with only 2 elements in the array does not sound very good
>>> ;-(
>> 
>> Ah, yes you're right :-o
>> 
>> Was this during bootup? I recreated the array again after bootup...
> 
> It could be...
> 
>>> Ugh... and when is that happening, I mean, that "I/O error"? At
>>> install time, when partitioning, after the first boot?

Re: Supermicro SAS controller

2012-05-02 Thread Camaleón
On Tue, 01 May 2012 17:29:17 +, Ramon Hofer wrote:

> On Tue, 01 May 2012 16:16:07 +, Camaleón wrote:
> 
>> What kind of hardware do you have (motherboard brand and model) and
>> what kind of hard disk controller do you need, what are your
>> expectations?
>> 
>> SuperMicro boards (I'm also a SuperMicro user) are usually good enough
>> to use their embedded SAS/SATA ports, at least if you want to use a
>> software raid solution :-?
> 
> I have a Supermicro C7P67 board. But there aren't any SAS connectors
> there.

Ah, okay. This one:

http://www.supermicro.com/products/motherboard/Core/P67/C7P67.cfm

The board has no SAS ports but it features 8 SATA ports (4 SATA2 and 4 
SATA3), aren't those enough for your purpose? :-?

> This is a home media server. Earlier I used a debian box with a raid and
> a disk for mythtv recordings. But I ran out of space and resurrected an
> ReadyNas NV+. But this was so slow and I wanted to have everything
> centralized. So I was looking for something else and found this case:
> 
> http://cybershop.ri-vier.nl/4u-rackmnt-server-case-w20-hotswap-satasas-drv-bays-rpc4220-p-18.html
> 
> They also had that SAS controller and on the Supermicro website they
> wrote it would be SUSE and Red Hat compatible. So I thought it runs too
> under Debian.

Well, the driver status for most of the hardware out there can be 
"misleading" many times. This is like a double-edged sword: you have
to carefully read the technical specs of the device to find out the
chipset it uses and then search for its status in the kernel. If you
rely on the hardware manufacturer's driver you are stuck: they can drop
it at any time or not compile it for your Linux distribution version,
which seems to be the case here :-(

> So performance isn't very important. But I don't know what exactly you
> mean by expectations. 

Well, I wonder why it is that you chose to go with SAS drives instead of
using SATA, given that the motherboard only has SATA ports. When someone
adds a SAS controller it is usually because he/she wants to build a
mainstream server or expects more performance/reliability than the
average :-)

> The controller should give access to the disks.
> They will mostly be slow green drives. It's not even a very big problem
> if it's limited to 3 TB but of course it would be nice if I could also
> go bigger in some years when I run out of space again and want to add
> another raid.

Okay... I'll ask you again: why a SAS controller instead of using the 
embedded SATA ports?

> So the media server contains one analogue PCI tuner card (PVR-500) and
> one (maybe in future a second one will be added) TeVii (S480) sat tuner
> card.
> 
> Now I have one 500 GB disk as system drive but I'm thinking of adding
> another one as RAID1.

This leads me to another question. Why RAID 1 for a media server?

> With the 20 hot swap slots in the case, the two system drives and an
> optical drive I need 23 sata connectors. Or better four SAS connectors
> and the eight SATA ports on the mainboard.
> 
> I think software raid will cause me less cost and fewer problems because
> when the controller fails I can replace it with anything that can talk SAS?

Okay, let's see what we have for now:

- A motherboard with 8 SATA ports
- A 4U case with up to 20 hot-swap drive bays for the disks (SATA/SAS)

I wonder why it is that you have not considered using SATA hard disks :-)

>> Well, I'm not familiar with MD (I use hardware raid) but "md1 stopped"
>> and raid 5 with only 2 elements in the array does not sound very good
>> ;-(
> 
> Ah, yes you're right :-o
> 
> Was this during bootup? I recreated the array again after bootup...

It could be...

>> Ugh... and when is that happening, I mean, that "I/O error"? At install
>> time, when partitioning, after the first boot?
> 
> This usually happens when I tried to create the filesystem on the raid
> array by
> 
> sudo mkfs.ext4 -c -L test-device-1 /dev/md1
> 
> And when I then want to see details about the array (sudo mdadm --detail
> /dev/md1) the system crashes and I get the I/O error.
> 
> This causes so many problems that I wasn't able to repair it when it
> happened the first time (afterwards I had nothing to recover ;-) ).
> 
> I posted it here:
> 
> http://lists.debian.org/debian-user/2012/04/msg01290.html

Too much hassle/problems for a simple raid5 volume :-(

>>> I've written a mail to Supermicro. Should I also create a Debian bug
>>> report?
>> 
>> Yup, though I think it will be forwarded upstream.
> 
> Thanks I will run reportbug.
> 
> But in the meantime I have installed the bpo kernel and it seems to be
> working now...
> At least it never ran the disk check for so long, the raid is rebuilding
> and I can see the details as much as I want...

Glad it's more stable now with an updated kernel, but I'd keep monitoring 
the array for some days... and if you experience another issue with the
disks, I would consider replacing the hard disk controller or moving to
SATA disks instead.
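Monitoring an md array for a few days, as suggested here, mostly means watching /proc/mdstat. A small sketch of the kind of check that can be scripted; the mdstat excerpt below is hypothetical (on a live box you would read /proc/mdstat itself):

```shell
# Hypothetical /proc/mdstat excerpt; on a real system use:
#   mdstat=$(cat /proc/mdstat)
mdstat='md1 : active raid5 sdd[2] sdc[1] sdb[0]
      3907026944 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]'

# Each "U" is a healthy member; an underscore (e.g. [UU_]) marks a
# failed or missing disk, like the 2-of-3 state earlier in the thread.
if printf '%s\n' "$mdstat" | grep -Eq '\[U+\]'; then
  status=healthy
else
  status=degraded
fi
printf '%s\n' "$status"
```

The same grep is easy to drop into a cron job that mails you when the pattern stops matching; mdadm's own --monitor mode is the heavier-weight alternative.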

>> Mmm, then the ab

Re: Supermicro SAS controller

2012-05-01 Thread Stan Hoeppner
On 5/1/2012 12:37 PM, Ramon Hofer wrote:

> I have the RPC-4220 case with 20 hot-swap slots.

You should have mentioned this sooner, as there is a better solution
than buying 3 of the 9211-8i, which is $239*3= $717.  And you end up
with one SFF8087 port wasted.

Instead, get a 24 port Intel 6Gb SAS expander:
http://www.provantage.com/intel-res2sv240~7ITSP0V8.htm
$238.24

and the LSI 9240-4i, same LSISAS2008 chip as the 9211-8i:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118129
$189.99

Total:  $429

W/4 extra SFF8087 cables (assuming you already have 2):
http://www.newegg.com/Product/Product.aspx?Item=N82E16816116093
$60

Total:  $489

This solution connects all 20 drives on all 5 backplanes to the HBA, and
will give you ~1.5GB/s read throughput with 20 7.2k RPM drives using md
RAID 5/6, and ~800MB/s with hardware or md RAID10.

You connect the SFF8087 of the LSI card to port 0 of the SAS expander.
 You then connect the remaining 5 ports to the 5 SFF8087 ports on the 5
backplanes.
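Stan's ~1.5 GB/s figure is roughly the aggregate of all spindles, since md RAID 5/6 reads stripe across every member. A back-of-envelope check, assuming a round ~75 MB/s sustained per 7.2k RPM drive (the per-drive number is an assumption, not from the thread):

```shell
# Aggregate sequential read is roughly drives * per-drive rate; both
# inputs are assumed round numbers to sanity-check the ~1.5 GB/s claim.
drives=20
per_drive_mb_s=75
total=$(( drives * per_drive_mb_s ))
printf '%s MB/s\n' "$total"
```

Note this is an upper bound for large sequential reads; random I/O and parity writes on RAID 5/6 land far below it.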

-- 
Stan





Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
Sorry, I hit Ctrl+Enter or something and the message went out...

On Tue, 01 May 2012 17:29:17 +, Ramon Hofer wrote:

> But I'm confused about the two different versions too. lspci shows:

01:00.0 RAID bus controller: Marvell Technology Group Ltd. 
MV64460/64461/64462 System Controller, Revision B (rev 01)






Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
On Tue, 01 May 2012 10:57:47 -0500, Stan Hoeppner wrote:

> On 5/1/2012 6:53 AM, Ramon Hofer wrote:
>> Hi all
>> 
>> I'm using Debian Squeeze and would like to use a Supermicro
>> AOC-SASLP-MV8 as controller for a software raid.
> 
> The mvsas Linux driver has never been ready for production, unless
> things have dramatically changed very recently.  The AOC-SASLP-MV8 will
> work fine on a MS Windows machine, but you will continue to suffer many
> nightmares with Linux.  Google for the mvsas horror stories.
> 
>> What else can I do?
> 
> Ebay that card and acquire one that will simply work:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112 This LSI
> card supports 6Gb SAS/SATA3 and 3TB+ drives.

Thanks a lot!


> If you have x4 PCIe slots but not x8/x16, then get the Intel card:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117141

No, I have enough x16 slots; the x4 slots wouldn't be enough.


 
> People buy this SM card because it's the cheapest thing on the planet
> with 2xSFF8087 ports (without first looking up its reputation).  If the
> dual SFF8087 cards above are beyond your budget, go with 2 Silicon Image
> based 4 port cards with plain SATA connectors:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027
> 
> If you don't have a backplane with SFF8087 connectors, simply use 4
> regular SATA cables with the SiI cards.  If you do have a backplane, buy
> 2 new 4 port SATA to SFF8087 2ft reverse breakout cables:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116101
> 
> The LSI card is $239, will give far superior performance, and will work
> with your current cables.  The 2xSiI cards is $120, $154 w/cables.
> Either have great Linux compatibility.

Thank you very much!!!

I have the RPC-4220 case with 20 hot-swap slots. So I can only go for the 
LSI card. But if this means no more such problems, I'm more than 
happy :-)


Thanks again!






Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
On Tue, 01 May 2012 16:16:07 +, Camaleón wrote:

>> Can you recommend something better?
>> It was very nice because it's quite cheap and I don't need a hardware
>> raid card.
> 
> "Cheap" and "nice" do not usually come together, or to put it better,
> "cheap" and "good performance" do not usually match :-)
> 
> What kind of hardware do you have (motherboard brand and model) and what
> kind of hard disk controller do you need, what are your expectations?
> 
> SuperMicro boards (I'm also a SuperMicro user) are usually good enough
> to use their embedded SAS/SATA ports, at least if you want to use a
> software raid solution :-?

I have a Supermicro C7P67 board. But there aren't any SAS connectors 
there.

This is a home media server. Earlier I used a debian box with a raid and 
a disk for mythtv recordings. But I ran out of space and resurrected a 
ReadyNAS NV+. But this was so slow and I wanted to have everything 
centralized. So I was looking for something else and found this case:

http://cybershop.ri-vier.nl/4u-rackmnt-server-case-w20-hotswap-satasas-drv-bays-rpc4220-p-18.html

They also had that SAS controller and on the Supermicro website they 
wrote it would be SUSE and Red Hat compatible. So I thought it would run 
under Debian too.

So performance isn't very important. But I don't know what exactly you 
mean by expectations. The controller should give access to the disks. 
They will mostly be slow green drives. It's not even a very big problem 
if it's limited to 3 TB but of course it would be nice if I could also go 
bigger in some years when I run out of space again and want to add 
another raid.

So the media server contains one analogue PCI tuner card (PVR-500) and 
one (maybe in future a second one will be added) TeVii (S480) sat tuner 
card.

Now I have one 500 GB disk as system drive but I'm thinking of adding 
another one as RAID1.
With the 20 hot swap slots in the case, the two system drives and an 
optical drive I need 23 sata connectors. Or better four SAS connectors 
and the eight SATA ports on the mainboard.

I think software raid will cause me less cost and problem because when 
the controller fails I can replace it by anything that can talk SAS?



 And I can access the disks. Create an ext3 and ext4 filesystem on
 them separately. But they don't like to be in the raid.
 
 When the system crashed I got this dmesg but I can't find anything in
 there:
 http://pastebin.com/raw.php?i=ZFdkcS8p
>>> 
>>> There are some interesting entries there:
>>> 
>>> [   12.028337] md: md1 stopped.
> (...)
>>> [   12.035275] raid5: raid level 5 set md1 active with 2 out of 3
>>> devices, algorithm 2
>>> 
>>> Those are related to md1 and your raid5 volume.
>> 
>> And this looks ok or is there a problem?
> 
> Well, I'm not familiar with MD (I use hardware raid) but "md1 stopped"
> and raid 5 with only 2 elements in the array does not sound very good
> ;-(

Ah, yes you're right :-o

Was this during bootup? I recreated the array again after bootup...



 On the screen I saw this:
 http://666k.com/u.php
 (Sorry it's a photograph)
>>> 
>>> I can't load the image :-?
>> 
>> Sorry I posted the wrong link. This one should work:
>> 
>> http://666kb.com/i/c3f6nbmalagytqujw.jpg
> 
> Ugh... and when is that happening, I mean, that "I/O error"? At install
> time, when partitioning, after the first boot?

This usually happens when I tried to create the filesystem on the raid 
array by

sudo mkfs.ext4 -c -L test-device-1 /dev/md1

And when I then want to see details about the array (sudo mdadm --detail 
/dev/md1) the system crashes and I get the I/O error.

This causes so many problems that I wasn't able to repair it when it 
happened the first time (afterwards I had nothing to recover ;-) ).

I posted it here:

http://lists.debian.org/debian-user/2012/04/msg01290.html
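For reference, the sequence described above can be sketched as plain commands. Everything involved is destructive, so the sketch only assembles and prints the command strings; the mdadm --create line and the sdb/sdc/sdd device names are assumptions reconstructed from the dmesg excerpts elsewhere in the thread, not commands quoted verbatim:

```shell
# Reconstruction of the reported steps (assumed device names; never run
# these against disks holding data).
create='mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd'
# -c makes mkfs run a read-only badblocks pass first, which is why the
# "disk check" takes so long on large drives.
mkfs='mkfs.ext4 -c -L test-device-1 /dev/md1'
detail='mdadm --detail /dev/md1'   # the step after which the I/O errors appeared
printf '%s\n' "$create" "$mkfs" "$detail"
```

With a flaky controller driver like mvsas, the heavy sequential I/O of the badblocks pass is exactly the kind of load that exposes the bug.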


 What else can I do?
>>> 
>>> I would report it, although I'm afraid this is a well-known issue.
>> 
>> Where should I report it?
> 
> I would try first in the Debian BTS, against the kernel package.
> 
>> I've written a mail to Supermicro. Should I also create a Debian bug
>> report?
> 
> Yup, though I think it will be forwarded upstream.

Thanks I will run reportbug.

But in the meantime I have installed the bpo kernel and it seems to be 
working now...
At least it never ran the disk check for so long, the raid is rebuilding 
and I can see the details as much as I want...



>>> Sadly, Supermicro does not build binaries for Debian/Ubuntu but maybe
>>> you can ask them for the sources to compile the driver by your own.
>> 
>> I found this pages:
>> 
>> ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Driver/Linux/3.1.0.7/
>> 
>> But it doesn't seem as if it's what I need.
> 
> Mmm, your add-on card is based on the Marvell 6480 chipset; those
> packages are for the Marvell Odin (88SE64xx), as the README file says.
> Maybe you need to look here instead:
> 
> ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS2/
> 
> But it doesn't

Re: Supermicro SAS controller

2012-05-01 Thread Stan Hoeppner
On 5/1/2012 8:35 AM, Allan Wind wrote:
> On 2012-05-01 11:53:27, Ramon Hofer wrote:
>> I'm using Debian Squeeze and would like to use a Supermicro AOC-SASLP-MV8 
>> as controller for a software raid.
> 
> I have a different SuperMicro board and it can run in two 
> different modes, I forget their names, but one supports soft 
> raid and the other does not (IT or something).  I needed to flash
> the firmware to use the latter, then use mdadm to configure a 
> Linux soft array.

Then you must have the 8 port PCI-X AOC-SAT2-MV8, w/Marvell 88SX6081
SATAII controller.  This chip uses the Linux sata-mv driver, which works
fine.

The AOC-SASLP-MV8 has the Marvell 88SE6480 SAS controller chip.  This
chip uses the mvsas driver, which has simply never worked properly and
often causes massive data loss.  Not a desirable trait in a storage driver.

-- 
Stan





Re: Supermicro SAS controller

2012-05-01 Thread Camaleón
On Tue, 01 May 2012 14:38:47 +, Ramon Hofer wrote:

> On Tue, 01 May 2012 13:31:34 +, Camaleón wrote:
> 
>> On Tue, 01 May 2012 11:53:27 +, Ramon Hofer wrote:
>> 
>>> I'm using Debian Squeeze and would like to use a Supermicro
>>> AOC-SASLP-MV8 as controller for a software raid. Unfortunately the
>>> system crashes when I try creating a filesystem on the md device.
>> 
>> JFYI, Google reports tons of problems with that card using mvsas driver
>> in linux, maybe you should consider using a different controller :-(
> 
> Thanks for your info.
> I wasn't aware of that :-(

Yeah, that usually happens. I had to read lots of documents before buying 
the raid controller card for my servers. I ensured the card was fully 
supported by the kernel, but still you never know if a bug is going to hit 
you at some point. Linux users play at a disadvantage here.
 
> Can you recommend something better?
> It was very nice because it's quite cheap and I don't need a hardware
> raid card.

"Cheap" and "nice" do not usually come together, or to put it better, 
"cheap" and "good performance" do not usually match :-)

What kind of hardware do you have (motherboard brand and model) and what 
kind of hard disk controller do you need, what are your expectations?

SuperMicro boards (I'm also a SuperMicro user) are usually good enough to 
use their embedded SAS/SATA ports, at least if you want to use a software 
raid solution :-?

>>> And I can access the disks. Create an ext3 and ext4 filesystem on them
>>> separately. But they don't like to be in the raid.
>>> 
>>> When the system crashed I got this dmesg but I can't find anything in
>>> there:
>>> http://pastebin.com/raw.php?i=ZFdkcS8p
>> 
>> There are some interesting entries there:
>> 
>> [   12.028337] md: md1 stopped.
(...)
>> [   12.035275] raid5: raid level 5 set md1 active with 2 out of 3 devices, 
>> algorithm 2
>> 
>> Those are related to md1 and your raid5 volume.
> 
> And this looks ok or is there a problem?

Well, I'm not familiar with MD (I use hardware raid) but "md1 stopped" 
and raid 5 with only 2 elements in the array does not sound very good ;-(

>> [   12.244499] PM: Starting manual resume from disk 
>> [   12.244502] PM: Resume from partition 8:3 
>> [   12.244503] PM: Checking hibernation image. 
>> [   12.244599] PM: Error -22 checking image file 
>> [   12.244602] PM: Resume from disk failed.
>> 
>> And this comes from a resuming operation. Do you hibernate your system?
> 
> No I don't. I usually do `sudo halt` to shut it off. But maybe I pressed
> the power button of the case before I collected the dmesg report. But
> usually I don't hibernate.

Okay.

>>> On the screen I saw this:
>>> http://666k.com/u.php
>>> (Sorry it's a photograph)
>> 
>> I can't load the image :-?
> 
> Sorry I posted the wrong link. This one should work:
> 
> http://666kb.com/i/c3f6nbmalagytqujw.jpg

Ugh... and when is that happening, I mean, that "I/O error"? At install time, 
when partitioning, after the first boot?

>>> What else can I do?
>> 
>> I would report it, although I'm afraid this is a well-known issue.
> 
> Where should I report it?

I would try first in the Debian BTS, against the kernel package.

> I've written a mail to Supermicro. Should I also create a Debian bug
> report?

Yup, though I think it will be forwarded upstream.
 
>> Sadly, Supermicro does not build binaries for Debian/Ubuntu but maybe
>> you can ask them for the sources to compile the driver by your own.
> 
> I found this pages:
> 
> ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Driver/Linux/3.1.0.7/
> 
> But it doesn't seem as if it's what I need.

Mmm, your add-on card is based on the Marvell 6480 chipset; those packages 
are for the Marvell Odin (88SE64xx), as the README file says. Maybe you 
need to look here instead:

ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS2/

But it doesn't matter because these are also RPM-based packages.

> And the Supermicro support sent me the link to this zip file:
> 
> ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Firmware/3.1.0.21/Firmware_3.1.0.21.zip
> 
> It contains some Windows files and I have no clue what to do with them.
> So I hope I get an answer from them about what to do with it...

Mmm, then the above FTP link you sent was correct, weird...

Well, that ZIP file is for updating the "firmware" of the card, not the 
driver. You should not update it unless you are completely sure about what 
you are doing, even more so when there's data on the array. Also, ensure 
that's the correct firmware version for your card...

Greetings,

-- 
Camaleón





Re: Supermicro SAS controller

2012-05-01 Thread Darac Marjal
On Tue, May 01, 2012 at 02:54:24PM +, Ramon Hofer wrote:
> On Tue, 01 May 2012 09:35:36 -0400, Allan Wind wrote:
> 
[cut]
> 
> So I suppose I have to create a USB stick that boots DOS and then run one
> of the commands. But which one? Maybe smc.bat, but what is dos4gw.exe for?

https://en.wikipedia.org/wiki/DOS/4G





Re: Supermicro SAS controller

2012-05-01 Thread Stan Hoeppner
On 5/1/2012 6:53 AM, Ramon Hofer wrote:
> Hi all
> 
> I'm using Debian Squeeze and would like to use a Supermicro AOC-SASLP-MV8 
> as controller for a software raid.

The mvsas Linux driver has never been ready for production, unless
things have dramatically changed very recently.  The AOC-SASLP-MV8 will
work fine on a MS Windows machine, but you will continue to suffer many
nightmares with Linux.  Google for the mvsas horror stories.

> What else can I do?

Ebay that card and acquire one that will simply work:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
This LSI card supports 6Gb SAS/SATA3 and 3TB+ drives.

If you have x4 PCIe slots but not x8/x16, then get the Intel card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117141

People buy this SM card because it's the cheapest thing on the planet
with 2xSFF8087 ports (without first looking up its reputation).  If the
dual SFF8087 cards above are beyond your budget, go with 2 Silicon Image
based 4 port cards with plain SATA connectors:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

If you don't have a backplane with SFF8087 connectors, simply use 4
regular SATA cables with the SiI cards.  If you do have a backplane, buy
2 new 4 port SATA to SFF8087 2ft reverse breakout cables:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816116101

The LSI card is $239, will give far superior performance, and will work
with your current cables.  The 2xSiI cards is $120, $154 w/cables.
Either have great Linux compatibility.

-- 
Stan





Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
On Tue, 01 May 2012 14:54:24 +, Ramon Hofer wrote:

> Do you remember how you updated the firmware?

I have just got an answer from Supermicro:

> You can create a bootable USB stick with this utility (Windows only):
> http://download.softpedia.ro/dl/f82c4af1fbe1f35565d91a87dedd9c5b/4e083d83/100081785/software/OTHERS/BootFlashDos.zip

So I'll try Unetbootin and FreeDOS...






Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
On Tue, 01 May 2012 09:35:36 -0400, Allan Wind wrote:

> On 2012-05-01 11:53:27, Ramon Hofer wrote:
>> I'm using Debian Squeeze and would like to use a Supermicro
>> AOC-SASLP-MV8 as controller for a software raid.
> 
> I have a different SuperMicro board and it can run in two different
> modes, I forget their names, but one supports soft raid and the
> other does not (IT or something).  I needed to flash the firmware to use
> the latter, then use mdadm to configure a Linux soft array.

The Supermicro support sent me a link to a zip file which contains two 
EXEs, a BAT, and a BIN file:
ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Firmware/3.1.0.21/Firmware_3.1.0.21.zip

It contains

6480.bin
dos4gw.exe
mvf.exe
smc.bat

smc.bat contains
mvf 6480.bin -y

So I suppose I have to create a USB stick that boots DOS and then run one 
of the commands. But which one? Maybe smc.bat, but what is dos4gw.exe for?

Do you remember how you updated the firmware?


Best regards
Ramon





Re: Supermicro SAS controller

2012-05-01 Thread Ramon Hofer
On Tue, 01 May 2012 13:31:34 +, Camaleón wrote:

> On Tue, 01 May 2012 11:53:27 +, Ramon Hofer wrote:
> 
>> I'm using Debian Squeeze and would like to use a Supermicro
>> AOC-SASLP-MV8 as controller for a software raid. Unfortunately the
>> system crashes when I try creating a filesystem on the md device.
> 
> JFYI, Google reports tons of problems with that card using the mvsas
> driver in Linux; maybe you should consider using a different controller :-(

Thanks for your info.
I wasn't aware of that :-(

Can you recommend something better?
The card was attractive because it's quite cheap and I don't need a 
hardware raid card.


>> And I can access the disks and create an ext3 or ext4 filesystem on
>> them separately. But they don't like to be in the raid.
>> 
>> When the system crashed I got this dmesg but I can't find anything in
>> there:
>> http://pastebin.com/raw.php?i=ZFdkcS8p
> 
> There are some interesting entries there:
> 
> [   12.028337] md: md1 stopped.
> [   12.029374] md: bind
> [   12.034155] md: bind
> [   12.034275] md: bind
> [   12.034986] raid5: device sdb operational as raid disk 0
> [   12.034988] raid5: device sdc operational as raid disk 1
> [   12.035246] raid5: allocated 3230kB for md1
> [   12.035270] 0: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
> [   12.035272] 1: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
> [   12.035275] raid5: raid level 5 set md1 active with 2 out of 3
> devices, algorithm 2
> [   12.035378] RAID5 conf printout:
> [   12.035380]  --- rd:3 wd:2
> [   12.035382]  disk 0, o:1, dev:sdb
> [   12.035383]  disk 1, o:1, dev:sdc
> [   12.035406] md1: detected capacity change from 0 to 4000794542080
> [   12.035940] md1: unknown partition table
> 
> Those are related to md1 and your raid5 volume.

And does this look OK, or is there a problem?


> [   12.244499] PM: Starting manual resume from disk
> [   12.244502] PM: Resume from partition 8:3
> [   12.244503] PM: Checking hibernation image.
> [   12.244599] PM: Error -22 checking image file
> [   12.244602] PM: Resume from disk failed.
> 
> And this comes from a resuming operation. Do you hibernate your system?

No, I don't. I usually do `sudo halt` to shut it off.
Maybe I pressed the power button on the case before I collected the 
dmesg report, but I don't normally hibernate.


>> On the screen I saw this:
>> http://666k.com/u.php
>> (Sorry it's a photograph)
> 
> I can't load the image :-?

Sorry I posted the wrong link. This one should work:

http://666kb.com/i/c3f6nbmalagytqujw.jpg


>> What else can I do?
> 
> I would report it, although I'm afraid this is a well-known issue.

Where should I report it?
I've written a mail to Supermicro. Should I also create a Debian bug 
report?


> Maybe you can try with an updated kernel to see if there's any
> improvement with the driver (mvsas), but to be honest, I would be very
> reluctant to set up raid 5 on a hard disk controller that is not
> rock-solid; you are exposing your system to data loss :-/
> 
>> There are Red Hat and SUSE drivers and firmware on the Supermicro
>> homepage. Should I take them from there?
> 
> Sadly, Supermicro does not build binaries for Debian/Ubuntu but maybe
> you can ask them for the sources to compile the driver on your own.

I found this page:

ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Driver/Linux/3.1.0.7/

But it doesn't seem to be what I need.

And the supermicro support sent me the link to this zip file:

ftp://ftp.supermicro.nl/driver/SAS/Marvell/MV8/SAS1/Firmware/3.1.0.21/Firmware_3.1.0.21.zip

It contains some Windows files and I have no clue what to do with them. So 
I hope to get an answer from them about what to do with it...


Best regards
Ramon


Archive: http://lists.debian.org/jnoshn$3u4$1...@dough.gmane.org



Re: Supermicro SAS controller

2012-05-01 Thread Allan Wind
On 2012-05-01 11:53:27, Ramon Hofer wrote:
> I'm using Debian Squeeze and would like to use a Supermicro AOC-SASLP-MV8 
> as controller for a software raid.

I have a different SuperMicro board and it can run in two 
different modes; I forget their names, but one supports soft 
raid and the other does not (IT or something).  I needed to flash
the firmware to use the latter, then use mdadm to configure a 
Linux soft array.
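
For the mdadm step, the shape of it is roughly as follows. This sketch only 
prints the plan rather than touching any disks, and the device names are 
assumptions (taken from the dmesg earlier in the thread, not from my box):

```shell
#!/bin/sh
# Print the rough plan for building a 3-disk raid5 once the controller
# acts as a plain HBA; /dev/sd{b,c,d} are placeholder device names.
# tee keeps a copy of the plan on disk.
cat <<'EOF' | tee raid-plan.txt
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
mkfs.ext4 /dev/md1
EOF
```

Recording the array in mdadm.conf is what lets the initramfs assemble it 
at boot; adjust device names and raid level to taste.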


/Allan
-- 
Allan Wind
Life Integrity, LLC



Archive: http://lists.debian.org/20120501133536.gj20...@lifeintegrity.com



Re: Supermicro SAS controller

2012-05-01 Thread Camaleón
On Tue, 01 May 2012 11:53:27 +, Ramon Hofer wrote:

> I'm using Debian Squeeze and would like to use a Supermicro
> AOC-SASLP-MV8 as controller for a software raid.
> Unfortunately the system crashes when I try creating a filesystem on the
> md device.

JFYI, Google reports tons of problems with that card using the mvsas driver 
in Linux; maybe you should consider using a different controller :-(

(...)

> And I can access the disks and create an ext3 or ext4 filesystem on them
> separately. But they don't like to be in the raid.
> 
> When the system crashed I got this dmesg but I can't find anything in
> there:
> http://pastebin.com/raw.php?i=ZFdkcS8p

There are some interesting entries there:

[   12.028337] md: md1 stopped.
[   12.029374] md: bind
[   12.034155] md: bind
[   12.034275] md: bind
[   12.034986] raid5: device sdb operational as raid disk 0
[   12.034988] raid5: device sdc operational as raid disk 1
[   12.035246] raid5: allocated 3230kB for md1
[   12.035270] 0: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[   12.035272] 1: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[   12.035275] raid5: raid level 5 set md1 active with 2 out of 3 devices, 
algorithm 2
[   12.035378] RAID5 conf printout:
[   12.035380]  --- rd:3 wd:2
[   12.035382]  disk 0, o:1, dev:sdb
[   12.035383]  disk 1, o:1, dev:sdc
[   12.035406] md1: detected capacity change from 0 to 4000794542080
[   12.035940] md1: unknown partition table

Those are related to md1 and your raid5 volume.
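
One line worth flagging there: "active with 2 out of 3 devices" means the 
raid5 came up degraded. On a running system `cat /proc/mdstat` shows the 
same state as an `[n/m]` field (member slots vs. members in sync); here is 
a small sketch of checking that mechanically, using a made-up sample that 
mirrors the dmesg rather than output captured from the actual machine:

```shell
#!/bin/sh
# Made-up /proc/mdstat sample mirroring the "2 out of 3" dmesg state
cat > mdstat.sample <<'EOF'
md1 : active raid5 sdd[3] sdc[1] sdb[0]
      3907025920 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
EOF

# [n/m] is raid slots / members in sync; n != m means degraded
awk 'match($0, /\[[0-9]+\/[0-9]+\]/) {
         s = substr($0, RSTART + 1, RLENGTH - 2)      # e.g. "3/2"
         split(s, a, "/")
         m = (a[1] == a[2]) ? "ok" : "degraded: " a[2] " of " a[1] " in sync"
         print m
     }' mdstat.sample
```

For this sample it prints "degraded: 2 of 3 in sync"; on a healthy array 
the field would read [3/3] and the check prints "ok".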

[   12.244499] PM: Starting manual resume from disk
[   12.244502] PM: Resume from partition 8:3
[   12.244503] PM: Checking hibernation image.
[   12.244599] PM: Error -22 checking image file
[   12.244602] PM: Resume from disk failed.

And this comes from a resuming operation. Do you hibernate your system?

> On the screen I saw this:
> http://666k.com/u.php
> (Sorry it's a photograph)

I can't load the image :-?

> What else can I do?

I would report it, although I'm afraid this is a well-known issue. 

Maybe you can try with an updated kernel to see if there's any improvement 
with the driver (mvsas), but to be honest, I would be very reluctant to 
set up raid 5 on a hard disk controller that is not rock-solid; you are 
exposing your system to data loss :-/

> There are Red Hat and SUSE drivers and firmware on the Supermicro
> homepage. Should I take them from there?

Sadly, Supermicro does not build binaries for Debian/Ubuntu, but maybe you
can ask them for the sources to compile the driver on your own.

Greetings,

-- 
Camaleón


Archive: http://lists.debian.org/jnoojl$d12$6...@dough.gmane.org