Hi Dennis,
I contacted Broadcom support. They said:
"The MegaRAID SAS 9361-8i. I would recommend the MegaRAID SAS 9460-8i. The
latest controller also allows you to attach NVMe drives. Please see the
following link for details.
https://www.broadcom.com/products/storage/raid-controllers/tab-12Gb-"
On 10/10/19 11:45 AM, Dennis Jacobfeuerborn wrote:
Hi,
I'm currently looking for a RAID controller with BBU/CacheVault and
while LSI MegaRaid controllers worked well in the past apparently they
are no longer supported in RHEL 8:
https://access.redhat.com/discussions/3722151
Does anybody have recommendations for hardware controllers with cache
On Jul 1, 2019, at 9:10 AM, mark wrote:
>
> ZFS with a zpoolZ2
You mean raidz2.
> which we set up using the LSI card set to JBOD
Some LSI cards require a complete firmware re-flash to get them into “IT mode”
which completely does away with the RAID logic and turns them into dumb SATA
control
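For reference, the raidz2 layout mentioned above (ZFS's double-parity vdev, the analogue of RAID6) is created along these lines. This is a sketch only — the pool name and device paths are hypothetical:

```shell
# Sketch: double-parity pool over four whole disks presented by an HBA in IT mode.
# Prefer stable /dev/disk/by-id/ names over sdX letters, which can reorder at boot.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
    /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
zpool status tank   # verify the vdev layout and that all members are ONLINE
```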
On 2019-07-01 10:01, Warren Young wrote:
On Jul 1, 2019, at 8:26 AM, Valeri Galtsev wrote:
RAID function, which boils down to simple, short, easy to debug well program.
I didn't intend to start a software vs. hardware RAID flame war when I
joined somebody else's opinion.
Now, commenting wi
You seem to be saying that hardware RAID can’t lose data. You’re ignoring the
RAID 5 write hole:
https://en.wikipedia.org/wiki/RAID#WRITE-HOLE
If you then bring up battery backups, now you’re adding cost to the system.
And then some ~3-5 years later, downtime to swap the battery, and mo
I haven't been following this thread closely, but some of them have left
me puzzled.
1. Hardware RAID: other than Rocket RAID, who don't seem to support a card
more than about 3 years (i used to have to update and rebuild the
drivers), anything LSI based, which includes Dell PERC, have been pretty
On Mon, 1 Jul 2019, Warren Young wrote:
If you then bring up battery backups, now you’re adding cost to the system.
And then some ~3-5 years later, downtime to swap the battery, and more
downtime. And all of that just to work around the RAID write hole.
Although batteries have disappeared
On Jul 1, 2019, at 8:26 AM, Valeri Galtsev wrote:
>
> RAID function, which boils down to simple, short, easy to debug well program.
RAID firmware will be harder to debug than Linux software RAID, if only because
the latter has easier-to-use tools.
Furthermore, MD RAID only had to be debugged once, rather
On Jul 1, 2019, at 7:56 AM, Blake Hudson wrote:
>
> I've never used ZFS, as its Linux support has been historically poor.
When was the last time you checked?
The ZFS-on-Linux (ZoL) code has been stable for years. In recent months, the
BSDs have rebased their offerings from Illumos to ZoL. Th
Warren Young wrote on 6/28/2019 6:53 PM:
On Jun 28, 2019, at 8:46 AM, Blake Hudson wrote:
Linux software RAID…has only decreased availability for me. This has been due
to a combination of hardware and software issues that are generally handled
well by HW RAID controllers, but are often
>
>
>
IMHO, Hardware raid primarily exists because of Microsoft Windows and
VMware esxi, neither of which have good native storage management.
Because of this, it's fairly hard to order a major brand (HP, Dell, etc)
server without raid cards.
Raid cards do have the performance boost of nonvolatil
On 6/28/19 4:46 PM, Blake Hudson wrote:
Unfortunately, I've never had Linux software RAID improve availability - it has
only decreased availability for me. This has been due to a combination of
hardware and software issues that are generally handled well by HW RAID
controllers, but are of
Just a comment: what drove us to RAID 6 (we use that instead of 5, as of
years ago) was much larger storage.
When you have, say, over 0.3 petabytes, that starts to matter.
mark
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/
Nikos Gatsis - Qbit wrote on 6/27/2019 8:36 AM:
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to
install and set up raid.
Do I have to consider anything before installation, because the disks
Le 28/06/2019 à 14:28, Jonathan Billings a écrit :
> You can't have actually tested these instructions if you think 'sudo
> echo > /path' actually works.
>
> The idiom for this is typically:
>
> echo 5 | sudo tee /proc/sys/dev/raid/speed_limit_min
My bad.
The initial article used this instr
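The reason the original instruction fails is worth spelling out: in `sudo echo 5 > /path`, the redirection is opened by the *calling* shell before sudo ever runs, so it happens without root privileges. A small demonstration of the two working idioms, using a scratch file standing in for the /proc entry:

```shell
#!/bin/sh
# The redirection in `sudo echo 5 > /proc/sys/dev/raid/speed_limit_min` is
# performed by the unprivileged calling shell, so it gets EACCES. Two fixes:
target=$(mktemp)                     # scratch file in place of the /proc path
echo 5 | tee "$target" > /dev/null   # idiom 1: echo 5 | sudo tee <file>
sh -c "echo 50 > $target"            # idiom 2: sudo sh -c 'echo 50 > <file>'
cat "$target"                        # prints 50: the child shell did the write
rm -f "$target"
```

With a root-owned target, prefix `tee` or the whole `sh -c` with sudo as shown in the comments; the key point is that the process opening the file for writing must be the privileged one.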
On Fri, Jun 28, 2019 at 07:01:00AM +0200, Nicolas Kovacs wrote:
> 3. Here's a neat little trick you can use to speed up the initial sync.
>
> $ sudo echo 5 > /proc/sys/dev/raid/speed_limit_min
>
> I've written a detailed blog article about the kind of setup you want.
> It's in French, but t
Thank you all for your answers.
Nikos.
If you can afford it I would prefer to use RAID10. You will lose half
of the disk space but you will get a really fast system. It depends what you
need / what you will use the server for.
Mirek
28.6.2019 at 7:01 Nicolas Kovacs:
Le 27/06/2019 à 15:36, Nikos Gatsis - Qbit a écrit :
> Do I have to consider anything before installation, because the disks
> are very large?
I'm doing this kind of installation quite regularly. Here's my two cents.
1. Use RAID6 instead of RAID5. You'll lose a little space, but you'll
gain quite
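The space trade-off for the 4 x 3 TB disks in question can be worked out directly — a back-of-the-envelope sketch, ignoring filesystem and metadata overhead:

```shell
#!/bin/sh
# Usable capacity for 4 x 3 TB drives under the layouts discussed in this thread.
disks=4; size_tb=3
echo "RAID5:  $(( (disks - 1) * size_tb )) TB usable, survives 1 disk failure"
echo "RAID6:  $(( (disks - 2) * size_tb )) TB usable, survives any 2 disk failures"
echo "RAID10: $(( disks / 2 * size_tb )) TB usable, survives 1 failure per mirror pair"
```

At four disks, RAID6 and RAID10 cost the same 3 TB of capacity relative to RAID5; the choice is between double-parity safety and mirror-pair speed.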
On 6/27/19 10:27 AM, Robert Heller wrote:
Actually *grub* needs access to /boot to load the kernel. I don't believe that
grub can access (software) RAID filesystems. RAID1 is effectively an exception
because it is just a mirror set and grub can [RO] access any one of the mirror
set elements as a
On 6/27/19 6:36 AM, Nikos Gatsis - Qbit wrote:
Do I have to consider anything before installation, because the disks
are very large?
Probably not. You'll need to use GPT because they're large, but for a
new server you probably would need to do that anyway in order to boot
under UEFI.
The
Am 27.06.2019 um 15:36 schrieb Nikos Gatsis - Qbit:
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to install
and set up raid.
You hopefully plan to use just 3 of the disks for the RAID 5 array an
Which may very well be the case.
On 6/27/19, 10:40 AM, "CentOS on behalf of John Hodrien"
wrote:
On Thu, 27 Jun 2019, Peda, Allan (NYC-GIS) wrote:
> I'd isolate all that RAID stuff from your OS, so the root, /boot, /usr,
/etc /tmp, /bin swap are on "normal" partition(s). I know
I'd isolate all that RAID stuff from your OS, so the root, /boot, /usr, /etc
/tmp, /bin swap are on "normal" partition(s). I know I'm missing some
directories, but the point is you should be able to unmount that RAID stuff to
adjust it without crippling your system.
https://www.howtogeek.com/1
I have done this a couple of times successfully.
I did set the boot partitions etc as RAID1 on sda and sdb. This I believe is
an old instruction and was based on the fact that the kernel needed access to
these partitions before RAID access was available.
I'm sure someone more knowledgeable wil
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to install
and set up raid.
Do I have to consider anything before installation, because the disks
are very large?
Does the graphical use the parted
On 15.02.2017 03:10, TE Dukes wrote:
>
>
>> -Original Message-
>> From: CentOS [mailto:centos-boun...@centos.org] On Behalf Of John R
>> Pierce
>> Sent: Tuesday, February 14, 2017 8:13 PM
>> To: centos@centos.org
>> Subject: Re: [CentOS] RAID que
On 2017-02-17, John R Pierce wrote:
> On 2/16/2017 9:18 PM, Keith Keller wrote:
>>> Only some systems support that sort of restriping, and its a dangerous
>>> activity (if the power fails or system crashes midway through the
>>> restriping operation, its probably not restartable, you quite likely
On 02/16/2017 09:18 PM, Keith Keller wrote:
Doesn't mdraid support changing RAID levels?
It supports a small number of conversions. See the "GROW MODE" section
of mdadm for details.
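As a hedged sketch of what one such supported conversion looks like — device names and disk counts here are hypothetical, and the "GROW MODE" section of the mdadm man page should be checked before running anything like this on real data:

```shell
# Convert a 3-disk RAID5 to a 4-disk RAID6 by adding a disk and reshaping.
# The backup file must live on a device that is NOT part of the array.
mdadm /dev/md0 --add /dev/sde1
mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
      --backup-file=/root/md0-reshape.backup
cat /proc/mdstat    # watch the reshape progress
```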
On 2/14/2017 5:08 PM, Digimer wrote:
Note; If you're mirroring /boot, you may need to run grub install on
both disks to ensure they're both actually bootable (or else you might
find yourself doing an emergency boot off the CentOS ISO and installing
grub later).
I left that out because the OP wa
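On a BIOS/MBR-booting CentOS 7 system, the step described above amounts to the following (disk names assumed; CentOS 6 uses `grub-install` and grub legacy instead):

```shell
# Put the boot loader on both members of the /boot mirror so the machine
# can still boot from either disk alone if the other dies.
grub2-install /dev/sda
grub2-install /dev/sdb
```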
On 2/14/2017 4:48 PM, tdu...@palmettoshopper.com wrote:
1- Better to go with a hardware RAID (mainboardsupported) or software?
I would only use hardware raid if its a card with battery (or
supercap+flash) backed writeback cache, such as a megaraid, areca, etc.
otherwise I would use mdraid mi
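The mdraid mirror route mentioned above looks roughly like this — partition names and filesystem choice are illustrative, not prescriptive:

```shell
# Create a two-disk md mirror and persist its definition so the initramfs
# can assemble it at boot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.xfs /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf
```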
Hello,
Just a couple questions regarding RAID. Here's the situation.
I bought a 4TB drive before I upgraded from 6.8 to 7.3. I'm not too far
into this that I can't start over. I wanted disk space to back up 3 other
machines. I way overestimated what I needed for full, incremental and
image backups
On 03/02/17 20:24, Cameron Smith wrote:
Active Operations = Background Initialization (0%)
Once this completes you would be able to run a CC
well, I'm not so sure...
$ perccli /c0/v1 show init
Controller = 0
Status = Success
Description = None
VD Operation Status :
===
hi everyone
I've just configured a simple raid10 on a Dell system, but
one thing is puzzling to me.
I'm seeing this below and I wonder why? There: Consist = No
...
/c0/v1 :
==
---
DG/VD TYPE State Access Consist Cache Cac sCC
On 12/12/16 11:15 AM, pope...@chmail.ir wrote:
> i have 6 sata hdd 2 TB . i want install centos 7 on these hdd in raid 6
> mode.
>
> how can i do it ?
The RAID configuration of the new Anaconda is a little tricky. If you
can access the Red Hat documentation, it is covered in section 6.14;
ht
i have 6 sata hdd 2 TB . i want install centos 7 on these hdd in raid 6 mode.
how can i do it ?
On Mon, 30 Mar 2015 19:42:05 -0400, Stephen wrote:
>
> 16G/swap 500MB/boot 80G/home 50G/root
>
> 800G/sdb
>
> will not install Grub bootloader Fatal error
Um, this is JBOD, and not RAID1.
Raid1 would be 2x drives (sda & sdb) appearing as one single drive to the
OS. Data is byte for byte mi
On 2/7/2014 8:06 AM, James Hogarth wrote:
> The two I linked are internal units to go in a 5.25" bay ... that's why
> you'd need an internal 4-6 port card to make them worthwhile.
ah, I thought we were talking about esata external 4-bays, since we were
talking about microservers which don't HAVE
On 2/7/2014 7:28 AM, James Hogarth wrote:
> I was thinking of a bay along the lines of:
>
> http://www.sharkoon.com/?q=en/node/1824 or
> http://www.icydock.com/goods.php?id=151
>
> I wonder what performance would be like through multiplexing the eSATA
> interface compared to buying a 4-6 port inte
On 2/7/2014 6:15 AM, James Hogarth wrote:
>> >the eSATA expander has its own PSU. the Microserver would just be
>> >powering the esata card, which is nothing.
>> >
> I don't suppose you have a link to one you've looked at already do you?
something like this...
http://www.newegg.com/Product/Produ
On 2/7/2014 6:09 AM, James Hogarth wrote:
> I do like the idea about the expander though ... power might be an issue
> given it's a low power PSU that comes with it - would probably have to swap
> that part out.
the eSATA expander has its own PSU. the Microserver would just be
powering the esat
On 2/7/2014 5:56 AM, James Hogarth wrote:
> I frankly don't care if I lose the system disk - it's quick to rebuild the
> system - it's the data I care about.
>
> On this I'm running F20 rather than C6 primarily for the better BTRFS (when
> el7 rolls around I'll contemplate a rebuild to that then) a
From: Jeff Allison
> Ok I've a HP microserver that I'm building up.
> It's got 4 bays to be used for data that I'm considering setting up with
> software raid (mdadm)
> I've 2 x 2TB, 2 x 2.5TB and 2 x 1TB; I'm leaning towards using the
> 4 2.
Just a reminder that
Ok I've a HP microserver that I'm building up.
It's got 4 bays to be used for data that I'm considering setting up with
software raid (mdadm).
I've 2 x 2TB, 2 x 2.5TB and 2 x 1TB; I'm leaning towards using the 4 2.x TB
in a raid 5 array to get 6TB.
Now the data is on the 2.5TB disks currently.
So
I'm trying to install 6.5 on this server. I can't get past the RAID.
CentOS doesn't recognize my 4ea 1tb drives.
I don't find a way to bypass the Intel RAID to use software RAID.
Suggestions?
Norm
On 04/12/2013 01:01 AM, David Miller wrote:
> You simply match up the Linux /dev/sdX designation with the drives
> serial number using smartctl. When I first bring the array online I
> have a script that greps out the drives serial numbers from smartctl
> and creates a neat text file with the ma
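A minimal sketch of the kind of script described above — the output path is an assumption, and the exact field label can vary by drive, but `smartctl -i` prints a `Serial Number:` line for most disks:

```shell
#!/bin/sh
# Map each /dev/sdX device to its drive serial number so a failed array
# member can be matched to the serial printed on the physical tray label.
for dev in /dev/sd[a-z]; do
    [ -b "$dev" ] || continue
    serial=$(smartctl -i "$dev" | awk -F': *' '/^Serial Number/ {print $2}')
    printf '%s\t%s\n' "$dev" "$serial"
done > /root/drive-serial-map.txt
```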
hi,
> yeah, until a disk fails on a 40 disk array and the chassis LEDs on the
> backplane don't light up to indicate which disk it is and your
> operations monkey pulls the wrong one and crash the whole raid.
that is why I put a label on every drive tray that is visible without
pulling the disk.
On 4/12/2013 12:11 PM, m.r...@5-cent.us wrote:
> Interesting. We're still playing with sizing the RAID sets and volumes.
> The prime consideration for this is that the filesystem utils still have
> problems with > 16TB (and they appear to have been saying that fixing this
> is a priority for at lea
Hi, Seth,
Seth Bardash wrote:
> We build a storage unit that anyone using Centos can build. It is based on
> the 3ware 9750-16 controller. It has 16 x 2 TB Sata 6 gb/s disks. We
> always set it up as a 15 disk RAID 6 array and a hot spare. We have seen
Interesting. We're still playing with sizing
We build a storage unit that anyone using Centos can build. It is based on the
3ware 9750-16 controller. It has 16 x 2 TB Sata 6 gb/s disks. We always set it
up as a 15 disk RAID 6 array and a hot spare. We have seen multiple instances
where the A/C has gone off but the customer's UPS kept the sy
On 2013-04-12, Miranda Hawarden-Ogata wrote:
> RAID6 means you can handle 2 disk failures, but the third one will drop
> your array, if I'm remembering correctly. And the larger the number of
> disks, the higher the chance that you'll have disk failures...
Yes, and yes. But different configura
On 4/11/2013 5:04 PM, David C. Miller wrote:
> The LSI 9200's I use are nothing more than a dumb $300 host bus adapter. No
> RAID levels or special features. I prefer to NOT use hardware RAID
> controllers when I can. With a generic HBA the hard drives are seen raw to
> the OS. You can use smart
On 2013-04-11, David C. Miller wrote:
>
> Just for reference, I have a 24 x 2TB SATAIII using CentOS 6.4 Linux MD RAID6
> with two of those 24 disks as hotspares. The drives are in a Supermicro
> external SAS/SATA box connected to another Supermicro 1U computer with an
> i3-2125 CPU @ 3.30GHz a
On 4/11/2013 1:20 PM, m.r...@5-cent.us wrote:
> Followup comment: I created the two RAID sets, then started to create the
> volume sets... and realized I didn't know if it was *possible*, much less
> desirable, to have a volume set that spanned two RAID sets. Talked it over
> with my manager, and I
On 4/11/2013 1:36 PM, Joseph Spenner wrote:
> But isn't that one of the benefits of RAID6? (not much degraded/latency
> effect during a rebuild, less impact on performance during rebuild, so longer
> times are acceptable?)
trouble comes in 3s.
--
john r pierce
On 4/11/2013 12:30 PM, m.r...@5-cent.us wrote:
> Ok, listening to all of this, I've also been in touch with a tech from the
> vendor*, who had a couple of suggestions: first, two RAID sets with two
> global hot spares.
I would test how long a drive rebuild takes on a 20 disk RAID6. I
suspect,
m.r...@5-cent.us wrote:
> Ok, listening to all of this, I've also been in touch with a tech from the
> vendor*, who had a couple of suggestions: first, two RAID sets with two
> global hot spares.
>
> I've just spoken with my manager, and we're going with that, then one of
> the tech's other sugges
From: Joseph Spenner
> A RAID5 with a hot spare isn't really the same as a RAID6. For those not
> familiar with this, a RAID5 in degraded mode (after it lost a disk) will
> suffer
> a performance hit, as well as while it rebuilds from a hot spare. A RAID6
> after
> losing a disk will not s
I'm setting up this huge RAID 6 box. I've always thought of hot spares,
but I'm reading things that are comparing RAID 5 with a hot spare to RAID
6, implying that the latter doesn't need one. I *certainly* have enough
drives to spare in this RAID box: 42 of 'em, so two questions: should I
assign on