On Jul 1, 2019, at 10:10 AM, Valeri Galtsev wrote:
>
> On 2019-07-01 10:01, Warren Young wrote:
>> On Jul 1, 2019, at 8:26 AM, Valeri Galtsev wrote:
>>>
>>> RAID function, which boils down to a simple, short, easy-to-debug
>>> program.
>
> I didn't intend to start a software vs hardware RAID flame war when I
> joined somebody else's opinion.
>> You seem to be saying that hardware RAID can’t lose data. You’re
>> ignoring the RAID 5 write hole:
>>
>> https://en.wikipedia.org/wiki/RAID#WRITE-HOLE
>>
>> If you then bring up battery backups, now you’re adding cost to the
>> system. And then some ~3-5 years later, downtime to swap the battery,
>> and more downtime. And all of that just to work around the RAID write
>> hole.
> On Mon, 1 Jul 2019, Warren Young wrote:
>
>> If you then bring up battery backups, now you’re adding cost to the
>> system. And then some ~3-5 years later, downtime to swap the battery,
>> and more downtime. And all of that just to work around the RAID write
>> hole.
>
> Although batteries have disappeared
On Jul 1, 2019, at 9:10 AM, mark wrote:
>
> ZFS with a zpoolZ2
You mean raidz2.
> which we set up using the LSI card set to JBOD
Some LSI cards require a complete firmware re-flash to get them into “IT mode”
which completely does away with the RAID logic and turns them into dumb SATA
Warren Young wrote on 7/1/2019 9:48 AM:
On Jul 1, 2019, at 7:56 AM, Blake Hudson wrote:
I've never used ZFS, as its Linux support has been historically poor.
When was the last time you checked?
The ZFS-on-Linux (ZoL) code has been stable for years. In recent months, the
BSDs have rebased their offerings from Illumos to ZoL.
On 2019-07-01 10:10, mark wrote:
I haven't been following this thread closely, but some of them have left
me puzzled.
1. Hardware RAID: other than Rocket RAID, who don't seem to support a card
more than about 3 years (i used to have to update and rebuild the
drivers), anything LSI based, which includes Dell PERC, have been
On Jul 1, 2019, at 8:26 AM, Valeri Galtsev wrote:
>
> RAID function, which boils down to a simple, short, easy-to-debug program.
RAID firmware will be harder to debug than Linux software RAID, if only
because Linux offers easier-to-use debugging tools.
Furthermore, MD RAID only had to be debugged once, rather
Warren Young wrote on 6/28/2019 6:53 PM:
On Jun 28, 2019, at 8:46 AM, Blake Hudson wrote:
Linux software RAID…has only decreased availability for me. This has been due
to a combination of hardware and software issues that are generally handled
well by HW RAID controllers, but are often handled poorly or unpredictably
IMHO, Hardware raid primarily exists because of Microsoft Windows and
VMware esxi, neither of which have good native storage management.
Because of this, it's fairly hard to order a major brand (HP, Dell, etc)
server without raid cards.
Raid cards do have the performance boost of
On 6/28/19 4:46 PM, Blake Hudson wrote:
Unfortunately, I've never had Linux software RAID improve availability - it has
only decreased availability for me. This has been due to a combination of
hardware and software issues that are generally handled well by HW RAID
controllers, but are
Just a comment: what drove us to RAID 6 (we use that instead of 5, as of years
ago) was much larger storage.
When you have, say, over 0.3 petabytes, that starts to matter.
mark
On 29/06/19 2:46 AM, Blake Hudson wrote:
Nikos Gatsis - Qbit wrote on 6/27/2019 8:36 AM:
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to
install and set up raid.
Do I have to consider
On 28/06/2019 at 14:28, Jonathan Billings wrote:
> You can't have actually tested these instructions if you think 'sudo
> echo > /path' actually works.
>
> The idiom for this is typically:
>
> echo 5 | sudo tee /proc/sys/dev/raid/speed_limit_min
My bad.
The initial article used this
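To spell out the correction being made here (a minimal sketch, using the path from the thread): the original one-liner fails because the output redirection is performed by the unprivileged shell before sudo ever runs.

```shell
# Broken: the '>' redirection happens as the calling user, not as root,
# so the write to the root-owned file is denied before sudo starts:
#   sudo echo 5 > /proc/sys/dev/raid/speed_limit_min

# Working idiom 1: let tee, running under sudo, open the file:
echo 5 | sudo tee /proc/sys/dev/raid/speed_limit_min

# Working idiom 2: run the whole redirection inside a root shell:
sudo sh -c 'echo 5 > /proc/sys/dev/raid/speed_limit_min'
```

Either form puts the file-open on the privileged side of the sudo boundary, which is the whole trick.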
On Fri, Jun 28, 2019 at 07:01:00AM +0200, Nicolas Kovacs wrote:
> 3. Here's a neat little trick you can use to speed up the initial sync.
>
> $ sudo echo 5 > /proc/sys/dev/raid/speed_limit_min
>
> I've written a detailed blog article about the kind of setup you want.
> It's in French, but
Thank you all for your answers.
Nikos.
On 27/6/2019 4:48 PM, Gary Stainburn wrote:
I have done this a couple of times successfully.
I did set the boot partitions etc as RAID1 on sda and sdb. This I believe is
an old instruction and was based on the fact that the kernel needed access to
If you can afford it I would prefer to use RAID10. You will lose half
of the disk space but you will get a really fast system. It depends on
what you need / what you will use the server for.
Mirek
28.6.2019 at 7:01 Nicolas Kovacs:
On 27/06/2019 at 15:36, Nikos Gatsis - Qbit wrote:
Do I have to
On 27/06/2019 at 15:36, Nikos Gatsis - Qbit wrote:
> Do I have to consider anything before installation, because the disks
> are very large?
I'm doing this kind of installation quite regularly. Here's my two cents.
1. Use RAID6 instead of RAID5. You'll lose a little space, but you'll
gain
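For the 4 x 3 TB disks being discussed, the space trade-off behind this advice works out as follows (a quick sketch using shell arithmetic only):

```shell
# Usable capacity for n disks of s TB each under common RAID levels:
n=4; s=3
echo "RAID5:  $(( (n - 1) * s )) TB usable, survives any 1 disk failure"
echo "RAID6:  $(( (n - 2) * s )) TB usable, survives any 2 disk failures"
echo "RAID10: $(( n / 2 * s )) TB usable, survives 1 failure per mirror pair"
```

So with four drives, RAID6 costs one disk's worth of space versus RAID5 (6 TB vs 9 TB usable) while tying RAID10 on capacity, and tolerates a second failure during the long rebuild that large drives require.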
On 6/27/19 10:27 AM, Robert Heller wrote:
Actually *grub* needs access to /boot to load the kernel. I don't believe that
grub can access (software) RAID filesystems. RAID1 is effectively an exception
because it is just a mirror set and grub can [RO] access any one of the mirror
set elements as a
On 6/27/19 6:36 AM, Nikos Gatsis - Qbit wrote:
Do I have to consider anything before installation, because the disks
are very large?
Probably not. You'll need to use GPT because they're large, but for a
new server you probably would need to do that anyway in order to boot
under UEFI.
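The reason GPT is required here can be shown with a little arithmetic: an MBR partition table records partition sizes as 32-bit sector counts, so with 512-byte sectors it tops out below the size of a single one of these 3 TB drives.

```shell
# Largest partition MBR can describe: 2^32 sectors * 512 bytes per sector
max_bytes=$(( 4294967296 * 512 ))            # 2^32 sectors * 512 B
echo "$(( max_bytes / 1099511627776 )) TiB"  # divide by 2^40 -> prints 2
```

GPT uses 64-bit LBAs, so it has no such limit at any realistic drive size.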
On 27.06.2019 at 15:36, Nikos Gatsis - Qbit wrote:
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to install
and set up raid.
You hopefully plan to use just 3 of the disks for the RAID 5 array
Which may very well be the case.
I'd isolate all that RAID stuff from your OS, so the root, /boot, /usr, /etc,
/tmp, /bin, and swap are on "normal" partition(s). I know I'm missing some
directories, but the point is you should be able to unmount that RAID stuff to
adjust it without crippling your system.
I have done this a couple of times successfully.
I did set the boot partitions etc as RAID1 on sda and sdb. This I believe is
an old instruction and was based on the fact that the kernel needed access to
these partitions before RAID access was available.
I'm sure someone more knowledgeable
Hello list.
The next days we are going to install Centos 7 on a new server, with
4*3Tb sata hdd as raid-5. We will use the graphical interface to install
and set up raid.
Do I have to consider anything before installation, because the disks
are very large?
Does the graphical use the