I created directions on converting a system to run its root on software
raid. This doc will be included in the next mdadm software raid tools
release.
This can be done completely remotely with no loss of data
(assuming you have an extra disk just sitting in a remote computer).
I thought I would
Hello lists:
Does anyone know how to monitor software RAID disk I/O?
I have a software RAID-5 device named /dev/md0, and I've tried to use iostat to
monitor /dev/md0 I/O status...
But it seems it doesn't work!
===
fileserver:/etc/rc.boot# iostat -x /dev/md0
On Thu, 10 Jul 2003 11:18, axacheng wrote:
Hello lists:
Does anyone know how to monitor software RAID disk I/O?
I have a software RAID-5 device named /dev/md0, and I've tried to use
iostat to monitor /dev/md0 I/O status...
/proc/partitions does not have any counts for software RAID
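Since /proc/partitions carries no counters for md devices on these kernels, one workaround (a sketch; the member disks /dev/sda and /dev/sdb are assumptions, substitute your own) is to watch the array state in /proc/mdstat and run iostat against the member disks rather than /dev/md0 itself:

```shell
# Check array health and any resync progress
cat /proc/mdstat

# Watch extended I/O statistics on the member disks every 5 seconds;
# the md device may report nothing, but its members do
iostat -x 5 /dev/sda /dev/sdb
```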
Hello Russell
On 18 Apr 2003 at 17:26, Russell Coker wrote:
On Thu, 17 Apr 2003 18:48, I. Forbes wrote:
Do you think there would be any benefit gained from burning in a
new drive, perhaps by running fsck -c -c, in order to find marginal
blocks and get them mapped out before the drive is
On Thu, 17 Apr 2003 18:48, I. Forbes wrote:
Am I correct in assuming that every time a bad block is discovered
and remapped on a software raid1 system:
- there is some data loss
I believe that if drive-0 in the array returns a read error then the data is
read from drive-1 and there is no
Hello All
I have had a number of cases with disks reporting as failed on
systems with IDE drives in software RAID 1 configuration.
I suppose the good news is you can change the drive with minimal
downtime and no loss of data. But some of my customers are
querying the apparent high failure
later when you want to read the data (and this can happen even if the data is
verified).
Then the drive will return a read error. If you then write to the bad block
the drive will usually perform a re-mapping and after that things will be
fine.
If using software RAID then a raidhotadd
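raidhotadd comes from the old raidtools package; with mdadm the equivalent fail/remove/re-add cycle looks roughly like this (a sketch; /dev/md0 and /dev/sdb1 are assumptions):

```shell
mdadm /dev/md0 --fail /dev/sdb1     # mark the suspect member faulty
mdadm /dev/md0 --remove /dev/sdb1   # take it out of the array
mdadm /dev/md0 --add /dev/sdb1      # add it back; the resync rewrites
                                    # every block, which gives the drive a
                                    # chance to remap any bad sectors
```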
On Tuesday 15 April 2003 11:45, I. Forbes wrote:
Hello All
I have had a number of cases with disks reporting as failed on
systems with IDE drives in software RAID 1 configuration.
I suppose the good news is you can change the drive with minimal
downtime and no loss of data. But some of my
Russell Coker wrote:
raid-extra-boot=/dev/sda,/dev/sdb
According to the documentation of lilo, this shouldn't be necessary,
but apparently either the functionality or the docs are buggy. Without
that line I couldn't boot at all from the second disk,
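For reference, the relevant lilo.conf fragment might look like this (a sketch; the device names are assumptions):

```
# /etc/lilo.conf (fragment)
boot=/dev/md0
root=/dev/md0
raid-extra-boot=/dev/sda,/dev/sdb   # write a boot record to both disks
```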
the way I've tested it also
On Sat, 25 Jan 2003 14:15, Fraser Campbell wrote:
the way I've tested it also works with
install-mbr /dev/md1
Why would you want to use install-mbr on a RAID device?
I use install-mbr for the MBR on the hard drive (/dev/sda and /dev/sdb in
this case) and then have it load the
On Fre, 2003-01-24 at 00:16, Tinus Nijmeijers wrote:
I'm building a server that needs about 200G of harddisk space and the
data has to be safe. If I need to replace a faulty hd and get downtime
that's fine. Speed is not an issue.
Agree with Russell. And one thing: what's raid6?
If speed is not an
a backup boot disk available.
Disks (couple of 120G IDE or something) will be in 1+0, raid5 or raid6
(does software raid do raid6?)
Is there any reason to use hardware-raid over software-raid in this
case?
thanks.
Tinus
OK. For booting, I suggest getting a UW or better (i.e. U2W) SCSI
On Fri, 2003-01-24 at 09:22, Adrian 'Dagurashibanipal' von Bidder wrote:
On Fre, 2003-01-24 at 00:16, Tinus Nijmeijers wrote:
I'm building a server that needs about 200G of harddisk space and the
data has to be safe. If I need to replace a faulty hd and get downtime
that's fine. Speed is
drive them at maximum speed.
and are slower. This will be a robust boot system; software raid is not
any good for booting.
The only potential problem with booting from software RAID is if the primary
disk dies. If you have hardware hot-swap then that's only a minor issue
(just unplug the dead
On Fri, 24 Jan 2003 10:26, Tinus Nijmeijers wrote:
My question kind'a stands: If the only thing I ask of it is for the data
to be safe (no speed or no downtime! issues) is there any reason to
use hardware over software raid?
No.
I do not care if I have to take the server down for an hour
hardware over software raid?
I do not care if I have to take the server down for an hour (or 2, or 3)
to replace a disk, be it a raid disk or boot disk. I have plenty of
time, I could even run down to the store, get a new bootdisk, install
debian and be up and running in 2 hours. no problem.
ONLY
On Fri, 24 Jan 2003 21:04, Dave Watkins wrote:
There is perhaps one extra thing hardware RAID will give you. When it comes
to hardware failures a Hardware RAID card will almost always detect a
failed (or failing) drive before any software based system would. In fact
I've seen a RAID card
This was with an Adaptec SCSI card. I'm not sure how it detected the error
(may have been SMART related), but it told me there was an error so I
swapped the drive and ran the diagnostic software over the drive. It came
back clean so I reinstalled the drive and it failed again an hour or so
On Fri, 24 Jan 2003 21:40, Dave Watkins wrote:
to hardware failures a Hardware RAID card will almost always detect a
failed (or failing) drive before any software based system would. In
This was with an Adaptec SCSI card. I'm not sure how it detected the error
(may have been SMART
I've written a document on using Linux software RAID with hot-swap SCSI
hardware.
It's slightly specific to the hardware I use (I wrote it for internal use) but
can easily be adapted to be more generic.
If someone wants to add it to a HOWTO or something then be my guest, please
give me
"Russell" == Russell Coker [EMAIL PROTECTED] writes:
Hi
I've written a document on using Linux software RAID with hot-swap SCSI
hardware.
nice doc, just a little comment about booting:
*Booting*
To make a RAID-1 device bootable you first have to use fdisk to set
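The partition type to set is "fd" (Linux raid autodetect), so the kernel can assemble the array at boot. A sketch, assuming the boot partition is /dev/sda1:

```shell
# Interactively: fdisk /dev/sda, then "t", partition 1, type "fd", then "w".
# Non-interactively with sfdisk:
sfdisk --change-id /dev/sda 1 fd
```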
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
On Fri, 24 Jan 2003 00:16, Tinus Nijmeijers wrote:
The system will boot of a scsi HD, I have a backup boot disk available.
Why not use RAID-1 for the boot device?
Disks (couple of 120G IDE or something) will be in 1+0, raid5 or raid6
(does software raid do raid6?)
What is RAID-6? RAID-6
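For what it's worth, later kernels did grow a raid6 personality in the md driver, and a recent mdadm can create such an array. A sketch (the device names are assumptions):

```shell
# RAID-6 needs at least 4 members and survives two simultaneous failures
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```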
re
On Thu, 2003-01-23 at 14:42, Pierfrancesco Caci wrote:
Instead of install-mbr, I used the following line in lilo.conf:
raid-extra-boot=/dev/sda,/dev/sdb
According to the documentation of lilo, this shouldn't be necessary,
but apparently either the functionality or the docs are buggy.
On Fri, 24 Jan 2003 01:25, Andraz Sraka wrote:
On Thu, 2003-01-23 at 14:42, Pierfrancesco Caci wrote:
Instead of install-mbr, I used the following line in lilo.conf:
raid-extra-boot=/dev/sda,/dev/sdb
According to the documentation of lilo, this shouldn't be necessary,
but apparently
IDE or something) will be in 1+0, raid5 or raid6
(does software raid do raid6?)
Is there any reason to use hardware-raid over software-raid in this
case?
thanks.
Tinus
OK. For booting, I suggest getting a UW or better (i.e. U2W) SCSI hardware
RAID controller (AMI Megatrends seem Linux
/mdX devices (so all your partitions, including / and /boot
are mirrored). However, this process is time-consuming and error-prone.
The Software-RAID HOWTO says that only Redhat had some kind of method
for installing directly onto raid, but that HOWTO is dated Jan 2000, so
perhaps things have
On Wednesday 02 October 2002 23:56, valerian wrote:
Does Debian have or plan to provide a method to install directly onto a
RAID device?
read this simple text at http://tnt.aufbix.org/linux/raid/
hope it helps
--
Unix IS user friendly... It's
#include hallo.h
Jason Lim wrote on Thu Mar 21, 2002 at 05:28:22PM:
Don't we all wish there were these floppies... just like the ones Redhat
has.
Oh well... Debian's one big weakness that I see is in the installation
procedure. Not easy for newbies, not flexible for power-users.
Same FUD
Folks,
Any pointers to a set of ext3/software raid boot floppies?
Thanks
--
Sanjeev
--
Sent: Thursday, March 21, 2002 2:13 PM
Subject: Software Raid on root/boot, Woody
Folks,
Any pointers to a set of ext3/software raid boot floppies?
Thanks
--
Sanjeev
--
On Wed, 1 Aug 2001, Russell Coker wrote:
On Tue, 31 Jul 2001 16:28, Roger Abrahamsson wrote:
Does anyone here know if there is any way you can add disks to / grow an
existing software raid-5 system (2.4.x kernels)?
The cost of large IDE-disks now makes it possible to have some 300+GB
system
On Wed, 1 Aug 2001 08:31, Roger Abrahamsson wrote:
LVM is the correct solution to this problem. You can run LVM over
multiple RAID-5 and RAID-1 arrays. Then you have RAID for reliability
and LVM to allow easy growth of storage.
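The LVM-over-RAID layering described above can be sketched as follows (the device and volume names are assumptions):

```shell
pvcreate /dev/md0 /dev/md1          # turn each array into a physical volume
vgcreate vg0 /dev/md0 /dev/md1      # pool them into one volume group
lvcreate -L 200G -n data vg0        # carve out a logical volume
mke2fs -j /dev/vg0/data             # ext3 filesystem on it

# Growing later: build a new array, then extend the group and the volume
# pvcreate /dev/md2; vgextend vg0 /dev/md2
# lvextend -L +100G /dev/vg0/data   # then resize the filesystem
```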
I'm still not sure that LVM is ready for serious use