little longer with raidtools-0.90.
Cheers,
Bruno Prior [EMAIL PROTECTED]
uctions on
where to find these patches (hint: they're in the same directory as the
raidtools on kernel.org).
Assuming this is yet again the cause of problems, is anyone else getting sick of
stupid distros like SuSE and Mandrake that include raidtools-0.90 without
including support i
lilo.conf points to all the essential system files on
the copy of /boot on sdb (either mount that partition on /boot, or point
explicitly at these files under their normal point in lilo.conf).
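For illustration only, a lilo.conf along these lines might do it (the device names, label and paths here are my assumptions, not taken from the original message):

```text
# Hypothetical /etc/lilo.conf for booting via the /boot copy on sdb
boot=/dev/sdb              # write the boot loader to the second disk
image=/boot/vmlinuz        # kernel image on the non-RAIDed /boot partition
    label=linux
    root=/dev/md0          # root filesystem lives on the array
    read-only
```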
Hopefully, this should get you there.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Orig
ailing that, look in http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/.
And you can get the appropriate raid-patch from ftp.kernel.org under
/pub/linux/daemons/raid/alpha (use the 2.2.11 patch and ignore the 2 rejects
that it throws up).
Cheers,
Bruno Prior [EMAIL PROTECTED]
g swap on RAID, as it should stand up
to a disk failure, you will just have to be careful when you replace the failed
disk, but it is unfortunately not safe to say that it's "no problem" to have
swap on RAID.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message--
ect =
number of sectors. Setting both disks in BIOS to use LBA mode and using this
append line to force linux to see the assigned geometry is probably the best way
to go.
This may also be relevant to Stephen Walton's problem if this allows him to use
the disks in LBA mode rather than Normal.
C
If you've got existing raid superblocks on at least one of the partitions
(because of your previous unsuccessful mkraid), this may be why mkraid throws up
a warning. Have you tried "mkraid --really-force"?
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original M
houldn't (can't?) fdisk a RAID device. Simply create a raid device as
normal with the capacity you want for swap, run mkswap to make the device a swap
device, and then swapon to turn swap on. To make sure that the device is used
for swap every time you reboot, mark the partitions as ty
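A sketch of those steps, assuming an md device /dev/md1 already defined in /etc/raidtab (the device name is hypothetical):

```text
# mkraid /dev/md1           # build the array from its raidtab entry
# mkswap /dev/md1           # format the md device as swap
# swapon /dev/md1           # enable it immediately
# echo '/dev/md1 swap swap defaults 0 0' >> /etc/fstab   # enable at boot
```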
,
which explains that there can be problems when resyncing. This doesn't mean, I
suppose, that you can't use swap on RAID, but if you do, you should take it off
raid for resyncing.
Cheers,
Bruno Prior [EMAIL PROTECTED]
evel, initrd filename and kernel version. Now edit /etc/lilo.conf to point
to the new initrd and run lilo to install the changes. Your arrays should start
automatically at the next reboot.
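As a sketch, the relevant lilo.conf stanza might look like this (kernel version and initrd name are assumptions based on the examples elsewhere in this thread):

```text
# Hypothetical image stanza in /etc/lilo.conf
image=/boot/vmlinuz-2.2.5-15
    label=linux
    initrd=/boot/initrd-raid   # the initrd built with RAID support
    root=/dev/md0
    read-only
# then install the change:
# /sbin/lilo
```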
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
/dev/loop0
> raid-disk 0
> device/dev/loop1
> raid-disk 1
Besides Tomas' suggestion (which sounds likely), you need a chunk-size line,
even with linear-raid.
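For example, a linear raidtab over the quoted loop devices might look like this (a sketch only; the chunk-size value is arbitrary, but the line must be present for mkraid to parse the file):

```text
raiddev /dev/md0
    raid-level            linear
    nr-raid-disks         2
    chunk-size            32      # required by mkraid even in linear mode
    persistent-superblock 1
    device                /dev/loop0
    raid-disk             0
    device                /dev/loop1
    raid-disk             1
```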
Cheers,
Bruno Prior [EMAIL PROTECTED]
le to continue using this system, in which case you can lose all the
unnecessary files in /boot. So:
8. Delete all the sub-directories (the old system filesystem) under /boot, so
that only the /boot files remain.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
>
ever you want it (changing /etc/fstab to make
this change permanent). To add /dev/sda1 to the array, do "raidhotadd /dev/md0
/dev/sda1". This will add /dev/sda1 as a spare, which will be used to
reconstruct the array and return it to non-degraded mode.
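The sequence might look like this (device names as in the message; reading /proc/mdstat is just a way to watch the rebuild):

```text
# raidhotadd /dev/md0 /dev/sda1   # add sda1 back as a spare
# cat /proc/mdstat                # watch the reconstruction progress
```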
Cheers,
Bruno Prior [
n raid-1 mode.
Try "raidstart --really-force /dev/md0" and see if that makes a difference.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Dong Hu
> Sent: 25 November 1999 22:2
h to lilo should simply enable it to
translate the locations of the essential system files on the array from their
logical position on the array to their physical position on one of the disks.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTEC
eed to follow the steps from backing up the
un-RAIDed partitions above.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Joachim Zobel
> Sent: 18 November 1999 20:19
> To: [EMAIL PROTEC
ses "boot=/dev/sdb;
disk=/dev/sdb bios=0x80" to install lilo to the second disk. But then, I haven't
installed RH 6.1, so I'm not talking from experience.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mai
log) and then, if it is not a physical
problem, you add it back into the array with raidhotadd. The array already
contains the good partition, so the RAID code takes care of rebuilding the bad
from the good, simply by adding it into the array.
Cheers,
Bruno Prior [EMAIL PROTECTED]
and a
RAID superblock on the devices, each of which uses up a few blocks. So normally
the physical size of the RAID would be less than the combined physical sizes of
the partitions (because of the RAID superblock), and the filesystem size would
be less than the RAID physical size (because of the filesystem). The fact that
this is not the case indicates that you have a problem, probably for the reasons
mentioned above.
Cheers,
Bruno Prior [EMAIL PROTECTED]
port, including rebuilding and
installing the modules and running lilo to point at the new kernel.
2. Build an initrd with RAID-1 support using something like "mkinitrd --with
raid1 /boot/initrd-raid 2.2.5-15", and make sure you run lilo to point at the
new initrd-raid.
Cheers,
Bruno Prior [EMAIL PROTECTED]
nsidered to be not operational. Can we see them, please?
Cheers,
Bruno Prior [EMAIL PROTECTED]
f necessary,
you will have to recreate the arrays and then restore the data.
> I have also downloaded the recommended version of the raidtools 0.50
> beta3 and I can't compile them for some reason.
Recommended by whom? This is well out of date. Get raidtools-0.90 if you want
new raidtool
lilo against
/etc/lilo.conf and /etc/lilo.conf.sdb, unmounts the partition, starts the RAID
and mounts it again on /boot.
On the other hand, why go to this trouble, when you could just get the latest
lilo, which can read RAID-1 partitions anyway?
Cheers,
Bruno Prior [EMAIL PROTECTED]
partitions which make up
this array as type "fd" using fdisk. This tells the kernel that the partitions
are part of an array. Without it, the partitions will be ignored during the
auto-recognition phase. Make sure that your RAID is unmounted and raidstopped
before you run fdisk on the p
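A transcript of the fdisk step might look like this (partition number and disk are examples; repeat for each partition in the array):

```text
# raidstop /dev/md0      # stop the array first (after unmounting it)
# fdisk /dev/sda
#   t                    # change a partition's type
#   1                    # partition number (example)
#   fd                   # "Linux raid autodetect"
#   w                    # write the table and exit
```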
or do you need /dev or any of its devices on a non-RAIDed
partition. In fact, all you need on a non-RAIDed partition is the kernel image,
the system map, boot.b and associated files. In other words, the files that live
in /boot by default. So the simplest strategy is to stick with the default
files
mplication of moving the boot device seems to be tripping a lot of
people up.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Marcos Lopez
> Sent: 25 October 1999 19:09
> To: [EMAIL
ice number is
particularly crucial. Could you try it again and check? Be careful that the only
change you make is to remove the geometry lines. And if you get an error, see
if it is _exactly_ as stated above. Because as it stands, your solution doesn't
make sense, as you yourself recognize.
C
nf.hdc to tell lilo that in the event that you boot from hdc, it will
actually be in the bios location of hda. You should be aware that this trick is
fine for when /dev/hda blows up so completely that it isn't even identified and
/dev/hdc slips to /dev/hda. But in the situation where /dev/hda
/boot-hda1/ on its own little non-raided partition, so lilo can read the
essential files from it.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Markus Hofmann
> Sent: 12 October 1999 1
, Fax/Data: +49 6257 83037
> SWB - The Software Brewery - | http://www.swb.de/ | Anime no Otaku
Hope this helps.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Matt Shirel
> Sent:
appropriate kernel version for
2.2.5-15, and the appropriate initrd name for /boot/initrd. If you have changed
the name of your initrd, edit /etc/lilo.conf to point at the new initrd. Run
lilo to point it at the new initrd.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Messag
Can someone clarify whether a raidtab should include both lines, just the
failed-disk line, or whether either way is acceptable?
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Martin Lichtin
>
he RAID HOWTO indicates it should "like magic".
It's not quite "like magic". You need to raidhotadd each partition on sdb to the
appropriate md device. e.g. "raidhotadd /dev/md0 /dev/sdb1". I think you are
referring to the situation where disks are not in
ure, such as 4. I
think it just needs to be there so that mkraid can successfully parse
/etc/raidtab, although there may be some future use planned for it.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTE
the situation where
/dev/sda's MBR gets corrupted, or any other situation where /dev/sda continues
to be recognized but cannot boot. You will want to keep a boot floppy handy for
these circumstances.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [E
lly be
better off using partitions of roughly equal size. And I take it you realize that
you will lose all the data on the partitions when you create the array?
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTE
tive), creating filesystems on the md devices will not
affect the sda partitions at this time. So you can go ahead and mke2fs the md
devices without destroying data on the sda partitions.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: Jonathan Nath
mix SCSI and IDE in a software-RAID. It is more likely to be that in
upgrading to 2.2.12, Kenny didn't realize that he had to patch the kernel with
the raid0145 patch (from
ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha), because the
stock RedHat 2.2.5 kernels came ready-patched.
Cheers,
B
ge on
/dev/md1 (which it can't do)?
Cheers,
Bruno Prior [EMAIL PROTECTED]
then restore
from the backup.
> If I decided to change the chunk
> size to 8 from 32, what would be the step required?
Likewise. Backup, rebuild, restore.
Cheers,
Bruno Prior [EMAIL PROTECTED]
ht want
a spare-disk in a RAID-1 setup.
Cheers,
Bruno Prior [EMAIL PROTECTED]
disk device. This applies to every md device
in your raidtab. Which is puzzling, because you seem to imply that you have
successfully created /dev/md1. Are you sure /dev/md1 has been created and is
actually running? What does /proc/mdstat say? What messages did you get when you
ran "mkraid /dev/md1"? What does "df" tell you about what partitions are mounted
where? I've got a sneaking suspicion that / is still on /dev/sda1 and not on
/dev/md1 as your fstab indicates. You need to sort this out before you try
booting onto root-RAID.
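For checking /proc/mdstat, a quick way to list each md device and its RAID level is an awk one-liner — demonstrated here on an invented sample, since the real file's contents depend on the machine:

```shell
# Parse an mdstat-style listing: print each md device and its RAID level.
# The sample text below is made up for illustration.
cat <<'EOF' | awk '/^md/ {print $1, $4}'
md0 : active raid1 sda1[0] sdb1[1]
      1048512 blocks [2/2] [UU]
md1 : active raid0 sda2[0] sdb2[1]
      2097024 blocks 32k chunks
EOF
```

On a real system you would run `awk '/^md/ {print $1, $4}' /proc/mdstat` instead of the here-document.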
Cheers,
Bruno Prior [EMAIL PROTECTED]
ey should be auto-recognized at the next reboot.
Cheers,
Bruno Prior [EMAIL PROTECTED]
k kernels is for use with
the old mdtools), but we need the above info to confirm this.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Ekasit
> Kijsipongse
> Sent: 03 September 1999 1
contents of your syslog, and the output of dmesg.
What have you done to enable raid support?
Cheers,
Bruno Prior [EMAIL PROTECTED]
'primary'?" thread on 27 May indicated that this
no longer worked with the 2.2.6 raid-patch. I don't know whether it works with
2.2.10. Anyone know more about this?
Why do you want to remove a working disk from a RAID-1 array anyway?
Cheers,
Bruno Prior [EMAIL PROTECTED]
add /dev/md1 /dev/md2, wouldn't that get you there?
Cheers,
Bruno Prior [EMAIL PROTECTED]
rtition) have not yet been mounted.
You have two options: either recompile your kernel with raid support built in,
or build a new initrd which includes the raid modules you need with
"mkinitrd --with raid1 /boot/initrd ". You will need to run lilo
either way to point at the new kernel/init
sly out-of-date for the latest raidtools.
Cheers,
Bruno Prior [EMAIL PROTECTED]
ns of type 0xfd before the raid had been created. The raid code did not
free the inodes, so you could not run mkraid. I think this has been fixed, but
I'm not sure. It would be safest only to set them to 0xfd when you know that the
raids will be created before the next reboot.
Incidentally, I
message to him at [EMAIL PROTECTED]
> I think my next move will be to give up raid 0.90 and try the stock 0.36
> unless someone else can suggest something else to try.
I wouldn't have thought this would help. This error message looks like an IDE
problem, not a raid problem. But you
't want to do this without
backing up, and if I'd backed up anyway, I would be inclined to take the easy
route and rebuild the array.
Cheers,
Bruno Prior [EMAIL PROTECTED]
but all the RAID-1 examples I have seen either use
"persistent-superblock 1" or don't have a persistent superblock line (which
therefore defaults to "persistent-superblock 1"). What made you decide to put a
"persistent-superblock 0" line in your raidtab?
Cheers,
Bruno Prior [EMAIL PROTECTED]
to
create the md devices as Luca says. If you download and build raidtools, the
"make install" part of the process should do this for you. If not, follow Luca's
instructions. The added bonus of downloading the raidtools tarball is that it
includes the new HOWTO. I'm not sure if RedHat's RPM includes this, or where it
puts it.
Cheers,
Bruno Prior [EMAIL PROTECTED]
40-256 94 63
E-mail: [EMAIL PROTECTED]
Support: [EMAIL PROTECTED]
They should be able to tell you how to get hold of their cards in Germany.
Cheers,
Bruno Prior [EMAIL PROTECTED]
ource with the raid patch and then
rebuild the kernel with whatever RAID options are needed, and then compile the
raidtools. Hopefully the RAID should compile with the failed disk option once he
is running a kernel with the latest patch and using the latest raidtools.
Cheers,
Bruno Prior [EMAIL PROTECTED]
D-1. You would need to
reboot into a system where one of the constituent partitions of the root array
is mounted as root (preferably read-only to prevent corruption of the array)
whenever you wanted to run lilo. But this might be worth some experimentation
for a system such as the one envisaged.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> nr-raid-disks 3
Shouldn't that be "nr-raid-disks 2"?
Cheers,
Bruno Prior [EMAIL PROTECTED]
trd
from the next reboot.
I think the messages in syslog are from when you mkraid'ed the arrays, not from
the reboot (hence the time discrepancy).
I still think this system is too "belt-and-braces" and an unnecessary waste of
disk space, by the way.
Cheers,
Bruno Prior [EMAIL PROTECTED]
Q.
Cheers,
Bruno Prior [EMAIL PROTECTED]
uld use mdtools 0.42 or raidtools 0.50 and not bother with
the patch. But you would probably be better off with the latest raid support.
Cheers,
Bruno Prior [EMAIL PROTECTED]
lnxlists/linux-raid/) for the last couple of weeks
for a lot of advice about the complications with using 2.2.8+ kernels.
Cheers,
Bruno Prior [EMAIL PROTECTED]
replace the first).
But you could argue the case that this was an unnecessary level of paranoia. It
depends on the circumstances.
Cheers,
Bruno Prior [EMAIL PROTECTED]
daemons/raid/alpha
Cheers,
Bruno Prior [EMAIL PROTECTED]
> device /dev/sdi1
> spare-disk 7
This is the problem. That should be:
> device /dev/sdi1
> spare-disk 0
Cheers,
Bruno Prior [EMAIL PROTECTED]
your raidtab, or
"mkraid -a" to do them all together.
Have a look at the Software-RAID HOWTO in the raidtools package or at
http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/. That includes all the necessary
instructions for a basic RAID setup.
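An end-to-end sketch of a basic setup, along the lines of the HOWTO (all device names hypothetical; adjust the raidtab to your disks):

```text
# /etc/raidtab entry for a simple two-disk RAID-1:
#   raiddev /dev/md0
#       raid-level            1
#       nr-raid-disks         2
#       chunk-size            32
#       persistent-superblock 1
#       device                /dev/sda1
#       raid-disk             0
#       device                /dev/sdb1
#       raid-disk             1
# then:
# mkraid /dev/md0     # or "mkraid -a" for every array in the raidtab
# mke2fs /dev/md0     # put a filesystem on the new array
# mount /dev/md0 /mnt
```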
Cheers,
Bruno Prior [EMAIL PROTECTED]
-a' to your system startup and stop scripts.
Thomas, notice in Steve's message the line:
> > I have raid tools 19990421-0.
Very little of your above advice applies with this version of the raidtools.
Cheers,
Bruno Prior [EMAIL PROTECTED]
so on...
Of course, this assumes that the kernel doesn't get hung by a
controller failure.
Cheers,
Bruno Prior [EMAIL PROTECTED]
o that linux is booting purely from /dev/hda,
and then reboot into a normally running system. Then follow the steps
above for installing lilo to the second disk but substituting /dev/hdc
for /dev/sdb. If /dev/hdc slips to /dev/hda when hda fails, you will
need someone to tell you what the bios address of the first IDE disk
is, and replace 0x80 in /etc/lilo.conf.hdc with that. If it doesn't
slip, simply remove the "disk=..." line altogether.
Hope this hasn't been an irrelevant ramble.
Cheers,
Bruno Prior [EMAIL PROTECTED]
This ain't going to work for raidtools-0.90.
The solution should be to drop back to kernel 2.2.7 with the latest raid
patches.
Cheers,
Bruno Prior [EMAIL PROTECTED]
RAID-1 superblocks on the partitions? That is, of
course, assuming that you had used these partitions for RAID-1 before.
If your RAID-0 is without persistent-superblocks, I believe you will have to
mkraid it every time you want to start it.
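With the 0.90 raidtools and the raid0145 patch, the superblock is enabled per-array in the raidtab; a hypothetical RAID-0 entry with it turned on might read:

```text
raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    chunk-size            32
    persistent-superblock 1    # superblock on disk, so the array can be
                               # autostarted instead of mkraid'ed each boot
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
```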
Cheers,
Bruno Prior [EMAIL PROTECTED]
>
lot of misunderstandings. Can't the legacy
stuff be taken out and turned into a patch for those who like the older tools?
Cheers,
Bruno Prior [EMAIL PROTECTED]
ch 532.6Mb partitions
into 3 parts to provide arrays for /home as well as /usr and /var. We really need to
know what the machine will be used for.
If the 3 partitions are not on separate disks, we need to know to which disks they
belong in order to make any useful suggestions.
Cheers,
Bruno Prior [EMAIL PROTECTED]
ID-1 and -5 offer redundancy, so you can continue to operate when one (at
least, depending on configuration) disk dies. In future, you might want to create (at
least) two arrays on a 2-disk set: a RAID-1 array for essential system files and
data, and a RAID-0 array for less essential files.
Cheers,
145 patch and raidtools package from
ftp.kernel.org/pub/linux/daemons/raid/alpha), but for more recent raidtools, you need
to do "make install_dev" in the raidtools directory. Might work for older versions,
but I don't know.
Cheers,
Bruno Prior [EMAIL PROTECTED]
event which this setup cannot handle is
corruption of the MBR on disk one, but I don't think you will get perfection without
going to hardware RAID or server-clustering. However, I would have thought this is
security enough for the majority of applications.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> Correct me if i'm wrong, but doesn't the standard rh6.0 (and 6.1 kernel for
> that matter) use old style raid?
Like James says, I believe you are mistaken. As far as I know, RH 6.* uses
new-style RAID.
Cheers,
Bruno Prior [EMAIL PROTECTED]
D-1/0 combination. My guess is that you could only autostart
the base arrays, not the arrays which built on them.
Hope I've got the above right. If not, I'm sure more knowledgeable RAID experts will
set you straight.
Cheers,
Bruno Prior [EMAIL PROTECTED]
nsulation against the kernel being taken down by the failure of a disk
containing a swap partition.
Cheers,
Bruno Prior [EMAIL PROTECTED]
fy disk
geometry at boot-time? I don't know, but it's nothing to do with a limitation of
RAID.
Anyway, the simple answer is no. The new-style RAID superblocks have always been
at the end of the partitions, so this typo changes nothing in practice.
Cheers,
Bruno Prior [EMAIL PR
way, you will still need to patch the kernel source, although with different
versions of the patch for the different kernel versions.
Cheers,
Bruno Prior [EMAIL PROTECTED]
it in degraded mode.
I hope I have remembered everything correctly. It is some time since I set my system
up, so I may have forgotten something vital. I am sure someone will pick up on it and
let you know if I have.
Cheers,
Bruno Prior [EMAIL PROTECTED]
e you will also need to include a
persistent-superblock line in your raidtab.
Cheers,
Bruno Prior [EMAIL PROTECTED]
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Stojan Rancic
> Sent: 29 October 1998 18:24
> To: [EMAIL
ormer, should one do a "mkraid -f --force-resync"?
Cheers,
Bruno Prior [EMAIL PROTECTED]
tomatically recognize and start them at start-up (you will
need RAID support built into the kernel, or an initrd if you are using
modular-RAID). The kernel also takes care of stopping the md devices at
shutdown.
Cheers,
Bruno Prior [EMAIL PROTECTED]
n /mnt/dev/MAKEDEV ...
> right now?
That's because you didn't copy /dev across in your tar. Following my
instructions should transfer /dev as well.
I hope we're getting closer. Sorry for the confusion caused by my previous,
terse instructions. Root-RAID isn't the simplest thing in the world, but it
is getting easier.
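For reference, a tar copy that carries /dev across might look like this (paths are hypothetical; tar archives the device nodes themselves, not their contents):

```text
# cd /
# tar clf - . | (cd /mnt/newroot && tar xpf -)
#   c = create, l = stay on one filesystem, p = preserve permissions
```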
Cheers,
Bruno Prior [EMAIL PROTECTED]
have you read the recent posts (see "Kernel/raidtools version
recommendations?" thread) from other users who are choosing RAID-5 rather
than RAID-0 for their newsfeeds, because the increased risk of failure which
RAID-0 entails meant that they could end up rebuilding their spool twice a
year?
Cheers,
Bruno Prior [EMAIL PROTECTED]
file unused since linking not done
>
> Will this create any trouble?
Don't know much about linuxthreads, but I believe if you are using a
glibc-based distribution, you don't need linuxthreads as threads are already
built into glibc.
Cheers,
Bruno Prior [EMAIL PROTECTED]