Re: [FAQ-answer] Re: soft RAID5 + journalled FS + power failure = problems ?

2000-01-12 Thread Mark Ferrell

  Perhaps I am confused.  How is it that a power outage while attached
to the UPS becomes "unpredictable"?  

  We run a Dell PowerEdge 2300/400 using Linux software raid, and the
system monitors its own UPS.  When a power failure occurs the system
brings itself down to a minimal state (runlevel 1) once the batteries
drop below 50% .. and once below 15% it shuts down, which turns off the
UPS.  When power comes back on the UPS fires up and the system resumes
as normal.
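
  For what it's worth, the monitoring logic is nothing fancy.  Here is a
rough sketch in C of the kind of loop involved -- battery_percent() and the
charge file it reads are hypothetical stand-ins for however your particular
UPS reports its charge (ours speaks the vendor's serial protocol), so treat
this as an outline rather than our actual monitor:

    /* Rough sketch of a staged UPS shutdown loop.  The charge source is
     * a hypothetical placeholder; substitute your UPS's real interface. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int battery_percent(void)
    {
        FILE *f = fopen("/var/run/ups.charge", "r");  /* hypothetical */
        int pct = 100;                 /* assume full if we can't read */

        if (f) {
            if (fscanf(f, "%d", &pct) != 1)
                pct = 100;
            fclose(f);
        }
        return pct;
    }

    int main(void)
    {
        int went_single = 0;

        for (;;) {
            int pct = battery_percent();

            if (pct < 15) {
                /* Nearly drained: halt, which in turn powers off the UPS. */
                system("/sbin/shutdown -h now");
                break;
            } else if (pct < 50 && !went_single) {
                /* Half drained: drop to runlevel 1 to quiesce services. */
                system("/sbin/init 1");
                went_single = 1;
            }
            sleep(30);
        }
        return 0;
    }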

  Admittedly this won't prevent issues like God reaching out and slapping
my system via lightning or something, nor will it resolve issues where
someone decides to grab the power cable and swing around on it, severing
the connection from the UPS to the system .. but for the most part it
has so far proven to be a fairly decent configuration.
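
  To make the stale-parity discussion quoted below a little more concrete,
here is a toy C sketch -- three one-byte "disks", nothing like the real raid
code -- showing how a parity block that reaches the platter without its
matching data block can silently corrupt a block that was never even being
written, once some other disk later dies:

    /* Toy illustration of the RAID-5 failure mode discussed below:
     * parity is written, the matching data block is not, and then a
     * different disk fails.  Not real raid code. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char d0 = 0xAA, d1 = 0x55;     /* data blocks on disk     */
        unsigned char p  = d0 ^ d1;             /* parity, stripe is clean */

        /* A write to d0 is in flight: the new parity hits the platter,
         * then the power drops before the new d0 does. */
        unsigned char new_d0 = 0x11;
        p = new_d0 ^ d1;                        /* parity updated on disk  */
        /* ...crash: d0 on disk is still 0xAA, so the stripe is bogus.     */

        /* Later the disk holding d1 dies and gets reconstructed from the
         * surviving data block and the stale parity. */
        unsigned char rebuilt_d1 = p ^ d0;

        printf("real d1    = 0x%02X\n", d1);            /* 0x55 */
        printf("rebuilt d1 = 0x%02X  <- silently wrong\n", rebuilt_d1);
        return 0;
    }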

Benno Senoner wrote:
> 
> "Stephen C. Tweedie" wrote:
> 
> (...)
> 
> >
> > 3) The soft-raid backround rebuild code reads and writes through the
> >buffer cache with no synchronisation at all with other fs activity.
> >After a crash, this background rebuild code will kill the
> >write-ordering attempts of any journalling filesystem.
> >
> >This affects both ext3 and reiserfs, under both RAID-1 and RAID-5.
> >
> > Interaction 3) needs a bit more work from the raid core to fix, but it's
> > still not that hard to do.
> >
> > So, can any of these problems affect other, non-journaled filesystems
> > too?  Yes, 1) can: throughout the kernel there are places where buffers
> > are modified before the dirty bits are set.  In such places we will
> > always mark the buffers dirty soon, so the window in which an incorrect
> > parity can be calculated is _very_ narrow (almost non-existent on
> > non-SMP machines), and the window in which it will persist on disk is
> > also very small.
> >
> > This is not a problem.  It is just another example of a race window
> > which exists already with _all_ non-battery-backed RAID-5 systems (both
> > software and hardware): even with perfect parity calculations, it is
> > simply impossible to guarantee that an entire stripe update on RAID-5
> > completes in a single, atomic operation.  If you write a single data
> > block and its parity block to the RAID array, then on an unexpected
> > reboot you will always have some risk that the parity will have been
> > written, but not the data.  On a reboot, if you lose a disk then you can
> > reconstruct it incorrectly due to the bogus parity.
> >
> > THIS IS EXPECTED.  RAID-5 isn't proof against multiple failures, and the
> > only way you can get bitten by this failure mode is to have a system
> > failure and a disk failure at the same time.
> >
> 
> >
> > --Stephen
> 
> thank you very much for these clear explanations,
> 
> Last doubt: :-)
> Assume all RAID code - FS interaction problems get fixed,
> since a linux soft-RAID5 box has no battery backup,
> does this mean that we will lose data
> ONLY if there is a power failure AND a subsequent disk failure ?
> If we lose the power and then after reboot all disks remain intact
> can the RAID layer reconstruct all information in a safe way ?
> 
> The problem is that power outages are unpredictable even in the presence
> of UPSes, therefore it is important to have some protection against
> power losses.
> 
> regards,
> Benno.



Re: 2.2.14 + raid-2.2.14-B1 on PPC failing on bootup

2000-01-11 Thread Mark Ferrell

It is possible that the problem is a result of the raid code not being PPC
friendly where byte boundaries are concerned.

Open up linux/include/linux/raid/md_p.h
At line 161 you should have something resembling the following

__u32 sb_csum;    /*  6 checksum of the whole superblock        */
__u64 events;     /*  7 number of superblock updates (64-bit!)  */
__u32 gstate_sreserved[MD_SB_GENERIC_STATE_WORDS - 9];

Try swapping the __u32 sb_csum and __u64 events around so that it looks like

__u64 events;     /*  7 number of superblock updates (64-bit!)  */
__u32 sb_csum;    /*  6 checksum of the whole superblock        */
__u32 gstate_sreserved[MD_SB_GENERIC_STATE_WORDS - 9];

This should fix the byte boundary problem that seems to cause a few issues
on PPC systems.  This problem and solution were previously reported by
Corey Minyard, who noted that PPC is a bit pickier about byte
boundaries than the x86 architecture.
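
If you want to see what the swap actually changes, the small standalone
program below prints the offset of the 64-bit member under both orderings.
The field names are borrowed from md_p.h, but the struct is trimmed down
purely for illustration.  On an ABI that requires 8-byte alignment for
64-bit types -- PPC does, 32-bit x86 does not -- the current ordering picks
up hidden padding and the layout shifts, while the swapped ordering lands
the 64-bit field on a naturally aligned offset everywhere:

    /* Illustration only: a trimmed-down stand-in for the superblock's
     * generic-state section, showing how member order interacts with
     * 64-bit alignment rules.  Compile and run it on x86 and on PPC:
     * the offsets differ for the first layout but not for the second. */
    #include <stdio.h>
    #include <stddef.h>

    typedef unsigned int       u32;   /* stand-ins for __u32 / __u64 */
    typedef unsigned long long u64;

    struct csum_then_events {         /* current ordering */
        u32 head[6];                  /* stand-in for the preceding words */
        u32 sb_csum;                  /*  6 checksum of the superblock    */
        u64 events;                   /*  7 superblock updates (64-bit!)  */
        u32 reserved[4];
    };

    struct events_then_csum {         /* suggested swap */
        u32 head[6];
        u64 events;
        u32 sb_csum;
        u32 reserved[4];
    };

    int main(void)
    {
        printf("csum first  : offsetof(events) = %lu, sizeof = %lu\n",
               (unsigned long)offsetof(struct csum_then_events, events),
               (unsigned long)sizeof(struct csum_then_events));
        printf("events first: offsetof(events) = %lu, sizeof = %lu\n",
               (unsigned long)offsetof(struct events_then_csum, events),
               (unsigned long)sizeof(struct events_then_csum));
        return 0;
    }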

"Kevin M. Myer" wrote:

> Hi,
>
> I am running kernel 2.2.14 + Ingo's latest RAID patches on an Apple
> Network Server.  I have (had) a RAID 5 array with 5 4Gb Seagate drives in
> it working nicely with 2.2.11 and I had to do something silly, like
> upgrade the kernel so I can use the big LCD display on the front to
> display cute messages.
>
> Now, I seem to have a major problem - I can make the array fine.  I can
> create a filesystem fine.  I can start and stop the array fine.  But I
> can't reboot.  Once I reboot, the kernel loads until it reaches the raid
> detection.  It detects the five drives and identifies them as a RAID5
> array and then, endlessly, the following streams across my screen:
>
> <[dev 00:00]><[dev 00:00]><[dev 00:00]><[dev 00:00]><[dev 00:00]><[dev
> 00:00]><[dev 00:00]>
>
> ad infinitum and forever.
>
> I have no choice but to reboot with an old kernel, run mkraid on the whole
> array again, remake the file system and download the 5 Gigs of Linux and
> BSD software that I had mirrored.
>
> Can anyone tell me where to start looking for clues as to what's going
> on?  I'm using persistent superblocks and as far as I can tell, everything
> is getting updated when I shut down the machine and reboot
> it.  Unfortunately, the kernel never gets to the point where it can dump
> the stuff from dmesg into syslog so I have no record of what it's actually
> stumbling over.
>
> Any ideas of what to try?  Need more information?
>
> Thanks,
>
> Kevin
>
> --
>  ~Kevin M. Myer
> . .   Network/System Administrator
> /V\   ELANCO School District
>// \\
>   /(   )\
>^`~'^



Re: [new release] raidreconf utility

1999-11-02 Thread Mark Ferrell

Jakob Østergaard wrote:

> On Tue, Nov 02, 1999 at 02:16:57PM -0600, Mark Ferrell wrote:
> > You also have to remember that in most LVM implementations adding a device to the
> > LVM does not add it to the raid.
> [snip]
> > "Oh look .. the 90G raid5 array is getting pretty full .. I guess we should add
> > more drives to it .. oh .. hold it .. it's a raid5 .. we can't just add more space
> > to it .. we have to LVM attach another raid5 array to the array in order to keep
> > redundancy"
>
> My only experience with LVM is from HPUX.  I could create the equivalent of RAID-0
> there using LVM only, and it is my understanding that LVM for Linux can do the same.
> It should indeed be possible to create the equivalent of RAID-5 as well, using only
> LVM.  But still the LVM would have to support extending the parity-VG.

LVM on AIX and HPUX is functionally the same, only on AIX you don't have to buy the
extra SW package to resize the FS while it's mounted.  Actually .. they may be derived
from the same base code .. but I am not certain there.

> raidreconf will hopefully be useful for people to do these tricks, until the LVM
> gets the needed features (which may be years ahead).

Yah .. AIX supports a method of doing striping and parity .. but as I understand it
the LVM support is tightly bound to the JFS in order to make this possible.  As well,
I believe AIX no longer supports reducing the size of an FS.  Actually .. I am not
certain it was ever a 'supported' feature .. though I have tools for doing it on AIX 3.

> > The ability to add/remove/resize the lower level raid environment would be, in my
> > opinion, a lot more beneficial in the long run if it is possible.
>
> IMHO the support for redundancy should be in the LVM layer. This would eliminate
> the need for RAID support as we know it today, because LVM could provide the same
> functionality, only even more flexible.  But it will take time.

A single layer approach would definitely reduce a lot of complications.

>
> Today, and the day after, we're still going to use the RAID as we know it now. LVM
> is inherently cooler, but that doesn't do everyone much good right now as it doesn't
> provide the equivalent of a resizable RAID-5. It's my belief that people need that.

Very much agreed.

> Cheers,
> --
> 
> : [EMAIL PROTECTED]  : And I see the elder races, :
> :.: putrid forms of man:
> :   Jakob Østergaard  : See him rise and claim the earth,  :
> :OZ9ABN   : his downfall is at hand.   :
> :.:{Konkhra}...:

--
Mark Ferrell



Re: [new release] raidreconf utility

1999-11-02 Thread Mark Ferrell

You also have to remember that in most LVM implementations adding a device to the
LVM does not add it to the raid.

For example, let's say we have a raid1 array with 2 devices in it, and we have
assigned the array to be part of an LVM.  Now, let's say you add a 3rd drive.  At
this point you have not added the device to the raid1 array, but only to the LVM
volume group, thus there will be no redundancy on the device.  LVM+raid
support comes in handy when you want to clunk together groups of raid arrays.  But
bear in mind that it won't necessarily make your life easier .. by putting an LVM
layer over the top of raid you can actually force yourself into greater
restrictions on how you can use the device.

"Oh look .. the 90G raid5 array is getting pretty full .. I guess we should add
more drives to it .. oh .. hold it .. it's a raid5 .. we can't just add more space
to it .. we have to LVM attach another raid5 array to the array in order to keep
redundancy"

The ability to add/remove/resize the lower level raid environment would be, in my
opinion, a lot more beneficial in the long run if it is possible.


Jakob Østergaard wrote:

> On Tue, Nov 02, 1999 at 01:56:06PM +0100, Egon Eckert wrote:
> > > There's a new version of the raidreconf utility out.  I call it 0.0.2.
> >
> > Isn't this what supposed 'LVM' to be about?  (BTW there seem to be 2
> > different implementations of LVM on the web -- one included in 0.90 raid
> > patch and one on http://linux.msede.com/lvm/)
>
> Well, yes and no.  LVM gives you pretty much the same features, the ability
> to add disks to a device to grow it.
>
> The only reason I started the raidreconf utility was, because I needed to be
> able to add/remove disks from RAID arrays *today*.  LVM is, from what I can
> understand, still not implemented to a state where you can use it and rely
> on it. I know I can rely on the RAID code in the kernel, so all I was missing
> was a utility to add/remove disks from RAID sets.  Now I have one, at least
> for RAID-0 :)
>
> While I'm at it, I hope to build in some conversion features too, so that you
> can convert between RAID levels.  The utility can already convert a single
> block device into a RAID-0, but being able to convert a five disk RAID-0 into
> eg. a seven disk RAID-5 would be pretty sweet I guess.  Remember, this is all
> functionality that, once raidreconf works, is perfectly stable and well tested,
> because all the ``real'' support for the RAID levels has been in the kernels or
> at least the patches for a long time now.
>
> > Can someone clarify this?
> >
> > A few months ago I asked what's the 'translucent' feature as well, but no
> > reply.. :(
>
> I would actually like to know about the state of LVM, HSM, and all the other
> nice storage features being worked on in Linux.  I wouldn't want to spend time
> on this utility if it was entirely redundant.  But then again, I don't think it
> is, at this time.   Hopefully, in a year or so, nobody will care about
> raidreconf, because we have LVM working and providing even more features.  Or
> maybe some raidreconf code could be used for the LVM for providing the
> conversion features.
>
> Time will show:)
>
> --
> 
> : [EMAIL PROTECTED]  : And I see the elder races, :
> :.....: putrid forms of man:
> :   Jakob Østergaard  : See him rise and claim the earth,  :
> :OZ9ABN   : his downfall is at hand.   :
> :.:{Konkhra}...:

--
 Mark Ferrell  : [EMAIL PROTECTED]
(972) 685-7868 : Desk
(972) 685-4210 : Lab
(972) 879-4326 : Pager





Re: 71% full raid - no space left on device

1999-10-20 Thread Mark Ferrell


I was under the impression that reiserfs was more of an experiment
in making a B-tree sorted filesystem.
"Stephen C. Tweedie" wrote:
Hi,

On Thu, 14 Oct 1999 09:22:25 -0700, Thomas Davis <[EMAIL PROTECTED]> said:

> I don't know of any Unix FS with dynamic inode allocation..  Is there
> one?

Reiserfs does, doesn't it?

--Stephen

--
 Mark Ferrell  : [EMAIL PROTECTED]
(972) 685-7868 : Desk
(972) 685-4210 : Lab
(972) 879-4326 : Pager
 


Re: booting from raid

1999-10-18 Thread Mark Ferrell


It can always be on a raid mirrored partition though.
Daniel Wirth wrote:
The kernel which LILO wants to boot is located on which device?  It
MUST NOT be located on any of your RAID partitions, but on a non-striped
boot partition!
Could you please post your kernel version, and whether you applied the
raid patches 0.90?
On Mon, 18 Oct 1999, Carol Bosshart - KVG Internet Services wrote:
> Date: Mon, 18 Oct 1999 16:48:16 +0200 (MEST)
> From: Carol Bosshart - KVG Internet Services <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: booting from raid
>
> Hello all,
>
> I set up raid1 on a SuSE Linux box.
> For this I chose 'method 1' from Jakob Ostergaard's Software RAID-HOWTO:
> copy the whole installation from a spare (IDE) disk.
>
> I have 3 partitions:
>
> Device     Partitions     Mountpoint
> ------------------------------------
> /dev/md0   (sda1, sdb1)   /
> /dev/md1   (sda3, sdb3)   /usr
> /dev/md2   (sda4, sdb4)   /info
>
> /dev/hda   (the installation spare disk)
>
> swap       sda2, sdb2
>
> My problem is that i cannot boot from my raid partition /dev/md0
>
> When the IDE disk /dev/hda is connected, everything works fine, but
> without /dev/hda I just get 'LI' at the LILO prompt.
>
> The lilo manpage says that this results from an incorrect map file.
>
> When I try to install lilo I get the message:
>
>   'Sorry, don't know how to handle device 0x0900'
>
> I think my problems are caused by a wrong lilo bootblock/map.
>
> Does anyone know a solution for my problem?
>
>   Carol
>
>
--
Gruesse,
Daniel
__
DANIEL WIRTH                         |
<[EMAIL PROTECTED]>                  |
http://www.wirthuell.de              |
Fax.: +49 89 66617-48603             |
Fak. fuer Physik +49 761 203-5896    |
Administration CIP-Pool: 203-7682    |
__

--
 Mark Ferrell  : [EMAIL PROTECTED]
(972) 685-7868 : Desk
(972) 685-4210 : Lab
(972) 879-4326 : Pager
 


Re: Linux box locking up ..

1999-09-23 Thread Mark Ferrell

I am getting the feeling that something in the IDE code isn't SMP
safe, as all the SMP lockups I am hearing about involve raid on IDE.

I have had zero flaws with SMP + raid .. but then I don't use any
IDE components in my systems at all.

HA Quoc Viet wrote:

> "Jason A. Diegmueller" wrote:
> >
> > : A number of us are having similar problems currently with
> > : 2.2.1? with many ide disks and SMP.  It is probably worth
> > : sending linux-kernel a summary of your difficulty, as the
> > : topic is being discussed.  What Alan would really like is
> > : if you could run 2.2.13pre11 without the raid code, and get a
> > : similar lockup... perhaps just by keying heavy disk activity
> > : on each disk at the same time.. at least this works for the
> > : many-ide-SMP lockups (something like dd if=/dev/sdX of=/dev/null
> > : count=50 & running on each drive at the same time.
> >
> > For what it's worth, this particular box is NOT running any IDE
> > drives.  It has (4) 2.1gb SCSI drives on an aic78xx adapter.
>
> dunno if that's gonna help you a bit or not, but I happen to
> have exactly (4) 4.3gb SCSI drives on an aic78xxx ( the Ultra Wide
> version) too, on an SMP box (2.2.12).  However, I boot from an IDE
> drive, and use no RAID.
>
> This one config is working fine.
>
> I'd say, it's the raid thingy, then ...
>
> Viet

--
 Mark Ferrell  : [EMAIL PROTECTED]
(972) 685-7868 : Desk
(972) 685-4210 : Lab
(972) 879-4326 : Pager





Re: Linux box locking up ..

1999-09-23 Thread Mark Ferrell

No, I haven't had lockups at all that I can remember, regardless of how hard
I beat on the machine.  The system is a Dell PowerEdge 2300 using
AIC-7880 controllers for the drives, and the AIC-7860(?) for the
DAT drive and CD-ROM.

I run the same patches on a Dell OptiPlex GX1 using an AIC-7880
controller, and on another system at home running a Tyan Tomcat IV dual
P233MMX (I am pushing the specs on this one).  So far all machines have
been stable.  The PowerEdge's uptime would have been longer, but I just
upgraded it to the 2.2.12 patch from 2.2.7 w/ Raid0145 + devfs.

Notably, though, I had to piece the patches together by hand, as the devfs
patches and raid patches don't get along well.




Re: Linux box locking up ..

1999-09-22 Thread Mark Ferrell
t say I've never seen this in my entire life, so
> : : I wanted to get some input.  This is sent to both the
> : : linux-admin and linux-raid lists.
> : :
> : : I have a customer/friend (yes, the two-in-one combo that
> : : is often noted for being dangerous =) who recently upgraded
> : : an old SCO setup they had to SCO Openserver 5.0.5.  This
> : : included selling him new hardware and the works.  This meant
> : : that his old HP Netserver LXe Pro (which is a gorgeous machine)
> : : became a spare server for us to utilize.
> : :
> : : Being the avid Linux geek I am, I immediately dumped Linux
> : : on there.  The Mylex DAC960 was not supported (the card
> : : is the old 2.x firmware variety, and HP wanted money to upgrade
> : : us to the 3.x series) so I could not utilize the hardware
> : : RAID.  Instead, I went with software.
> : :
> : : It is a dual-processored machine (capable of 4, only utilizing
> : : two PPro 200's at this time, 512k cache each), so I have
> : : SMP compiled it.  So at the current time, it basically is:
> : :  linux-2.2.11-SMP with raid0145-990824 with four 2gb drives
> : :  in a RAID-5.  The SCSI bus is onboard Adaptec 78xx.  The
> : :  network card is an Intel Etherexpress PRO.
> : :
> : : The problem?  It locks up.  Solid.  I've never in my life
> : : seen a Linux box just lock up, with no hints anywhere in
> : : logfiles.  On the other hand, I've never gotten my hands on
> : : hardware this "big" (This sucker was $31k retail when they
> : : bought it).  The machine is currently virtually unused (other
> : : than qpopper for POP mail); SAMBA is set up on it, but is
> : : currently completely unutilized at this time.
> : :
> : : It seems to lock hard every few days.  Maybe 3 or 4?  I see
> : : no correlation between activity (i.e., users doing something) and
> : : lockups, but am willing to dig a little deeper if someone has
> : : an idea.
> : :
> : : I was wondering two things:
> : :  A. Are there any known incompatibilities with any of this
> : : hardware?  I've seen some mentions of aic78xx, SMP, and
> : : raid causing problems.  Is this what I'm bumping into?
> : :  B. Is there anything I can do to figure out WHAT is causing
> : : the hard lockups?  Again, no hints in /var/log/messages or
> : : anywhere else.  Possibly a serial cable to a dumb terminal
> : : constantly dumping system information?
> : :
> : : Any information or clues would be more than appreciated.  Replies
> : : directly to the list are more than fine; I subscribe to both.
> : :
> : : Thanks.
> : :
> : : ::: Jason A. Diegmueller
> : : ::: Microsoft Certified Systems Engineer
> : : ::: 513/542-1500 WORK  //  [EMAIL PROTECTED]
> : : ::: Systems Administrator, Bertke Systems Innovations
> : :
> : :
> : : ----------------------------------------
> : :  to unsubscribe email "unsubscribe linux-admin" to
> : : [EMAIL PROTECTED]
> : :  See the linux-admin FAQ: http://www.kalug.lug.net/linux-admin-FAQ/
> : :
> :

--
 Mark Ferrell  : [EMAIL PROTECTED]
(972) 685-7868 : Desk
(972) 685-4210 : Lab
(972) 879-4326 : Pager





Re: Booting Root RAID 1 Directly _Is_ Possible

1999-08-24 Thread Mark Ferrell

Maybe you two could work together and make a RaidRoot-HOWTO that covers both lilo
and grub??
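
For the lilo side, my (possibly fuzzy) recollection of the disk= trick Harald
mentions below comes down to a lilo.conf stanza roughly like the following.
The device names are invented and the syntax is from memory, so check the
lilo.conf man page before trusting any of it:

    # Sketch only: map the md device onto the BIOS disk that holds the
    # first mirror half, so lilo can read the kernel straight off it.
    boot = /dev/md0
    root = /dev/md0

    disk = /dev/md0        # treat the mirror ...
        bios = 0x80        # ... as the first BIOS disk (in a RAID-1 set
                           # md0's sectors line up with /dev/sda1's)

    image = /boot/vmlinuz  # /boot lives inside the mirrored root here
        label = linux
        read-only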

--
 Mark Ferrell  : [EMAIL PROTECTED]

Andy Poling wrote:

> On 23 Aug 1999, Harald Nordgård-Hansen wrote:
> > James Manning <[EMAIL PROTECTED]> writes:
> > > > How come I've been running this for about a year and a half, then?
> > >
> > > I believe he's talking about not having to do *any* non-raid partitions
> > > (ie your /boot I believe, reading your lilo.conf)
> > > (...)
> > >
> > > Please enlighten if I missed the point in his, or your, posts.
> >
> > As I said, /boot resides inside my / raid0 set.  There is no need to
> > have non-raid disks when booting with lilo, all you have to do is tell
> > lilo how to access the underlying data from your raid-set, i.e. the
> > translation from /dev/md0 to /dev/sda1 in my case.  And using the disk
> > parameter of lilo, this is a fairly straight-forward task.
>
> I wish I'd seen that explained a lot sooner... it might have saved me a lot of
> trouble, because I certainly _tried_ to figure out how to get lilo to work
> that way.  I think "straightforward" might be an optimistic description of
> the difficulty level.  :-)
>
> > So again, using lilo and very little magic, there is absolutely no
> > need for a separate partition for /boot.
>
> I'm still going to stick with grub because it only needs to be installed once
> and then works forever, and it requires no "magic" after that - it just works.
>
> Several folks recommended that I write up a mini-howto on grub booting RAID
> 1, and that's what I'm going to do.
>
> Harald, it might help if you did the same thing for lilo booting RAID 1...
> and then folks can figure out how to use either method.
>
> -Andy



Re: Is the latest RAID stuff in 2.2.11-ac3 ?

1999-08-18 Thread Mark Ferrell

Oh yah,

Alan, the tulip drivers stopped functioning correctly for me in 2.2.12.  I
have a gut feeling it's not a kernel issue but I haven't had a chance to beat
on it.
Basic description:
Patched a freshly extracted 2.2.11 w/ the 2.2.12-final patch, copied my
/boot/config-2.2.11-smp from my previous kernel as .config in the source, and
did a make oldconfig.
Installed the kernel.  Everything came up correctly, including the tulip driver,
but the card wasn't capable of sending data to the network.  ifconfig showed
that it was receiving and sending packets, yet it wasn't.  Wish I could be more
descriptive but I am at work now and don't have any of my logs ..

Will get home, do a make mrproper, and try again to see if it still has
issues .. will drop you a log if all is not well.

--
 Mark Ferrell  : [EMAIL PROTECTED]

"Ferrell, Mark (EXCHANGE:RICH2:2K25)" wrote:

> Was playing w/ 2.2.12-final last night in alan's 2.2.12pre releases and it
> appears to fully support the newer raid source.
>
> --
>  Mark Ferrell  : [EMAIL PROTECTED]
>
> [EMAIL PROTECTED] wrote:
>
> > Is the latest (ie. 19990724) RAID stuff in 2.2.11-ac3 ?
> >
> > If not, what version of the RAID software does this
> > kernel correspond to?
> >
> > On a related issue, when will all the good stuff
> > like RAID and the large fdset patch make it into
> > the real kernel - I really need these, and they are
> > surely stable enough by now.
> >
> > Rich.
> >
> > --
> > [EMAIL PROTECTED] | Free email for life at: http://www.postmaster.co.uk/
> > BiblioTech Ltd, Unit 2 Piper Centre, 50 Carnwath Road, London, SW6 3EG.
> > +44 171 384 6917 | Click here to play XRacer: http://xracer.annexia.org/
> > --- Original message content Copyright © 1999 Richard Jones ---



Re: Is the latest RAID stuff in 2.2.11-ac3 ?

1999-08-18 Thread Mark Ferrell

Was playing w/ 2.2.12-final last night in Alan's 2.2.12pre releases and it
appears to fully support the newer raid source.

--
 Mark Ferrell  : [EMAIL PROTECTED]

[EMAIL PROTECTED] wrote:

> Is the latest (ie. 19990724) RAID stuff in 2.2.11-ac3 ?
>
> If not, what version of the RAID software does this
> kernel correspond to?
>
> On a related issue, when will all the good stuff
> like RAID and the large fdset patch make it into
> the real kernel - I really need these, and they are
> surely stable enough by now.
>
> Rich.
>
> --
> [EMAIL PROTECTED] | Free email for life at: http://www.postmaster.co.uk/
> BiblioTech Ltd, Unit 2 Piper Centre, 50 Carnwath Road, London, SW6 3EG.
> +44 171 384 6917 | Click here to play XRacer: http://xracer.annexia.org/
> --- Original message content Copyright © 1999 Richard Jones ---



Re: Question Re: Software Mirroring and the root partition

1999-07-22 Thread Mark Ferrell

I could be off my rocker, but I believe the answer you're looking for is the fact
that lilo cannot correctly use a raid device for booting.  Typically I would imagine
you would set up / as the mirror, and perhaps make a /boot that is its own non-raid
partition for kernel images and such; thus lilo would be able to handle the images
correctly.  This is something I have done for a raid5 / partition and it works
fine.

I am not aware of any other direct limitation, short of the aspect that lilo
simply doesn't understand raid .. but as long as you can get the kernel loaded
and the kernel is patched to auto-detect the raid arrays then all should be fine.

'Course .. like I said .. I could be off my rocker and there might be a
limitation that I am not aware of.
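
As a concrete sketch of the layout I mean (the device names, partition split
and lilo.conf fragment are made up for illustration, not copied from my real
config):

    # Partitioning sketch: everything mirrored except a small /boot.
    #   /dev/sda1 + /dev/sdb1  ->  /dev/md0  mounted as /   (raid1)
    #   /dev/sda2              ->  /boot     plain ext2, no raid
    #
    # lilo then only ever has to read the plain partition:
    boot = /dev/sda          # boot block goes on the first disk
    root = /dev/md0          # the kernel mounts the mirror as root
    image = /boot/vmlinuz    # kernel image sits on the non-raid /boot
        label = linux
        read-only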

Mark F.  [EMAIL PROTECTED]
Nortel Networks


Christopher A. Gantz wrote:

> Hello Everyone,
>
> First I'd like to apologize if this question has been asked before, however,
> I am trying to determine the specific reason why mirroring of the root (/)
> partition can not be done with software (i.e. host based) mirroring.  I've
> read (In an FAQ) that it has something to do with what both Lilo and Loadin
> expect from a root (/) partition and/or the boot block.  Therefore, I was
> hoping that someone could tell me the specifics of the problem as well as
> any other information about what can and can't be a mirrored partition.
>
> Also was wondering what was the status of providing RAID 1 + 0 functionality
> in software for Linux.
>
> Thanks in advance for any help
>
> -- Chris
>
> 
> ////
> // Christopher A. Gantz, MTSemail: [EMAIL PROTECTED]  //
> // Lucent Technologies, Bell Labs Innovation  //
> // Rm 37W07, 11900 N. Pecos St. voice: (303) 538-0101 //
> // Westminster, CO  80234 fax: (303) 538-3155 //
> ////
> 



Re: linux-raid 0.9 on SUSE-Linux 6.1

1999-07-09 Thread Mark Ferrell


I installed the raid 0.90 tools and installed and patched the 2.2.6
kernel.  Though I did it from source .. don't know if that matters.
'Course .. getting YaST to believe its root fs was /dev/md0 is a
completely different story.

Schackel, Fa. Integrata, ZRZ DA wrote:

> Hello everybody,
>
> I'm using SUSE Linux 6.1 with kernel 2.2.5
> Shipped with SUSE is mdtools 0.42.
>
> So I was loading the 0.90 rpm package.
> I installed it, and by calling any raid-tool
> I get a segmentation fault.
>
> Is there anybody who has managed the problem
> and could provide me any help ?
>
> Thx
> Barney