Patrik Rådman wrote:
>
> /dev/md0: Invalid argument
> #
>
> Looking at an strace, this seems to be the problem:
>
> open("/dev/md0", O_RDWR) = 4
> ioctl(4, 0x400c0930, 0xb61c) = -1 EINVAL (Invalid argument)
Are you sure you're running the patched kernel? You usually
Hello.
I have a stock Red Hat 6.0 upgraded to kernel 2.2.13, and the RAID works fine.
However, during reboot the /dev/mdX partitions can't be fsck'd
because raidstart fails to bring them up.
How do I configure my init.d scripts to invoke raidstart as appropriate?
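A minimal sketch of the kind of init fragment this needs (the script name and its placement are assumptions; the point is that raidstart must run before the fsck step in rc.sysinit):

```shell
#!/bin/sh
# /etc/rc.d/init.d/raid-start (hypothetical name): bring up all md
# devices listed in /etc/raidtab before the boot-time fsck runs.
case "$1" in
  start)
    if [ -f /etc/raidtab ]; then
        echo "Starting RAID devices from /etc/raidtab"
        raidstart -a || echo "raidstart failed; check /proc/mdstat"
    fi
    ;;
  stop)
    raidstop -a
    ;;
esac
```

On Red Hat 6.x the usual alternative is to build the arrays with persistent superblocks so the kernel autostarts them at boot, which makes an explicit raidstart call unnecessary.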
STeve
Hi,
I'm trying to create a linear RAID of two loopback devices. The code in the
standard 2.2.13 kernel acted very strangely (sometimes it worked, mostly it
didn't), so I downloaded the raid0145-19990824-2.2.11 patch, and raidtools of
the same date.
However, I can't create /dev/md0 with the new r
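For reference, a raidtab for a linear array over two loop devices would look roughly like this (device paths are assumptions, and the loop devices must already be attached with losetup):

```
# /etc/raidtab -- linear array over two loop devices (sketch)
raiddev /dev/md0
    raid-level              linear
    nr-raid-disks           2
    chunk-size              32
    persistent-superblock   1
    device                  /dev/loop0
    raid-disk               0
    device                  /dev/loop1
    raid-disk               1
```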
Hi,
On Mon, 6 Dec 1999 20:17:12 +0100, Luca Berra <[EMAIL PROTECTED]> said:
> do you mean that the problem arises ONLY, when a disk fails and has to
> be reconstructed?
No, it can happen any time the kernel does a resync after an unclean
shutdown.
--Stephen
Hi,
On Mon, 6 Dec 1999 16:11:14 -0500 (EST), Andy Poling
<[EMAIL PROTECTED]> said:
> On Mon, Dec 06, 1999 at 02:53:22PM +, Stephen C. Tweedie wrote:
>> Sorry, but since then we did find a fault. Raid resync goes through the
>> buffer cache. Swap bypasses the buffer cache. There is no cohe
On Mon, Dec 06, 1999 at 02:53:22PM +, Stephen C. Tweedie wrote:
> Sorry, but since then we did find a fault. Raid resync goes through the
> buffer cache. Swap bypasses the buffer cache. There is no coherency
> between the two activities. It is possible for raid1 and raid5
> background resy
Short and simple version of my previous long-winded message: I can't
auto-mount my RAID-1 at bootup, because the rounding done by mkraid means
that the filesystem size doesn't agree with the partition size, which
causes the boot-time fsck to fail with an error. Is there a fix or
workaround? RedHa
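One common workaround (assuming the array itself is healthy and only the size check trips fsck) is to disable the boot-time check for that filesystem in /etc/fstab by setting its fsck pass number to 0; mount point and fs type below are assumptions:

```
# /etc/fstab -- a 0 in the last field disables the boot-time fsck
/dev/md0    /data    ext2    defaults    0 0
```

The cleaner fix is to shrink the filesystem so its size matches what mkraid left after rounding, if a sufficiently recent e2fsprogs with resize2fs is available.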
On 6 Dec 1999 07:03:18 -0800, Stephen C. Tweedie <[EMAIL PROTECTED]> wrote:
>Sorry, but since then we did find a fault. Raid resync goes through the
>buffer cache. Swap bypasses the buffer cache. There is no coherency
>between the two activities. It is possible for raid1 and raid5
>background
On Mon, Dec 06, 1999 at 12:56:29PM -0500, [EMAIL PROTECTED] wrote:
> I've had very little trouble with the old user-space NFS code, but
> recently I setup a Red Hat 6.1 system using knfsd and have been having
> some trouble in a setup where the 6.1 box is the server and several older
> (2.0.x kern
A few notes:
1) We have been using Linux NFS here for years. Once you have it running, it
appears to be rock-solid. We've even NFS-mounted a filesystem from another
server and then shared it out over Samba to client workstations. This was
a production configuration and it lasted for years, until
On Mon, 6 Dec 1999, Dong Hu wrote:
> We need about 200G of hard disk space for a software development
> environment. I am considering using a linux-raid5 configuration.
> We will use NFS to share the disk on a 100 Mbit Ethernet LAN.
>
> My concern is, how stable and reliable are linux NFS and raid5?
> Any
Just to verify - would you like to create a RAID-5 on one machine and share
it via NFS, or create a RAID-5 out of other machines'
drives shared via NFS? (The former is very reliable, but the latter is a
little far-fetched, though it may be technically possible.)
My impressions of Raid a
Would you happen to have a raid0145-19990824 patch that applies cleanly against
a 2.0.38 kernel? I've been trying to do that for some time now with no
success. Just using the older tools with an unpatched 2.0.38 kernel works OK
but . . .
Just hoping - cheers
Alex Vandenham
Avantel Systems
Ottawa,
Hi,
We need about 200G of hard disk space for a software development
environment. I am considering using a linux-raid5 configuration.
We will use NFS to share the disk on a 100 Mbit Ethernet LAN.
My concern is, how stable and reliable are linux NFS and raid5?
Any experience of using this in a production en
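For scale, 200G in this era means several large drives; a raidtab for a four-disk RAID-5 set would look roughly like this (the sdb..sde partition names are assumptions):

```
# /etc/raidtab -- four-disk RAID-5 sketch; device names hypothetical
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           4
    nr-spare-disks          0
    chunk-size              32
    parity-algorithm        left-symmetric
    persistent-superblock   1
    device                  /dev/sdb1
    raid-disk               0
    device                  /dev/sdc1
    raid-disk               1
    device                  /dev/sdd1
    raid-disk               2
    device                  /dev/sde1
    raid-disk               3
```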
On Dec 5, 12:01pm, Danilo Godec wrote:
} Subject: Re: Raid with new kernel
Good morning to everyone.
> On Sun, 5 Dec 1999, ACEAlex wrote:
>
> > the 2.2.13 kernel, which is the latest stable. But when I try to start using
> > it I get different startup screens (see below). Do I have to patch th
Hi,
On Fri, 26 Nov 1999 18:04:27 +0100, Martin Bene <[EMAIL PROTECTED]> said:
> At 11:35 25.11.99 +0100, Thomas Waldmann wrote:
>> What's more interesting for me: how about swap on RAID-5 ?
> Personally, I've only used raid1, but I can give you a quote from Ingo - and
> he should know:
> At 14:
[EMAIL PROTECTED] wrote:
> Is there some good documentation about raid and ide? I am using the
> raid0145-19990824-2.2.11 patch at the moment and although they work
> fine am wondering if performance is enhanced with the ide patches
> since all my drives are ide (I know don't you think i WANT S
> From: root [mailto:[EMAIL PROTECTED]]
> Sent: Monday, December 06, 1999 12:04 AM
> I am a little confused about how to set up raid1.
> I want to mirror /dev/sda to /dev/sdb (two 9 gig SCSI disks)
> The example says to have raidtab say /dev/sda1 and /dev/sdb1
> Does this mirror the whole d
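The raidtab in the example is per-partition: it mirrors /dev/sda1 onto /dev/sdb1, not the whole disks. A sketch of that stanza (names taken from the question):

```
# /etc/raidtab -- RAID-1 mirror of sda1 onto sdb1 (sketch)
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

To mirror the whole 9 gig disks, either use one big partition per disk or repeat this stanza for each partition pair (md0, md1, ...).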
I get the following message after doing mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sda1, 24066kB, raid superblock at 24000kB
/dev/sda1 is mounted
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
There are no clues.
I am a little confused a
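The clue is in the output itself: mkraid refuses to overwrite a mounted partition. The usual sequence (destructive to /dev/sda1, shown only as a sketch) is:

```
# /dev/sda1 is mounted, so mkraid aborts rather than overwrite it.
umount /dev/sda1      # unmount first (boot from elsewhere if it is /)
mkraid /dev/md0       # now the superblock can be written
cat /proc/mdstat      # verify the array came up
```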