Hi,
On Wed, Jun 28, 2000 at 06:35:51PM +0200, Benno Senoner wrote:
> > As far as I know the issue has been fixed in 2.4.* kernel series.
> > ReiserFS and software RAID5 is NOT safe in 2.2.*
>
> but Stephen Tweedie (some time ago) pointed out that
> the only way to make a software raid system
Hi,
On Thu, Mar 30, 2000 at 11:13:13PM +0200, Thomas Kotzian wrote:
> There was a discussion about LVM, reiserfs, ..., and I need the URL or the address
> for the mailing list for fs-devel, the filesystem development group.
[EMAIL PROTECTED]
--Stephen
Hi,
Chris Wedgwood writes:
> > This may affect data which was not being written at the time of the
> > crash. Only raid 5 is affected.
>
> Long term -- if you journal to something outside the RAID5 array (i.e.
> to raid-1 protected log disks) then you should be safe against this
> type of
Hi,
Benno Senoner writes:
> wow, really good idea to journal to a RAID1 array!
>
> do you think it is possible to do the following:
>
> - N disks holding a soft RAID5 array.
> - reserve a small partition on at least 2 disks of the array to hold a RAID1
> array.
> - keep the journal o
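A rough sketch of what Benno is describing, in raidtools-0.90 /etc/raidtab syntax (device names, partition layout and sizes here are invented for illustration, and this is not a tested configuration): a small RAID-1 built from spare partitions on two of the RAID-5 member disks, which a journaling filesystem could then use as an external log device.

    # /etc/raidtab (sketch only)

    # main data array: RAID-5 over the large partitions
    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              64
        parity-algorithm        left-symmetric
        device                  /dev/sda2
        raid-disk               0
        device                  /dev/sdb2
        raid-disk               1
        device                  /dev/sdc2
        raid-disk               2

    # small RAID-1 from spare partitions on two of the same disks,
    # intended to hold the journal outside the RAID-5 set
    raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1

Whether a given journaling filesystem can actually place its log on a separate device such as /dev/md1 depends on that filesystem's external-journal support, which is exactly what this thread is discussing.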
Hi,
On Wed, 12 Jan 2000 22:09:35 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
> Sorry for my ignorance I got a little confused by this post:
> Ingo said we are 100% journal-safe, you said the contrary,
Raid resync is safe in the presence of journaling. Journaling is not
safe in the presence
Hi,
On Wed, 12 Jan 2000 11:28:28 MET-1, "Petr Vandrovec"
<[EMAIL PROTECTED]> said:
> I did not follow this thread (on -fsdevel) too close (and I never
> looked into RAID code, so I should shut up), but... can you
> confirm that after buffer with data is finally marked dirty, parity
> is recomp
Hi,
On Wed, 12 Jan 2000 07:21:17 -0500 (EST), Ingo Molnar <[EMAIL PROTECTED]>
said:
> On Wed, 12 Jan 2000, Gadi Oxman wrote:
>> As far as I know, we took care not to poke into the buffer cache to
>> find clean buffers -- in raid5.c, the only code which does a find_buffer()
>> is:
> yep, this i
Hi,
On Tue, 11 Jan 2000 16:41:55 -0600, "Mark Ferrell"
<[EMAIL PROTECTED]> said:
> Perhaps I am confused. How is it that a power outage while attached
> to the UPS becomes "unpredictable"?
One of the most common ways to get an outage while on a UPS is somebody
tripping over, or otherwise r
Hi,
On Wed, 12 Jan 2000 00:12:55 +0200 (IST), Gadi Oxman
<[EMAIL PROTECTED]> said:
> Stephen, I'm afraid that there are some misconceptions about the
> RAID-5 code.
I don't think so --- I've been through this with Ingo --- but I
appreciate your feedback since I'm getting inconsistent advice here
Hi,
On Tue, 11 Jan 2000 15:03:03 +0100, mauelsha
<[EMAIL PROTECTED]> said:
>> THIS IS EXPECTED. RAID-5 isn't proof against multiple failures, and the
>> only way you can get bitten by this failure mode is to have a system
>> failure and a disk failure at the same time.
> To try to avoid this k
Hi,
On Tue, 11 Jan 2000 20:17:22 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
> Assume all RAID code - FS interaction problems get fixed, since a
> linux soft-RAID5 box has no battery backup, does this mean that we
> will lose data ONLY if there is a power failure AND successive disk
> failur
Hi,
This is a FAQ: I've answered it several times, but in different places,
so here's a definitive answer which will be my last one: future
questions will be directed to the list archives. :-)
On Tue, 11 Jan 2000 16:20:35 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
>> then raid can miscalcul
Hi,
On Fri, 07 Jan 2000 13:26:21 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
> what happens when I run RAID5 + journaled FS and the box is just writing
> data to the disk and then a power outage occurs?
> Will this lead to a corrupted filesystem or will only the data which
> was just written,
Hi,
On Mon, 6 Dec 1999 20:17:12 +0100, Luca Berra <[EMAIL PROTECTED]> said:
> do you mean that the problem arises ONLY, when a disk fails and has to
> be reconstructed?
No, it can happen any time the kernel does a resync after an unclean
shutdown.
--Stephen
Hi,
On Mon, 6 Dec 1999 16:11:14 -0500 (EST), Andy Poling
<[EMAIL PROTECTED]> said:
> On Mon, Dec 06, 1999 at 02:53:22PM +, Stephen C. Tweedie wrote:
>> Sorry, but since then we did find a fault. Raid resync goes through the
>> buffer cache. Swap bypasses the buffe
Hi,
On Fri, 26 Nov 1999 18:04:27 +0100, Martin Bene <[EMAIL PROTECTED]> said:
> At 11:35 25.11.99 +0100, Thomas Waldmann wrote:
>> What's more interesting for me: how about swap on RAID-5 ?
> Personally, I've only used raid1, but I can give you a quote from Ingo - and
> he should know:
> At 14:
Hi,
On Tue, 26 Oct 1999 11:42:41 -0400 (EDT), David Holl
<[EMAIL PROTECTED]> said:
> would specifying differing input & output block sizes with dd help?
Unfortunately not, no. The underlying device blocksize is set when the
device is first opened.
--Stephen
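For reference, this is the kind of invocation being asked about (device names are placeholders): dd's ibs/obs options only control the size of dd's own user-space reads and writes, and as noted above they cannot change the blocksize the kernel assigned to the underlying device when it was first opened.

    # differing input/output block sizes in dd (illustrative only)
    dd if=/dev/md0 of=/dev/nst0 ibs=4k obs=64k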
Hi,
On Tue, 19 Oct 1999 20:12:20 -0700, "Tom Livingston" <[EMAIL PROTECTED]>
said:
>> Has anyone else tried raw-io with md devices? It works for me but the
>> performance is quite bad.
> This is a recently reported issue on the linux-kernel mailing list.
> The gist of it is that rawio is usin
Hi,
On Wed, 20 Oct 1999 13:12:23 +0400, Hans Reiser <[EMAIL PROTECTED]>
said:
> We don't have inodes in our FS, but we do have stat data, and that
> is dynamically allocated (dynamic per FS, not per file yet, soon but
> not yet each field will be optional and inheritable per file).
> Does XFS d
Hi,
On Thu, 14 Oct 1999 09:22:25 -0700, Thomas Davis <[EMAIL PROTECTED]>
said:
> I don't know of any Unix FS with dynamic inode allocation.. Is there
> one?
Reiserfs does, doesn't it?
--Stephen
Hi,
On Mon, 11 Oct 1999 17:02:27 -0500, Stephen Waters <[EMAIL PROTECTED]>
said:
> This blurb in the latest Kernel Traffic has some status information on
> ext3 and ACLs that might be relevant. 12-18mo for a really stable
> version, but version 0.02 is supposed (maybe already) to be out very
> s
Hi,
On Mon, 11 Oct 1999 16:58:46 -0400, Tom Kunz <[EMAIL PROTECTED]> said:
> Hmm, well GFS isn't exactly an improvement on NBD, it's more like an
> entirely different filesystem type.
GFS is a shared disk filesystem. It doesn't care how the disk is
shared, and one of the side projects
Hi,
On Mon, 11 Oct 1999 13:55:23 -0400, Tom Kunz <[EMAIL PROTECTED]> said:
> Stephen (and others who might know),
> Are there homepages and/or mailing lists for these teams? I would be
> highly interested in participating...
One is the GFS team at http://gfs.lcse.umn.edu/. The other has
Hi,
On Thu, 7 Oct 1999 01:59:31 -0500, [EMAIL PROTECTED]
(G.W. Wettstein) said:
>> If this works, you can also add a third machine and make a threefold
>> raid1 for added HA. Curious myself if this would work. Unfortunately
>> cannot test this myself.
> This strategy for doing HA has interested
Hi,
On Thu, 29 Jul 1999 09:38:20 -0700, Carlos Hwa <[EMAIL PROTECTED]>
said:
> I have a 2 disk raid0 with 32k chunk size using raidtools 0.90 beta10
> right now, and have applied stephen tweedie's raw i/o patch. the raw io
> patch works fine with a single disk but if i try to use raw io on
> /de
Hi,
On Wed, 14 Jul 1999 14:56:35 +0100 (BST), A James Lewis
<[EMAIL PROTECTED]> said:
> I have heard hints on this list that there is some lingering filesystem
> stability problem with 2.2.10, can anyone fill me in as to what the
> situation is and if it's been fixed?
We don't know of any corru
Hi,
On Mon, 26 Apr 1999 21:28:20 +0100 (IST), Paul Jakma <[EMAIL PROTECTED]>
said:
> it was close between 32k and 64k. 128k was noticeably slower (for
> bonnie) so i didn't bother with 256k.
Fine, but 128k will be noticeably faster for some other tasks. Like I
said, it depends on whether you p
Hi,
On Thu, 22 Apr 1999 20:45:52 +0100 (IST), Paul Jakma <[EMAIL PROTECTED]>
said:
> I tried this with raid0, and if bonnie is any guide, the optimal
> configuration is 64k chunk size, 4k e2fs block size.
Going much above 64k will mean that readahead has to work very much
harder to keep all t
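As a purely illustrative example of that combination: a 64k chunk in the raidtab plus mke2fs with a 4k block size. mke2fs of this era also accepts a RAID stride option, conventionally set to chunk-size divided by block size, so that ext2 spreads its block and inode bitmaps across the member disks; whether that matters is, as above, workload-dependent.

    # in /etc/raidtab, for the raid0 device:
    #     chunk-size    64
    #
    # then make the filesystem with 4k blocks; stride = 64k / 4k = 16
    mke2fs -b 4096 -R stride=16 /dev/md0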
Hi,
On Sat, 24 Apr 1999 21:09:05 +0200 (MEST), Francisco Jose Montilla
<[EMAIL PROTECTED]> said:
> Hi, I happened to come across a couple of statements that somewhat
> involve the use of RAID, statements that I believe are not absolutely
> correct, if not false, or half-truths.
> -
Hi,
On Tue, 20 Apr 1999 22:32:15 -0700 (PDT), Michael
<[EMAIL PROTECTED]> said:
> Not entirely true. Leave the original data where it is. Build a degraded
> raid array on the new disk(s) and copy the data over from the old disk.
> Reconfigure to use the new degraded raid array, then hot add the
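A hedged sketch of that migration path with raidtools 0.90 (all device names are invented, and a two-disk RAID-1 is used as the simplest shape of the trick): mark the not-yet-available member as failed-disk in the raidtab so mkraid builds the array degraded, copy the data across, switch over, then pull the old disk in with raidhotadd and let the resync rebuild the mirror.

    # /etc/raidtab fragment: a two-disk RAID-1 started in degraded mode
    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sdb1      # the new disk
        raid-disk               0
        device                  /dev/sda1      # old data disk, not yet in the array
        failed-disk             1

    mkraid /dev/md0                 # builds the degraded array on the new disk only
    mke2fs /dev/md0
    mount /dev/md0 /mnt
    # ... copy the data from the old disk to /mnt, repoint /etc/fstab ...
    raidhotadd /dev/md0 /dev/sda1   # finally add the old disk; the array resyncs onto it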
Hi,
On Sat, 17 Apr 1999 16:22:59 -0400 (EDT), "m. allan noah"
<[EMAIL PROTECTED]> said:
> have you ACTUALLY used grub to boot off of raid1? I don't see how grub is
> capable. It would have to be able to read the md device. Prove me wrong,
> please.
raid-1 has the property that the raid superblock
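The reason this can work at all is that the raid-1 persistent superblock sits at the end of each member, so to a boot loader each half of the mirror looks like an ordinary filesystem. One way people handle it in practice (a sketch, not a recipe, with made-up disk and partition numbers) is to install grub onto both disks so that either one can boot on its own:

    # in the grub shell, install the boot loader on both halves of the mirror
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> root (hd1,0)
    grub> setup (hd1)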
Hi,
On Fri, 16 Apr 1999 20:26:36 +0100, "Jim Ford"
<[EMAIL PROTECTED]> said:
> I am running out of space on my root device and am thinking of adding
> another scsi disk using Raid - linear or 0 (whichever is the
> easiest!). Is software Raid as fearsome as all the docs I read
> suggest? Ideally,
Hi,
On Wed, 14 Apr 1999 21:59:49 +0100 (BST), A James Lewis <[EMAIL PROTECTED]>
said:
> Wasn't it a month ago that this was not possible because it needed to
> allocate memory for the raid and couldn't because it needed to swap to
> do it? Was I imagining this or have you guys been working too
Hi,
On Wed, 14 Apr 1999 15:32:40 -0400, "Joe Garcia" <[EMAIL PROTECTED]> said:
> Swapping to a file should work, but if I remember correctly you get
> horrible performance.
Swap-file performance on 2.2 kernels is _much_ better.
--Stephen
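For completeness, a minimal sketch of setting up a swap file (standard commands; the path and size are arbitrary):

    # create a 128 MB swap file, initialise it, and enable it
    dd if=/dev/zero of=/swapfile bs=1024 count=131072
    mkswap /swapfile
    swapon /swapfile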
Hi,
On 15 Apr 1999 00:13:48 -, <[EMAIL PROTECTED]> said:
> AFAIK, the swap code uses raw file blocks on disk, rather than passing
> through to vfs, because you don't want to cache swap accesses, think
> about it :)
Sort of correct. It does bypass most of the VFS, but it does use the
standard
Hi,
> The only place I would even imagine this would be possible would be in
> the mode pages, but my recollection of the SCSI standard says that all
> of these modes pages are read only. :(
IIRC there are some writable fields in some drives to allow you to set
caching/writeback behaviour, for e
Hi,
On Tue, 06 Apr 1999 10:39:21 +0100, Richard Jones
<[EMAIL PROTECTED]> said:
> Can it be a cabling fault? I thought that SCSI had parity
> checking.
Yes, it does, and all _decent_ scsi cards support it. Some old ones do
not (especially ISA cards).
--Stephen
Hi,
On Mon, 29 Mar 1999 11:28:25 +0100, Richard Jones
<[EMAIL PROTECTED]> said:
> Not so fast there :-)
> In the stress tests, I've encountered almost silent
> filesystem corruption. The filesystem reports errors
> as attached below, but the file operations continue
> without error, corrupting
Hi,
On Sun, 28 Mar 1999 15:27:26 -0500 (EST), Laszlo Vecsey
<[EMAIL PROTECTED]> said:
> Isn't there room in the raid header for an additional flag to mark the
> 'partition' type? I realize this might require a 'mkraid --upgrade' to be
> run, but at least the 'partitions' could then be detected an
Hi,
On Fri, 19 Mar 1999 15:05:18 -0800 (PST), Dan Hollis
<[EMAIL PROTECTED]> said:
> On Fri, 19 Mar 1999, Stephen C. Tweedie wrote:
>> SGI has been making some very linux-friendly noises recently, so maybe
>> this is a possibility. I certainly hope so. However, rig
Hi,
On Fri, 19 Mar 1999 01:42:07 +0100 (MET), Senoner Benno
<[EMAIL PROTECTED]> said:
> how about asking SGI if they can contribute to the journaling FS,
> since they want to give some parts of IRIX (which has a journaling FS)
> to the open source community?
SGI has been making some very linux
Hi,
On Wed, 10 Mar 1999 16:07:57 + (GMT), A James Lewis <[EMAIL PROTECTED]>
said:
> About 3 weeks ago Stephen sent a message to the kernel mailing list that
> suggested that although it's mostly complete it's not actually
> functional yet (or wasn't) so don't expect "stable" patches or a
Hi,
On Wed, 10 Mar 1999 11:49:56 AST, [EMAIL PROTECTED] said:
> anyone care to guess how stable/useable this new ext2/ext3 filesystem is?
s/is/will be/
> Where are the patches available, and has anyone actually used it
> successfully?
The only patches around right now include demonstration co
Hi,
On Sat, 13 Feb 1999 18:14:14 -0500, Michael Stone
<[EMAIL PROTECTED]> said:
> On Wed, Feb 10, 1999 at 09:43:12AM -0600, Chris Price wrote:
>> Instead of pointing fingers at Redhat, I would ask if there is
> someone within the Linux-raid community that actively corresponds with
>> redhat to le
Hi,
On Thu, 11 Feb 1999 17:08:48 -0500 (EST), Billy Harvey
<[EMAIL PROTECTED]> said:
> I'm new to raid discussion. Why would you expect a 60 minute fsck
> every time? Would the boot up not skip that if the shutdown was clean?
> Journaling seems like a complicated solution to save the time of an
Hi,
On Fri, 12 Feb 1999 00:02:02 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
> Stephen: do you have an estimated time for when journaling will be
> ready for use, even as an alpha patch?
Hopefully two to three months for demonstration code. It should be
usable this summer some time.
> do y
Hi,
On Thu, 11 Feb 1999 11:04:37 -0800 (PST), Dan Hollis
<[EMAIL PROTECTED]> said:
> I assume the next filesystem (ext3?) will support journaling?
> Hope, hope?
Yes. ext3 came _this_ close "><" to finishing its first transaction
commit yesterday, but there's something in the buffer setup which
Hi,
On Thu, 11 Feb 1999 09:00:20 +0100, Benno Senoner <[EMAIL PROTECTED]>
said:
> can someone please explain what journaling precisely does, (is this
> a sort of mechanism, which leaves the filesystem in a consistent
> status, even in case of disk write interruption, due to power loss
> or other
Hi,
On Tue, 9 Feb 1999 13:31:14 +0100 (CET), MOLNAR Ingo
<[EMAIL PROTECTED]> said:
> Stephen Tweedie is working on the journalling extensions. [not sure what
> the current status is, he had a working prototype end of last year.]
I had journaling and buffer commit code, but not any filesystem
pe
Hi,
On Mon, 8 Feb 1999 14:14:28 -0800 (PST), [EMAIL PROTECTED] (Alvin
Oga) said:
> I have a hardware raid controller running off a P2-200
> with 64Gb of disk...99% full... and it takes
> about 45 min to e2fsck it when it goes down dirty...
> and takes about 10 min to mount it if it's clean
The
Hi,
On Mon, 8 Feb 1999 14:14:28 -0800 (PST), [EMAIL PROTECTED] (Alvin
Oga) said:
> hi benno
> I have a hardware raid controller running off a P2-200
> with 64Gb of disk...99% full... and it takes
> about 45 min to e2fsck it when it goes down dirty...
> and takes about 10 min to mount it if it's
Hi,
On Thu, 28 Jan 1999 18:56:48 -0800, "David S. Miller"
<[EMAIL PROTECTED]> said:
> You need to start using data at cylinder 1 on all disks or it will get
> nuked. It doesn't happen on the first disk because ext2 skips some
> space at the beginning of the volume.
> Swap space has the same pr
Hi,
On 07 Nov 1998 16:30:29 +0200, Osma Ahvenlampi <[EMAIL PROTECTED]> said:
> Conventional server setup wisdom says to partition /, /usr, /var,
> /home and perhaps /var/spool separately. However, how does the RAID-5
> subsystem perform when multiple md devices are configured to span the
> same
Hi,
On 05 Nov 1998 17:48:41 +0200, Osma Ahvenlampi <[EMAIL PROTECTED]> said:
> I'm trying to back up a 14GB RAID-5 array (19981005 snapshot
> drivers/tools) to a 12/24GB DAT drive using dump. Here's what happens:
> running /sbin/dump 0ufbB /dev/nst0 32 18874368 /raid
...
> DUMP: bread: lseek
Hi,
On Fri, 06 Nov 1998 16:44:56 -0400 (EST), Dave Wreski <[EMAIL PROTECTED]>
said:
>> Me, for a start! I found it very useful to be able to combine together
>> a few scraps of spare space on a number of mounted disks to create a
>> scratch partition of useful size.
> Why not use striping inst
Hi,
On Fri, 6 Nov 1998 15:59:04 +0100, Luca Berra <[EMAIL PROTECTED]> said:
>> > freeze the log, backup, unfreeze like vxfs (Veritas) does
>>
>> A lfs does make this easier, yes, but there are other ways to do it. In
>> particular, you can achieve the same effect at the block device level if
>
Hi,
On Wed, 4 Nov 1998 20:15:15 +0100, Luca Berra <[EMAIL PROTECTED]> said:
> On Mon, Nov 02, 1998 at 01:24:39PM -0500, Jorj Bauer wrote:
>> useful. On a busy mail machine, it's difficult to get a static backup of
>> the contents of /var/spool/mail. If the raid tools supported the ability
>> to
Hi,
On Mon, 2 Nov 1998 22:03:34 +0100 (CET), MOLNAR Ingo
<[EMAIL PROTECTED]> said:
> On Mon, 2 Nov 1998, Jorj Bauer wrote:
>> Are there any plans for the ability to add and remove mirrors on the fly?
> echo "scsi remove-single-device 0 0 3 0" >/proc/scsi/scsi
> (the numbers identify the SCSI
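Put together with the raidtools side (command names from raidtools 0.90; the md device and SCSI numbers are placeholders), the on-the-fly swap of a mirror looks roughly like this sketch:

    raidhotremove /dev/md0 /dev/sdc1                            # drop the mirror from the md device
    echo "scsi remove-single-device 0 0 3 0" >/proc/scsi/scsi   # detach the disk from the SCSI layer
    # ... physically swap the drive ...
    echo "scsi add-single-device 0 0 3 0" >/proc/scsi/scsi      # re-probe the new disk
    raidhotadd /dev/md0 /dev/sdc1                               # re-add it; the RAID-1 resyncs onto it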
Hi,
On Sun, 18 Oct 1998 12:05:11 +0100, "Johan Gronvall" <[EMAIL PROTECTED]>
said:
> I'm new to this list so please bear with me if I ask stupid questions.
> I'm looking for a kind of linear solution. I have however got the
> impression that you can only 'concatenate' 2 disks or partitions to
>
Hi,
On Sun, 18 Oct 1998 15:55:35 +0200 (CEST), MOLNAR Ingo
<[EMAIL PROTECTED]> said:
> On Sun, 18 Oct 1998, Tod Detre wrote:
>> in 2.1 kernels you can make nfs a block device. raid can work with block
>> devices so if you raid5 several nfs computers one can go down, but you
>> still can go on.
Hi,
On Sun, 18 Oct 1998 23:42:39 +, "Adam Williams"
<[EMAIL PROTECTED]> said:
> Any pointers on where to get docs for this setup?
linux/Documentation/nbd.txt (surprise!) documents network block
devices. The fact that raid may be running on nbd doesn't affect the
upper raid stuff at all.
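A minimal sketch of that setup, assuming the classic nbd-server/nbd-client userland and old-style /dev/nd* device nodes (hostnames, port numbers and paths are illustrative): export a file or block device from each remote machine, attach the exports locally over NBD, then build the md array on the local nbd devices exactly as if they were disks.

    # on each remote machine: export a pre-created file (or block device) on TCP port 1234
    nbd-server 1234 /export/nbd-image

    # on the raid host: attach the exports as local block devices
    nbd-client serverA 1234 /dev/nd0
    nbd-client serverB 1234 /dev/nd1

    # then list /dev/nd0, /dev/nd1, ... as ordinary "device" entries in
    # /etc/raidtab and run mkraid as usual; the raid code never knows
    # the members are remote.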
Hi,
On Thu, 15 Oct 1998 13:06:47 +1200, Cameron Hart
<[EMAIL PROTECTED]> said:
> Hi there, the subject line may be somewhat misleading - my question is
> more to do with quotas than RAID. Perhaps someone can help me anyway...
> Where I am working we are giving design students large amount of di
Hi,
On Thu, 8 Oct 1998 18:53:08 +0300 (EEST), Matti Aarnio
<[EMAIL PROTECTED]> said:
>> > Do several parallel writes, and then start 2 or 3 parallel
>> > (f)syncs. If you do one, it completes rather rapidly, but
>> > two in parallel is bad medicine, and three is, well ..