Re: limits on raid

2007-06-19 Thread Phillip Susi
[EMAIL PROTECTED] wrote: one channel, 2 OS drives plus the 45 drives in the array. Huh? You can only have 16 devices on a SCSI bus, counting the host adapter. And I don't think you can even manage that many reliably with the newer higher-speed versions, at least not without some very specia

Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-31 Thread Phillip Susi
Jens Axboe wrote: No, Stephan is right, the barrier is both an ordering and an integrity constraint. If a driver completes a barrier request before that request and all previously submitted requests are on STABLE storage, then it violates that principle. Look at the code and the various ordering options.
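As a minimal sketch of what that constraint looks like at the submission site, using the 2.6-era submit_bio(rw, bio) convention (illustrative only, not a complete driver):

#include <linux/bio.h>
#include <linux/fs.h>	/* WRITE_BARRIER = WRITE plus the BIO_RW_BARRIER bit */

static void submit_commit_block(struct bio *bio)
{
	/* Everything submitted before this bio must be on stable storage
	 * before it completes, and it must itself be stable before
	 * anything submitted after it: ordering AND integrity. */
	submit_bio(WRITE_BARRIER, bio);
}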

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-31 Thread Phillip Susi
David Chinner wrote: That sounds like a good idea - we can leave the existing WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED behaviour that only guarantees ordering. The filesystem can then choose which to use where appropriate. So what if you want a synchronous write, b
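A sketch of the proposal under discussion; WRITE_ORDERED and its bit are hypothetical names from the thread, not real kernel symbols:

/* Hypothetical ordering-only companion to WRITE_BARRIER.  A filesystem
 * that also needs durability would keep using WRITE_BARRIER, or wait
 * on completion explicitly. */
#define BIO_RW_ORDERED	6	/* made-up bit number */
#define WRITE_ORDERED	((1 << BIO_RW) | (1 << BIO_RW_ORDERED))

static void submit_ordered(struct bio *bio)
{
	/* ordered with respect to earlier writes, but completion would
	 * imply nothing about the platter */
	submit_bio(WRITE_ORDERED, bio);
}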

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-31 Thread Phillip Susi
David Chinner wrote: you are understanding barriers to be the same as synchronous writes (and therefore the data is on persistent media before the call returns). No, I'm describing the high-level behaviour that is expected by a filesystem. The reasons for this are below. You say no, but then

Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-30 Thread Phillip Susi
Phillip Susi wrote: Hrm... I may have misunderstood the perspective you were talking from. Yes, when the bio is completed it must be on the media, but the filesystem should issue both requests, and then really not care when they complete. That is to say, the filesystem should not wait for
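A sketch of that issue-both-and-don't-wait pattern (function and variable names are illustrative):

static void journal_commit(struct bio *commit, struct bio *next)
{
	submit_bio(WRITE_BARRIER, commit);	/* orders all prior writes */
	submit_bio(WRITE, next);		/* cannot pass the barrier */
	/* No waiting here: bi_end_io fires whenever the device finishes,
	 * and only fsync-like paths need to block on it. */
}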

Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-30 Thread Phillip Susi
Stefan Bader wrote: You've got a linear target that consists of two disks. One drive (a) supports barriers and the other one (b) doesn't. Device-mapper just maps the requests to the appropriate disk. Now the following sequence happens: 1. block x gets mapped to drive b 2. block y (with barrier) get
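One illustrative remedy, assuming the stacking driver counts in-flight bios (a sketch of the idea, not actual dm code): drain everything already mapped before forwarding the barrier, so block x on drive b cannot be passed by block y on drive a.

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(drain_wq);
static atomic_t in_flight = ATOMIC_INIT(0);	/* incremented per mapped bio,
						 * decremented in its endio */

static void forward_barrier(struct bio *barrier)
{
	/* block until every previously mapped bio has completed */
	wait_event(drain_wq, atomic_read(&in_flight) == 0);
	submit_bio(WRITE_BARRIER, barrier);	/* ordering now holds across
						 * both drives */
}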

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-30 Thread Phillip Susi
David Chinner wrote: Barrier != synchronous write. Of course. FYI, XFS only issues barriers on *async* writes. But barrier semantics - as far as they've been described by everyone but you - indicate that the barrier write is guaranteed to be on stable storage when it returns. Hrm... I may have

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-29 Thread Phillip Susi
David Chinner wrote: Sounds good to me, but how do we test to see if the underlying device supports barriers? Do we just assume that they do and only change behaviour if -o nobarrier is specified in the mount options? The idea is that ALL block devices will support barriers; if the underlying
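Filesystems of this era typically found out by trying. A sketch, with submit_and_wait() and disable_barriers() as hypothetical stand-ins:

static int write_commit_block(struct super_block *sb, struct bio *bio)
{
	int err = submit_and_wait(WRITE_BARRIER, bio);	/* hypothetical */

	if (err == -EOPNOTSUPP) {
		/* some layer under us cannot order the request:
		 * downgrade to a plain write and stop asking */
		disable_barriers(sb);			/* hypothetical */
		err = submit_and_wait(WRITE, bio);
	}
	return err;
}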

Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-29 Thread Phillip Susi
Neil Brown wrote: md/dm modules could keep count of requests as has been suggested (though that would be a fairly big change for raid0 as it currently doesn't know when a request completes - bi_endio goes directly to the filesystem). Are you sure? I believe that dm handles bi_endio becaus
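A sketch of that counting, loosely modeled on how dm clones bios and hooks their completion (bio_endio() signatures changed across 2.6; the two-argument form below is the later one):

static atomic_t pending = ATOMIC_INIT(0);

static void clone_endio(struct bio *clone, int error)
{
	struct bio *orig = clone->bi_private;

	atomic_dec(&pending);	/* the stacking driver observes every
				 * completion before the filesystem does */
	bio_endio(orig, error);	/* then pass it up */
	bio_put(clone);
}

A raid0 that remaps the original bio instead of cloning it never sees the completion, which is exactly the gap Neil describes.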

Re: [dm-devel] [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-25 Thread Phillip Susi
Neil Brown wrote: There is no guarantee that a device can support BIO_RW_BARRIER - it is always possible that a request will fail with EOPNOTSUPP. Why is it not the job of the block layer to translate for broken devices and send them a flush/write/flush? These devices would find it very
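The translation being asked for, sketched with issue_flush() and submit_and_wait() as hypothetical stand-ins for the block layer's cache-flush machinery:

static int emulate_barrier(struct block_device *bdev, struct bio *bio)
{
	int err = issue_flush(bdev);	/* drain prior writes from the
					 * write-back cache */
	if (!err)
		err = submit_and_wait(WRITE, bio);	/* the barrier payload */
	if (!err)
		err = issue_flush(bdev);	/* make the payload stable too */
	return err;
}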

Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-05-25 Thread Phillip Susi
Jens Axboe wrote: A barrier write will include a flush, but it may also use the FUA bit to ensure data is on platter. So the only situation where a fallback from a barrier to flush would be valid is if the device lied and told you it could do FUA but it could not, and that is the reason why the b
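In sketch form (WRITE_FUA is an illustrative name; kernels of this era carried FUA as a request-level flag rather than a bio flag):

static int barrier_with_fua(struct block_device *bdev, struct bio *bio)
{
	int err = issue_flush(bdev);	/* prior writes still need the
					 * pre-flush */
	if (!err)
		err = submit_and_wait(WRITE_FUA, bio);	/* the write itself
							 * bypasses the cache */
	return err;	/* no post-flush needed - unless the device lied
			 * about FUA, which is exactly the failure mode
			 * Jens describes */
}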

Re: raw I/O support for Fedora Core 4

2006-03-23 Thread Phillip Susi
The raw device driver is obsolete because it has been superseded by the O_DIRECT open flag. If you want dd to perform unbuffered I/O, pass the iflag=direct option for input or the oflag=direct option for output, and it will use O_DIRECT to bypass the buffer cache. This of course assumes
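A minimal userspace sketch of what those dd flags do; the 512-byte alignment below is an assumption, since the real requirement depends on the device's logical block size:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd = open("/dev/sda", O_RDONLY | O_DIRECT);

	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 512, 4096))	/* O_DIRECT needs an
						 * aligned buffer */
		return 1;
	ssize_t n = read(fd, buf, 4096);	/* bypasses the page cache
						 * entirely */
	free(buf);
	close(fd);
	return n < 0;
}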

Re: [PATCH 000 of 5] md: Introduction

2006-01-23 Thread Phillip Susi
Ville Herva wrote: PS: Speaking of debugging failing initrd init scripts; it would be nice if the kernel gave an error message on wrong initrd format rather than silently failing... Yes, I forgot to make the cpio with the "-H newc" option :-/. LOL, yea, that one got me too when I was first g

Re: [PATCH 000 of 5] md: Introduction

2006-01-19 Thread Phillip Susi
Neil Brown wrote: Maybe the problem here is thinking of md and dm as different things. Try just not thinking of them at all. Think about it like this: The linux kernel supports lvm. The linux kernel supports multipath. The linux kernel supports snapshots. The linux kernel supports raid0. The lin

Re: [PATCH 000 of 5] md: Introduction

2006-01-19 Thread Phillip Susi
Neil Brown wrote: The in-kernel autodetection in md is purely legacy support as far as I am concerned. md does volume detection in user space via 'mdadm'. What other "things like" were you thinking of? Oh, I suppose that's true. Well, another thing is your new mods to support on-the-fly r

Re: [PATCH 000 of 5] md: Introduction

2006-01-19 Thread Phillip Susi
I'm currently of the opinion that dm needs raid5 and raid6 modules added, then the userland lvm tools fixed to use them, and then you could use dm instead of md. The benefit is that dm pushes things like volume autodetection and management out of the kernel to user space, where it belongs.

Re: [PATCH 000 of 5] md: Introduction

2006-01-17 Thread Phillip Susi
Michael Tokarev wrote: Compare this with my statement about an "offline" "reshaper" above: a separate userspace program (easier to write/debug compared with kernel space) which operates on an inactive array (no locking needed, no need to worry about other I/O operations going to the array at the time