[EMAIL PROTECTED] wrote:
one channel, 2 OS drives plus the 45 drives in the array.
Huh? You can only have 16 devices on a SCSI bus, counting the host
adapter. And I don't think you can even manage that many reliably with
the newer higher-speed versions, at least not without some very specia
Jens Axboe wrote:
No, Stefan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violates that principle. Look at the code and the various ordering
options.
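For reference, "the various ordering options" are what a low-level driver declares when it registers its queue; a rough sketch of the 2.6.x interface from memory (blk_queue_ordered() and the QUEUE_ORDERED_* modes - check the exact names against the tree):

#include <linux/blkdev.h>

/* Driver-specific hook: turn rq into a cache-flush command for the device. */
static void example_prepare_flush(struct request_queue *q, struct request *rq)
{
	/* e.g. build a SYNCHRONIZE CACHE command here */
}

static void example_declare_barriers(struct request_queue *q)
{
	/*
	 * DRAIN_FLUSH: drain the queue, flush the write cache, do the
	 * barrier write, flush again.  A barrier bio therefore only
	 * completes once it and everything queued before it is on stable
	 * storage - the integrity half of the constraint described above.
	 */
	blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, example_prepare_flush);
}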
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour that only guarantees ordering. The filesystem can then
choose which to use where appropriate
So what if you want a synchronous write, b
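To make the proposed split concrete, a sketch (WRITE_BARRIER is real; BIO_RW_ORDERED/WRITE_ORDERED are hypothetical and shown only to illustrate the idea):

/* Today: one flag meaning "ordered AND on stable storage at completion". */
submit_bio(WRITE_BARRIER, commit_bio);

/*
 * Proposal: keep WRITE_BARRIER as-is and add an ordering-only variant so
 * a filesystem that only needs ordering can skip the flush/FUA cost:
 *
 *	submit_bio(WRITE | (1 << BIO_RW_ORDERED), ordered_bio);
 */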
David Chinner wrote:
you are understanding barriers to be the same as synchronous writes (and
therefore the data is on persistent media before the call returns).
No, I'm describing the high level behaviour that is expected by
a filesystem. The reasons for this are below
You say no, but then
Phillip Susi wrote:
Hrm... I may have misunderstood the perspective you were talking from.
Yes, when the bio is completed it must be on the media, but the
filesystem should issue both requests, and then really not care when
they complete. That is to say, the filesystem should not wait for
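In buffer-head terms that model looks something like this (simplified; a real filesystem handles errors in the completion path rather than waiting here):

get_bh(data_bh);
lock_buffer(data_bh);
data_bh->b_end_io = end_buffer_write_sync;
submit_bh(WRITE, data_bh);

get_bh(commit_bh);
lock_buffer(commit_bh);
commit_bh->b_end_io = end_buffer_write_sync;
submit_bh(WRITE_BARRIER, commit_bh);

/* No wait_on_buffer() between or after the submissions: the barrier,
 * not waiting, provides the ordering. */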
Stefan Bader wrote:
You got a linear target that consists of two disks. One drive (a)
supports barriers and the other one (b) doesn't. Device-mapper just
maps the requests to the appropriate disk. Now the following sequence
happens:
1. block x gets mapped to drive b
2. block y (with barrier) get
David Chinner wrote:
Barrier != synchronous write,
Of course. FYI, XFS only issues barriers on *async* writes.
But barrier semantics - as far as they've been described by everyone
but you - indicate that the barrier write is guaranteed to be on stable
storage when it returns.
Hrm... I may have
David Chinner wrote:
Sounds good to me, but how do we test to see if the underlying
device supports barriers? Do we just assume that they do and
only change behaviour if -o nobarrier is specified in the mount
options?
The idea is that ALL block devices will support barriers; if the
underlying
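FWIW, the detection that exists today is try-and-fall-back on the first barrier write; a simplified buffer-head sketch of the pattern (real code also clears an fs-wide barrier flag and warns once):

get_bh(bh);
lock_buffer(bh);
bh->b_end_io = end_buffer_write_sync;
submit_bh(WRITE_BARRIER, bh);
wait_on_buffer(bh);

if (buffer_eopnotsupp(bh)) {
	/* Nothing below us can do barriers (device, or md/dm in between):
	 * drop the barrier and resubmit as an ordinary write. */
	clear_buffer_eopnotsupp(bh);
	set_buffer_uptodate(bh);
	get_bh(bh);
	lock_buffer(bh);
	bh->b_end_io = end_buffer_write_sync;
	submit_bh(WRITE, bh);
	wait_on_buffer(bh);
}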
Neil Brown wrote:
md/dm modules could keep count of requests as has been suggested
(though that would be a fairly big change for raid0 as it currently
doesn't know when a request completes - bi_endio goes directly to the
filesystem).
Are you sure? I believe that dm handles bi_endio becaus
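Loosely, the stacking pattern is that dm never sends the filesystem's bio down as-is; it submits a clone whose end_io it owns (the names tio, clone_endio and where are illustrative here, the real dm code differs in detail):

struct bio *clone = bio_clone(bio, GFP_NOIO);

clone->bi_private = tio;            /* dm's per-io bookkeeping      */
clone->bi_end_io  = clone_endio;    /* dm's own completion handler  */
clone->bi_bdev    = where->bdev;    /* remapped by the target       */
clone->bi_sector  = where->sector;
generic_make_request(clone);

raid0, by contrast, just remaps bi_bdev/bi_sector on the original bio and resubmits it, so the completion goes straight back to the filesystem - which is the big change Neil mentions.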
Neil Brown wrote:
There is no guarantee that a device can support BIO_RW_BARRIER - it is
always possible that a request will fail with EOPNOTSUPP.
Why is it not the job of the block layer to translate for broken devices
and send them a flush/write/flush?
These devices would find it very
Jens Axboe wrote:
A barrier write will include a flush, but it may also use the FUA bit to
ensure data is on platter. So the only situation where a fallback from a
barrier to a flush would be valid is if the device lied and told you it
could do FUA but it could not, and that is the reason why the b
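Roughly, the sequences the block layer generates for a barrier write, depending on what the device advertises (this is just the QUEUE_ORDERED_* behaviour restated):

  no FUA:   drain queued requests -> cache flush -> barrier write -> cache flush
  FUA:      drain queued requests -> cache flush -> barrier write with FUA set

If the device claims FUA but does not honour it, the barrier write can still be sitting in the volatile cache when it completes - that is the lying-device case, where falling back to an explicit post-flush would be the workaround.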
The raw device driver is obsolete because it has been superseded by the
O_DIRECT open flag. If you want to have dd perform unbuffered IO then
pass the iflag=direct option for input, or oflag=direct option for
output, and it will use O_DIRECT to bypass the buffer cache.
This of course assumes
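A minimal userspace sketch of what those dd flags boil down to (the path and sizes are only examples; O_DIRECT wants the buffer, offset and length aligned, typically to the logical block size):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd = open("/dev/sda", O_RDONLY | O_DIRECT);

	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;
	if (read(fd, buf, 4096) < 0)	/* bypasses the page cache */
		perror("read");
	free(buf);
	close(fd);
	return 0;
}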
Ville Herva wrote:
PS: Speaking of debugging failing initrd init scripts: it would be nice if
the kernel gave an error message on a wrong initrd format rather than silently
failing... Yes, I forgot to make the cpio with the "-H newc" option :-/.
LOL, yea, that one got me too when I was first g
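For anyone else who trips over it, the incantation the kernel expects for an initramfs image is along these lines (paths are only examples), run from inside the initramfs root:

  find . | cpio -o -H newc | gzip > /boot/initrd.img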
Neil Brown wrote:
Maybe the problem here is thinking of md and dm as different things.
Try just not thinking of them at all.
Think about it like this:
The Linux kernel supports lvm
The Linux kernel supports multipath
The Linux kernel supports snapshots
The Linux kernel supports raid0
The lin
Neil Brown wrote:
The in-kernel autodetection in md is purely legacy support as far as I
am concerned. md does volume detection in user space via 'mdadm'.
What other "things like" were you thinking of?
Oh, I suppose that's true. Well, another thing is your new mods to
support on the fly r
I'm currently of the opinion that dm needs a raid5 and raid6 module
added, then the userland lvm tools fixed to use them, and then you
could use dm instead of md, the benefit being that dm pushes things
like volume autodetection and management out of the kernel to user space,
where it belongs.
Michael Tokarev wrote:
Compare this with my statement about "offline" "reshaper" above:
separate userspace (easier to write/debug compared with kernel
space) program which operates on an inactive array (no locking
needed, no need to worry about other I/O operations going to the
array at the time