} development; [EMAIL PROTECTED]; linux-kernel@vger.kernel.org;
} [EMAIL PROTECTED]; Jens Axboe; David Chinner; Andreas Dilger
} Subject: Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for
} devices, filesystems, and dm/md.
}
} On Wed, 11 Jul 2007 18:44:21 EDT, Ric Wheeler said:
} > [EMAIL PROTECTED] wrote:
} >
[EMAIL PROTECTED] wrote:
On Wed, 11 Jul 2007 18:44:21 EDT, Ric Wheeler said:
[EMAIL PROTECTED] wrote:
On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
All of the high end arrays have non-volatile cache (read, on power loss, it is a
promise that it will get all of your data out to permane
On Wed, 11 Jul 2007 18:44:21 EDT, Ric Wheeler said:
> [EMAIL PROTECTED] wrote:
> > On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
> >
> >> All of the high end arrays have non-volatile cache (read, on power loss,
> >> it is a
> >> promise that it will get all of your data out to permanent st
[EMAIL PROTECTED] wrote:
On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
All of the high end arrays have non-volatile cache (read, on power loss, it is a
promise that it will get all of your data out to permanent storage). You don't
need to ask this kind of array to drain the cache. In fa
Ric Wheeler wrote:
>> Don't those thingies usually have NV cache or backed by battery such
>> that ORDERED_DRAIN is enough?
>
> All of the high end arrays have non-volatile cache (read, on power loss,
> it is a promise that it will get all of your data out to permanent
> storage). You don't need t
[EMAIL PROTECTED] wrote:
> On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
>
>> All of the high end arrays have non-volatile cache (read, on power loss, it
>> is a
>> promise that it will get all of your data out to permanent storage). You
>> don't
>> need to ask this kind of array to drai
On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
> All of the high end arrays have non-volatile cache (read, on power loss, it
> is a
> promise that it will get all of your data out to permanent storage). You
> don't
> need to ask this kind of array to drain the cache. In fact, it might jus
Tejun Heo wrote:
[ cc'ing Ric Wheeler for storage array thingie. Hi, whole thread is at
http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/3344 ]
I am actually on the list, just really, really far behind in the thread ;-)
Hello,
[EMAIL PROTECTED] wrote:
but when you consider
On Thu, Jul 05 2007, Tejun Heo wrote:
> Hello, Jens.
>
> Jens Axboe wrote:
> > On Mon, May 28 2007, Neil Brown wrote:
> >> I think the implementation priorities here are:
> >>
> >> 1/ implement a zero-length BIO_RW_BARRIER option.
> >> 2/ Use it (or otherwise) to make all dm and md modules handle
Hello, Jens.
Jens Axboe wrote:
> On Mon, May 28 2007, Neil Brown wrote:
>> I think the implementation priorities here are:
>>
>> 1/ implement a zero-length BIO_RW_BARRIER option.
>> 2/ Use it (or otherwise) to make all dm and md modules handle
>>barriers (and loop?).
>> 3/ Devise and implement
Jens Axboe wrote:
> On Sat, Jun 02 2007, Tejun Heo wrote:
>> Hello,
>>
>> Jens Axboe wrote:
Would that be very different from issuing barrier and not waiting for
its completion? For ATA and SCSI, we'll have to flush write back cache
anyway, so I don't see how we can get performance
} ; dm-[EMAIL PROTECTED]; [EMAIL PROTECTED]; Stefan Bader; Andreas Dilger
} Subject: Re: [RFD] BIO_RW_BARRIER - what it means for devices,
} filesystems, and dm/md.
}
} On Sat, Jun 02 2007, Tejun Heo wrote:
} > Hello,
} >
} > Jens Axboe wrote:
} > >> Would that be very diff
Jens Axboe wrote:
On Fri, Jun 01 2007, Bill Davidsen wrote:
Jens Axboe wrote:
On Thu, May 31 2007, Bill Davidsen wrote:
Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
On Fri, Jun 01 2007, Bill Davidsen wrote:
> Jens Axboe wrote:
> >On Thu, May 31 2007, Bill Davidsen wrote:
> >
> >>Jens Axboe wrote:
> >>
> >>>On Thu, May 31 2007, David Chinner wrote:
> >>>
> >>>
> On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
>
>
On Sat, Jun 02 2007, Tejun Heo wrote:
> Hello,
>
> Jens Axboe wrote:
> >> Would that be very different from issuing barrier and not waiting for
> >> its completion? For ATA and SCSI, we'll have to flush write back cache
> >> anyway, so I don't see how we can get performance advantage by
> >> impl
Hello,
Jens Axboe wrote:
>> Would that be very different from issuing barrier and not waiting for
>> its completion? For ATA and SCSI, we'll have to flush write back cache
>> anyway, so I don't see how we can get performance advantage by
>> implementing separate WRITE_ORDERED. I think zero-lengt
Jens Axboe wrote:
On Thu, May 31 2007, Phillip Susi wrote:
Jens Axboe wrote:
No, Stefan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violat
Neil Brown wrote:
On Friday June 1, [EMAIL PROTECTED] wrote:
On Thu, May 31, 2007 at 02:31:21PM -0400, Phillip Susi wrote:
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour
[EMAIL PROTECTED] wrote:
> On Fri, 01 Jun 2007 16:16:01 +0900, Tejun Heo said:
>> Don't those thingies usually have NV cache or backed by battery such
>> that ORDERED_DRAIN is enough?
>
> Probably *most* do, but do you really want to bet the user's data on it?
Thought we were talking about high-e
On Fri, 01 Jun 2007 16:16:01 +0900, Tejun Heo said:
> Don't those thingies usually have NV cache or backed by battery such
> that ORDERED_DRAIN is enough?
Probably *most* do, but do you really want to bet the user's data on it?
> The problem is that the interface between the host and a storage de
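[Sketch, not from the thread: how a driver of that era might tell the block
layer which ordered mode to use. blk_queue_ordered(), QUEUE_ORDERED_DRAIN and
QUEUE_ORDERED_DRAIN_FLUSH are the real 2.6.2x interface; the has_nv_cache flag
and my_prepare_flush() helper are invented for illustration.]

/*
 * Sketch only: assumes the 2.6.2x block layer, where a driver declares
 * its ordering/flush capability with blk_queue_ordered().  The
 * has_nv_cache flag and my_prepare_flush() helper are hypothetical.
 */
#include <linux/blkdev.h>

static void my_prepare_flush(struct request_queue *q, struct request *rq)
{
        /* Driver-specific: turn rq into a cache-flush command here. */
}

static int my_setup_ordering(struct request_queue *q, int has_nv_cache)
{
        if (has_nv_cache)
                /* Completed writes are already safe; draining the queue
                 * at the barrier is enough, no cache flush needed. */
                return blk_queue_ordered(q, QUEUE_ORDERED_DRAIN, NULL);

        /* Volatile write-back cache: drain, and flush around the barrier
         * so previously completed writes really reach the media. */
        return blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH,
                                 my_prepare_flush);
}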
Jens Axboe wrote:
On Thu, May 31 2007, Bill Davidsen wrote:
Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
IOWs, there are two par
On Fri, Jun 01 2007, Tejun Heo wrote:
> Jens Axboe wrote:
> > On Thu, May 31 2007, David Chinner wrote:
> >> On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
> >>> On Thu, May 31 2007, David Chinner wrote:
> IOWs, there are two parts to the problem:
>
> 1 - guaranteeing I
On Fri, Jun 01, 2007 at 03:59:51PM +1000, Neil Brown wrote:
> On Friday June 1, [EMAIL PROTECTED] wrote:
> > On Thu, May 31, 2007 at 02:31:21PM -0400, Phillip Susi wrote:
> > > David Chinner wrote:
> > > >That sounds like a good idea - we can leave the existing
> > > >WRITE_BARRIER behaviour unchan
[ cc'ing Ric Wheeler for storage array thingie. Hi, whole thread is at
http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/3344 ]
Hello,
[EMAIL PROTECTED] wrote:
> but when you consider the self-contained disk arrays it's an entirely
> different story. You can easily have a few gig of
On Fri, Jun 01 2007, Neil Brown wrote:
> On Friday June 1, [EMAIL PROTECTED] wrote:
> > On Thu, May 31, 2007 at 02:31:21PM -0400, Phillip Susi wrote:
> > > David Chinner wrote:
> > > >That sounds like a good idea - we can leave the existing
> > > >WRITE_BARRIER behaviour unchanged and introduce a n
On Friday June 1, [EMAIL PROTECTED] wrote:
> On Thu, May 31, 2007 at 02:31:21PM -0400, Phillip Susi wrote:
> > David Chinner wrote:
> > >That sounds like a good idea - we can leave the existing
> > >WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
> > >behaviour that only guarant
On Fri, 1 Jun 2007, Tejun Heo wrote:
but one
thing we should bear in mind is that hard disks don't have humongous
caches or very smart controllers / instruction sets. No matter how
relaxed an interface the block layer provides, in the end, it just has to
issue a wholesale FLUSH CACHE on the device to
Stefan Bader wrote:
> 2007/5/30, Phillip Susi <[EMAIL PROTECTED]>:
>> Stefan Bader wrote:
>> >
>> > Since drive a supports barrier requests we don't get -EOPNOTSUPP but
>> > the request with block y might get written before block x since the
>> > disks are independent. I guess the chances of this are
Jens Axboe wrote:
> On Thu, May 31 2007, David Chinner wrote:
>> On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
>>> On Thu, May 31 2007, David Chinner wrote:
IOWs, there are two parts to the problem:
1 - guaranteeing I/O ordering
2 - guaranteeing blocks are on
On Thu, May 31, 2007 at 02:31:21PM -0400, Phillip Susi wrote:
> David Chinner wrote:
> >That sounds like a good idea - we can leave the existing
> >WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
> >behaviour that only guarantees ordering. The filesystem can then
> >choose which
On Thu, May 31 2007, [EMAIL PROTECTED] wrote:
> On Thu, 31 May 2007, Jens Axboe wrote:
>
> >On Thu, May 31 2007, Phillip Susi wrote:
> >>David Chinner wrote:
> >>>That sounds like a good idea - we can leave the existing
> >>>WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
> >>>
On Thu, 31 May 2007, Jens Axboe wrote:
On Thu, May 31 2007, Phillip Susi wrote:
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour that only guarantees ordering. The filesystem can then
cho
On Thu, May 31 2007, Phillip Susi wrote:
> David Chinner wrote:
> >That sounds like a good idea - we can leave the existing
> >WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
> >behaviour that only guarantees ordering. The filesystem can then
> >choose which to use where appropr
On Thu, May 31 2007, Phillip Susi wrote:
> Jens Axboe wrote:
> >No, Stefan is right, the barrier is both an ordering and integrity
> >constraint. If a driver completes a barrier request before that request
> >and previously submitted requests are on STABLE storage, then it
> >violates that principl
Jens Axboe wrote:
No, Stefan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violates that principle. Look at the code and the various ordering
options.
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour that only guarantees ordering. The filesystem can then
choose which to use where appropriate
So what if you want a synchronous write, b
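[Illustration only: WRITE_ORDERED was a proposal in this thread and never
became a real block-layer flag, so the bit defined below is invented. It just
shows how a filesystem might pick ordering-only versus ordering-plus-stable-
storage per write; WRITE and WRITE_BARRIER are the real 2007-era macros.]

/*
 * Hypothetical illustration of the WRITE_ORDERED idea above.  WRITE and
 * WRITE_BARRIER were real in 2007; an ordering-only WRITE_ORDERED never
 * existed, so the bit below is invented purely for the example.
 */
#include <linux/bio.h>
#include <linux/fs.h>

#define BIO_RW_ORDERED  7                       /* invented bit */
#define WRITE_ORDERED   (WRITE | (1 << BIO_RW_ORDERED))

static void submit_fs_write(struct bio *bio, int need_stable)
{
        if (need_stable)
                /* e.g. a commit block: ordered AND on stable storage. */
                submit_bio(WRITE_BARRIER, bio);
        else
                /* only ordering against surrounding writes is needed */
                submit_bio(WRITE_ORDERED, bio);
}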
David Chinner wrote:
you are understanding barriers to be the same as synchronous writes. (and
therefore the data is on persistent media before the call returns)
No, I'm describing the high level behaviour that is expected by
a filesystem. The reasons for this are below
You say no, but then
On Thu, May 31 2007, Bill Davidsen wrote:
> Jens Axboe wrote:
> >On Thu, May 31 2007, David Chinner wrote:
> >
> >>On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
> >>
> >>>On Thu, May 31 2007, David Chinner wrote:
> >>>
> IOWs, there are two parts to the problem:
>
Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
IOWs, there are two parts to the problem:
1 - guaranteeing I/O ordering
2 - guaranteeing blocks are
Neil Brown wrote:
On Monday May 28, [EMAIL PROTECTED] wrote:
There are two things I'm not sure you covered.
First, disks which don't support flush but do have a "cache dirty"
status bit you can poll at times like shutdown. If there are no drivers
which support these, it can be ignored.
2007/5/30, Phillip Susi <[EMAIL PROTECTED]>:
Stefan Bader wrote:
>
> Since drive a supports barrier requests we don't get -EOPNOTSUPP but
> the request with block y might get written before block x since the
> disks are independent. I guess the chances of this are quite low since
> at some point a
On Thu, May 31 2007, David Chinner wrote:
> On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
> > On Thu, May 31 2007, David Chinner wrote:
> > > IOWs, there are two parts to the problem:
> > >
> > > 1 - guaranteeing I/O ordering
> > > 2 - guaranteeing blocks are on persistent storag
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
> On Thu, May 31 2007, David Chinner wrote:
> > IOWs, there are two parts to the problem:
> >
> > 1 - guaranteeing I/O ordering
> > 2 - guaranteeing blocks are on persistent storage.
> >
> > Right now, a single barrier I/O is use
On Thu, May 31 2007, David Chinner wrote:
> IOWs, there are two parts to the problem:
>
> 1 - guaranteeing I/O ordering
> 2 - guaranteeing blocks are on persistent storage.
>
> Right now, a single barrier I/O is used to provide both of these
> guarantees. In most cases, all we really
On Wed, May 30 2007, Phillip Susi wrote:
> >That would be exactly how I understand Documentation/block/barrier.txt:
> >
> >"In other words, I/O barrier requests have the following two properties.
> >1. Request ordering
> >...
> >2. Forced flushing to physical medium"
> >
> >"So, I/O barriers ne
On Monday May 28, [EMAIL PROTECTED] wrote:
> Neil Brown writes:
> >
>
> [...]
>
> > Thus the general sequence might be:
> >
> > a/ issue all "preceding writes".
> > b/ issue the commit write with BIO_RW_BARRIER
> > c/ wait for the commit to complete.
> > If it was successf
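[Sketch of that a/b/c sequence against the 2.6.2x bio API, not code from the
thread: submit_and_wait() and wait_for_preceding_writes() are hypothetical
helpers, while submit_bio(), WRITE/WRITE_BARRIER and blkdev_issue_flush() are
the real interfaces of the time.]

/*
 * Sketch of the a/b/c sequence above, against the 2.6.2x block API.
 * submit_and_wait() and wait_for_preceding_writes() are hypothetical
 * helpers; submit_bio(), WRITE/WRITE_BARRIER and blkdev_issue_flush()
 * are the real interfaces of the time.
 */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/errno.h>
#include <linux/fs.h>

int submit_and_wait(int rw, struct bio *bio);              /* hypothetical */
void wait_for_preceding_writes(struct bio **bios, int n);  /* hypothetical */

static int commit_transaction(struct block_device *bdev,
                              struct bio **preceding, int n,
                              struct bio *commit)
{
        int i, err;

        /* a/ issue all "preceding writes" */
        for (i = 0; i < n; i++)
                submit_bio(WRITE, preceding[i]);

        /* b/ issue the commit write with BIO_RW_BARRIER, c/ wait for it */
        err = submit_and_wait(WRITE_BARRIER, commit);
        if (err != -EOPNOTSUPP)
                return err;     /* done, or failed for a real reason */

        /*
         * Barriers unsupported: fall back to explicit drain + flush.
         * Wait for the preceding writes, flush, write the commit block
         * as an ordinary write, then flush again.
         */
        wait_for_preceding_writes(preceding, n);
        blkdev_issue_flush(bdev, NULL);
        err = submit_and_wait(WRITE, commit);
        blkdev_issue_flush(bdev, NULL);
        return err;
}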
On Thu, May 31, 2007 at 02:07:39AM +0100, Alasdair G Kergon wrote:
> On Thu, May 31, 2007 at 10:46:04AM +1000, Neil Brown wrote:
> > If a filesystem cares, it could 'ask' as suggested above.
> > What would be a good interface for asking?
>
> XFS already tests:
> bd_disk->queue->ordered == QUEUE_
On Thu, May 31, 2007 at 10:46:04AM +1000, Neil Brown wrote:
> If a filesystem cares, it could 'ask' as suggested above.
> What would be a good interface for asking?
XFS already tests:
bd_disk->queue->ordered == QUEUE_ORDERED_NONE
Alasdair
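[Sketch of that kind of mount-time check, assuming the 2.6.2x request_queue
layout quoted above; the function name and warning text are illustrative, not
actual XFS code.]

/*
 * Sketch of such a mount-time probe, assuming the 2.6.2x request_queue
 * layout (bd_disk->queue->ordered).  Illustrative only, not XFS code.
 */
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/genhd.h>
#include <linux/kernel.h>

static int myfs_barriers_usable(struct block_device *bdev)
{
        struct request_queue *q = bdev->bd_disk->queue;

        if (q->ordered == QUEUE_ORDERED_NONE) {
                printk(KERN_WARNING
                       "myfs: device reports no barrier support, "
                       "disabling barriers\n");
                return 0;
        }
        return 1;
}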
On Thu, May 31, 2007 at 10:46:04AM +1000, Neil Brown wrote:
> What if the truth changes (as can happen with md or dm)?
You get notified in endio() that the barrier had to be emulated?
Alasdair
On Monday May 28, [EMAIL PROTECTED] wrote:
> On Mon, May 28, 2007 at 12:57:53PM +1000, Neil Brown wrote:
> > What exactly do you want to know, and why do you care?
>
> If someone explicitly mounts "-o barrier" and the underlying device
> cannot do it, then we want to issue a warning or reject the
On Monday May 28, [EMAIL PROTECTED] wrote:
> There are two things I'm not sure you covered.
>
> First, disks which don't support flush but do have a "cache dirty"
> status bit you can poll at times like shutdown. If there are no drivers
> which support these, it can be ignored.
There are really
On Tuesday May 29, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > md/dm modules could keep count of requests as has been suggested
> > (though that would be a fairly big change for raid0 as it currently
> > doesn't know when a request completes - bi_endio goes directly to the
> > filesystem).
On Wed, May 30, 2007 at 09:52:49AM -0700, [EMAIL PROTECTED] wrote:
> On Wed, 30 May 2007, David Chinner wrote:
> >with the barrier is on stable storage when I/O completion is
> >signalled. The existing barrier implementation (where it works)
> >provides these requirements. We need barriers to retai
Phillip Susi wrote:
Hrm... I may have misunderstood the perspective you were talking from.
Yes, when the bio is completed it must be on the media, but the
filesystem should issue both requests, and then really not care when
they complete. That is to say, the filesystem should not wait for bloc
Stefan Bader wrote:
You got a linear target that consists of two disks. One drive (a)
supports barriers and the other one (b) doesn't. Device-mapper just
maps the requests to the appropriate disk. Now the following sequence
happens:
1. block x gets mapped to drive b
2. block y (with barrier) get
On Wed, 30 May 2007, David Chinner wrote:
On Tue, May 29, 2007 at 05:01:24PM -0700, [EMAIL PROTECTED] wrote:
On Wed, 30 May 2007, David Chinner wrote:
On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote:
David Chinner wrote:
The use of barriers in XFS assumes the commit write to be
David Chinner wrote:
Barrier != synchronous write,
Of course. FYI, XFS only issues barriers on *async* writes.
But barrier semantics - as far as they've been described by everyone
but you - indicate that the barrier write is guaranteed to be on stable
storage when it returns.
Hrm... I may have
On Wed, May 30, 2007 at 11:12:37AM +0200, Stefan Bader wrote:
> it might be better to indicate -EOPNOTSUPP right from
> device-mapper.
Indeed we should. For support, on receipt of a barrier, dm core should
send a zero-length barrier to all active underlying paths, and delay
mapping any further I
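[Rough sketch of sending one such empty barrier, using the pre-2.6.24
three-argument bi_end_io convention; dm core's holding back of further I/O and
the per-path iteration are omitted, and zero-length barrier support was itself
still only a proposal at this point, so treat this purely as an illustration.]

/*
 * Sketch: send one empty (zero-length) barrier bio and wait for it.
 * dm core would do this for every active path while queuing newly
 * arriving bios.  Uses the pre-2.6.24 three-argument bi_end_io form.
 */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>
#include <linux/fs.h>

static int empty_barrier_end_io(struct bio *bio, unsigned int bytes_done,
                                int error)
{
        if (bio->bi_size)
                return 1;                       /* not finished yet */
        complete(bio->bi_private);
        return 0;
}

static int send_empty_barrier(struct block_device *bdev)
{
        DECLARE_COMPLETION_ONSTACK(done);
        struct bio *bio = bio_alloc(GFP_NOIO, 0);       /* no data pages */
        int err = 0;

        if (!bio)
                return -ENOMEM;
        bio->bi_bdev = bdev;
        bio->bi_end_io = empty_barrier_end_io;
        bio->bi_private = &done;

        submit_bio(WRITE_BARRIER, bio);         /* ordering + flush point */
        wait_for_completion(&done);

        if (bio_flagged(bio, BIO_EOPNOTSUPP))
                err = -EOPNOTSUPP;              /* device can't do barriers */
        bio_put(bio);
        return err;
}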
On Mon, May 28 2007, Neil Brown wrote:
> I think the implementation priorities here are:
>
> 1/ implement a zero-length BIO_RW_BARRIER option.
> 2/ Use it (or otherwise) to make all dm and md modules handle
>barriers (and loop?).
> 3/ Devise and implement appropriate fall-backs with-in the blo
> in-flight I/O to go to zero?
Something like that is needed for some dm targets to support barriers.
(We needn't always wait for *all* in-flight I/O.)
When faced with -EOPNOTSUPP, do all callers fall back to a sync in
the places a barrier would have been used, or are there any more
sophisticated
The order that these are expected by the filesystem to hit stable
storage is:
1. blocks 4 and 10 on stable storage in any order
2. barrier block X on stable storage
3. blocks 5 and 20 on stable storage in any order
The point I'm trying to make is that in XFS, blocks 5 and 20 cannot
be allowed to
On Tue, May 29, 2007 at 05:01:24PM -0700, [EMAIL PROTECTED] wrote:
> On Wed, 30 May 2007, David Chinner wrote:
>
> >On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote:
> >>David Chinner wrote:
> >>>The use of barriers in XFS assumes the commit write to be on stable
> >>>storage before it
On Wed, 30 May 2007, David Chinner wrote:
On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote:
David Chinner wrote:
The use of barriers in XFS assumes the commit write to be on stable
storage before it returns. One of the ordering guarantees that we
need is that the transaction (comm
On Tue, May 29, 2007 at 04:03:43PM -0400, Phillip Susi wrote:
> David Chinner wrote:
> >The use of barriers in XFS assumes the commit write to be on stable
> >storage before it returns. One of the ordering guarantees that we
> >need is that the transaction (commit write) is on disk before the
> >m
On Tue, May 29, 2007 at 11:25:42AM +0200, Stefan Bader wrote:
> doing a sort of suspend, issuing the
> barrier request, calling flush to all mapped devices and then wait for
> in-flight I/O to go to zero?
Something like that is needed for some dm targets to support barriers.
(We needn't always wa
David Chinner wrote:
Sounds good to me, but how do we test to see if the underlying
device supports barriers? Do we just assume that they do and
only change behaviour if -o nobarrier is specified in the mount
options?
The idea is that ALL block devices will support barriers; if the
underlying
Neil Brown wrote:
md/dm modules could keep count of requests as has been suggested
(though that would be a fairly big change for raid0 as it currently
doesn't know when a request completes - bi_endio goes directly to the
filesystem).
Are you sure? I believe that dm handles bi_endio becaus
2007/5/28, Alasdair G Kergon <[EMAIL PROTECTED]>:
On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
> 1/ A BIO_RW_BARRIER request should never fail with -EOPNOTSUPP.
The device-mapper position has always been that we require
> a zero-length BIO_RW_BARRIER
(i.e. containing no data to
> 2007/5/25, Neil Brown <[EMAIL PROTECTED]>:
> BIO_RW_FAILFAST: means low-level driver shouldn't do much (or no)
> error recovery. Mainly used by multipath targets to avoid long SCSI
> recovery. This should just be propagated when passing requests on.
Is it "much" or "no"?
Would it be reasonable
On Mon, May 28, 2007 at 02:48:45PM +1000, Timothy Shimmin wrote:
> I'm taking it that the FUA write will just guarantee that that
> particular write has made it to disk on i/o completion
> (and no write cache flush is done).
Correct. It only applies to that one write command.
jeremy
(dunno why you explicitly dropped me off the cc/to list when replying to
my email, hence I missed it for 3 days)
On Fri, May 25 2007, Phillip Susi wrote:
> Jens Axboe wrote:
> >A barrier write will include a flush, but it may also use the FUA bit to
> >ensure data is on platter. So the only situa
Neil Brown wrote:
We can think of there being three types of devices:
1/ SAFE. With a SAFE device, there is no write-behind cache, or if
there is it is non-volatile. Once a write completes it is
completely safe. Such a device does not require barriers
or ->iss
Neil Brown writes:
>
[...]
> Thus the general sequence might be:
>
> a/ issue all "preceding writes".
> b/ issue the commit write with BIO_RW_BARRIER
> c/ wait for the commit to complete.
> If it was successful - done.
> If it failed other than with EOPNOTSUPP, a
On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
> 1/ A BIO_RW_BARRIER request should never fail with -EOPNOTSUPP.
The device-mapper position has always been that we require
> a zero-length BIO_RW_BARRIER
(i.e. containing no data to read or write - or emulated, possibly
device-speci
Hello,
Neil Brown wrote:
> 1/ A BIO_RW_BARRIER request should never fail with -EOPNOTSUPP.
>
> This is certainly a very attractive position - it makes the interface
> cleaner and makes life easier for filesystems and other clients of
> the block interface.
> Currently filesystems handle -EOPNO
Hi,
--On 28 May 2007 12:45:59 PM +1000 David Chinner <[EMAIL PROTECTED]> wrote:
On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
Thanks everyone for your input. There were some very valuable
observations in the various emails.
I will try to pull most of it together and bring out wh
On Mon, May 28, 2007 at 12:57:53PM +1000, Neil Brown wrote:
> On Monday May 28, [EMAIL PROTECTED] wrote:
> > On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
> > > Thanks everyone for your input. There were some very valuable
> > > observations in the various emails.
> > > I will try to
On Monday May 28, [EMAIL PROTECTED] wrote:
> On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
> >
> > Thanks everyone for your input. There were some very valuable
> > observations in the various emails.
> > I will try to pull most of it together and bring out what seem to be
> > the im
On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
>
> Thanks everyone for your input. There were some very valuable
> observations in the various emails.
> I will try to pull most of it together and bring out what seem to be
> the important points.
>
>
> 1/ A BIO_RW_BARRIER request sho
On Friday May 25, [EMAIL PROTECTED] wrote:
> 2007/5/25, Neil Brown <[EMAIL PROTECTED]>:
> > - Are there other bit that we could handle better?
> > BIO_RW_FAILFAST? BIO_RW_SYNC? What exactly do they mean?
> >
> BIO_RW_FAILFAST: means low-level driver shouldn't do much (or no)
> error recovery
Thanks everyone for your input. There were some very valuable
observations in the various emails.
I will try to pull most of it together and bring out what seem to be
the important points.
1/ A BIO_RW_BARRIER request should never fail with -EOPNOTSUPP.
This is certainly a very attractive positi
Hello, Neil Brown.
Please cc me on blkdev barriers and, if you haven't yet, reading
Documentation/block/barrier.txt can be helpful too.
Neil Brown wrote:
[--snip--]
> 1/ SAFE. With a SAFE device, there is no write-behind cache, or if
> there is it is non-volatile. Once a write complet
On May 25, 2007 17:58 +1000, Neil Brown wrote:
>These devices would find it very hard to support BIO_RW_BARRIER.
>Doing this would require keeping track of all in-flight requests
>(which some, possibly all, of the above don't) and then:
> When a BIO_RW_BARRIER request arrives:
>
Jens Axboe wrote:
A barrier write will include a flush, but it may also use the FUA bit to
ensure data is on platter. So the only situation where a fallback from a
barrier to flush would be valid, is if the device lied and told you it
could do FUA but it could not and that is the reason why the b
2007/5/25, Neil Brown <[EMAIL PROTECTED]>:
HOW DO MD or DM USE THIS
1/ striping devices.
This includes md/raid0 md/linear dm-linear dm-stripe and probably
others.
These devices can easily support blkdev_issue_flush by simply
calling blkdev_issue_flush o
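[Minimal sketch of that, assuming the 2.6.2x prototype
int blkdev_issue_flush(struct block_device *, sector_t *); the components[]
array is a simplified stand-in for md/dm's real member bookkeeping.]

/*
 * Sketch for a striping target: forward a flush to every component.
 * Assumes the 2.6.2x prototype
 *   int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector);
 * components[]/nr_components stand in for md/dm's real member lists.
 */
#include <linux/blkdev.h>

static int stripe_issue_flush(struct block_device **components,
                              int nr_components)
{
        int i, err, ret = 0;

        for (i = 0; i < nr_components; i++) {
                err = blkdev_issue_flush(components[i], NULL);
                if (err && !ret)
                        ret = err;      /* remember the first failure */
        }
        return ret;
}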
On Fri, May 25 2007, David Chinner wrote:
> > The second, while much easier, can fail.
>
> So we do a test I/O to see if the device supports them before
> enabling that mode. But, as we've recently discovered, this is not
> sufficient to detect *correctly functioning* barrier support.
Right, tho
On Fri, May 25, 2007 at 05:58:25PM +1000, Neil Brown wrote:
> We can think of there being three types of devices:
>
> 1/ SAFE. With a SAFE device, there is no write-behind cache, or if
> there is it is non-volatile. Once a write completes it is
> completely safe. Such a de