Hello,
Re-reading the code, I see that my previous comment was wrong.
So, in summary:

i) If the block driver does not support FLUSH and FUA, then the
generic_make_request_checks() filtering will clear the bio's FLUSH and FUA
flags, and hence blk_insert_flush() will not be invoked (see the sketch after
this list).

ii) If the block driver clears the FLUSH and FUA flags while IO is in flight,
then an IO that has already passed the generic_make_request_checks() filtering
can hit the blk_insert_flush() issue being discussed here.

iii) If the block driver sets only REQ_FUA without REQ_FLUSH, then the
generic_make_request_checks() filtering will not clear the bio flags, so
blk_insert_flush() will be invoked and the issue being discussed here can be
hit.  However, setting REQ_FUA without REQ_FLUSH is contrary to the
documentation, and it is invalid for a block driver to do so.
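
To make (i) and (iii) concrete, below is a minimal userspace sketch of the
logic as I understand it; the MY_REQ_* values and function names are made up
for illustration and this is not the actual kernel code:

#include <stdio.h>

/* Illustrative flag values only; these are not the kernel's REQ_* bits. */
#define MY_REQ_FLUSH (1u << 0)
#define MY_REQ_FUA   (1u << 1)

/*
 * Rough model of the blk_queue_flush()-style validation: advertising FUA
 * without FLUSH is invalid, so FUA is dropped (point iii above).
 */
static unsigned int set_queue_flush_flags(unsigned int flush)
{
	if ((flush & MY_REQ_FUA) && !(flush & MY_REQ_FLUSH)) {
		fprintf(stderr, "FUA without FLUSH is invalid; dropping FUA\n");
		flush &= ~MY_REQ_FUA;
	}
	return flush;
}

/*
 * Rough model of the generic_make_request_checks() filtering: if the queue
 * advertises no flush capability at all, strip FLUSH/FUA from the bio so
 * that blk_insert_flush() is never reached for it (point i above).
 */
static void filter_bio_flags(unsigned int queue_flush_flags,
			     unsigned int *bio_flags)
{
	if ((*bio_flags & (MY_REQ_FLUSH | MY_REQ_FUA)) && !queue_flush_flags)
		*bio_flags &= ~(MY_REQ_FLUSH | MY_REQ_FUA);
}

int main(void)
{
	/* Driver with no flush support: queue flush flags end up as zero. */
	unsigned int qflags = set_queue_flush_flags(0);
	/* A bio such as an XFS log write carrying FLUSH|FUA. */
	unsigned int bio_flags = MY_REQ_FLUSH | MY_REQ_FUA;

	filter_bio_flags(qflags, &bio_flags);
	printf("bio flags after filtering: %#x\n", bio_flags); /* 0: plain write */

	/*
	 * Point (ii): if the queue flags were cleared only after the bio had
	 * already passed filter_bio_flags(), the bio would still carry
	 * FLUSH/FUA and would reach the blk_insert_flush() path with an
	 * empty flush policy, which is the problematic case in this thread.
	 */
	return 0;
}

Compiling and running this prints a bio flag value of 0, i.e. in the model the
bio is treated as a plain write when the queue advertises no flush capability.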

Thanks,
Ajith

On Wednesday, 9 January 2013 09:44:55 UTC+5:30, Ajith Kumar  wrote:
> Hello,
> 
> Thanks for the response.
> 
> A block device driver decides during initialization whether it is capable of 
> supporting FLUSH/FUA.  Suppose the driver claims FLUSH/FUA support; then 
> any bio targeted at this driver with the FLUSH bit set (which is commonly set by 
> a file system like XFS for its internal logging) has a data corruption 
> vulnerability in case of an abrupt shutdown.  So, IMO the vulnerability is 
> not limited to the rare window where the driver changes q->flush_flags while IO is in 
> flight, but extends to a much larger window, from the time the driver comes up and throughout 
> its life.
> 
> Thanks,
> 
> Ajith
> 
> On Wednesday, 9 January 2013 00:15:31 UTC+5:30, Tejun Heo  wrote:
> > Hello,
> > 
> > On Tue, Jan 08, 2013 at 10:04:23AM -0800, [email protected] wrote:
> > > Hi,
> > > Could you please provide clarity on the following.
> > > ">   Hmmm... yes, this can become a correctness issue if (and only if)
> > > >   blk_queue_flush() is called to change q->flush_flags while requests
> > > >   are in-flight;"
> > >
> > > Could you please clarify as to why is it a correctness issue only if
> > > blk_queue_flush() is used to change flush_flags when requests are in
> > > flight ?  As I understand, XFS does set WRITE_FLUSH_FUA flag in
> > > _xfs_buf_ioapply() function irrespective of whether the underlying
> > > device supports flush capabilities or not which will flow into
> > > blk_insert_flush().  Is my reading of the code correct and is there
> > > a general correctness issue here which potentially results in XFS
> > > file system corruption in case of an abrupt shutdown independent of
> > > q->flush_flags getting changed while request is in flight.
> > 
> > My memory is kinda fuzzy at this point but if a queue doesn't support
> > flush, its flush_flags should be zero and
> > generic_make_request_checks() will clear REQ_FLUSH|REQ_FUA from
> > bio->bi_rw so we never hit blk_insert_flush() and the request will be
> > processed as a normal IO one; however, if REQ_FLUSH goes off after a
> > request passed generic_make_request_checks() but before
> > blk_flush_policy(), it'll become null op and its data payload won't
> > get written out to the underlying device, which is data corruption.
> > 
> > Thanks.
> > 
> > -- 
> > tejun

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
