On Tue, 2016-05-03 at 09:04 +1000, Dave Chinner wrote:
> On Mon, May 02, 2016 at 11:18:36AM -0400, Jeff Moyer wrote:
> > 
> > Dave Chinner <da...@fromorbit.com> writes:
> > 
> > > 
> > > On Mon, Apr 25, 2016 at 11:53:13PM +0000, Verma, Vishal L wrote:
> > > > 
> > > > On Tue, 2016-04-26 at 09:25 +1000, Dave Chinner wrote:
> > > You're assuming that only the DAX aware application accesses its
> > > files. Users, backup programs, data replicators, filesystem
> > > re-organisers (e.g. defragmenters) etc. all may access the files
> > > and they may throw errors. What then?
> > I'm not sure how this is any different from regular storage.  If an
> > application gets EIO, it's up to the app to decide what to do with
> > that.
> Sure - they'll fail. But the question I'm asking is: if the
> application that owns the data is supposed to do error recovery,
> what happens when a 3rd party application hits an error? If that
> consumes the error, then the app that owns the data won't ever get a
> chance to correct the error.
> 
> This is a minefield - a 3rd party app that swallows and clears
> DAX-based IO errors is a data corruption vector. Can you imagine if
> *grep* did this? The model that is being promoted here effectively
> allows this sort of behaviour - I don't really think we
> should be architecting an error recovery strategy that has the
> capability to go this wrong....
> 

Just to address this bit - no. Any number of backup/3rd party
applications can hit the error and _fail_, but surely they won't try to
_write_ the bad location? Only a write to the bad sector will clear it
in this model - and until then, all reads will just keep erroring out.
This works for DAX/mmap based reads/writes too - mmap stores won't/can't
clear errors - you have to go through the block path, and in the latest
version of my patch set, that has to be explicitly through O_DIRECT.
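
To make the flow concrete, here is a minimal sketch of what the owning
application's recovery path looks like under this model. It is not code
from the patch set; the file name, offset, sector size and the source of
the replacement data are all made up. Reads of the poisoned range keep
failing with EIO, and only an O_DIRECT write of known-good data over the
bad sector clears the error:

	/* recovery sketch - hypothetical paths/offsets, assumes 512B sectors */
	#define _GNU_SOURCE
	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define SECTOR 512	/* assumed logical sector size */

	int main(void)
	{
		int fd = open("/mnt/pmem/datafile", O_RDWR | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		void *buf;
		if (posix_memalign(&buf, SECTOR, SECTOR))
			return 1;

		off_t off = 0;	/* hypothetical bad offset */

		if (pread(fd, buf, SECTOR, off) < 0 && errno == EIO) {
			/*
			 * Media error: plain reads (or mmap loads) keep
			 * failing. The owning application restores the data
			 * from its own redundancy (backup, replica, parity)
			 * and writes it back through the block path, which
			 * clears the bad sector.
			 */
			memset(buf, 0, SECTOR);	/* stand-in for recovered data */
			if (pwrite(fd, buf, SECTOR, off) < 0)
				perror("pwrite");	/* recovery write failed */
		}

		free(buf);
		close(fd);
		return 0;
	}

A grep or a backup tool never issues that recovery write, so it can hit
the bad sector and fail any number of times without consuming the error
or corrupting anything.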
