On Tue, Jun 05, 2018 at 10:14:51PM +0800, Qu Wenruo wrote:
> 
> 
> On 2018年06月05日 22:07, David Sterba wrote:
> > On Tue, Jun 05, 2018 at 09:47:46PM +0800, Qu Wenruo wrote:
> >>
> >>
> >> On 2018年06月05日 21:42, David Sterba wrote:
> >>> On Tue, Jun 05, 2018 at 01:34:03PM +0800, Qu Wenruo wrote:
> >>>> Hi David,
> >>>>
> >>>> It would be pretty nice if we could get this fix (or the previous RFC
> >>>> patch) into the current release cycle.
> >>>>
> >>>> As it's an unrecoverable data corruption, it would be better to get it
> >>>> fixed as soon as possible.
> >>>
> >>> That we can do, I'm planning to send the 2nd pull by the end of next
> >>> week as there's at least one patch in the queue now.
> >>>
> >>> This patch seems too big, can you please prepare a minimal version?
> >>
> >> The previous version (a completely different direction though) is much
> >> smaller.
> >> https://patchwork.kernel.org/patch/10440541/
> >>
> >> However, personally speaking, I still prefer this one, as it's much simpler.
> > 
> > As this will go to older stable kernels, I'd rather split that into more
> > patches where the first one is
> > 
> > --- a/fs/btrfs/scrub.c
> > +++ b/fs/btrfs/scrub.c
> > @@ -2799,7 +2799,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
> >  		have_csum = scrub_find_csum(sctx, logical, csum);
> >  		if (have_csum == 0)
> >  			++sctx->stat.no_csum;
> > -		if (sctx->is_dev_replace && !have_csum) {
> > +		if (0 && sctx->is_dev_replace && !have_csum) {
> >  			ret = copy_nocow_pages(sctx, logical, l, mirror_num,
> >  					       physical_for_dev_replace);
> > ---
> > 
> > and then the whole callchain of copy_nocow_pages continues.
> 
> Understood.
> I could go with this method.
FYI, I'd need to send the 2nd pull request on Tuesday, so I'm adding the
proposed fix with the current changelog to the queue now.

https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git/commit/?h=next-fixes&id=8c83e0b1b20b094491bec6c52839aa3596a87f03
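For context, the minimal first step discussed above amounts to making the
dev-replace nocow copy branch in scrub_extent() statically dead. The excerpt
below is only an illustrative reconstruction from the hunk quoted above, not
the committed patch (which is at the link above):

	/*
	 * Illustrative excerpt, reconstructed from the hunk quoted above
	 * (fs/btrfs/scrub.c, scrub_extent()); see the linked commit for the
	 * actual change.
	 */
	have_csum = scrub_find_csum(sctx, logical, csum);
	if (have_csum == 0)
		++sctx->stat.no_csum;
	/*
	 * The added "0 &&" makes the branch unreachable, so scrub during
	 * device replace no longer falls back to copy_nocow_pages() for data
	 * without a checksum (the path implicated in the corruption discussed
	 * above).  Follow-up patches can then delete the now-dead
	 * copy_nocow_pages() call chain, while only this small hunk needs to
	 * go to older stable kernels.
	 */
	if (0 && sctx->is_dev_replace && !have_csum) {
		ret = copy_nocow_pages(sctx, logical, l, mirror_num,
				       physical_for_dev_replace);
		/* ... */
	}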