On Thu, Jun 01, 2017 at 09:01:26AM +0800, Qu Wenruo wrote:
>
> At 05/31/2017 10:30 PM, David Sterba wrote:
> > On Wed, May 31, 2017 at 08:31:35AM +0800, Qu Wenruo wrote:
> >>>> Yes it's hard to find such deadlock especially when lockdep will not
> >>>> detect it.
> >>>>
> >>>> And this makes the advantage of using stack memory in v3 patch more
> >>>> obvious.
> >>>>
> >>>> I didn't realize the extra possible deadlock when memory pressure is
> >>>> high, and to make completely correct usage of GFP_ flags we should let
> >>>> caller to choose its GFP_ flag, which will introduce more modification
> >>>> and more possibility to cause problem.
> >>>>
> >>>> So now I prefer the stack version a little more.
> >>>
> >>> The difference is that the stack version will always consume the stack
> >>> at runtime. The dynamic allocation will not, but we have to add error
> >>> handling and make sure we use right gfp flags. So it's runtime vs review
> >>> trade off, I choose to spend time on review.
> >>
> >> OK, then I'll update the patchset to allow passing gfp flags for each
> >> reservation.
> >
> > You mean to add gfp flags to extent_changeset_alloc and update the
> > direct callers or to add gfp flags to the whole reservation codepath?
>
> Yes, I was planning to do it.
>
> > I strongly prefer to use GFP_NOFS for now, although it's not ideal.
>
> OK, then keep GFP_NOFS.
> But I also want to know the reason why.
>
> Is it just because we don't have good enough tool to detect possible
> deadlock caused by wrong GFP_* flags in write path?
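The stack-vs-dynamic trade-off quoted above could be sketched roughly as follows (illustrative only: the gfp parameter is the change being proposed, not the current btrfs API, and the helper names are assumptions based on extent_changeset_alloc mentioned in the thread):

    /* Stack version: no allocation failure to handle, but the structure
     * occupies kernel stack for the whole call (sketch, not real btrfs code). */
    void reserve_stack(void)
    {
            struct extent_changeset changeset = {0};

            /* ... use &changeset ... */
    }

    /* Dynamic version: no stack cost, but every caller must pick a gfp
     * flag and handle -ENOMEM. */
    int reserve_dynamic(gfp_t gfp)
    {
            struct extent_changeset *changeset;

            changeset = extent_changeset_alloc(gfp);   /* gfp arg is hypothetical */
            if (!changeset)
                    return -ENOMEM;
            /* ... use changeset ... */
            extent_changeset_free(changeset);
            return 0;
    }

This is the "runtime vs review" trade-off: the first form costs stack at runtime, the second costs an error path and a gfp-flag decision at every call site.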
Yes, basically. It's either overzealous GFP_NOFS or a potential deadlock with GFP_KERNEL. We'll deal with the NOFS eventually, so we want to be safe until then.

Michal Hocko has a debugging patch that will report use of NOFS when it's not needed, but we have to explicitly mark the sections for that. This hasn't happened yet, and it is not easy to do, as we have to audit lots of codepaths.
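For reference, the section marking mentioned here uses the scoped-NOFS API from Michal Hocko's work (memalloc_nofs_save/memalloc_nofs_restore in <linux/sched/mm.h>); the surrounding function below is hypothetical:

    #include <linux/sched/mm.h>

    /* Hypothetical example of marking a reclaim-unsafe section. */
    static void do_fs_critical_work(void)
    {
            unsigned int nofs_flags;

            /*
             * Inside this scope every allocation implicitly gets GFP_NOFS
             * semantics, so callees may use plain GFP_KERNEL, and the
             * debugging patch can flag explicit GFP_NOFS as redundant.
             */
            nofs_flags = memalloc_nofs_save();

            /* ... allocations that must not recurse into the filesystem ... */

            memalloc_nofs_restore(nofs_flags);
    }

Once sections like this are marked throughout the write path, any remaining explicit GFP_NOFS outside them becomes a candidate for conversion to GFP_KERNEL.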