We need to make sure implementations don't cheat and don't have a
possible schedule/blocking point deeply buried where review can't
catch it.

I'm not sure whether this is the best way to make sure all the
might_sleep() callsites trigger, and it's a bit ugly in the code flow.
But it gets the job done.
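
For illustration, here's a sketch of the kind of callback this is
meant to catch (hypothetical driver, not part of this patch):
mutex_lock() contains a might_sleep() annotation, so with
CONFIG_DEBUG_ATOMIC_SLEEP=y the preempt_disable() below makes it
splat even when the mutex is never actually contended:

  #include <linux/mmu_notifier.h>
  #include <linux/mutex.h>

  struct evil_dev {
          struct mmu_notifier mn;
          struct mutex lock;
  };

  static int evil_invalidate_range_start(struct mmu_notifier *mn,
                                         struct mm_struct *mm,
                                         unsigned long start,
                                         unsigned long end,
                                         bool blockable)
  {
          struct evil_dev *edev = container_of(mn, struct evil_dev, mn);

          /* Sleeps unconditionally: should check blockable and return
           * -EAGAIN instead when blocking is not allowed.
           */
          mutex_lock(&edev->lock);
          /* ... tear down device mappings in [start, end) ... */
          mutex_unlock(&edev->lock);
          return 0;
  }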

Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Michal Hocko <mho...@suse.com>
Cc: David Rientjes <rient...@google.com>
Cc: "Christian König" <christian.koe...@amd.com>
Cc: Daniel Vetter <daniel.vet...@ffwll.ch>
Cc: "Jérôme Glisse" <jgli...@redhat.com>
Cc: linux...@kvack.org
Signed-off-by: Daniel Vetter <daniel.vet...@intel.com>
---
 mm/mmu_notifier.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 59e102589a25..4d282cfb296e 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -185,7 +185,13 @@ int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
        id = srcu_read_lock(&srcu);
        hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
                if (mn->ops->invalidate_range_start) {
-                       int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable);
+                       int _ret;
+
+                       if (IS_ENABLED(CONFIG_DEBUG_ATOMIC_SLEEP) && !blockable)
+                               preempt_disable();
+                       _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable);
+                       if (IS_ENABLED(CONFIG_DEBUG_ATOMIC_SLEEP) && !blockable)
+                               preempt_enable();
                        if (_ret) {
                                pr_info("%pS callback failed with %d in 
%sblockable context.\n",
                                                
mn->ops->invalidate_range_start, _ret,
-- 
2.19.1
