> On Feb 11, 2019, at 11:07 AM, Andy Lutomirski <l...@kernel.org> wrote:
> 
> I'm certainly amenable to other solutions, but this one does seem the
> least messy.  I looked at my old patch, and it doesn't do what you
> want.  I'd suggest you just add a percpu variable like cpu_dr7 and rig
> up some accessors so that it stays up to date.  Then you can skip the
> dr7 writes if there are no watchpoints set.
> 
> Also, EFI is probably a less interesting example than rare_write.
> With rare_write, especially the dynamically allocated variants that
> people keep coming up with, we'll need a swath of address space fully
> as large as the vmalloc area, and getting *that* right while still
> using the kernel address range might be more of a mess than we really
> want to deal with.

As long as you feel comfortable with this solution, I’m fine with it.

Here is what I have (untested). I prefer to save/restore all the DRs,
because IIRC DR6 indications are updated even if breakpoints are disabled
(in DR7). And anyhow, that is the standard interface.
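
For reference, here is roughly what the existing x86 helpers used below do --
a simplified sketch based on arch/x86/include/asm/debugreg.h and
arch/x86/kernel/hw_breakpoint.c, not part of this patch, and details may
differ from the exact tree it applies to. hw_breakpoint_active() checks the
per-CPU cpu_dr7 shadow, hw_breakpoint_disable() zeroes the hardware debug
registers, and hw_breakpoint_restore() reloads all of them from the saved
per-CPU/per-task copies, which is why the patch only needs to remember
whether breakpoints were active:

	/* Sketch of the existing helpers; not part of the patch below. */
	static inline int hw_breakpoint_active(void)
	{
		/* cpu_dr7 is a per-CPU shadow of DR7 */
		return __this_cpu_read(cpu_dr7) & DR_GLOBAL_ENABLE_MASK;
	}

	static inline void hw_breakpoint_disable(void)
	{
		/* Zero DR7 first so no breakpoint can fire, then the address registers */
		set_debugreg(0UL, 7);
		set_debugreg(0UL, 0);
		set_debugreg(0UL, 1);
		set_debugreg(0UL, 2);
		set_debugreg(0UL, 3);
	}

	void hw_breakpoint_restore(void)
	{
		/* Reload every debug register from the saved copies */
		set_debugreg(__this_cpu_read(cpu_debugreg[0]), 0);
		set_debugreg(__this_cpu_read(cpu_debugreg[1]), 1);
		set_debugreg(__this_cpu_read(cpu_debugreg[2]), 2);
		set_debugreg(__this_cpu_read(cpu_debugreg[3]), 3);
		set_debugreg(current->thread.debugreg6, 6);
		set_debugreg(__this_cpu_read(cpu_dr7), 7);
	}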


-- >8 --

From: Nadav Amit <na...@vmware.com>
Date: Mon, 11 Feb 2019 03:07:08 -0800
Subject: [PATCH] mm: save DRs when loading temporary mm

Signed-off-by: Nadav Amit <na...@vmware.com>
---
 arch/x86/include/asm/mmu_context.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index d684b954f3c0..4f92ec3df149 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -13,6 +13,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/mpx.h>
+#include <asm/debugreg.h>
 
 extern atomic64_t last_mm_ctx_id;
 
@@ -358,6 +359,7 @@ static inline unsigned long __get_current_cr3_fast(void)
 
 typedef struct {
        struct mm_struct *prev;
+       unsigned short bp_enabled : 1;
 } temp_mm_state_t;
 
 /*
@@ -380,6 +382,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
        lockdep_assert_irqs_disabled();
        state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
        switch_mm_irqs_off(NULL, mm, current);
+
+       /*
+        * If breakpoints are enabled, disable them while the temporary mm is
+        * used - they do not belong and might cause wrong signals or crashes.
+        */
+       state.bp_enabled = hw_breakpoint_active();
+       if (state.bp_enabled)
+               hw_breakpoint_disable();
+
        return state;
 }
 
@@ -387,6 +398,13 @@ static inline void unuse_temporary_mm(temp_mm_state_t prev)
 {
        lockdep_assert_irqs_disabled();
        switch_mm_irqs_off(NULL, prev.prev, current);
+
+       /*
+       /*
+        * Restore the breakpoints if they were disabled when the temporary mm
+        * was loaded.
+        */
+       if (prev.bp_enabled)
+               hw_breakpoint_restore();
 }
 
 #endif /* _ASM_X86_MMU_CONTEXT_H */
-- 
2.17.1
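
For completeness, a hypothetical caller sketch (not part of the patch; the
poking_mm/poking_addr names are illustrative placeholders only) showing the
intended pairing, with IRQs disabled around the temporary mm as the lockdep
assertions require:

	/* Hypothetical usage sketch only - names are placeholders. */
	static void write_via_temporary_mm(struct mm_struct *poking_mm,
					   void *poking_addr,
					   const void *src, size_t len)
	{
		temp_mm_state_t temp_state;
		unsigned long flags;

		local_irq_save(flags);
		/* Breakpoints are disabled from here on */
		temp_state = use_temporary_mm(poking_mm);

		/* Write through the mapping that only exists in poking_mm */
		memcpy(poking_addr, src, len);

		/* Previous mm and debug registers are restored */
		unuse_temporary_mm(temp_state);
		local_irq_restore(flags);
	}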

