On Thu, Jun 21, 2018 at 02:58:59PM +0200, Mischa wrote:
> > On 19 Jun 2018, at 20:28, Mischa <obs...@high5.nl> wrote:
> > 
> >> On 19 Jun 2018, at 17:51, Mike Larkin <mlar...@azathoth.net> wrote:
> >> On Tue, Jun 19, 2018 at 03:42:06PM +0200, obs...@high5.nl wrote:
> >>>> Synopsis:        VMs stop intermittently after vcpu_run_loop error
> >>>> Category:        system
> >>>> Environment:
> >>>   System      : OpenBSD 6.3
> >>>   Details     : OpenBSD 6.3 (GENERIC.MP) #4: Sun Jun 17 11:22:20 CEST 2018
> >>>                 r...@syspatch-63-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> >>> 
> >>>   Architecture: OpenBSD.amd64
> >>>   Machine     : amd64
> >>>> Description:
> >>>   Currently running 12 VMs on a single machine. After some random
> >>> amount of time, VMs randomly shut down; sometimes it is a single VM,
> >>> sometimes more. A VM always stops after an error message like the
> >>> following:
> >>> Jun 19 11:16:49 j5 vmd[59907]: vcpu_run_loop: vm 11 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[51030]: vcpu_run_loop: vm 6 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[64667]: vcpu_run_loop: vm 9 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[53125]: vcpu_run_loop: vm 10 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[61348]: vcpu_run_loop: vm 7 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[59411]: vcpu_run_loop: vm 4 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[52083]: vcpu_run_loop: vm 12 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[21423]: vcpu_run_loop: vm 3 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[40185]: vcpu_run_loop: vm 5 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[60512]: vcpu_run_loop: vm 8 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[44187]: vcpu_run_loop: vm 1 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:16:49 j5 vmd[46728]: vcpu_run_loop: vm 2 / vcpu 0 run ioctl failed: Invalid argument
> >>> Jun 19 11:20:30 j5 vmd[81571]: vmd: vm 22 event thread exited unexpectedly
> >> 
> >> This is almost surely the following bug, fixed in April (log from pmap.c):
> >> 
> >> revision 1.113
> >> date: 2018/04/17 06:31:55;  author: mlarkin;  state: Exp;  lines: +275 -64;  commitid: BaLjO2NVfYaZP00l;
> >> Better way of allocating EPT entries.
> >> 
> >> Don't use the standard pmap PTE functions to manipulate EPT PTEs. This
> >> occasionally caused VMs to fail after random amounts of time due to
> >> loading the pmap on the CPU and the processor updating A/D bits (which
> >> are reserved bits in EPT). This ultimately manifested itself as errors
> >> from vmd ("vcpu X run ioctl failed".)
> >> 
> >> tested by many, on different types of HW, no regressions noted
> >> 
> >> ---
> >> 
> >> Can you try -current and see if you can still reproduce this problem?
> 
> Tried -current today but got a kernel panic. It seems to be unrelated to
> vmd, but I wasn't able to collect all the information needed to file a
> proper bug report; I only got the trace. I will try -current again in a
> couple of days.
> 
> Below is what I was able to collect; I will do better next time.
> 
> panic: kernel diagnostic assertion "_kernel_lock_held()" failed: file "/usr/src/sys/net/if.c", line 1382
> Stopped at      db_enter+0x12:  popq    %r11
>     TID    PID    UID     PRFLAGS     PFLAGS  CPU  COMMAND
>  374176   6581    107    0x100010  0x4000000    3K vmd
>   94393   6581    107    0x100010  0x4000000    2  vmd
>  311214   9135    107    0x100010  0x4000000    0  vmd
> *299692  82346     91        0x12          0    1  snmpd
> db_enter() at db_enter+0x12
> panic() at panic+0x138
> __assert(ffffffff818eebd4,ffff8000222de2e0,0,ffff8000222de3d8) at __assert+0x24
> ifa_ifwithaddr(1962634d45f6caa5,ffff8000222de3d8) at ifa_ifwithaddr+0xed
> in_pcbaddrisavail(c69add51b77cda1a,ffffff01f1cb6118,18,16) at in_pcbaddrisavail+0xd0
> udp_output(8b483746c94285a0,237d,0,0) at udp_output+0x168
> sosend(c197a5735fd92204,ffffff01edcbc798,ffff8000ffff9bd8,6b,5,ffff8000ffff9bf8) at sosend+0x351
> sendit(cb9e953daf7fb585,ffff8000ffff9bd8,ffff8000222de6c0,ffff8000222de5c0,ffff8000222de6d0) at sendit+0x3fb
> sys_sendmsg(e1aea756e75a2f45,1c0,ffff8000ffff9bd8) at sys_sendmsg+0x15a
> syscall(eee152956d09f98c) at syscall+0x32a
> Xsyscall_untramp(6,0,0,0,0,1c) at Xsyscall_untramp+0xe4
> end of kernel
> end trace frame: 0x7f7ffffc0410, count: 4
> https://www.openbsd.org/ddb.html describes the minimum info required in bug
> reports.  Insufficient info makes it difficult to find and fix bugs.
> ddb{1}>
> 
> 
> Mischa
> 

Yeah, that trace certainly has nothing to do with vmm/vmd :)

If you're interested, you can probably apply this diff to 6.3-stable with the
same result. The other error is likely caused by something else in the
-current tree.

Here's the original diff:


In an EPT pmap (the kind used by vmm on Intel CPUs), we should never load
a pointer to that pmap into the processor's %cr3. I tried to paper over this
restriction before by ensuring we never touched the < 512GB mappings, but
there must be some code path somewhere that still loads the pmap (something
in uvm_fault walking over the pmap, I think). If this happens, the processor
automatically sets bits in the PTEs (Accessed/Dirty) that conflict with the
EPT PTE format - the bits in those positions are reserved MBZ. When that
happens, you get an EPT misconfiguration and vmm terminates the VM.
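
To make the bit conflict concrete, here is a rough standalone sketch (not
part of the diff; the constants mirror the usual amd64 pte header values,
and the reserved range follows the comments in the code being removed below):

	#include <stdint.h>
	#include <stdio.h>

	/* regular amd64 PTE bits that the MMU sets on access/write */
	#define PG_A	0x0000000000000020ULL	/* Accessed, bit 5 */
	#define PG_M	0x0000000000000040ULL	/* Dirty, bit 6 */

	/* EPT entry permission bits, as used in the diff below */
	#define EPT_R	0x0000000000000001ULL
	#define EPT_W	0x0000000000000002ULL
	#define EPT_X	0x0000000000000004ULL

	int
	main(void)
	{
		/* a valid non-leaf EPT entry: next-level PA plus R/W/X */
		uint64_t ept_pde = 0x1234000ULL | EPT_R | EPT_W | EPT_X;

		/*
		 * If the EPT pmap is ever live in %cr3, the MMU treats
		 * this entry as a normal PTE and may set A/D; bits 5:6
		 * fall inside the 3:7 range that is MBZ at this level,
		 * so the next guest access takes an EPT misconfig exit.
		 */
		ept_pde |= PG_A | PG_M;
		printf("reserved (MBZ) bits now set: 0x%llx\n",
		    (unsigned long long)(ept_pde & 0xF8));
		return 0;
	}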

Some machines (like my Haswell or my X230) exhibited this bug only very
infrequently.

Some machines (like my Dell, and those owned by a few people on this list)
exhibit the bug almost immediately after starting a VM.

This diff creates a set of slimmed-down PTE manipulation routines for EPT
pmaps, based roughly on the code I wrote for the meltdown mitigation. It also
ensures (via KASSERT) that we never load an EPT pmap directly on the CPU
anymore, and it gives back the meltdown PML4 page, which is not needed in
EPT pmaps.
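
Condensed, the two guard pieces are (excerpted verbatim from the diff below):

	/* pmap_map_ptes(): an EPT pmap must never end up in %cr3 */
	KASSERT(pmap->pm_type != PMAP_TYPE_EPT);

	/* pmap_convert(): the meltdown U-K PML4 page is unused for EPT */
	if (pmap->pm_pdir_intel) {
		pool_put(&pmap_pdp_pool, pmap->pm_pdir_intel);
		pmap->pm_pdir_intel = 0;
	}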

I've tested this in anger on the dell that had problems and it seems to fix the
issue; I've been running about a dozen mixed OpenBSD/Linux VMs with active use
in each (builds/updates/etc) for a few hours without any misconfigurations
seen.

The PTE removal routine isn't particularly optimized, but EPT pmaps don't
generally see PTE removals while the VM is running.

As this only affects EPT pmaps, I'd like to get this in, presuming the Dell
keeps running cleanly for a few more days. It shouldn't affect AMD RVI or
any regular pmaps.
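
Concretely, the dispatch is just a pm_type check (this is the pmap_remove
hunk from the diff below; pmap_enter gets the equivalent check), so RVI and
regular pmaps keep taking the existing path:

	void
	pmap_remove(struct pmap *pmap, vaddr_t sva, vaddr_t eva)
	{
		if (pmap->pm_type == PMAP_TYPE_EPT)
			pmap_remove_ept(pmap, sva, eva);
		else
			pmap_do_remove(pmap, sva, eva, PMAP_REMOVE_ALL);
	}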

I'll clean up the various DPRINTFs before commit if everything looks good.

Can people who use vmm(4) apply this and let me know if any regressions are
seen?

-ml

Index: arch/amd64/amd64/pmap.c
===================================================================
RCS file: /cvs/src/sys/arch/amd64/amd64/pmap.c,v
retrieving revision 1.111
diff -u -p -a -u -r1.111 pmap.c
--- arch/amd64/amd64/pmap.c     13 Mar 2018 07:37:58 -0000      1.111
+++ arch/amd64/amd64/pmap.c     13 Apr 2018 21:32:16 -0000
@@ -299,6 +299,9 @@ static boolean_t pmap_is_active(struct p
 paddr_t pmap_map_ptes(struct pmap *);
 struct pv_entry *pmap_remove_pv(struct vm_page *, struct pmap *, vaddr_t);
 void pmap_do_remove(struct pmap *, vaddr_t, vaddr_t, int);
+void pmap_remove_ept(struct pmap *, vaddr_t, vaddr_t);
+void pmap_do_remove_ept(struct pmap *, vaddr_t);
+int pmap_enter_ept(struct pmap *, vaddr_t, paddr_t, vm_prot_t);
 boolean_t pmap_remove_pte(struct pmap *, struct vm_page *, pt_entry_t *,
     vaddr_t, int, struct pv_entry **);
 void pmap_remove_ptes(struct pmap *, struct vm_page *, vaddr_t,
@@ -369,13 +372,16 @@ pmap_sync_flags_pte(struct vm_page *pg, 
 
 /*
  * pmap_map_ptes: map a pmap's PTEs into KVM
+ *
+ * This should not be done for EPT pmaps
  */
-
 paddr_t
 pmap_map_ptes(struct pmap *pmap)
 {
        paddr_t cr3 = rcr3();
 
+       KASSERT(pmap->pm_type != PMAP_TYPE_EPT);
+
        /* the kernel's pmap is always accessible */
        if (pmap == pmap_kernel() || pmap->pm_pdirpa == cr3) {
                cr3 = 0;
@@ -409,62 +415,6 @@ pmap_unmap_ptes(struct pmap *pmap, paddr
        }
 }
 
-/*
- * pmap_fix_ept
- *
- * Fixes up an EPT PTE for vaddr 'va' by reconfiguring the low bits to
- * conform to the EPT format (separate R/W/X bits and various "must be
- * 0 bits")
- *
- * Parameters:
- *  pm: The pmap in question
- *  va: The VA to fix up
- */
-void
-pmap_fix_ept(struct pmap *pm, vaddr_t va)
-{
-       u_long mask, shift;
-       pd_entry_t pde, *pd;
-       paddr_t pdpa;
-       int lev, offs;
-
-       pdpa = pm->pm_pdirpa;
-       shift = L4_SHIFT;
-       mask = L4_MASK;
-       for (lev = PTP_LEVELS; lev > 0; lev--) {
-               pd = (pd_entry_t *)PMAP_DIRECT_MAP(pdpa);
-               offs = (VA_SIGN_POS(va) & mask) >> shift;
-
-               pd[offs] |= EPT_R | EPT_W | EPT_X;
-               /*
-                * Levels 3-4 have bits 3:7 'must be 0'
-                * Level 2 has bits 3:6 'must be 0', and bit 7 is always
-                * 0 in our EPT format (thus, bits 3:7 == 0)
-                */
-               switch(lev) {
-               case 4:
-               case 3:
-               case 2:
-                       /* Bits 3:7 = 0 */
-                       pd[offs] &= ~(0xF8);
-                       break;
-               case 1: pd[offs] |= EPT_WB;
-                       break;
-               }
-               
-               pde = pd[offs];
-
-               /* Large pages are different, break early if we run into one. */
-               if ((pde & (PG_PS|PG_V)) != PG_V)
-                       panic("pmap_fix_ept: large page in EPT");
-
-               pdpa = (pd[offs] & PG_FRAME);
-               /* 4096/8 == 512 == 2^9 entries per level */
-               shift -= 9;
-               mask >>= 9;
-       }
-}
-
 int
 pmap_find_pte_direct(struct pmap *pm, vaddr_t va, pt_entry_t **pd, int *offs)
 {
@@ -1195,6 +1145,8 @@ pmap_destroy(struct pmap *pmap)
                        KASSERT((pg->pg_flags & PG_BUSY) == 0);
 
                        pg->wire_count = 0;
+                       pmap->pm_stats.resident_count--;
+                       
                        uvm_pagefree(pg);
                }
        }
@@ -1517,7 +1469,10 @@ pmap_remove_pte(struct pmap *pmap, struc
 void
 pmap_remove(struct pmap *pmap, vaddr_t sva, vaddr_t eva)
 {
-       pmap_do_remove(pmap, sva, eva, PMAP_REMOVE_ALL);
+       if (pmap->pm_type == PMAP_TYPE_EPT)
+               pmap_remove_ept(pmap, sva, eva);
+       else
+               pmap_do_remove(pmap, sva, eva, PMAP_REMOVE_ALL);
 }
 
 /*
@@ -2141,6 +2096,277 @@ pmap_enter_special(vaddr_t va, paddr_t p
                DPRINTF("%s: no U+K mapping for special mapping?\n", __func__);
 }
 
+void pmap_remove_ept(struct pmap *pmap, vaddr_t sgpa, vaddr_t egpa)
+{
+       vaddr_t v;
+
+       for (v = sgpa; v < egpa + PAGE_SIZE; v += PAGE_SIZE)
+               pmap_do_remove_ept(pmap, v);
+}
+
+void
+pmap_do_remove_ept(struct pmap *pmap, paddr_t gpa)
+{
+       uint64_t l4idx, l3idx, l2idx, l1idx;
+       struct vm_page *pg3, *pg2, *pg1;
+       paddr_t npa3, npa2, npa1;
+       pd_entry_t *pd4, *pd3, *pd2, *pd1;
+       pd_entry_t *pptes;
+
+       l4idx = (gpa & L4_MASK) >> L4_SHIFT; /* PML4E idx */
+       l3idx = (gpa & L3_MASK) >> L3_SHIFT; /* PDPTE idx */
+       l2idx = (gpa & L2_MASK) >> L2_SHIFT; /* PDE idx */
+       l1idx = (gpa & L1_MASK) >> L1_SHIFT; /* PTE idx */
+
+       /* Start at PML4 / top level */
+       pd4 = (pd_entry_t *)pmap->pm_pdir;
+
+       if (!pd4)
+               return;
+
+       /* npa3 = physaddr of PDPT */
+       npa3 = pd4[l4idx] & PMAP_PA_MASK;
+       if (!npa3)
+               return;
+       pd3 = (pd_entry_t *)PMAP_DIRECT_MAP(npa3);
+       pg3 = PHYS_TO_VM_PAGE(npa3);
+
+       /* npa2 = physaddr of PD page */
+       npa2 = pd3[l3idx] & PMAP_PA_MASK;
+       if (!npa2)
+               return;
+       pd2 = (pd_entry_t *)PMAP_DIRECT_MAP(npa2);
+       pg2 = PHYS_TO_VM_PAGE(npa2);
+
+       /* npa1 = physaddr of PT page */
+       npa1 = pd2[l2idx] & PMAP_PA_MASK;
+       if (!npa1)
+               return;
+       pd1 = (pd_entry_t *)PMAP_DIRECT_MAP(npa1);
+       pg1 = PHYS_TO_VM_PAGE(npa1);
+
+       if (pd1[l1idx] == 0)
+               return;
+
+       pd1[l1idx] = 0;
+       pg1->wire_count--;
+       pmap->pm_stats.resident_count--;
+
+       if (pg1->wire_count > 1)
+               return;
+
+       pg1->wire_count = 0;
+       pptes = (pd_entry_t *)PMAP_DIRECT_MAP(npa2);
+       pptes[l2idx] = 0;
+       uvm_pagefree(pg1);
+       pmap->pm_stats.resident_count--;
+
+       pg2->wire_count--;
+       if (pg2->wire_count > 1)
+               return;
+
+       pg2->wire_count = 0;
+       pptes = (pd_entry_t *)PMAP_DIRECT_MAP(npa3);
+       pptes[l3idx] = 0;
+       uvm_pagefree(pg2);
+       pmap->pm_stats.resident_count--;
+
+       pg3->wire_count--;
+       if (pg3->wire_count > 1)
+               return;
+
+       pg3->wire_count = 0;
+       pptes = pd4;
+       pptes[l4idx] = 0;
+       uvm_pagefree(pg3);
+       pmap->pm_stats.resident_count--;
+}
+
+int
+pmap_enter_ept(struct pmap *pmap, paddr_t gpa, paddr_t hpa, vm_prot_t prot)
+{
+       uint64_t l4idx, l3idx, l2idx, l1idx;
+       pd_entry_t *pd, npte;
+       struct vm_page *ptp, *pptp, *pg;
+       paddr_t npa;
+       struct uvm_object *obj;
+
+       if (gpa > MAXDSIZ)
+               return ENOMEM;
+
+       l4idx = (gpa & L4_MASK) >> L4_SHIFT; /* PML4E idx */
+       l3idx = (gpa & L3_MASK) >> L3_SHIFT; /* PDPTE idx */
+       l2idx = (gpa & L2_MASK) >> L2_SHIFT; /* PDE idx */
+       l1idx = (gpa & L1_MASK) >> L1_SHIFT; /* PTE idx */
+
+       DPRINTF("%s: gpa=0x%llx hpa=0x%llx l4idx=%lld l3idx=%lld "
+           "l2idx=%lld l1idx=%lld\n", __func__, (uint64_t)gpa,
+           (uint64_t)hpa, l4idx, l3idx, l2idx, l1idx);
+
+       /* Start at PML4 / top level */
+       pd = (pd_entry_t *)pmap->pm_pdir;
+
+       if (!pd)
+               return ENOMEM;
+
+       /* npa = physaddr of PDPT */
+       npa = pd[l4idx] & PMAP_PA_MASK;
+
+       /* Valid PML4e for the 512GB region containing gpa? */
+       if (!npa) {
+               /* No valid PML4e - allocate PDPT page and set PML4e */
+               obj = &pmap->pm_obj[2]; /* PML4 UVM object */
+               ptp = uvm_pagealloc(obj, ptp_va2o(gpa, 3), NULL,
+                   UVM_PGA_USERESERVE|UVM_PGA_ZERO);
+
+               if (ptp == NULL)
+                       return ENOMEM;
+
+               /*
+                * New PDPT page - we are setting the first entry, so set
+                * the wired count to 1
+                */
+               ptp->wire_count = 1;
+
+               /* Calculate phys address of this new PDPT page */
+               npa = VM_PAGE_TO_PHYS(ptp);
+
+               /*
+                * Higher levels get full perms; specific permissions are
+                * entered at the lowest level.
+                */
+               pd[l4idx] = (npa | EPT_R | EPT_W | EPT_X);
+
+               pmap->pm_stats.resident_count++;
+
+               DPRINTF("%s: allocated new PDPT page at phys 0x%llx, "
+                   "setting PML4e[%lld] = 0x%llx\n", __func__,
+                   (uint64_t)npa, l4idx, pd[l4idx]);
+
+               pptp = ptp;
+       } else {
+               /* Already allocated PML4e */
+               pptp = PHYS_TO_VM_PAGE(npa);
+       }
+
+       pd = (pd_entry_t *)PMAP_DIRECT_MAP(npa);
+       if (!pd)
+               panic("%s: can't locate PDPT @ pa=0x%llx\n", __func__,
+                   (uint64_t)npa);
+
+       /* npa = physaddr of PD page */
+       npa = pd[l3idx] & PMAP_PA_MASK;
+
+       /* Valid PDPTe for the 1GB region containing gpa? */
+       if (!npa) {
+               /* No valid PDPTe - allocate PD page and set PDPTe */
+               obj = &pmap->pm_obj[1]; /* PDPT UVM object */
+               ptp = uvm_pagealloc(obj, ptp_va2o(gpa, 2), NULL,
+                   UVM_PGA_USERESERVE|UVM_PGA_ZERO);
+
+               if (ptp == NULL)
+                       return ENOMEM;
+
+               /*
+                * New PD page - we are setting the first entry, so set
+                * the wired count to 1
+                */
+               ptp->wire_count = 1;
+               pptp->wire_count++;
+
+               npa = VM_PAGE_TO_PHYS(ptp);
+
+               /*
+                * Higher levels get full perms; specific permissions are
+                * entered at the lowest level.
+                */
+               pd[l3idx] = (npa | EPT_R | EPT_W | EPT_X);
+
+               pmap->pm_stats.resident_count++;
+
+               DPRINTF("%s: allocated new PD page at phys 0x%llx, "
+                   "setting PDPTe[%lld] = 0x%llx\n", __func__,
+                   (uint64_t)npa, l3idx, pd[l3idx]);
+
+               pptp = ptp;
+       } else {
+               /* Already allocated PDPTe */
+               pptp = PHYS_TO_VM_PAGE(npa);
+       }
+
+       pd = (pd_entry_t *)PMAP_DIRECT_MAP(npa);
+       if (!pd)
+               panic("%s: can't locate PD page @ pa=0x%llx\n", __func__,
+                   (uint64_t)npa);
+
+       /* npa = physaddr of PT page */
+       npa = pd[l2idx] & PMAP_PA_MASK;
+
+       /* Valid PDE for the 2MB region containing gpa? */
+       if (!npa) {
+               /* No valid PDE - allocate PT page and set PDE */
+               obj = &pmap->pm_obj[0]; /* PDE UVM object */
+               ptp = uvm_pagealloc(obj, ptp_va2o(gpa, 1), NULL,
+                   UVM_PGA_USERESERVE|UVM_PGA_ZERO);
+
+               if (ptp == NULL)
+                       return ENOMEM;
+
+               pptp->wire_count++;
+
+               npa = VM_PAGE_TO_PHYS(ptp);
+
+               /*
+                * Higher level get full perms; specific permissions are
+                * entered at the lowest level.
+                */
+               pd[l2idx] = (npa | EPT_R | EPT_W | EPT_X);
+
+               pmap->pm_stats.resident_count++;
+
+               DPRINTF("%s: allocated new PT page at phys 0x%llx, "
+                   "setting PDE[%lld] = 0x%llx\n", __func__,
+                   (uint64_t)npa, l2idx, pd[l2idx]);
+       } else {
+               /* Find final ptp */
+               ptp = PHYS_TO_VM_PAGE(npa);
+               if (ptp == NULL)
+                       panic("%s: ptp page vanished?", __func__);
+       }
+
+       pd = (pd_entry_t *)PMAP_DIRECT_MAP(npa);
+       if (!pd)
+               panic("%s: can't locate PT page @ pa=0x%llx\n", __func__,
+                   (uint64_t)npa);
+
+       DPRINTF("%s: setting PTE, PT page @ phys 0x%llx virt 0x%llx prot "
+           "0x%llx was 0x%llx\n", __func__, (uint64_t)npa, (uint64_t)pd,
+           (uint64_t)prot, (uint64_t)pd[l1idx]);
+
+       npte = hpa | EPT_WB;
+       if (prot & PROT_READ)
+               npte |= EPT_R;
+       if (prot & PROT_WRITE)
+               npte |= EPT_W;
+       if (prot & PROT_EXEC)
+               npte |= EPT_X;
+
+       pg = PHYS_TO_VM_PAGE(hpa);
+
+       if (pd[l1idx] == 0) {
+               ptp->wire_count++;
+               pmap->pm_stats.resident_count++;
+       } else {
+               /* XXX flush ept */
+       }
+       
+       pd[l1idx] = npte;
+
+       DPRINTF("%s: setting PTE[%lld] = 0x%llx\n", __func__, l1idx, pd[l1idx]);
+
+       return 0;
+}
+
 /*
  * pmap_enter: enter a mapping into a pmap
  *
@@ -2160,6 +2386,9 @@ pmap_enter(struct pmap *pmap, vaddr_t va
        int error, shootself;
        paddr_t scr3;
 
+       if (pmap->pm_type == PMAP_TYPE_EPT)
+               return pmap_enter_ept(pmap, va, pa, prot);
+
        KASSERT(!(wc && nocache));
        pa &= PMAP_PA_MASK;
 
@@ -2355,9 +2584,6 @@ enter_now:
 
        error = 0;
 
-       if (pmap->pm_type == PMAP_TYPE_EPT)
-               pmap_fix_ept(pmap, va);
-
 out:
        if (pve)
                pool_put(&pmap_pv_pool, pve);
@@ -2588,9 +2814,15 @@ pmap_convert(struct pmap *pmap, int mode
        pmap->pm_type = mode;
 
        if (mode == PMAP_TYPE_EPT) {
-               /* Clear low 512GB region (first PML4E) */
+               /* Clear PML4 */
                pte = (pt_entry_t *)pmap->pm_pdir;
-               *pte = 0;
+               memset(pte, 0, PAGE_SIZE);
+
+               /* Give back the meltdown pdir */
+               if (pmap->pm_pdir_intel) {
+                       pool_put(&pmap_pdp_pool, pmap->pm_pdir_intel);
+                       pmap->pm_pdir_intel = 0;
+               }
        }
 
        return (0);     
