> Date: Fri, 15 Aug 2025 11:51:18 +0200
> From: Martin Pieuchot <[email protected]>
>
> On 15/08/25(Fri) 09:39, Miod Vallat wrote:
> > > Please don't. Keeping that page read-only is important for security.
> > > Maybe if nobody cares about the amd64 and i386 pmaps we should just
> > > delete those architectures?
> >
> > But remember, because the end argument was wrong (sz instead of va +
> > sz), this call did *nothing*.
> >
> > At least the commented out code will be correct now.
>
> Exactly. Since you committed this code Mark it does nothing. That's
> what I said in the original report it's dead code.
The original code (before I "broke" it) unmapped the memory. BTW, that
means we need to fix the comment.
Anyway, I went back to those mails from March and I think I have a
simple fix. This just adopts the idiom used by pmap_do_remove(), which
*must* work correctly for both userland and kernel addresses.
I simply dropped those bogus "& PG_FRAME" statements. As the comments
say, they should be no-ops, and other architectures don't seem to care,
which suggests the comments are right. And even if someone passed in
non-page-aligned addresses, the code would still do the right thing.
Tested (with a diff to fix the uvm_map_protect() call) on amd64; only
compile-tested on i386. But if we break things, someone will fix it,
won't they?
ok?
Index: arch/amd64/amd64/pmap.c
===================================================================
RCS file: /cvs/src/sys/arch/amd64/amd64/pmap.c,v
retrieving revision 1.181
diff -u -p -r1.181 pmap.c
--- arch/amd64/amd64/pmap.c 5 Jul 2025 22:54:53 -0000 1.181
+++ arch/amd64/amd64/pmap.c 15 Aug 2025 11:06:19 -0000
@@ -2155,7 +2155,7 @@ pmap_write_protect(struct pmap *pmap, va
{
pt_entry_t *spte, *epte;
pt_entry_t clear = 0, set = 0;
- vaddr_t blockend;
+ vaddr_t blkendva;
int shootall = 0, shootself;
vaddr_t va;
paddr_t scr3;
@@ -2163,10 +2163,6 @@ pmap_write_protect(struct pmap *pmap, va
scr3 = pmap_map_ptes(pmap);
shootself = (scr3 == 0);
- /* should be ok, but just in case ... */
- sva &= PG_FRAME;
- eva &= PG_FRAME;
-
if (!(prot & PROT_READ))
set |= pg_xo;
if (!(prot & PROT_WRITE))
@@ -2177,10 +2173,11 @@ pmap_write_protect(struct pmap *pmap, va
if ((eva - sva > 32 * PAGE_SIZE) && sva < VM_MIN_KERNEL_ADDRESS)
shootall = 1;
- for (va = sva; va < eva ; va = blockend) {
- blockend = (va & L2_FRAME) + NBPD_L2;
- if (blockend > eva)
- blockend = eva;
+ for (va = sva; va < eva; va = blkendva) {
+ /* determine range of block */
+ blkendva = x86_round_pdr(va + 1);
+ if (blkendva > eva)
+ blkendva = eva;
/*
* XXXCDC: our PTE mappings should never be write-protected!
@@ -2205,7 +2202,7 @@ pmap_write_protect(struct pmap *pmap, va
#endif
spte = &PTE_BASE[pl1_i(va)];
- epte = &PTE_BASE[pl1_i(blockend)];
+ epte = &PTE_BASE[pl1_i(blkendva)];
for (/*null */; spte < epte ; spte++) {
if (!pmap_valid_entry(*spte))
Index: arch/i386/i386/pmap.c
===================================================================
RCS file: /cvs/src/sys/arch/i386/i386/pmap.c,v
retrieving revision 1.228
diff -u -p -r1.228 pmap.c
--- arch/i386/i386/pmap.c 7 Apr 2025 17:37:31 -0000 1.228
+++ arch/i386/i386/pmap.c 15 Aug 2025 11:06:19 -0000
@@ -2111,24 +2111,21 @@ pmap_write_protect_86(struct pmap *pmap,
vm_prot_t prot)
{
pt_entry_t *ptes, *spte, *epte, npte, opte;
- vaddr_t blockend;
+ vaddr_t blkendva;
u_int32_t md_prot;
vaddr_t va;
int shootall = 0;
ptes = pmap_map_ptes_86(pmap); /* locks pmap */
- /* should be ok, but just in case ... */
- sva &= PG_FRAME;
- eva &= PG_FRAME;
-
if ((eva - sva > 32 * PAGE_SIZE) && pmap != pmap_kernel())
shootall = 1;
- for (va = sva; va < eva; va = blockend) {
- blockend = (va & PD_MASK) + NBPD;
- if (blockend > eva)
- blockend = eva;
+ for (va = sva; va < eva; va = blkendva) {
+ /* determine range of block */
+ blkendva = i386_round_pdr(va + 1);
+ if (blkendva > eva)
+ blkendva = eva;
/*
* XXXCDC: our PTE mappings should never be write-protected!
@@ -2155,7 +2152,7 @@ pmap_write_protect_86(struct pmap *pmap,
md_prot |= PG_RW;
spte = &ptes[atop(va)];
- epte = &ptes[atop(blockend)];
+ epte = &ptes[atop(blkendva)];
for (/*null */; spte < epte ; spte++, va += PAGE_SIZE) {
Index: arch/i386/i386/pmapae.c
===================================================================
RCS file: /cvs/src/sys/arch/i386/i386/pmapae.c,v
retrieving revision 1.76
diff -u -p -r1.76 pmapae.c
--- arch/i386/i386/pmapae.c 18 Mar 2025 22:12:12 -0000 1.76
+++ arch/i386/i386/pmapae.c 15 Aug 2025 11:06:19 -0000
@@ -1543,24 +1543,21 @@ pmap_write_protect_pae(struct pmap *pmap
vm_prot_t prot)
{
pt_entry_t *ptes, *spte, *epte, npte, opte;
- vaddr_t blockend;
+ vaddr_t blkendva;
u_int64_t md_prot;
vaddr_t va;
int shootall = 0;
ptes = pmap_map_ptes_pae(pmap); /* locks pmap */
- /* should be ok, but just in case ... */
- sva &= PG_FRAME;
- eva &= PG_FRAME;
-
if ((eva - sva > 32 * PAGE_SIZE) && pmap != pmap_kernel())
shootall = 1;
- for (va = sva; va < eva; va = blockend) {
- blockend = (va & PD_MASK) + NBPD;
- if (blockend > eva)
- blockend = eva;
+ for (va = sva; va < eva; va = blkendva) {
+ /* determine range of block */
+ blkendva = i386_round_pdr(va + 1);
+ if (blkendva > eva)
+ blkendva = eva;
/*
* XXXCDC: our PTE mappings should never be write-protected!
@@ -1589,7 +1586,7 @@ pmap_write_protect_pae(struct pmap *pmap
md_prot |= PG_RW;
spte = &ptes[atop(va)];
- epte = &ptes[atop(blockend)];
+ epte = &ptes[atop(blkendva)];
for (/*null */; spte < epte ; spte++, va += PAGE_SIZE) {