On 30/03/25(Sun) 15:38, Martin Pieuchot wrote:
> On 28/03/25(Fri) 19:53, Miod Vallat wrote:
> > > If this code has never been tested on pmap_kernel() then it is dead code
> > > and I'd rather remove it. Whoever wants to reduce the permissions of the
> > > mapping will have to check on all architectures that this is supported.
> >
> > Well it is obvious that, because of the incorrect end address argument,
> > this call to uvm_map_protect() has never done anything, but it would be
> > nice to have the fixed call anyway.
>
> I agree. I'm just not going to do it myself. This is a new feature.
>
> > How about keeping it within
> >
> > /* pmap_write_protect() needs fixing to cope with pmap_kernel() on x86 */
> > #if !defined(__amd64__) && !defined(__i386__)
> > the uvm_map_protect() call
> > #endif
> >
> > so that other platforms, where quick inspection of their pmap code shows
> > they ought to behave correctly, can benefit from the sigcode page being
> > made read-only?
>
> I'm happy to hear that. We all agree they can benefit from such a change.
> That said, I'm not going to test pmap features across our architectures.
>
> Can I have an ok for my diff? Or should I drop it?

Guys, I'm going to commit the diff below so I can move forward with the
open UVM bugs. I'll leave the pmap fun to you, since you seem to insist
this should be fixed.
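
For context, the end-address bug discussed above comes down to passing a
size where uvm_map_protect() expects an end address. A minimal sketch of
the broken vs. fixed call (illustrative only, mirroring the calls in the
diff below rather than standalone code):

	/*
	 * Broken: round_page(sz) is a size, not an end address, so the
	 * range [va, round_page(sz)) covers nothing and the protection
	 * of the sigcode mapping never changed.
	 */
	(void) uvm_map_protect(kernel_map, va, round_page(sz),
	    PROT_READ, 0, FALSE, FALSE);

	/* Fixed: the end address is start + size, rounded to a page. */
	r = uvm_map_protect(kernel_map, va, round_page(va + sz),
	    PROT_READ, 0, FALSE, FALSE);
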
Index: kern/kern_exec.c
===================================================================
RCS file: /cvs/src/sys/kern/kern_exec.c,v
diff -u -p -r1.266 kern_exec.c
--- kern/kern_exec.c 15 Aug 2025 04:21:00 -0000 1.266
+++ kern/kern_exec.c 15 Aug 2025 08:07:51 -0000
@@ -875,15 +875,15 @@ exec_sigcode_map(struct process *pr)
 		int r;
 
 		sigobject = uao_create(sz, 0);
-		uao_reference(sigobject);	/* permanent reference */
-
 		if ((r = uvm_map(kernel_map, &va, round_page(sz), sigobject,
 		    0, 0, UVM_MAPFLAG(PROT_READ | PROT_WRITE, PROT_READ |
 		    PROT_WRITE, MAP_INHERIT_SHARE, MADV_RANDOM, 0)))) {
 			uao_detach(sigobject);
+			sigobject = NULL;
 			return (ENOMEM);
 		}
+		uao_reference(sigobject);	/* permanent reference */
 
 		for (off = 0, left = round_page(sz); left != 0;
 		    off += sigfillsiz) {
 			size_t chunk = ulmin(left, sigfillsiz);
@@ -891,9 +891,19 @@ exec_sigcode_map(struct process *pr)
 			left -= chunk;
 		}
 		memcpy((caddr_t)va, sigcode, sz);
-
-		(void) uvm_map_protect(kernel_map, va, round_page(sz),
+#if notyet
+		/*
+		 * This has never been tested on pmap_kernel() and blows up
+		 * at least on amd64.
+		 */
+		r = uvm_map_protect(kernel_map, va, round_page(va + sz),
 		    PROT_READ, 0, FALSE, FALSE);
+		if (r) {
+			uao_detach(sigobject);
+			sigobject = NULL;
+			return (r);
+		}
+#endif
 		sigcode_va = va;
 		sigcode_sz = round_page(sz);
 	}
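
For whoever picks up the pmap side: Miod's per-architecture guard from
the quoted mail would look roughly like this around the call in
exec_sigcode_map() (untested, just restating his suggestion with the
code from the diff above):

/* pmap_write_protect() needs fixing to cope with pmap_kernel() on x86 */
#if !defined(__amd64__) && !defined(__i386__)
	r = uvm_map_protect(kernel_map, va, round_page(va + sz),
	    PROT_READ, 0, FALSE, FALSE);
	if (r) {
		uao_detach(sigobject);
		sigobject = NULL;
		return (r);
	}
#endif

That would keep the sigcode page read-only on the platforms whose pmap
already copes with it, without blocking the diff above.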