Re: Stop using direct syscall(2) from perl(1)
On Sun, Jul 09, 2023 at 01:29:58PM -0700, Andrew Hewus Fresh wrote: > Here is a patch to replace perl(1)'s use of syscall(2) with a dispatcher > that will call the libc function instead. > > I have to do some work on style before this is ready to commit, but it > should be ready for some testing. > > I don't currently plan on committing syscall_emulator.c because we need > to regenerate it each time as I'm not sure how to tie it into sys/kern's > `make syscalls` to only do it when things change. > > Looking for tests from folks who use syscall from perl as well as style > suggestions (like how do I correctly sort headers?) and maybe even > ways to enable additional syscalls. Here's a new version, I think I addressed all of the feedback I got, although may have missed something. Hopefully it is easier to test with sthen@'s fix. Thanks miod@, espie@, sthen@, and especially gkoehler@ for a lot of excellent feedback. I'm sure there's still stuff missing and obvious problems that don't stand out to someone who writes a lot of perl. Starting Saturday I'm traveling for work for the week, so will probably be even less responsive than usual until I'm back. Index: gnu/usr.bin/perl/MANIFEST === RCS file: /home/afresh1/OpenBSD-perl/OP/cvs/src/gnu/usr.bin/perl/MANIFEST,v retrieving revision 1.67 diff -u -p -a -u -p -r1.67 MANIFEST --- gnu/usr.bin/perl/MANIFEST 8 Jul 2023 14:18:35 - 1.67 +++ gnu/usr.bin/perl/MANIFEST 3 Aug 2023 04:34:38 - @@ -6605,6 +6605,7 @@ t/op/svleak.plTest file for svleak.t t/op/svleak.t See if stuff leaks SVs t/op/switch.t See if switches (given/when) work t/op/symbolcache.t See if undef/delete works on stashes with functions +t/op/syscall_emulator.tTests that syscall works via the emulator t/op/sysio.t See if sysread and syswrite work t/op/taint.t See if tainting works t/op/threads.t Misc. 
tests for perl features with threads Index: gnu/usr.bin/perl/Makefile.SH === RCS file: /home/afresh1/OpenBSD-perl/OP/cvs/src/gnu/usr.bin/perl/Makefile.SH,v retrieving revision 1.60 diff -u -p -a -u -p -r1.60 Makefile.SH --- gnu/usr.bin/perl/Makefile.SH8 Jul 2023 14:18:35 - 1.60 +++ gnu/usr.bin/perl/Makefile.SH3 Aug 2023 04:34:38 - @@ -541,7 +541,7 @@ c1 = av.c scope.c op.c doop.c doio.c dum c2 = perly.c pp.c pp_hot.c pp_ctl.c pp_sys.c regcomp.c regexec.c utf8.c sv.c c3 = taint.c toke.c util.c deb.c run.c builtin.c universal.c pad.c globals.c keywords.c c4 = perlio.c numeric.c mathoms.c locale.c pp_pack.c pp_sort.c caretx.c dquote.c time64.c -c5 = $(mallocsrc) +c5 = $(mallocsrc) syscall_emulator.c !NO!SUBS! @@ -557,7 +557,7 @@ c = $(c1) $(c2) $(c3) $(c4) $(c5) minipe obj1 = $(mallocobj) gv$(OBJ_EXT) toke$(OBJ_EXT) perly$(OBJ_EXT) pad$(OBJ_EXT) regcomp$(OBJ_EXT) dump$(OBJ_EXT) util$(OBJ_EXT) mg$(OBJ_EXT) reentr$(OBJ_EXT) mro_core$(OBJ_EXT) keywords$(OBJ_EXT) builtin$(OBJ_EXT) obj2 = hv$(OBJ_EXT) av$(OBJ_EXT) run$(OBJ_EXT) pp_hot$(OBJ_EXT) sv$(OBJ_EXT) pp$(OBJ_EXT) scope$(OBJ_EXT) pp_ctl$(OBJ_EXT) pp_sys$(OBJ_EXT) -obj3 = doop$(OBJ_EXT) doio$(OBJ_EXT) regexec$(OBJ_EXT) utf8$(OBJ_EXT) taint$(OBJ_EXT) deb$(OBJ_EXT) globals$(OBJ_EXT) perlio$(OBJ_EXT) numeric$(OBJ_EXT) mathoms$(OBJ_EXT) locale$(OBJ_EXT) pp_pack$(OBJ_EXT) pp_sort$(OBJ_EXT) caretx$(OBJ_EXT) dquote$(OBJ_EXT) time64$(OBJ_EXT) +obj3 = doop$(OBJ_EXT) doio$(OBJ_EXT) regexec$(OBJ_EXT) utf8$(OBJ_EXT) taint$(OBJ_EXT) deb$(OBJ_EXT) globals$(OBJ_EXT) perlio$(OBJ_EXT) numeric$(OBJ_EXT) mathoms$(OBJ_EXT) locale$(OBJ_EXT) pp_pack$(OBJ_EXT) pp_sort$(OBJ_EXT) caretx$(OBJ_EXT) dquote$(OBJ_EXT) time64$(OBJ_EXT) syscall_emulator$(OBJ_EXT) # split the objects into 3 exclusive sets: those used by both miniperl and # perl, and those used by just one or the other. 
Doesn't include the Index: gnu/usr.bin/perl/Makefile.bsd-wrapper === RCS file: /home/afresh1/OpenBSD-perl/OP/cvs/src/gnu/usr.bin/perl/Makefile.bsd-wrapper,v retrieving revision 1.113 diff -u -p -a -u -p -r1.113 Makefile.bsd-wrapper --- gnu/usr.bin/perl/Makefile.bsd-wrapper 15 Feb 2023 01:38:20 - 1.113 +++ gnu/usr.bin/perl/Makefile.bsd-wrapper 3 Aug 2023 04:34:38 - @@ -39,11 +39,18 @@ cleandir: fi cd ${.CURDIR} && ${MAKE} -f Makefile.bsd-wrapper1 cleandir -all: config.sh +all: syscall_emulator.c config.sh cd ${.CURDIR} && exec ${MAKE} -f Makefile.bsd-wrapper1 perl.build cd ${.CURDIR} && exec ${MAKE} -f Makefile.bsd-wrapper1 mansrc.build install: cd ${.CURDIR} && exec ${MAKE} -f Makefile.bsd-wrapper1 install + + +syscall_emulator.c: gen_syscall_emulator.pl syscall_emulator.h /usr/include/sys/syscall.h /usr/include/sys/syscallargs
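The generated dispatcher itself is not part of the posted diff, but the idea can be sketched. The following is a hypothetical, minimal illustration (only two syscalls shown; the function name follows the patch's syscall_emulator.c, which is actually generated from the syscall table by gen_syscall_emulator.pl and handles each syscall's real argument types):

```c
/*
 * Hypothetical sketch of the dispatcher idea: instead of trapping
 * through the kernel's syscall(2) trampoline, map the syscall number
 * onto the matching libc wrapper.  The real generated file covers the
 * full table; this toy version handles two argument-less syscalls and
 * rejects everything else.
 */
#include <errno.h>
#include <stdarg.h>
#include <sys/syscall.h>
#include <unistd.h>

long
syscall_emulator(int syscall, ...)
{
	long ret = 0;
	va_list args;

	va_start(args, syscall);
	switch (syscall) {
	case SYS_getpid:
		ret = getpid();
		break;
	case SYS_getppid:
		ret = getppid();
		break;
	default:
		/* not (yet) emulated */
		ret = -1;
		errno = ENOSYS;
		break;
	}
	va_end(args);
	return ret;
}
```

Perl's `syscall(&SYS_getpid)` would then resolve through this function rather than the raw syscall gate, which is what lets the libc-level syscall restrictions keep working.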
Re: vmd: handle EAGAIN from imsg_flush
Claudio Jeker writes:

> On Thu, Aug 03, 2023 at 04:20:47PM -0400, Dave Voutila wrote:
>> Found this while working on some virtio stuff. My original
>> implementation as part of the multi-process redesign didn't handle if
>> imsg_flush failed due to resource shortages (EAGAIN), so adopt the
>> do/while idiom used by other daemons like iked(8).
>>
>> Manifests with errors from the vm process looking like:
>>
>> virtio_pci_io: imsg_flush (write)
>> vmd: pci i/o access function failed
>>
>> ok?
>
> There are way too many imsg_flush() calls in this code. I don't think this
> is designed the right way. imsg_flush() should almost never be used.
> Busy loop on EAGAIN will bite you further down the road.
>

Bite me how? This is all for the synchronous imsg channel to the virtio
device process, not the async one. What's going to actually perform the
flush for me if not imsg_flush?

>> diff refs/heads/master refs/heads/vmd-imsg_flush
>> commit - 40d57955f2f1a3a65c42ea374f86c74cf879d76d
>> commit + 2c42dedb675f013276cdbd47464f656e2451b92c
>> blob - 798b5fea6d589d43083c134a040df6272037869f
>> blob + ff1ab5fdcb52866f10a14df38d30abde277619df
>> --- usr.sbin/vmd/virtio.c
>> +++ usr.sbin/vmd/virtio.c
>> @@ -825,8 +825,11 @@ virtio_shutdown(struct vmd_vm *vm)
>>  	if (ret == -1)
>>  		fatalx("%s: failed to send shutdown to device",
>>  		    __func__);
>> -	if (imsg_flush(ibuf) == -1)
>> -		fatalx("%s: imsg_flush", __func__);
>> +	do {
>> +		ret = imsg_flush(ibuf);
>> +	} while (ret == -1 && errno == EAGAIN);
>> +	if (ret == -1)
>> +		fatal("%s: imsg_flush", __func__);
>>  }
>>
>>  /*
>> @@ -1132,8 +1135,11 @@ vionet_dump(int fd)
>>  		    __func__, dev->vionet.idx);
>>  		return (-1);
>>  	}
>> -	if (imsg_flush(ibuf) == -1) {
>> -		log_warnx("%s: imsg_flush", __func__);
>> +	do {
>> +		ret = imsg_flush(ibuf);
>> +	} while (ret == -1 && errno == EAGAIN);
>> +	if (ret == -1) {
>> +		log_warn("%s: imsg_flush", __func__);
>>  		return (-1);
>>  	}
>>
>> @@ -1189,12 +1195,14 @@ vioblk_dump(int fd)
>>  		    __func__, dev->vioblk.idx);
>>  		return (-1);
>>  	}
>> -	if (imsg_flush(ibuf) == -1) {
>> -		log_warnx("%s: imsg_flush", __func__);
>> +	do {
>> +		ret = imsg_flush(ibuf);
>> +	} while (ret == -1 && errno == EAGAIN);
>> +	if (ret == -1) {
>> +		log_warn("%s: imsg_flush", __func__);
>>  		return (-1);
>>  	}
>>
>> -
>>  	sz = atomicio(read, dev->sync_fd, &temp, sizeof(temp));
>>  	if (sz != sizeof(temp)) {
>>  		log_warnx("%s: failed to dump vioblk[%d]", __func__,
>> @@ -1660,8 +1668,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
>>  		    " device", __func__);
>>  			return (ret);
>>  		}
>> -		if (imsg_flush(ibuf) == -1) {
>> -			log_warnx("%s: imsg_flush (write)", __func__);
>> +		do {
>> +			ret = imsg_flush(ibuf);
>> +		} while (ret == -1 && errno == EAGAIN);
>> +		if (ret == -1) {
>> +			log_warn("%s: imsg_flush (write)", __func__);
>>  			return (-1);
>>  		}
>>  	} else {
>> @@ -1675,8 +1686,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
>>  		    " device", __func__);
>>  			return (ret);
>>  		}
>> -		if (imsg_flush(ibuf) == -1) {
>> -			log_warnx("%s: imsg_flush (read)", __func__);
>> +		do {
>> +			ret = imsg_flush(ibuf);
>> +		} while (ret == -1 && errno == EAGAIN);
>> +		if (ret == -1) {
>> +			log_warn("%s: imsg_flush (read)", __func__);
>>  			return (-1);
>>  		}
>>
sosetopt(): merge SO_SND* with corresponding SO_RCV* cases
The only difference is the socket buffer. As a bonus, in the future
solock() can easily be replaced by sblock() instead of pushing it down
to each SO_SND* and SO_RCV* case.

Index: sys/kern/uipc_socket.c
===
RCS file: /cvs/src/sys/kern/uipc_socket.c,v
retrieving revision 1.307
diff -u -p -r1.307 uipc_socket.c
--- sys/kern/uipc_socket.c	3 Aug 2023 09:49:08 -	1.307
+++ sys/kern/uipc_socket.c	3 Aug 2023 21:20:58 -
@@ -1856,6 +1856,9 @@ sosetopt(struct socket *so, int level, i
 	case SO_SNDLOWAT:
 	case SO_RCVLOWAT:
 	    {
+		struct sockbuf *sb = (optname == SO_SNDBUF ||
+		    optname == SO_SNDLOWAT ?
+		    &so->so_snd : &so->so_rcv);
 		u_long cnt;

 		if (m == NULL || m->m_len < sizeof (int))
@@ -1867,40 +1870,23 @@ sosetopt(struct socket *so, int level, i
 		solock(so);
 		switch (optname) {
 		case SO_SNDBUF:
-			if (so->so_snd.sb_state & SS_CANTSENDMORE) {
-				error = EINVAL;
-				break;
-			}
-			if (sbcheckreserve(cnt, so->so_snd.sb_wat) ||
-			    sbreserve(so, &so->so_snd, cnt)) {
-				error = ENOBUFS;
-				break;
-			}
-			so->so_snd.sb_wat = cnt;
-			break;
-
 		case SO_RCVBUF:
-			if (so->so_rcv.sb_state & SS_CANTRCVMORE) {
+			if (sb->sb_state &
+			    (SS_CANTSENDMORE | SS_CANTRCVMORE)) {
 				error = EINVAL;
 				break;
 			}
-			if (sbcheckreserve(cnt, so->so_rcv.sb_wat) ||
-			    sbreserve(so, &so->so_rcv, cnt)) {
+			if (sbcheckreserve(cnt, sb->sb_wat) ||
+			    sbreserve(so, sb, cnt)) {
 				error = ENOBUFS;
 				break;
 			}
-			so->so_rcv.sb_wat = cnt;
+			sb->sb_wat = cnt;
 			break;
-
 		case SO_SNDLOWAT:
-			so->so_snd.sb_lowat =
-			    (cnt > so->so_snd.sb_hiwat) ?
-			    so->so_snd.sb_hiwat : cnt;
-			break;
 		case SO_RCVLOWAT:
-			so->so_rcv.sb_lowat =
-			    (cnt > so->so_rcv.sb_hiwat) ?
-			    so->so_rcv.sb_hiwat : cnt;
+			sb->sb_lowat = (cnt > sb->sb_hiwat) ?
+			    sb->sb_hiwat : cnt;
 			break;
 		}
 		sounlock(so);
@@ -1910,6 +1896,8 @@ sosetopt(struct socket *so, int level, i
 	case SO_SNDTIMEO:
 	case SO_RCVTIMEO:
 	    {
+		struct sockbuf *sb = (optname == SO_SNDTIMEO ?
+		    &so->so_snd : &so->so_rcv);
 		struct timeval tv;
 		uint64_t nsecs;

@@ -1925,14 +1913,7 @@ sosetopt(struct socket *so, int level, i
 			nsecs = INFSLP;

 		solock(so);
-		switch (optname) {
-		case SO_SNDTIMEO:
-			so->so_snd.sb_timeo_nsecs = nsecs;
-			break;
-		case SO_RCVTIMEO:
-			so->so_rcv.sb_timeo_nsecs = nsecs;
-			break;
-		}
+		sb->sb_timeo_nsecs = nsecs;
 		sounlock(so);
 		break;
 	    }
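The shape of the refactor can be shown with a toy model: select the affected buffer once up front, then run a single copy of the logic against that pointer. The struct and option names below are stand-ins for illustration, not the kernel's real types:

```c
#include <stddef.h>

/* Toy stand-ins for the kernel's option values and types. */
enum { TOY_SNDLOWAT, TOY_RCVLOWAT };

struct toy_sockbuf {
	unsigned long sb_hiwat;	/* high-water mark */
	unsigned long sb_lowat;	/* low-water mark */
};

struct toy_socket {
	struct toy_sockbuf so_snd;	/* send buffer */
	struct toy_sockbuf so_rcv;	/* receive buffer */
};

/*
 * Mirrors the merged SO_SNDLOWAT/SO_RCVLOWAT case: pick the buffer
 * the option names, then clamp its low-water mark to the high-water
 * mark in one shared statement instead of two per-buffer copies.
 */
static void
toy_set_lowat(struct toy_socket *so, int optname, unsigned long cnt)
{
	struct toy_sockbuf *sb = (optname == TOY_SNDLOWAT) ?
	    &so->so_snd : &so->so_rcv;

	sb->sb_lowat = (cnt > sb->sb_hiwat) ? sb->sb_hiwat : cnt;
}
```

The design win is that a future locking change (solock() to sblock(), as the mail notes) only has to bracket one code path, taking `sb` as the lock target.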
Re: vmd: handle EAGAIN from imsg_flush
On Thu, Aug 03, 2023 at 04:20:47PM -0400, Dave Voutila wrote:
> Found this while working on some virtio stuff. My original
> implementation as part of the multi-process redesign didn't handle if
> imsg_flush failed due to resource shortages (EAGAIN), so adopt the
> do/while idiom used by other daemons like iked(8).
>
> Manifests with errors from the vm process looking like:
>
> virtio_pci_io: imsg_flush (write)
> vmd: pci i/o access function failed
>
> ok?

There are way too many imsg_flush() calls in this code. I don't think this
is designed the right way. imsg_flush() should almost never be used.
Busy loop on EAGAIN will bite you further down the road.

> diff refs/heads/master refs/heads/vmd-imsg_flush
> commit - 40d57955f2f1a3a65c42ea374f86c74cf879d76d
> commit + 2c42dedb675f013276cdbd47464f656e2451b92c
> blob - 798b5fea6d589d43083c134a040df6272037869f
> blob + ff1ab5fdcb52866f10a14df38d30abde277619df
> --- usr.sbin/vmd/virtio.c
> +++ usr.sbin/vmd/virtio.c
> @@ -825,8 +825,11 @@ virtio_shutdown(struct vmd_vm *vm)
>  	if (ret == -1)
>  		fatalx("%s: failed to send shutdown to device",
>  		    __func__);
> -	if (imsg_flush(ibuf) == -1)
> -		fatalx("%s: imsg_flush", __func__);
> +	do {
> +		ret = imsg_flush(ibuf);
> +	} while (ret == -1 && errno == EAGAIN);
> +	if (ret == -1)
> +		fatal("%s: imsg_flush", __func__);
>  }
>
>  /*
> @@ -1132,8 +1135,11 @@ vionet_dump(int fd)
>  		    __func__, dev->vionet.idx);
>  		return (-1);
>  	}
> -	if (imsg_flush(ibuf) == -1) {
> -		log_warnx("%s: imsg_flush", __func__);
> +	do {
> +		ret = imsg_flush(ibuf);
> +	} while (ret == -1 && errno == EAGAIN);
> +	if (ret == -1) {
> +		log_warn("%s: imsg_flush", __func__);
>  		return (-1);
>  	}
>
> @@ -1189,12 +1195,14 @@ vioblk_dump(int fd)
>  		    __func__, dev->vioblk.idx);
>  		return (-1);
>  	}
> -	if (imsg_flush(ibuf) == -1) {
> -		log_warnx("%s: imsg_flush", __func__);
> +	do {
> +		ret = imsg_flush(ibuf);
> +	} while (ret == -1 && errno == EAGAIN);
> +	if (ret == -1) {
> +		log_warn("%s: imsg_flush", __func__);
>  		return (-1);
>  	}
>
> -
>  	sz = atomicio(read, dev->sync_fd, &temp, sizeof(temp));
>  	if (sz != sizeof(temp)) {
>  		log_warnx("%s: failed to dump vioblk[%d]", __func__,
> @@ -1660,8 +1668,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
>  		    " device", __func__);
>  			return (ret);
>  		}
> -		if (imsg_flush(ibuf) == -1) {
> -			log_warnx("%s: imsg_flush (write)", __func__);
> +		do {
> +			ret = imsg_flush(ibuf);
> +		} while (ret == -1 && errno == EAGAIN);
> +		if (ret == -1) {
> +			log_warn("%s: imsg_flush (write)", __func__);
>  			return (-1);
>  		}
>  	} else {
> @@ -1675,8 +1686,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
>  		    " device", __func__);
>  			return (ret);
>  		}
> -		if (imsg_flush(ibuf) == -1) {
> -			log_warnx("%s: imsg_flush (read)", __func__);
> +		do {
> +			ret = imsg_flush(ibuf);
> +		} while (ret == -1 && errno == EAGAIN);
> +		if (ret == -1) {
> +			log_warn("%s: imsg_flush (read)", __func__);
>  			return (-1);
>  		}

-- 
:wq Claudio
vmd: handle EAGAIN from imsg_flush
Found this while working on some virtio stuff. My original
implementation as part of the multi-process redesign didn't handle if
imsg_flush failed due to resource shortages (EAGAIN), so adopt the
do/while idiom used by other daemons like iked(8).

Manifests with errors from the vm process looking like:

virtio_pci_io: imsg_flush (write)
vmd: pci i/o access function failed

ok?

diff refs/heads/master refs/heads/vmd-imsg_flush
commit - 40d57955f2f1a3a65c42ea374f86c74cf879d76d
commit + 2c42dedb675f013276cdbd47464f656e2451b92c
blob - 798b5fea6d589d43083c134a040df6272037869f
blob + ff1ab5fdcb52866f10a14df38d30abde277619df
--- usr.sbin/vmd/virtio.c
+++ usr.sbin/vmd/virtio.c
@@ -825,8 +825,11 @@ virtio_shutdown(struct vmd_vm *vm)
 	if (ret == -1)
 		fatalx("%s: failed to send shutdown to device",
 		    __func__);
-	if (imsg_flush(ibuf) == -1)
-		fatalx("%s: imsg_flush", __func__);
+	do {
+		ret = imsg_flush(ibuf);
+	} while (ret == -1 && errno == EAGAIN);
+	if (ret == -1)
+		fatal("%s: imsg_flush", __func__);
 }

 /*
@@ -1132,8 +1135,11 @@ vionet_dump(int fd)
 		    __func__, dev->vionet.idx);
 		return (-1);
 	}
-	if (imsg_flush(ibuf) == -1) {
-		log_warnx("%s: imsg_flush", __func__);
+	do {
+		ret = imsg_flush(ibuf);
+	} while (ret == -1 && errno == EAGAIN);
+	if (ret == -1) {
+		log_warn("%s: imsg_flush", __func__);
 		return (-1);
 	}

@@ -1189,12 +1195,14 @@ vioblk_dump(int fd)
 		    __func__, dev->vioblk.idx);
 		return (-1);
 	}
-	if (imsg_flush(ibuf) == -1) {
-		log_warnx("%s: imsg_flush", __func__);
+	do {
+		ret = imsg_flush(ibuf);
+	} while (ret == -1 && errno == EAGAIN);
+	if (ret == -1) {
+		log_warn("%s: imsg_flush", __func__);
 		return (-1);
 	}

-
 	sz = atomicio(read, dev->sync_fd, &temp, sizeof(temp));
 	if (sz != sizeof(temp)) {
 		log_warnx("%s: failed to dump vioblk[%d]", __func__,
@@ -1660,8 +1668,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
 		    " device", __func__);
 			return (ret);
 		}
-		if (imsg_flush(ibuf) == -1) {
-			log_warnx("%s: imsg_flush (write)", __func__);
+		do {
+			ret = imsg_flush(ibuf);
+		} while (ret == -1 && errno == EAGAIN);
+		if (ret == -1) {
+			log_warn("%s: imsg_flush (write)", __func__);
 			return (-1);
 		}
 	} else {
@@ -1675,8 +1686,11 @@ virtio_pci_io(int dir, uint16_t reg, uint32_t *data, u
 		    " device", __func__);
 			return (ret);
 		}
-		if (imsg_flush(ibuf) == -1) {
-			log_warnx("%s: imsg_flush (read)", __func__);
+		do {
+			ret = imsg_flush(ibuf);
+		} while (ret == -1 && errno == EAGAIN);
+		if (ret == -1) {
+			log_warn("%s: imsg_flush (read)", __func__);
 			return (-1);
 		}
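The retry idiom the patch repeats at each call site is small enough to isolate. A self-contained sketch, with a stub standing in for imsg_flush() (which returns -1 with errno set when it cannot write out the queued imsg data):

```c
#include <errno.h>

/*
 * The do/while idiom from the diff, factored out: retry a flush-style
 * call as long as it fails with EAGAIN.  flush_fn is a stand-in for
 * imsg_flush(ibuf); any other errno is a hard failure and is returned
 * to the caller.
 */
static int
flush_retry(int (*flush_fn)(void *), void *arg)
{
	int ret;

	do {
		ret = flush_fn(arg);
	} while (ret == -1 && errno == EAGAIN);

	return ret;
}

/* Demo stub: fail twice with EAGAIN, then succeed. */
static int demo_calls;

static int
demo_flush(void *arg)
{
	(void)arg;
	if (demo_calls++ < 2) {
		errno = EAGAIN;
		return -1;
	}
	return 0;
}
```

Callers then only distinguish final success from a hard failure, which is exactly what the patch does with fatal()/log_warn() (note the switch from the errno-less fatalx()/log_warnx() variants, since a hard failure now carries a meaningful errno).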
Re: sigcontext in signal.h
On Thu, Aug 03, 2023 at 08:11:40PM +0200, Robert Palm wrote:
> I am looking at the following code from arm64 and riscv64.
>
> ARM64:
> https://github.com/openbsd/src/blob/master/sys/arch/arm64/include/signal.h#L51
>
> --
> struct sigcontext {
> 	int __sc_unused;
> 	int sc_mask;	/* signal mask to restore */
>
> 	unsigned long sc_sp;
> 	unsigned long sc_lr;
> 	unsigned long sc_elr;
> 	unsigned long sc_spsr;
> 	unsigned long sc_x[30];
>
> 	long sc_cookie;
> };
> --
>
> RISCV64:
> https://github.com/openbsd/src/blob/master/sys/arch/riscv64/include/signal.h#L48
>
> --
> struct sigcontext {
> 	int __sc_unused;
> 	int sc_mask;
>
> 	__register_t sc_ra;
> 	__register_t sc_sp;
> 	__register_t sc_gp;
> 	__register_t sc_tp;
> 	__register_t sc_t[7];
> 	__register_t sc_s[12];
> 	__register_t sc_a[8];
> 	__register_t sc_sepc;
>
> 	/* 64 bit floats as well as integer registers */
> 	__register_t sc_f[32];	/* floating-point registers */
> 	__register_t sc_fcsr;	/* floating-point control register */
>
> 	long sc_cookie;
> };
> --
>
> I would like to know what these registers "mean".
>
> Maybe someone knows and can tell me?
>
> Thank you.

Hi!

On my riscv64 qemu instance I looked in /usr/include/machine/_types.h
(probably the same as /sys/arch/riscv64/include/_types.h) and this is
what it said:

/* Register size */
typedef long	__register_t;

A long in OpenBSD/riscv64 is 8 bytes, so 64 bits.

riscv64# cat > sizetest.c
#include
int main(void) { return (sizeof(long)); }
riscv64# cc -g -o sizetest sizetest.c
riscv64# ./sizetest
riscv64# echo $?
8

I noticed in /sys/arch/riscv64/include/reg.h there is a register which
is not listed in the riscv64 register list in my literature. Really
there are registers x0 through x31, so 32 long registers. There are
also 32 floating-point registers if your implementation has them;
mostly it does, when the riscv cpu reports IMAFDQ or similar, which is
(integer ISA, multiply/divide set, atomic set, floating points set,
double floats set, and quad precision floating points).

Is this what you want to hear? The number of registers is not wrong,
but it doesn't have x0, which is a constant value of 0.

Perhaps /sys/arch/riscv64/riscv64/db_interface.c has more about the
registers to your liking, near struct db_variable db_regs[]...

I got some help from my (by now pre-pandemic era) books written by
David Patterson, which cover riscv in some detail. Otherwise I just
looked at https://en.wikipedia.org/wiki/RISC-V near "Register sets",
which seems to be an excellent reference.

Best Regards,
-peter

-- 
Over thirty years experience on Unix-like Operating Systems starting
with QNX.
sigcontext in signal.h
I am looking at the following code from arm64 and riscv64.

ARM64:
https://github.com/openbsd/src/blob/master/sys/arch/arm64/include/signal.h#L51

--
struct sigcontext {
	int __sc_unused;
	int sc_mask;	/* signal mask to restore */

	unsigned long sc_sp;
	unsigned long sc_lr;
	unsigned long sc_elr;
	unsigned long sc_spsr;
	unsigned long sc_x[30];

	long sc_cookie;
};
--

RISCV64:
https://github.com/openbsd/src/blob/master/sys/arch/riscv64/include/signal.h#L48

--
struct sigcontext {
	int __sc_unused;
	int sc_mask;

	__register_t sc_ra;
	__register_t sc_sp;
	__register_t sc_gp;
	__register_t sc_tp;
	__register_t sc_t[7];
	__register_t sc_s[12];
	__register_t sc_a[8];
	__register_t sc_sepc;

	/* 64 bit floats as well as integer registers */
	__register_t sc_f[32];	/* floating-point registers */
	__register_t sc_fcsr;	/* floating-point control register */

	long sc_cookie;
};
--

I would like to know what these registers "mean".

Maybe someone knows and can tell me?

Thank you.
Re: add extract example to tar(1) man page
ok! jmc On 3 August 2023 17:16:44 BST, aisha wrote: >On 23/08/03 04:26PM, Jason McIntyre wrote: >> On Thu, Aug 03, 2023 at 11:05:16AM -0400, aisha wrote: >> > On 23/08/03 10:51AM, aisha wrote: >> > > On 23/08/03 09:45AM, Stuart Henderson wrote: >> > > > On 2023/08/03 07:23, Jason McIntyre wrote: >> > > > > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > >> yes, thanks! ok. >> >> but you should change the text from >> >> Create a gzip(1) compressed archive ... to a file called... >> to >> Create a gzip(1) compressed archive, called XXX, containing the >> files ... >> >> as i suggested, because the syntax "create ... to a file" reads badly. >> > >Thanks, fixed. > >OK? > >Index: tar.1 >=== >RCS file: /cvs/src/bin/pax/tar.1,v >retrieving revision 1.64 >diff -u -p -r1.64 tar.1 >--- tar.1 31 Mar 2022 17:27:14 - 1.64 >+++ tar.1 3 Aug 2023 16:14:11 - >@@ -331,16 +331,16 @@ and > .Pp > .Dl $ tar c bonvole sekve > .Pp >-Output a >+Create a > .Xr gzip 1 >-compressed archive containing the files >+compressed archive, called >+.Pa foriru.tar.gz , >+containing the files > .Pa bonvole > and >-.Pa sekve >-to a file called >-.Pa foriru.tar.gz : >+.Pa sekve : > .Pp >-.Dl $ tar zcf foriru.tar.gz bonvole sekve >+.Dl $ tar czf foriru.tar.gz bonvole sekve > .Pp > Verbosely create an archive, called > .Pa backup.tar.gz , >@@ -349,7 +349,7 @@ of all files matching the shell > function > .Pa *.c : > .Pp >-.Dl $ tar zcvf backup.tar.gz *.c >+.Dl $ tar cvzf backup.tar.gz *.c > .Pp > Verbosely list, but do not extract, all files ending in > .Pa .jpeg >@@ -358,6 +358,13 @@ from a compressed archive named > Note that the glob pattern has been quoted to avoid expansion by the shell: > .Pp > .Dl $ tar tvzf backup.tar.gz '*.jpeg' >+.Pp >+Verbosely extract an archive, called >+.Pa foo.tar.gz , >+to the directory >+.Pa /var/foo : >+.Pp >+.Dl $ tar xvzf foo.tar.gz -C /var/foo > .Pp > For more detailed examples, see > .Xr pax 1 . >
Re: add extract example to tar(1) man page
On 23/08/03 04:26PM, Jason McIntyre wrote: > On Thu, Aug 03, 2023 at 11:05:16AM -0400, aisha wrote: > > On 23/08/03 10:51AM, aisha wrote: > > > On 23/08/03 09:45AM, Stuart Henderson wrote: > > > > On 2023/08/03 07:23, Jason McIntyre wrote: > > > > > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > yes, thanks! ok. > > but you should change the text from > > Create a gzip(1) compressed archive ... to a file called... > to > Create a gzip(1) compressed archive, called XXX, containing the > files ... > > as i suggested, because the syntax "create ... to a file" reads badly. > Thanks, fixed. OK? Index: tar.1 === RCS file: /cvs/src/bin/pax/tar.1,v retrieving revision 1.64 diff -u -p -r1.64 tar.1 --- tar.1 31 Mar 2022 17:27:14 - 1.64 +++ tar.1 3 Aug 2023 16:14:11 - @@ -331,16 +331,16 @@ and .Pp .Dl $ tar c bonvole sekve .Pp -Output a +Create a .Xr gzip 1 -compressed archive containing the files +compressed archive, called +.Pa foriru.tar.gz , +containing the files .Pa bonvole and -.Pa sekve -to a file called -.Pa foriru.tar.gz : +.Pa sekve : .Pp -.Dl $ tar zcf foriru.tar.gz bonvole sekve +.Dl $ tar czf foriru.tar.gz bonvole sekve .Pp Verbosely create an archive, called .Pa backup.tar.gz , @@ -349,7 +349,7 @@ of all files matching the shell function .Pa *.c : .Pp -.Dl $ tar zcvf backup.tar.gz *.c +.Dl $ tar cvzf backup.tar.gz *.c .Pp Verbosely list, but do not extract, all files ending in .Pa .jpeg @@ -358,6 +358,13 @@ from a compressed archive named Note that the glob pattern has been quoted to avoid expansion by the shell: .Pp .Dl $ tar tvzf backup.tar.gz '*.jpeg' +.Pp +Verbosely extract an archive, called +.Pa foo.tar.gz , +to the directory +.Pa /var/foo : +.Pp +.Dl $ tar xvzf foo.tar.gz -C /var/foo .Pp For more detailed examples, see .Xr pax 1 .
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
Scott Cheloha wrote:

> On Thu, Aug 03, 2023 at 09:09:30AM -0600, Theo de Raadt wrote:
> > Scott Cheloha wrote:
> >
> > > > > How about this. Kill the spc_ldavg calculation. Its use is more than
> > > > > questionable. The cpu selection code using this is not working well
> > > > > and process stealing will do the rest.
> > >
> > > This is more or less what I said yesterday. The per-CPU load average
> > > is not useful for deciding where to put a thread.
> >
> > I guess you have not been reading mpi's scheduler diff. The entire idea
> > of "placing a thread" is 1980's single-processor flawed.
>
> Dude, I'm not talking about mpi's patch, I'm talking about what's in
> the tree.
>
> Given the current state of the scheduler, we can throw out spc_ldavg.
> It isn't necessary.
>
> > > Of the variables we
> > > have available to consider, only the current length of the runqueue is
> > > useful.
> >
> > No, that concept is also broken.
>
> Again, I am talking about the current scheduler.
>
> Said another way: the current approach can limp along just fine using
> only the runqueue length. We can get rid of spc_ldavg without
> worrying about it.

I'm just saying all of us need to recognize that it is impossible to
defend the in-tree code.

Anyway, you said "the current length of the runqueue is useful". It is
only useful in choosing a different stupid runqueue.
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
On Thu, Aug 03, 2023 at 09:09:30AM -0600, Theo de Raadt wrote:
> Scott Cheloha wrote:
>
> > > > How about this. Kill the spc_ldavg calculation. Its use is more than
> > > > questionable. The cpu selection code using this is not working well and
> > > > process stealing will do the rest.
>
> > This is more or less what I said yesterday. The per-CPU load average
> > is not useful for deciding where to put a thread.
>
> I guess you have not been reading mpi's scheduler diff. The entire idea
> of "placing a thread" is 1980's single-processor flawed.

Dude, I'm not talking about mpi's patch, I'm talking about what's in
the tree.

Given the current state of the scheduler, we can throw out spc_ldavg.
It isn't necessary.

> > Of the variables we
> > have available to consider, only the current length of the runqueue is
> > useful.
>
> No, that concept is also broken.

Again, I am talking about the current scheduler.

Said another way: the current approach can limp along just fine using
only the runqueue length. We can get rid of spc_ldavg without
worrying about it.
Re: add extract example to tar(1) man page
On Thu, Aug 03, 2023 at 11:05:16AM -0400, aisha wrote: > On 23/08/03 10:51AM, aisha wrote: > > On 23/08/03 09:45AM, Stuart Henderson wrote: > > > On 2023/08/03 07:23, Jason McIntyre wrote: > > > > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > > > > > Hi, > > > > > Someone - https://www.youtube.com/watch?v=NQ5uD5x8vzg - mentioned > > > > > that our man page for tar(1) doesn't have an extract example, so I > > > > > thought it would be good to add a simple one which highlights a > > > > > common use case. > > > > > > > > > > OK? > > > > > > > > > > > > > hi. > > > > > > > > the examples section is small enough that i suppose it wouldn;t be a > > > > problem to add another one. > > > > > > as it must be specified in a non-obvious place on the command line and > > > it's not currently explained particularly clearly, I'd welcome an example > > > using -C. > > > > > > > it does add another example with a similar set of options though, all in > > > > a different order. i wonder whether we should try and push the action as > > > > the first option, so people can see what we're doing. so "cXXX" when the > > > > example is to create, "tXXX" when listing? then keep the "vzf" options > > > > that are so common in the same order? > > > > > > > > then the second example is probably more helpful as "Create a gzip(1) > > > > compressed archive blah.tgz". > > > > > > > > i know this isn;t what you're posting about, so feel free to leave alone > > > > if you don;t want to tackle that. > > > > > > > > one more thing. you could as easily remove the text "the folder", but > > > > i'd be tempted to use "directory", as that's more standard for our docs, > > > > and how -C itself describes it. > > > > > > strongly agree on options order and "directory". > > > > Thanks for the comments! > > Here's the updated patch. > > > > OK? > > > > Small change to address the backup/-p comments, I've removed the word -backup > from the tar archive name > > OK? > yes, thanks! ok. 
but you should change the text from Create a gzip(1) compressed archive ... to a file called... to Create a gzip(1) compressed archive, called XXX, containing the files ... as i suggested, because the syntax "create ... to a file" reads badly. jmc > Index: tar.1 > === > RCS file: /cvs/src/bin/pax/tar.1,v > retrieving revision 1.64 > diff -u -p -r1.64 tar.1 > --- tar.1 31 Mar 2022 17:27:14 - 1.64 > +++ tar.1 3 Aug 2023 15:02:49 - > @@ -331,7 +331,7 @@ and > .Pp > .Dl $ tar c bonvole sekve > .Pp > -Output a > +Create a > .Xr gzip 1 > compressed archive containing the files > .Pa bonvole > @@ -340,7 +340,7 @@ and > to a file called > .Pa foriru.tar.gz : > .Pp > -.Dl $ tar zcf foriru.tar.gz bonvole sekve > +.Dl $ tar czf foriru.tar.gz bonvole sekve > .Pp > Verbosely create an archive, called > .Pa backup.tar.gz , > @@ -349,7 +349,7 @@ of all files matching the shell > function > .Pa *.c : > .Pp > -.Dl $ tar zcvf backup.tar.gz *.c > +.Dl $ tar cvzf backup.tar.gz *.c > .Pp > Verbosely list, but do not extract, all files ending in > .Pa .jpeg > @@ -358,6 +358,13 @@ from a compressed archive named > Note that the glob pattern has been quoted to avoid expansion by the shell: > .Pp > .Dl $ tar tvzf backup.tar.gz '*.jpeg' > +.Pp > +Verbosely extract an archive, called > +.Pa foo.tar.gz , > +to the directory > +.Pa /var/foo : > +.Pp > +.Dl $ tar xvzf foo.tar.gz -C /var/foo > .Pp > For more detailed examples, see > .Xr pax 1 .
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
Scott Cheloha wrote:

> > > How about this. Kill the spc_ldavg calculation. Its use is more than
> > > questionable. The cpu selection code using this is not working well and
> > > process stealing will do the rest.
>
> This is more or less what I said yesterday. The per-CPU load average
> is not useful for deciding where to put a thread.

I guess you have not been reading mpi's scheduler diff. The entire idea
of "placing a thread" is 1980's single-processor flawed.

> Of the variables we
> have available to consider, only the current length of the runqueue is
> useful.

No, that concept is also broken.

On your 8-cpu laptop, the runqueue does not work at all. Typically, the
number of available cpus exceeds the ready-to-run processes. For
workloads where the ready process count exceeds the cpus, the processes
get put onto the wrong cpus' queues -- and because scheduler code runs
so rarely, this is all a waste of time.

What actually happens is that pretty much all process selection happens
when a process on a cpu goes to sleep: that cpu finds its runqueue is
empty because other cpus have stolen it empty, so that cpu proceeds to
steal out of another cpu's runqueue. All process progress really
depends upon stealing processes, fixing the other cpu's runqueue with
locks, and thus ignoring any pre-calculation by the 'scheduler code'.

All of this stealing requires big locks, to protect the scheduler code
which is occasionally (let's be honest -- rarely?) re-organizing these
stupid runqueues, which then get ignored in the typical case.

Those locks are so crazy weird, we've been confused for decades about
how to improve it. It appears there are no small steps, and we probably
need a "Braindead / Dead Alive" lawnmower procedure, and then rebuild
afterwards.

I think you will soon join the club of people who believe this code
from 1980 is so completely unsuitable that not one line of it can be
reused.
Re: add extract example to tar(1) man page
On 23/08/03 10:51AM, aisha wrote: > On 23/08/03 09:45AM, Stuart Henderson wrote: > > On 2023/08/03 07:23, Jason McIntyre wrote: > > > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > > > > Hi, > > > > Someone - https://www.youtube.com/watch?v=NQ5uD5x8vzg - mentioned > > > > that our man page for tar(1) doesn't have an extract example, so I > > > > thought it would be good to add a simple one which highlights a common > > > > use case. > > > > > > > > OK? > > > > > > > > > > hi. > > > > > > the examples section is small enough that i suppose it wouldn;t be a > > > problem to add another one. > > > > as it must be specified in a non-obvious place on the command line and > > it's not currently explained particularly clearly, I'd welcome an example > > using -C. > > > > > it does add another example with a similar set of options though, all in > > > a different order. i wonder whether we should try and push the action as > > > the first option, so people can see what we're doing. so "cXXX" when the > > > example is to create, "tXXX" when listing? then keep the "vzf" options > > > that are so common in the same order? > > > > > > then the second example is probably more helpful as "Create a gzip(1) > > > compressed archive blah.tgz". > > > > > > i know this isn;t what you're posting about, so feel free to leave alone > > > if you don;t want to tackle that. > > > > > > one more thing. you could as easily remove the text "the folder", but > > > i'd be tempted to use "directory", as that's more standard for our docs, > > > and how -C itself describes it. > > > > strongly agree on options order and "directory". > > Thanks for the comments! > Here's the updated patch. > > OK? > Small change to address the backup/-p comments, I've removed the word -backup from the tar archive name OK? 
Index: tar.1 === RCS file: /cvs/src/bin/pax/tar.1,v retrieving revision 1.64 diff -u -p -r1.64 tar.1 --- tar.1 31 Mar 2022 17:27:14 - 1.64 +++ tar.1 3 Aug 2023 15:02:49 - @@ -331,7 +331,7 @@ and .Pp .Dl $ tar c bonvole sekve .Pp -Output a +Create a .Xr gzip 1 compressed archive containing the files .Pa bonvole @@ -340,7 +340,7 @@ and to a file called .Pa foriru.tar.gz : .Pp -.Dl $ tar zcf foriru.tar.gz bonvole sekve +.Dl $ tar czf foriru.tar.gz bonvole sekve .Pp Verbosely create an archive, called .Pa backup.tar.gz , @@ -349,7 +349,7 @@ of all files matching the shell function .Pa *.c : .Pp -.Dl $ tar zcvf backup.tar.gz *.c +.Dl $ tar cvzf backup.tar.gz *.c .Pp Verbosely list, but do not extract, all files ending in .Pa .jpeg @@ -358,6 +358,13 @@ from a compressed archive named Note that the glob pattern has been quoted to avoid expansion by the shell: .Pp .Dl $ tar tvzf backup.tar.gz '*.jpeg' +.Pp +Verbosely extract an archive, called +.Pa foo.tar.gz , +to the directory +.Pa /var/foo : +.Pp +.Dl $ tar xvzf foo.tar.gz -C /var/foo .Pp For more detailed examples, see .Xr pax 1 .
Re: iked, ibuf_length vs ibuf_size
On Thu, Aug 03, 2023 at 03:42:46PM +0200, Claudio Jeker wrote: > iked has a special version of ibuf_size() called ibuf_length(). In the > long run I want to remove this special case. The problem is that > ibuf_length(NULL) returns 0 while ibuf_size() fails. > Allowing the NULL pointer here results in bad code since it is no longer > obvious if a buffer is initialised or not. > > So here is a first step on cleaning up this mess. It switches all > ibuf_length() calls to ibuf_size() where it is obvious that the argument > is not NULL (e.g. when ibuf_data(buf) is used at the same time). > Also in some cases the check should actually be if buf == NULL since > in those cases the buf is later allocated. (ikev2_pld.c and > ikev2.c::ikev2_sa_responder()). > > Please double check if I did not introduce some error. I agree that all the buffers you pass to ibuf_size() are non-NULL, either because they were previously checked, freshly allocated, or would lead to a code path where they would already be dereferenced without your diff. The conversions to NULL checks seem correct, so ok tb. One comment for tobhe and markus below. > -- > :wq Claudio > > > Index: ca.c > === > RCS file: /cvs/src/sbin/iked/ca.c,v > retrieving revision 1.95 > diff -u -p -r1.95 ca.c > --- ca.c 28 Jun 2023 14:10:24 - 1.95 > +++ ca.c 28 Jul 2023 11:29:25 - > @@ -207,7 +207,7 @@ int > ca_certbundle_add(struct ibuf *buf, struct iked_id *id) > { > uint8_t type = id->id_type; > - size_t len = ibuf_length(id->id_buf); > + size_t len = ibuf_size(id->id_buf); > void*val = ibuf_data(id->id_buf); > > if (id == NULL || That id == NULL check doesn't make much sense given the above dereferences.
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
On Thu, Aug 03, 2023 at 02:38:11PM +0200, Mark Kettenis wrote: > > Date: Thu, 3 Aug 2023 12:56:01 +0200 > > From: Claudio Jeker > > > > On Thu, Aug 03, 2023 at 10:53:24AM +0200, Claudio Jeker wrote: > > > On Thu, Aug 03, 2023 at 10:13:57AM +0200, Martin Pieuchot wrote: > > > > On 02/08/23(Wed) 14:22, Claudio Jeker wrote: > > > > > On Mon, Jul 31, 2023 at 10:21:11AM -0500, Scott Cheloha wrote: > > > > > > On Fri, Jul 28, 2023 at 07:36:41PM -0500, Scott Cheloha wrote: > > > > > > > claudio@ notes that uvm_loadav() pointlessly walks the allproc > > > > > > > list to > > > > > > > recompute schedstate_percpu.spn_nrun for each CPU. > > > > > > > > > > > > > > We can just use the value instead of recomputing it. > > > > > > > > > > > > Whoops, off-by-one. The current load averaging code includes the > > > > > > running thread in the nrun count if it is *not* the idle thread. > > > > > > > > > > Yes, with this the loadavg seems to be consistent and following the > > > > > number > > > > > of running processes. The code seems to behave like before (with all > > > > > its > > > > > quirks). > > > > > > > > > > OK claudio@, this is a good first step. Now I think this code should > > > > > later > > > > > be moved into kern_sched.c or sched_bsd.c and removed from uvm. Not > > > > > sure why > > > > > the load calculation is part of memory management... > > > > > > > > > > On top of this I wonder about the per-CPU load calculation. In my > > > > > opinion > > > > > it is wrong to skip the calculation if the CPU is idle. Because of > > > > > this > > > > > there is no decay for idle CPUs and that feels wrong to me. > > > > > Do we have a userland utility that reports spc_ldavg? > > > > > > > > I don't understand why the SCHED_LOCK() is needed. Since I'm really > > > > against adding new uses for it, could you comment on that? > > > > > > The question is how sloppy do we want to be. 
This code looks at > > > ci_schedstate (spc_idleproc and spc_nrun) and ci_curproc so the be correct > > > this needs to lock the scheduler. Do we really want that, hell no. > > > > How about this. Kill the spc_ldavg calculation. Its use is more then > > questionable. The cpu selection code using this is not wroking well and > > process stealing will do the rest. This is more or less what I said yesterday. The per-CPU load average is not useful for deciding where to put a thread. Of the variables we have available to consider, only the current length of the runqueue is useful. Go for it, kill it. One nit below. > > Also use sched_cpu_idle to know if a cpu is idle. (This is a neat trick, nice.) > > Index: kern/kern_sched.c > > === > > RCS file: /cvs/src/sys/kern/kern_sched.c,v > > retrieving revision 1.81 > > diff -u -p -r1.81 kern_sched.c > > --- kern/kern_sched.c 27 Jul 2023 17:52:53 - 1.81 > > +++ kern/kern_sched.c 3 Aug 2023 08:41:38 - > > @@ -373,7 +373,6 @@ sched_choosecpu_fork(struct proc *parent > > { > > #ifdef MULTIPROCESSOR > > struct cpu_info *choice = NULL; > > - fixpt_t load, best_load = ~0; > > int run, best_run = INT_MAX; > > struct cpu_info *ci; > > struct cpuset set; > > @@ -407,13 +406,10 @@ sched_choosecpu_fork(struct proc *parent > > while ((ci = cpuset_first(&set)) != NULL) { > > cpuset_del(&set, ci); > > > > - load = ci->ci_schedstate.spc_ldavg; > > run = ci->ci_schedstate.spc_nrun; > > > > - if (choice == NULL || run < best_run || > > - (run == best_run &&load < best_load)) { > > + if (choice == NULL || run < best_run) { > > choice = ci; > > - best_load = load; > > best_run = run; > > } > > } > > @@ -605,11 +601,6 @@ sched_proc_to_cpu_cost(struct cpu_info * > > */ > > if (CPU_IS_PRIMARY(ci)) > > cost += sched_cost_runnable; > > - > > - /* > > -* Higher load on the destination means we don't want to go there. 
> > -*/ > > - cost += ((sched_cost_load * spc->spc_ldavg) >> FSHIFT); > > > > /* > > * If the proc is on this cpu already, lower the cost by how much > > Index: sys/sched.h > > === > > RCS file: /cvs/src/sys/sys/sched.h,v > > retrieving revision 1.58 > > diff -u -p -r1.58 sched.h > > --- sys/sched.h 25 Jul 2023 18:16:19 - 1.58 > > +++ sys/sched.h 3 Aug 2023 08:42:39 - > > @@ -110,7 +110,6 @@ struct schedstate_percpu { > > struct clockintr *spc_profclock; /* [o] profclock handle */ > > > > u_int spc_nrun; /* procs on the run queues */ > > - fixpt_t spc_ldavg; /* shortest load avg. for this cpu */ > > > > volatile uint32_t spc_whichqs; > > volatile u_int spc_spinning;/* this cpu is currently spinning */ > > Index: uvm/uvm_meter.c > > =
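The trimmed sched_choosecpu_fork() loop in the diff above boils down to: scan all candidate CPUs and keep the first one with the fewest runnable procs, with no spc_ldavg tie breaker anymore. A minimal userland sketch of that selection logic (an illustration only, with a plain int array standing in for the per-CPU spc_nrun fields):

```c
#include <limits.h>
#include <stddef.h>

/*
 * Pick the CPU with the fewest runnable procs, mirroring the
 * simplified sched_choosecpu_fork() loop: the first CPU seen
 * wins ties, and load average no longer plays a role.
 * The nrun[] array stands in for ci_schedstate.spc_nrun.
 */
static size_t
choose_cpu(const int *nrun, size_t ncpu)
{
	size_t i, choice = 0;
	int best_run = INT_MAX;

	for (i = 0; i < ncpu; i++) {
		if (nrun[i] < best_run) {
			choice = i;
			best_run = nrun[i];
		}
	}
	return choice;
}
```

As the thread notes, whatever this pre-selection gets wrong is later corrected by work stealing anyway.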
Re: add extract example to tar(1) man page
On 23/08/03 09:45AM, Stuart Henderson wrote: > On 2023/08/03 07:23, Jason McIntyre wrote: > > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > > > Hi, > > > Someone - https://www.youtube.com/watch?v=NQ5uD5x8vzg - mentioned that > > > our man page for tar(1) doesn't have an extract example, so I thought it > > > would be good to add a simple one which highlights a common use case. > > > > > > OK? > > > > > > > hi. > > > > the examples section is small enough that i suppose it wouldn't be a > > problem to add another one. > > as it must be specified in a non-obvious place on the command line and > it's not currently explained particularly clearly, I'd welcome an example > using -C. > > > it does add another example with a similar set of options though, all in > > a different order. i wonder whether we should try and push the action as > > the first option, so people can see what we're doing. so "cXXX" when the > > example is to create, "tXXX" when listing? then keep the "vzf" options > > that are so common in the same order? > > > > then the second example is probably more helpful as "Create a gzip(1) > > compressed archive blah.tgz". > > > > i know this isn't what you're posting about, so feel free to leave alone > > if you don't want to tackle that. > > > > one more thing. you could as easily remove the text "the folder", but > > i'd be tempted to use "directory", as that's more standard for our docs, > > and how -C itself describes it. > > strongly agree on options order and "directory". Thanks for the comments! Here's the updated patch. OK? 
Index: tar.1 === RCS file: /cvs/src/bin/pax/tar.1,v retrieving revision 1.64 diff -u -p -r1.64 tar.1 --- tar.1 31 Mar 2022 17:27:14 - 1.64 +++ tar.1 3 Aug 2023 14:46:23 - @@ -331,7 +331,7 @@ and .Pp .Dl $ tar c bonvole sekve .Pp -Output a +Create a .Xr gzip 1 compressed archive containing the files .Pa bonvole @@ -340,7 +340,7 @@ and to a file called .Pa foriru.tar.gz : .Pp -.Dl $ tar zcf foriru.tar.gz bonvole sekve +.Dl $ tar czf foriru.tar.gz bonvole sekve .Pp Verbosely create an archive, called .Pa backup.tar.gz , @@ -349,7 +349,7 @@ of all files matching the shell function .Pa *.c : .Pp -.Dl $ tar zcvf backup.tar.gz *.c +.Dl $ tar cvzf backup.tar.gz *.c .Pp Verbosely list, but do not extract, all files ending in .Pa .jpeg @@ -358,6 +358,13 @@ from a compressed archive named Note that the glob pattern has been quoted to avoid expansion by the shell: .Pp .Dl $ tar tvzf backup.tar.gz '*.jpeg' +.Pp +Verbosely extract an archive, called +.Pa foo-backup.tar.gz , +to the directory +.Pa /var/foo : +.Pp +.Dl $ tar xvzf foo-backup.tar.gz -C /var/foo .Pp For more detailed examples, see .Xr pax 1 .
iked, ibuf_length vs ibuf_size
iked has a special version of ibuf_size() called ibuf_length(). In the long run I want to remove this special case. The problem is that ibuf_length(NULL) returns 0 while ibuf_size() fails. Allowing the NULL pointer here results in bad code since it is no longer obvious if a buffer is initialised or not. So here is a first step on cleaning up this mess. It switches all ibuf_length() calls to ibuf_size() where it is obvious that the argument is not NULL (e.g. when ibuf_data(buf) is used at the same time). Also in some cases the check should actually be if buf == NULL since in those cases the buf is later allocated. (ikev2_pld.c and ikev2.c::ikev2_sa_responder()). Please double check if I did not introduce some error. -- :wq Claudio Index: ca.c === RCS file: /cvs/src/sbin/iked/ca.c,v retrieving revision 1.95 diff -u -p -r1.95 ca.c --- ca.c28 Jun 2023 14:10:24 - 1.95 +++ ca.c28 Jul 2023 11:29:25 - @@ -207,7 +207,7 @@ int ca_certbundle_add(struct ibuf *buf, struct iked_id *id) { uint8_t type = id->id_type; - size_t len = ibuf_length(id->id_buf); + size_t len = ibuf_size(id->id_buf); void*val = ibuf_data(id->id_buf); if (id == NULL || @@ -416,16 +416,16 @@ ca_setcert(struct iked *env, struct iked /* Must send the cert and a valid Id to the ca process */ if (procid == PROC_CERT) { if (id == NULL || id->id_type == IKEV2_ID_NONE || - ibuf_length(id->id_buf) > IKED_ID_SIZE) + ibuf_size(id->id_buf) > IKED_ID_SIZE) return (-1); bzero(&idb, sizeof(idb)); /* Convert to a static Id */ idb.id_type = id->id_type; idb.id_offset = id->id_offset; - idb.id_length = ibuf_length(id->id_buf); + idb.id_length = ibuf_size(id->id_buf); memcpy(&idb.id_data, ibuf_data(id->id_buf), - ibuf_length(id->id_buf)); + ibuf_size(id->id_buf)); iov[iovcnt].iov_base = &idb; iov[iovcnt].iov_len = sizeof(idb); @@ -491,13 +491,13 @@ ca_setreq(struct iked *env, struct iked_ if (ikev2_policy2id(localid, &id, 1) != 0) return (-1); - if (ibuf_length(id.id_buf) > IKED_ID_SIZE) + if (ibuf_size(id.id_buf) > 
IKED_ID_SIZE) return (-1); bzero(&idb, sizeof(idb)); idb.id_type = id.id_type; idb.id_offset = id.id_offset; - idb.id_length = ibuf_length(id.id_buf); - memcpy(&idb.id_data, ibuf_data(id.id_buf), ibuf_length(id.id_buf)); + idb.id_length = ibuf_size(id.id_buf); + memcpy(&idb.id_data, ibuf_data(id.id_buf), ibuf_size(id.id_buf)); iov[iovcnt].iov_base = &idb; iov[iovcnt].iov_len = sizeof(idb); iovcnt++; @@ -637,7 +637,7 @@ ca_getcert(struct iked *env, struct imsg ret = ca_pubkey_serialize(certkey, &key); if (ret == 0) { ptr = ibuf_data(key.id_buf); - len = ibuf_length(key.id_buf); + len = ibuf_size(key.id_buf); type = key.id_type; break; } @@ -668,7 +668,7 @@ ca_getcert(struct iked *env, struct imsg ret = ca_validate_pubkey(env, &id, NULL, 0, &key); if (ret == 0) { ptr = ibuf_data(key.id_buf); - len = ibuf_length(key.id_buf); + len = ibuf_size(key.id_buf); type = key.id_type; } break; @@ -1060,18 +1060,18 @@ ca_reload(struct iked *env) } } - if (ibuf_length(env->sc_certreq)) { + if (ibuf_size(env->sc_certreq)) { env->sc_certreqtype = IKEV2_CERT_X509_CERT; iov[0].iov_base = &env->sc_certreqtype; iov[0].iov_len = sizeof(env->sc_certreqtype); iovcnt++; iov[1].iov_base = ibuf_data(env->sc_certreq); - iov[1].iov_len = ibuf_length(env->sc_certreq); + iov[1].iov_len = ibuf_size(env->sc_certreq); iovcnt++; log_debug("%s: loaded %zu ca certificate%s", __func__, - ibuf_length(env->sc_certreq) / SHA_DIGEST_LENGTH, - ibuf_length(env->sc_certreq) == SHA_DIGEST_LENGTH ? + ibuf_size(env->sc_certreq) / SHA_DIGEST_LENGTH, + ibuf_size(env->sc_certreq) == SHA_DIGEST_LENGTH ? "" : "s"); (void)proc_composev(&env->sc_ps, PROC_IKEV2, IMSG_CERTREQ, @@ -1252,7 +1252,7 @@ ca_cert_local(struct i
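The behavioural difference driving this cleanup is small but easy to state: ibuf_length() maps a NULL buffer to length 0, while ibuf_size() assumes the caller has already ruled NULL out, so an uninitialised buffer cannot silently be treated as empty. A self-contained sketch of the two contracts (a simplified stand-in struct, not the real libutil imsg/ibuf implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct ibuf; the real one (in libutil's
 * imsg code) also carries a data pointer and read position. */
struct ibuf {
	size_t	wpos;		/* bytes written so far */
};

/* Old iked-local helper: NULL quietly has length 0, so callers
 * cannot tell an empty buffer from an uninitialised one. */
static size_t
ibuf_length(const struct ibuf *buf)
{
	if (buf == NULL)
		return (0);
	return (buf->wpos);
}

/* ibuf_size()-style contract: the buffer must be valid, which
 * forces every caller to make its NULL check explicit. */
static size_t
ibuf_size(const struct ibuf *buf)
{
	assert(buf != NULL);	/* caller bug if this fires */
	return (buf->wpos);
}
```

This is why the diff can only substitute ibuf_size() where the argument is provably non-NULL, and has to add explicit buf == NULL checks elsewhere.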
Re: VisionFive 2
> Date: Tue, 01 Aug 2023 23:11:43 +0200 > From: Robert Palm > > I own a VF 2 version 1.2a and can successfully install / boot the machine. > > The inner network port (dwqe1) works at 100 full duplex and receives > ipv4 via DHCP. > > The outer port currently doesn't seem to get an ip, but gets active > and in full-duplex 100. There is a reason why the VisionFive 2 isn't listed as supported on https://www.openbsd.org/riscv64.html There are still bugs and... > It seems a lot depends on proper .dtb files (which kind users shared > with me). Yes, and it is a total mess. The device trees are being changed as support for the board is upstreamed in the Linux kernel. But their firmware still provides their hacked-up device trees that they use with their hacked-up vendor kernel. > How did you create the .dtb files ? They're built from: https://github.com/starfive-tech/linux/tree/JH7110_VisionFive2_upstream But that branch keeps getting rebased, and they changed things again, so PCIe stopped working. So I've decided to stick with what I have for development and wait until the device tree bindings have been accepted by the Linux maintainers. Meanwhile, if you're running -current on one of these, expect your setup to break at some point in the future. > Do you plan to update them ? The plan is to provide a usable device tree in ports as soon as there is an upstream Linux version that only needs minor patching. > They seem to be quite different to the "official" starfive releases > (which don't work for me with OpenBSD). As I said, that's just the typical unmaintainable vendor crap. > Do you plan more work on the VF2 ? I'll probably consider it done as soon as the remaining dwqe(4) bugs have been found and fixed. Cheers, Mark
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
> Date: Thu, 3 Aug 2023 12:56:01 +0200 > From: Claudio Jeker > > On Thu, Aug 03, 2023 at 10:53:24AM +0200, Claudio Jeker wrote: > > On Thu, Aug 03, 2023 at 10:13:57AM +0200, Martin Pieuchot wrote: > > > On 02/08/23(Wed) 14:22, Claudio Jeker wrote: > > > > On Mon, Jul 31, 2023 at 10:21:11AM -0500, Scott Cheloha wrote: > > > > > On Fri, Jul 28, 2023 at 07:36:41PM -0500, Scott Cheloha wrote: > > > > > > claudio@ notes that uvm_loadav() pointlessly walks the allproc list > > > > > > to > > > > > > recompute schedstate_percpu.spn_nrun for each CPU. > > > > > > > > > > > > We can just use the value instead of recomputing it. > > > > > > > > > > Whoops, off-by-one. The current load averaging code includes the > > > > > running thread in the nrun count if it is *not* the idle thread. > > > > > > > > Yes, with this the loadavg seems to be consistent and following the > > > > number > > > > of running processes. The code seems to behave like before (with all its > > > > quirks). > > > > > > > > OK claudio@, this is a good first step. Now I think this code should > > > > later > > > > be moved into kern_sched.c or sched_bsd.c and removed from uvm. Not > > > > sure why > > > > the load calculation is part of memory management... > > > > > > > > On top of this I wonder about the per-CPU load calculation. In my > > > > opinion > > > > it is wrong to skip the calculation if the CPU is idle. Because of this > > > > there is no decay for idle CPUs and that feels wrong to me. > > > > Do we have a userland utility that reports spc_ldavg? > > > > > > I don't understand why the SCHED_LOCK() is needed. Since I'm really > > > against adding new uses for it, could you comment on that? > > > > The question is how sloppy do we want to be. This code looks at > > ci_schedstate (spc_idleproc and spc_nrun) and ci_curproc so the be correct > > this needs to lock the scheduler. Do we really want that, hell no. > > How about this. Kill the spc_ldavg calculation. 
Its use is more then > questionable. The cpu selection code using this is not wroking well and > process stealing will do the rest. > Also use sched_cpu_idle to know if a cpu is idle. Given the direction we're heading, this makes sense. I don't believe spc_loadavg is doing anything useful at the moment. ok kettenis@ > Index: kern/kern_sched.c > === > RCS file: /cvs/src/sys/kern/kern_sched.c,v > retrieving revision 1.81 > diff -u -p -r1.81 kern_sched.c > --- kern/kern_sched.c 27 Jul 2023 17:52:53 - 1.81 > +++ kern/kern_sched.c 3 Aug 2023 08:41:38 - > @@ -373,7 +373,6 @@ sched_choosecpu_fork(struct proc *parent > { > #ifdef MULTIPROCESSOR > struct cpu_info *choice = NULL; > - fixpt_t load, best_load = ~0; > int run, best_run = INT_MAX; > struct cpu_info *ci; > struct cpuset set; > @@ -407,13 +406,10 @@ sched_choosecpu_fork(struct proc *parent > while ((ci = cpuset_first(&set)) != NULL) { > cpuset_del(&set, ci); > > - load = ci->ci_schedstate.spc_ldavg; > run = ci->ci_schedstate.spc_nrun; > > - if (choice == NULL || run < best_run || > - (run == best_run &&load < best_load)) { > + if (choice == NULL || run < best_run) { > choice = ci; > - best_load = load; > best_run = run; > } > } > @@ -605,11 +601,6 @@ sched_proc_to_cpu_cost(struct cpu_info * >*/ > if (CPU_IS_PRIMARY(ci)) > cost += sched_cost_runnable; > - > - /* > - * Higher load on the destination means we don't want to go there. > - */ > - cost += ((sched_cost_load * spc->spc_ldavg) >> FSHIFT); > > /* >* If the proc is on this cpu already, lower the cost by how much > Index: sys/sched.h > === > RCS file: /cvs/src/sys/sys/sched.h,v > retrieving revision 1.58 > diff -u -p -r1.58 sched.h > --- sys/sched.h 25 Jul 2023 18:16:19 - 1.58 > +++ sys/sched.h 3 Aug 2023 08:42:39 - > @@ -110,7 +110,6 @@ struct schedstate_percpu { > struct clockintr *spc_profclock; /* [o] profclock handle */ > > u_int spc_nrun; /* procs on the run queues */ > - fixpt_t spc_ldavg; /* shortest load avg. 
for this cpu */ > > volatile uint32_t spc_whichqs; > volatile u_int spc_spinning;/* this cpu is currently spinning */ > Index: uvm/uvm_meter.c > === > RCS file: /cvs/src/sys/uvm/uvm_meter.c,v > retrieving revision 1.46 > diff -u -p -r1.46 uvm_meter.c > --- uvm/uvm_meter.c 2 Aug 2023 13:54:45 - 1.46 > +++ uvm/uvm_meter.c 3 Aug 2023 10:12:02 - > @@ -70,7 +70,7 @@ struct loadavg averunnable; > * 5 second intervals. > */ > > -static fixpt_t cexp[3] = { > +static const f
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
On Thu, Aug 03, 2023 at 10:53:24AM +0200, Claudio Jeker wrote: > On Thu, Aug 03, 2023 at 10:13:57AM +0200, Martin Pieuchot wrote: > > On 02/08/23(Wed) 14:22, Claudio Jeker wrote: > > > On Mon, Jul 31, 2023 at 10:21:11AM -0500, Scott Cheloha wrote: > > > > On Fri, Jul 28, 2023 at 07:36:41PM -0500, Scott Cheloha wrote: > > > > > claudio@ notes that uvm_loadav() pointlessly walks the allproc list to > > > > > recompute schedstate_percpu.spc_nrun for each CPU. > > > > > > > > > > We can just use the value instead of recomputing it. > > > > > > > > Whoops, off-by-one. The current load averaging code includes the > > > > running thread in the nrun count if it is *not* the idle thread. > > > > > > Yes, with this the loadavg seems to be consistent and following the number > > > of running processes. The code seems to behave like before (with all its > > > quirks). > > > > > > OK claudio@, this is a good first step. Now I think this code should later > > > be moved into kern_sched.c or sched_bsd.c and removed from uvm. Not sure > > > why > > > the load calculation is part of memory management... > > > > > > On top of this I wonder about the per-CPU load calculation. In my opinion > > > it is wrong to skip the calculation if the CPU is idle. Because of this > > > there is no decay for idle CPUs and that feels wrong to me. > > > Do we have a userland utility that reports spc_ldavg? > > > > I don't understand why the SCHED_LOCK() is needed. Since I'm really > > against adding new uses for it, could you comment on that? > The question is how sloppy do we want to be. This code looks at > ci_schedstate (spc_idleproc and spc_nrun) and ci_curproc, so to be correct > this needs to lock the scheduler. Do we really want that, hell no. How about this: kill the spc_ldavg calculation. Its use is more than questionable. The CPU selection code using it is not working well, and process stealing will do the rest. Also use sched_cpu_idle to know if a CPU is idle. 
-- :wq Claudio Index: kern/kern_sched.c === RCS file: /cvs/src/sys/kern/kern_sched.c,v retrieving revision 1.81 diff -u -p -r1.81 kern_sched.c --- kern/kern_sched.c 27 Jul 2023 17:52:53 - 1.81 +++ kern/kern_sched.c 3 Aug 2023 08:41:38 - @@ -373,7 +373,6 @@ sched_choosecpu_fork(struct proc *parent { #ifdef MULTIPROCESSOR struct cpu_info *choice = NULL; - fixpt_t load, best_load = ~0; int run, best_run = INT_MAX; struct cpu_info *ci; struct cpuset set; @@ -407,13 +406,10 @@ sched_choosecpu_fork(struct proc *parent while ((ci = cpuset_first(&set)) != NULL) { cpuset_del(&set, ci); - load = ci->ci_schedstate.spc_ldavg; run = ci->ci_schedstate.spc_nrun; - if (choice == NULL || run < best_run || - (run == best_run &&load < best_load)) { + if (choice == NULL || run < best_run) { choice = ci; - best_load = load; best_run = run; } } @@ -605,11 +601,6 @@ sched_proc_to_cpu_cost(struct cpu_info * */ if (CPU_IS_PRIMARY(ci)) cost += sched_cost_runnable; - - /* -* Higher load on the destination means we don't want to go there. -*/ - cost += ((sched_cost_load * spc->spc_ldavg) >> FSHIFT); /* * If the proc is on this cpu already, lower the cost by how much Index: sys/sched.h === RCS file: /cvs/src/sys/sys/sched.h,v retrieving revision 1.58 diff -u -p -r1.58 sched.h --- sys/sched.h 25 Jul 2023 18:16:19 - 1.58 +++ sys/sched.h 3 Aug 2023 08:42:39 - @@ -110,7 +110,6 @@ struct schedstate_percpu { struct clockintr *spc_profclock; /* [o] profclock handle */ u_int spc_nrun; /* procs on the run queues */ - fixpt_t spc_ldavg; /* shortest load avg. for this cpu */ volatile uint32_t spc_whichqs; volatile u_int spc_spinning;/* this cpu is currently spinning */ Index: uvm/uvm_meter.c === RCS file: /cvs/src/sys/uvm/uvm_meter.c,v retrieving revision 1.46 diff -u -p -r1.46 uvm_meter.c --- uvm/uvm_meter.c 2 Aug 2023 13:54:45 - 1.46 +++ uvm/uvm_meter.c 3 Aug 2023 10:12:02 - @@ -70,7 +70,7 @@ struct loadavg averunnable; * 5 second intervals. 
*/ -static fixpt_t cexp[3] = { +static const fixpt_t cexp[3] = { 0.9200444146293232 * FSCALE,/* exp(-1/12) */ 0.9834714538216174 * FSCALE,/* exp(-1/60) */ 0.9944598480048967 * FSCALE,/* exp(-1/180) */ @@ -98,27 +98,16 @@ uvm_meter(void) static void uvm_loadav(struct loadavg *avg) { + extern struct cpuset sched_idle_cpus; CPU_INFO_ITERATOR cii; struct cpu_info *ci; -
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
On Thu, Aug 03, 2023 at 10:13:57AM +0200, Martin Pieuchot wrote: > On 02/08/23(Wed) 14:22, Claudio Jeker wrote: > > On Mon, Jul 31, 2023 at 10:21:11AM -0500, Scott Cheloha wrote: > > > On Fri, Jul 28, 2023 at 07:36:41PM -0500, Scott Cheloha wrote: > > > > claudio@ notes that uvm_loadav() pointlessly walks the allproc list to > > > > recompute schedstate_percpu.spc_nrun for each CPU. > > > > > > > > We can just use the value instead of recomputing it. > > > > > > Whoops, off-by-one. The current load averaging code includes the > > > running thread in the nrun count if it is *not* the idle thread. > > > > Yes, with this the loadavg seems to be consistent and following the number > > of running processes. The code seems to behave like before (with all its > > quirks). > > > > OK claudio@, this is a good first step. Now I think this code should later > > be moved into kern_sched.c or sched_bsd.c and removed from uvm. Not sure why > > the load calculation is part of memory management... > > > > On top of this I wonder about the per-CPU load calculation. In my opinion > > it is wrong to skip the calculation if the CPU is idle. Because of this > > there is no decay for idle CPUs and that feels wrong to me. > > Do we have a userland utility that reports spc_ldavg? > > I don't understand why the SCHED_LOCK() is needed. Since I'm really > against adding new uses for it, could you comment on that? The question is how sloppy do we want to be. This code looks at ci_schedstate (spc_idleproc and spc_nrun) and ci_curproc, so to be correct this needs to lock the scheduler. Do we really want that, hell no. 
> > > Index: uvm_meter.c > > > === > > > RCS file: /cvs/src/sys/uvm/uvm_meter.c,v > > > retrieving revision 1.44 > > > diff -u -p -r1.44 uvm_meter.c > > > --- uvm_meter.c 21 Jun 2023 21:16:21 - 1.44 > > > +++ uvm_meter.c 31 Jul 2023 15:20:37 - > > > @@ -102,43 +102,29 @@ uvm_loadav(struct loadavg *avg) > > > { > > > CPU_INFO_ITERATOR cii; > > > struct cpu_info *ci; > > > - int i, nrun; > > > - struct proc *p; > > > - int nrun_cpu[MAXCPUS]; > > > + struct schedstate_percpu *spc; > > > + u_int i, nrun = 0, nrun_cpu; > > > + int s; > > > > > > - nrun = 0; > > > - memset(nrun_cpu, 0, sizeof(nrun_cpu)); > > > > > > - LIST_FOREACH(p, &allproc, p_list) { > > > - switch (p->p_stat) { > > > - case SSTOP: > > > - case SSLEEP: > > > - break; > > > - case SRUN: > > > - case SONPROC: > > > - if (p == p->p_cpu->ci_schedstate.spc_idleproc) > > > - continue; > > > - /* FALLTHROUGH */ > > > - case SIDL: > > > - nrun++; > > > - if (p->p_cpu) > > > - nrun_cpu[CPU_INFO_UNIT(p->p_cpu)]++; > > > - } > > > + SCHED_LOCK(s); > > > + CPU_INFO_FOREACH(cii, ci) { > > > + spc = &ci->ci_schedstate; > > > + nrun_cpu = spc->spc_nrun; > > > + if (ci->ci_curproc != spc->spc_idleproc) > > > + nrun_cpu++; > > > + if (nrun_cpu == 0) > > > + continue; > > > + spc->spc_ldavg = (cexp[0] * spc->spc_ldavg + > > > + nrun_cpu * FSCALE * > > > + (FSCALE - cexp[0])) >> FSHIFT; > > > + nrun += nrun_cpu; > > > } > > > + SCHED_UNLOCK(s); > > > > > > for (i = 0; i < 3; i++) { > > > avg->ldavg[i] = (cexp[i] * avg->ldavg[i] + > > > nrun * FSCALE * (FSCALE - cexp[i])) >> FSHIFT; > > > - } > > > - > > > - CPU_INFO_FOREACH(cii, ci) { > > > - struct schedstate_percpu *spc = &ci->ci_schedstate; > > > - > > > - if (nrun_cpu[CPU_INFO_UNIT(ci)] == 0) > > > - continue; > > > - spc->spc_ldavg = (cexp[0] * spc->spc_ldavg + > > > - nrun_cpu[CPU_INFO_UNIT(ci)] * FSCALE * > > > - (FSCALE - cexp[0])) >> FSHIFT; > > > } > > > } > > > > > > > -- > > :wq Claudio > > > -- :wq Claudio
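The spc_ldavg update quoted above is a fixed-point exponentially weighted moving average: avg = (cexp * avg + nrun * FSCALE * (FSCALE - cexp)) >> FSHIFT, where cexp[0] = exp(-1/12) * FSCALE because the 1-minute average is sampled every 5 seconds (5/60 = 1/12). A standalone sketch of that arithmetic; the FSHIFT value of 11 is taken from my reading of sys/param.h and should be treated as an assumption here:

```c
#include <stdint.h>

/* Kernel-style fixed point; FSHIFT = 11 is an assumption taken
 * from sys/param.h, not something this thread states. */
#define FSHIFT	11
#define FSCALE	(1 << FSHIFT)

typedef uint32_t fixpt_t;

/* exp(-1/12) * FSCALE: per-sample decay for the 1-minute average. */
static const fixpt_t cexp0 = (fixpt_t)(0.9200444146293232 * FSCALE);

/*
 * One load-average step, the same expression uvm_loadav() uses.
 * With a constant nrun the value converges toward nrun * FSCALE,
 * i.e. a load of exactly nrun; with nrun == 0 it decays to 0.
 * (Intermediate products only fit in 32 bits for small loads,
 * which is fine for a sketch.)
 */
static fixpt_t
ldavg_step(fixpt_t avg, unsigned int nrun)
{
	return (cexp0 * avg + nrun * FSCALE * (FSCALE - cexp0)) >> FSHIFT;
}
```

This also illustrates claudio@'s objection about idle CPUs: if the update is simply skipped when nrun is 0, the stored per-CPU average never decays, whereas running the step with nrun = 0 drives it to zero as expected.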
Re: add extract example to tar(1) man page
On 2023/08/03 07:23, Jason McIntyre wrote: > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote: > > Hi, > > Someone - https://www.youtube.com/watch?v=NQ5uD5x8vzg - mentioned that > > our man page for tar(1) doesn't have an extract example, so I thought it > > would be good to add a simple one which highlights a common use case. > > > > OK? > > > > hi. > > the examples section is small enough that i suppose it wouldn't be a > problem to add another one. as it must be specified in a non-obvious place on the command line and it's not currently explained particularly clearly, I'd welcome an example using -C. > it does add another example with a similar set of options though, all in > a different order. i wonder whether we should try and push the action as > the first option, so people can see what we're doing. so "cXXX" when the > example is to create, "tXXX" when listing? then keep the "vzf" options > that are so common in the same order? > > then the second example is probably more helpful as "Create a gzip(1) > compressed archive blah.tgz". > > i know this isn't what you're posting about, so feel free to leave alone > if you don't want to tackle that. > > one more thing. you could as easily remove the text "the folder", but > i'd be tempted to use "directory", as that's more standard for our docs, > and how -C itself describes it. strongly agree on options order and "directory". > jmc > > > Index: tar.1 > > === > > RCS file: /cvs/src/bin/pax/tar.1,v > > retrieving revision 1.64 > > diff -u -p -r1.64 tar.1 > > --- tar.1 31 Mar 2022 17:27:14 - 1.64 > > +++ tar.1 2 Aug 2023 21:47:12 - > > @@ -359,6 +359,13 @@ Note that the glob pattern has been quot > > .Pp > > .Dl $ tar tvzf backup.tar.gz '*.jpeg' > > .Pp > > +Verbosely extract an archive, called > > +.Pa foo-backup.tar.gz , > > +to the folder > > +.Pa /var/foo : > > +.Pp > > +.Dl $ tar xzvf foo-backup.tar.gz -C /var/foo > > +.Pp > > For more detailed examples, see > > .Xr pax 1 . > > .Sh DIAGNOSTICS > >
Re: [v2]: uvm_meter, schedcpu: make uvm_meter() an independent timeout
On 02/08/23(Wed) 18:27, Claudio Jeker wrote:
> On Wed, Aug 02, 2023 at 10:15:20AM -0500, Scott Cheloha wrote:
> > Now that the proc0 wakeup(9) is gone we can retry the other part of
> > the uvm_meter() patch.
> >
> > uvm_meter() is meant to run every 5 seconds, but for historical
> > reasons it is called from schedcpu() and it is scheduled against the
> > UTC clock.  schedcpu() and uvm_meter() have different periods, so
> > uvm_meter() ought to be a separate timeout.  uvm_meter() is started
> > alongside schedcpu() so the two will still run in sync.
> >
> > v1: https://marc.info/?l=openbsd-tech&m=168710929409153&w=2
> >
> > ok?
>
> I would prefer if uvm_meter is killed and the load calculation moved to
> the scheduler.

Me too.

> > Index: sys/uvm/uvm_meter.c
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvm_meter.c,v
> > retrieving revision 1.46
> > diff -u -p -r1.46 uvm_meter.c
> > --- sys/uvm/uvm_meter.c	2 Aug 2023 13:54:45 -0000	1.46
> > +++ sys/uvm/uvm_meter.c	2 Aug 2023 15:13:49 -0000
> > @@ -85,10 +85,12 @@ void	uvmexp_read(struct uvmexp *);
> >   * uvm_meter: calculate load average
> >   */
> >  void
> > -uvm_meter(void)
> > +uvm_meter(void *unused)
> >  {
> > -	if ((gettime() % 5) == 0)
> > -		uvm_loadav(&averunnable);
> > +	static struct timeout to = TIMEOUT_INITIALIZER(uvm_meter, NULL);
> > +
> > +	timeout_add_sec(&to, 5);
> > +	uvm_loadav(&averunnable);
> >  }
> >  
> >  /*
> > Index: sys/uvm/uvm_extern.h
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvm_extern.h,v
> > retrieving revision 1.170
> > diff -u -p -r1.170 uvm_extern.h
> > --- sys/uvm/uvm_extern.h	21 Jun 2023 21:16:21 -0000	1.170
> > +++ sys/uvm/uvm_extern.h	2 Aug 2023 15:13:49 -0000
> > @@ -414,7 +414,7 @@ void		uvmspace_free(struct vmspace *);
> >  struct vmspace	*uvmspace_share(struct process *);
> >  int		uvm_share(vm_map_t, vaddr_t, vm_prot_t,
> >  		    vm_map_t, vaddr_t, vsize_t);
> > -void		uvm_meter(void);
> > +void		uvm_meter(void *);
> >  int		uvm_sysctl(int *, u_int, void *, size_t *,
> >  		    void *, size_t, struct proc *);
> >  struct vm_page	*uvm_pagealloc(struct uvm_object *,
> > Index: sys/kern/sched_bsd.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/sched_bsd.c,v
> > retrieving revision 1.78
> > diff -u -p -r1.78 sched_bsd.c
> > --- sys/kern/sched_bsd.c	25 Jul 2023 18:16:19 -0000	1.78
> > +++ sys/kern/sched_bsd.c	2 Aug 2023 15:13:50 -0000
> > @@ -235,7 +235,6 @@ schedcpu(void *arg)
> >  		}
> >  		SCHED_UNLOCK(s);
> >  	}
> > -	uvm_meter();
> >  	wakeup(&lbolt);
> >  	timeout_add_sec(to, 1);
> >  }
> > @@ -688,6 +687,7 @@ scheduler_start(void)
> >  
> >  	rrticks_init = hz / 10;
> >  	schedcpu(&schedcpu_to);
> > +	uvm_meter(NULL);
> >  
> >  #ifndef SMALL_KERNEL
> >  	if (perfpolicy == PERFPOL_AUTO)
> > Index: share/man/man9/uvm_init.9
> > ===================================================================
> > RCS file: /cvs/src/share/man/man9/uvm_init.9,v
> > retrieving revision 1.7
> > diff -u -p -r1.7 uvm_init.9
> > --- share/man/man9/uvm_init.9	21 Jun 2023 21:16:21 -0000	1.7
> > +++ share/man/man9/uvm_init.9	2 Aug 2023 15:13:50 -0000
> > @@ -168,7 +168,7 @@ argument is ignored.
> >  .Ft void
> >  .Fn uvm_kernacc "caddr_t addr" "size_t len" "int rw"
> >  .Ft void
> > -.Fn uvm_meter
> > +.Fn uvm_meter "void *arg"
> >  .Ft int
> >  .Fn uvm_sysctl "int *name" "u_int namelen" "void *oldp" "size_t *oldlenp"
> >  "void *newp " "size_t newlen" "struct proc *p"
> >  .Ft int
> > @@ -212,7 +212,7 @@ access, in the kernel address space.
> >  .Pp
> >  The
> >  .Fn uvm_meter
> > -function calculates the load average and wakes up the swapper if necessary.
> > +timeout updates system load averages every five seconds.
> >  .Pp
> >  The
> >  .Fn uvm_sysctl
>
> -- 
> :wq Claudio
Re: uvm_loadav: don't recompute schedstate_percpu.spc_nrun
On 02/08/23(Wed) 14:22, Claudio Jeker wrote:
> On Mon, Jul 31, 2023 at 10:21:11AM -0500, Scott Cheloha wrote:
> > On Fri, Jul 28, 2023 at 07:36:41PM -0500, Scott Cheloha wrote:
> > > claudio@ notes that uvm_loadav() pointlessly walks the allproc list to
> > > recompute schedstate_percpu.spc_nrun for each CPU.
> > >
> > > We can just use the value instead of recomputing it.
> >
> > Whoops, off-by-one.  The current load averaging code includes the
> > running thread in the nrun count if it is *not* the idle thread.
>
> Yes, with this the loadavg seems to be consistent and following the
> number of running processes.  The code seems to behave like before
> (with all its quirks).
>
> OK claudio@, this is a good first step.  Now I think this code should
> later be moved into kern_sched.c or sched_bsd.c and removed from uvm.
> Not sure why the load calculation is part of memory management...
>
> On top of this I wonder about the per-CPU load calculation.  In my
> opinion it is wrong to skip the calculation if the CPU is idle.
> Because of this there is no decay for idle CPUs and that feels wrong
> to me.
> Do we have a userland utility that reports spc_ldavg?

I don't understand why the SCHED_LOCK() is needed.  Since I'm really
against adding new uses for it, could you comment on that?
> > Index: uvm_meter.c
> > ===================================================================
> > RCS file: /cvs/src/sys/uvm/uvm_meter.c,v
> > retrieving revision 1.44
> > diff -u -p -r1.44 uvm_meter.c
> > --- uvm_meter.c	21 Jun 2023 21:16:21 -0000	1.44
> > +++ uvm_meter.c	31 Jul 2023 15:20:37 -0000
> > @@ -102,43 +102,29 @@ uvm_loadav(struct loadavg *avg)
> >  {
> >  	CPU_INFO_ITERATOR cii;
> >  	struct cpu_info *ci;
> > -	int i, nrun;
> > -	struct proc *p;
> > -	int nrun_cpu[MAXCPUS];
> > +	struct schedstate_percpu *spc;
> > +	u_int i, nrun = 0, nrun_cpu;
> > +	int s;
> >  
> > -	nrun = 0;
> > -	memset(nrun_cpu, 0, sizeof(nrun_cpu));
> >  
> > -	LIST_FOREACH(p, &allproc, p_list) {
> > -		switch (p->p_stat) {
> > -		case SSTOP:
> > -		case SSLEEP:
> > -			break;
> > -		case SRUN:
> > -		case SONPROC:
> > -			if (p == p->p_cpu->ci_schedstate.spc_idleproc)
> > -				continue;
> > -			/* FALLTHROUGH */
> > -		case SIDL:
> > -			nrun++;
> > -			if (p->p_cpu)
> > -				nrun_cpu[CPU_INFO_UNIT(p->p_cpu)]++;
> > -		}
> > +	SCHED_LOCK(s);
> > +	CPU_INFO_FOREACH(cii, ci) {
> > +		spc = &ci->ci_schedstate;
> > +		nrun_cpu = spc->spc_nrun;
> > +		if (ci->ci_curproc != spc->spc_idleproc)
> > +			nrun_cpu++;
> > +		if (nrun_cpu == 0)
> > +			continue;
> > +		spc->spc_ldavg = (cexp[0] * spc->spc_ldavg +
> > +		    nrun_cpu * FSCALE *
> > +		    (FSCALE - cexp[0])) >> FSHIFT;
> > +		nrun += nrun_cpu;
> >  	}
> > +	SCHED_UNLOCK(s);
> >  
> >  	for (i = 0; i < 3; i++) {
> >  		avg->ldavg[i] = (cexp[i] * avg->ldavg[i] +
> >  		    nrun * FSCALE * (FSCALE - cexp[i])) >> FSHIFT;
> > -	}
> > -
> > -	CPU_INFO_FOREACH(cii, ci) {
> > -		struct schedstate_percpu *spc = &ci->ci_schedstate;
> > -
> > -		if (nrun_cpu[CPU_INFO_UNIT(ci)] == 0)
> > -			continue;
> > -		spc->spc_ldavg = (cexp[0] * spc->spc_ldavg +
> > -		    nrun_cpu[CPU_INFO_UNIT(ci)] * FSCALE *
> > -		    (FSCALE - cexp[0])) >> FSHIFT;
> >  	}
> >  }
> >  
>
> -- 
> :wq Claudio
Re: add extract example to tar(1) man page
On Thu, Aug 03, 2023 at 08:29:31AM +0200, Peter J. Philipp wrote:
> On Thu, Aug 03, 2023 at 07:23:45AM +0100, Jason McIntyre wrote:
> > On Wed, Aug 02, 2023 at 05:52:02PM -0400, aisha wrote:
> > > Hi,
> > > Someone - https://www.youtube.com/watch?v=NQ5uD5x8vzg - mentioned that
> > > our man page for tar(1) doesn't have an extract example, so I thought it
> > > would be good to add a simple one which highlights a common use case.
> > >
> > > OK?
> >
> > hi.
> >
> > the examples section is small enough that i suppose it wouldn't be a
> > problem to add another one.
> >
> > it does add another example with a similar set of options though, all in
> > a different order.  i wonder whether we should try and push the action as
> > the first option, so people can see what we're doing.  so "cXXX" when the
> > example is to create, "tXXX" when listing?  then keep the "vzf" options
> > that are so common in the same order?
>
> I wouldn't restore a .tgz without the -p flag.  Particularly on backups.
> Perhaps the OP forgot to add it.
>
> Best Regards,
> -peter

Someone privately strongly disagreed on this.  So I retract this.

Best,
-peter
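Putting the two suggestions together, here is one shape the examples could take: the action letter first (jmc@'s ordering point) and -p on extract (pjp@'s flag point). The paths are made up for the demo, and this is only a sketch of the usage under discussion, not the proposed man page text:

```shell
set -e
demo=$(mktemp -d)
cd "$demo"

# Something to archive, with a non-default mode so -p matters.
mkdir -p src
echo hello > src/file.txt
chmod 640 src/file.txt

# Action first, then the familiar "vzf"-style run of options:
tar -czf backup.tgz src                    # c: create a gzipped archive
tar -tzf backup.tgz                        # t: list its contents
mkdir restore
(cd restore && tar -xpzf ../backup.tgz)    # x: extract, -p preserving modes

cat restore/src/file.txt
```

Without -p, a non-root extract filters the restored modes through the umask, which is exactly why pjp@ wants it in any backup-restore example.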