Re: rewritten vxlan(4)
On Thu, Mar 04, 2021 at 03:36:19PM +1000, David Gwynne wrote:
> as the subject says, this is a rewrite of vxlan(4).
>
> vxlan(4) relies on bridge(4) to implement learning, but i want to be
> able to remove bridge(4) one day. while working on veb(4), i wrote
> the guts of a learning bridge implementation that is now used by
> veb(4), bpe(4), and nvgre(4). that learning bridge code is now also
> used by vxlan(4).
>
> this means that a few of the modes that the manpage talks about are
> different now. because vxlan doesn't need a bridge for learning,
> there's no "multicast mode" anymore, it just does "dynamic mode" out
> of the box when configured with a multicast destination address.
> there's no multipoint mode now either.
>
> another thing that's always bothered me about vxlan(4) is how it
> occupies the "udp namespace" and how it steals packets from the udp
> stack. the new code actually creates and binds udp sockets to handle
> the vxlan packets. this means userland can't collide with a vxlan
> interface, and you get to see that the port is in use in things like
> netstat.
> e.g.:
>
> dlg@ikkaku ~$ ifconfig vxlan0
> vxlan0: flags=8843 mtu 1500
>         lladdr fe:e1:ba:d1:17:2a
>         index 11 llprio 3
>         encap: vnetid none parent aggr0 txprio 0 rxprio outer
>         groups: vxlan
>         tunnel: inet 192.0.2.36 port 4789 --> 239.0.0.1 ttl 1 nodf
>         Addresses (max cache: 100, timeout: 240):
>         inet 100.64.1.36 netmask 0xff00 broadcast 100.64.1.255
> dlg@ikkaku ~$ netstat -na -f inet -p udp
> Active Internet connections (including servers)
> Proto   Recv-Q Send-Q  Local Address          Foreign Address
> udp          0      0  130.102.96.36.29742    129.250.35.250.123
> udp          0      0  130.102.96.36.8965     162.159.200.123.123
> udp          0      0  130.102.96.36.13189    162.159.200.1.123
> udp          0      0  130.102.96.36.46580    220.158.215.20.123
> udp          0      0  130.102.96.36.23109    103.38.121.36.123
> udp          0      0  239.0.0.1.4789         *.*
> udp          0      0  192.0.2.36.4789        *.*
>
> ive also added loop prevention, ie, sending an interface's vxlan
> packets over itself should fail rather than panic now.

here's an updated diff with a few fixes.

Index: netinet/udp_usrreq.c
===
RCS file: /cvs/src/sys/netinet/udp_usrreq.c,v
retrieving revision 1.262
diff -u -p -r1.262 udp_usrreq.c
--- netinet/udp_usrreq.c	22 Aug 2020 17:54:57 -	1.262
+++ netinet/udp_usrreq.c	5 Mar 2021 06:22:43 -
@@ -112,11 +112,6 @@
 #include
 #endif

-#include "vxlan.h"
-#if NVXLAN > 0
-#include
-#endif
-
 /*
  * UDP protocol implementation.
  * Per RFC 768, August, 1980.
@@ -350,15 +345,6 @@ udp_input(struct mbuf **mp, int *offp, i
 		break;
 #endif /* INET6 */
 	}
-
-#if NVXLAN > 0
-	if (vxlan_enable > 0 &&
-#if NPF > 0
-	    !(m->m_pkthdr.pf.flags & PF_TAG_DIVERTED) &&
-#endif
-	    vxlan_lookup(m, uh, iphlen, , ) != 0)
-		return IPPROTO_DONE;
-#endif

 	if (m->m_flags & (M_BCAST|M_MCAST)) {
 		struct inpcb *last;
Index: net/if_vxlan.c
===
RCS file: /cvs/src/sys/net/if_vxlan.c,v
retrieving revision 1.82
diff -u -p -r1.82 if_vxlan.c
--- net/if_vxlan.c	25 Feb 2021 02:48:21 -	1.82
+++ net/if_vxlan.c	5 Mar 2021 06:22:43 -
@@ -1,7 +1,7 @@
-/*	$OpenBSD: if_vxlan.c,v 1.82 2021/02/25 02:48:21 dlg Exp $ */
+/*	$OpenBSD$ */

 /*
- * Copyright (c) 2013 Reyk Floeter
+ * Copyright (c) 2021 David Gwynne
  *
  * Permission to use, copy, modify, and distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -17,493 +17,759 @@
  */

 #include "bpfilter.h"
-#include "vxlan.h"
-#include "vlan.h"
 #include "pf.h"
-#include "bridge.h"

 #include
 #include
+#include
 #include
 #include
-#include
 #include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include

 #include
 #include
+#include
 #include
+#include
 #include
-
-#if NBPFILTER > 0
-#include
-#endif

 #include
 #include
 #include
 #include
-#include
 #include
-#include
 #include
+#include

-#if NPF > 0
-#include
+#ifdef INET6
+#include
+#include
+#include
 #endif

-#if NBRIDGE > 0
+/* for bridge stuff */
 #include
+#include
+
+#if NBPFILTER > 0
+#include
 #endif

-#include
+/*
+ * The protocol.
+ */
+
+#define VXLANMTU		1492
+#define VXLAN_PORT		4789
+
+struct vxlan_header {
+	uint32_t		vxlan_flags;
+#define VXLAN_F_I		(1U << 27)
+	uint32_t		vxlan_id;
+#define VXLAN_VNI_SHIFT		8
+#define VXLAN_VNI_MASK		(0xff << VXLAN_VNI_SHIFT)
+};
+
+#define
Re: systat(1) sticky help
Hi,

I have reworked my proposed interface for sticky displays in systat
following earlier feedback. Thanks for that! It helped me rethink the
interface carefully.

In the current version, sticky mode has its own toggle: ^T. From a study
of the source and the man page it does not seem to be bound to anything
and, on my system at least, does not conflict with anything else. I
could add a dedicated command for it as well if that's useful.

The current version supports a sticky display for the view list, the
view mode plus refresh interval, and the available orderings. I have
tried the hotkeys for the various views as documented in the man page
(as well as some of the undocumented ones) and they seem to work. I have
also tried to make sure that the sticky displays don't interfere with
error messages.

In the future I would like to try to clean up the view hotkeys, which
conflict with each other in some cases and are partly undocumented. I
found them useful; at least the ones that work.

Please have a look, thanks!
Regards, Anindya Index: engine.c === RCS file: /cvs/src/usr.bin/systat/engine.c,v retrieving revision 1.27 diff -u -p -r1.27 engine.c --- engine.c6 Feb 2021 06:19:28 - 1.27 +++ engine.c5 Mar 2021 02:35:36 - @@ -70,6 +70,8 @@ volatile sig_atomic_t gotsig_alarm = 0; int need_update = 0; int need_sort = 0; int separate_thousands = 0; +enum sticky_mode sticky_mode = STICKY_NONE; +int sticky = 0; SCREEN *screen; @@ -1139,7 +1141,15 @@ command_set(struct command *cmd, const c cmdbuf[0] = 0; } } - message_set(NULL); + + if (curr_message != NULL) { + if (!sticky) { + message_set(NULL); + sticky_mode = STICKY_NONE; + } else if (sticky_mode == STICKY_NONE) + message_set(NULL); + } + curr_cmd = cmd; need_update = 1; return prev; @@ -1165,7 +1175,10 @@ print_cmdline(void) attroff(A_STANDOUT); printw("%s", cmdbuf); } else if (curr_message) { - mvprintw(home_line, 0, "> %s", curr_message); + if (sticky) + mvprintw(home_line, 0, "*> %s", curr_message); + else + mvprintw(home_line, 0, "> %s", curr_message); } clrtoeol(); } @@ -1233,8 +1246,12 @@ keyboard(void) if (curr_mgr->key_fn(ch)) return; - if (curr_message != NULL) { - if (ch > 0) { + if (curr_message != NULL && ch > 0) { + if (!sticky) { + message_set(NULL); + sticky_mode = STICKY_NONE; + need_update = 1; + } else if (sticky_mode == STICKY_NONE) { message_set(NULL); need_update = 1; } @@ -1359,8 +1376,24 @@ engine_loop(int countmax) if (need_update) { erase(); if (!averageonly || - (averageonly && count == countmax - 1)) + (averageonly && count == countmax - 1)) { disp_update(); + if (interactive && sticky) { + switch (sticky_mode) { + case STICKY_NONE: + break; + case STICKY_HELP: + show_help(); + break; + case STICKY_VIEW: + show_view(); + break; + case STICKY_ORDER: + show_order(); + break; + } + } + } end_page(); need_update = 0; if (countmax && ++count >= countmax) Index: engine.h === RCS file: /cvs/src/usr.bin/systat/engine.h,v retrieving revision 1.12 diff -u -p -r1.12 engine.h --- engine.h12 Jan 2020 20:51:08 - 1.12 
+++ engine.h5 Mar 2021 02:35:36 - @@ -36,6 +36,7 @@ #define CTRL_L 12 #define CTRL_N 14 #define CTRL_P 16 +#define CTRL_T 20 #define CTRL_V 22 #define META_V 246 @@ -95,9 +96,9 @@ struct command { void ( *exec)(const char *); }; +enum sticky_mode {STICKY_NONE, STICKY_HELP, STICKY_VIEW, STICKY_ORDER}; void tb_start(void); - void tb_end(void); int
use 64bit ethernet addresses in carp(4)
this passes the destination ethernet address from the network packet as a uint64_t from ether_input into carp_input, so it can use it to see if a carp interface should take the packet. it's been working on amd64 and sparc64. anyone else want to try? Index: netinet/ip_carp.c === RCS file: /cvs/src/sys/netinet/ip_carp.c,v retrieving revision 1.352 diff -u -p -r1.352 ip_carp.c --- netinet/ip_carp.c 8 Feb 2021 12:30:10 - 1.352 +++ netinet/ip_carp.c 5 Mar 2021 04:42:27 - @@ -260,7 +260,7 @@ voidcarp_update_lsmask(struct carp_soft intcarp_new_vhost(struct carp_softc *, int, int); void carp_destroy_vhosts(struct carp_softc *); void carp_del_all_timeouts(struct carp_softc *); -intcarp_vhe_match(struct carp_softc *, uint8_t *); +intcarp_vhe_match(struct carp_softc *, uint64_t); struct if_clone carp_cloner = IF_CLONE_INITIALIZER("carp", carp_clone_create, carp_clone_destroy); @@ -1345,6 +1345,7 @@ carp_ourether(struct ifnet *ifp, uint8_t struct carp_softc *sc; struct srp_ref sr; int match = 0; + uint64_t dst = ether_addr_to_e64((struct ether_addr *)ena); KASSERT(ifp->if_type == IFT_ETHER); @@ -1352,7 +1353,7 @@ carp_ourether(struct ifnet *ifp, uint8_t if ((sc->sc_if.if_flags & (IFF_UP|IFF_RUNNING)) != (IFF_UP|IFF_RUNNING)) continue; - if (carp_vhe_match(sc, ena)) { + if (carp_vhe_match(sc, dst)) { match = 1; break; } @@ -1363,29 +1364,27 @@ carp_ourether(struct ifnet *ifp, uint8_t } int -carp_vhe_match(struct carp_softc *sc, uint8_t *ena) +carp_vhe_match(struct carp_softc *sc, uint64_t dst) { struct carp_vhost_entry *vhe; struct srp_ref sr; - int match = 0; + int active = 0; vhe = SRPL_FIRST(, >carp_vhosts); - match = (vhe->state == MASTER || sc->sc_balancing >= CARP_BAL_IP) && - !memcmp(ena, sc->sc_ac.ac_enaddr, ETHER_ADDR_LEN); + active = (vhe->state == MASTER || sc->sc_balancing >= CARP_BAL_IP); SRPL_LEAVE(); - return (match); + return (active && (dst == + ether_addr_to_e64((struct ether_addr *)sc->sc_ac.ac_enaddr))); } struct mbuf * -carp_input(struct ifnet *ifp0, struct 
mbuf *m) +carp_input(struct ifnet *ifp0, struct mbuf *m, uint64_t dst) { - struct ether_header *eh; struct srpl *cif; struct carp_softc *sc; struct srp_ref sr; - eh = mtod(m, struct ether_header *); cif = >if_carp; SRPL_FOREACH(sc, , cif, sc_list) { @@ -1393,7 +1392,7 @@ carp_input(struct ifnet *ifp0, struct mb (IFF_UP|IFF_RUNNING)) continue; - if (carp_vhe_match(sc, eh->ether_dhost)) { + if (carp_vhe_match(sc, dst)) { /* * These packets look like layer 2 multicast but they * are unicast at layer 3. With help of the tag the @@ -1417,7 +1416,7 @@ carp_input(struct ifnet *ifp0, struct mb if (sc == NULL) { SRPL_LEAVE(); - if (!ETHER_IS_MULTICAST(eh->ether_dhost)) + if (!ETH64_IS_MULTICAST(dst)) return (m); /* Index: netinet/ip_carp.h === RCS file: /cvs/src/sys/netinet/ip_carp.h,v retrieving revision 1.50 diff -u -p -r1.50 ip_carp.h --- netinet/ip_carp.h 24 Jul 2020 18:17:15 - 1.50 +++ netinet/ip_carp.h 5 Mar 2021 04:42:27 - @@ -209,7 +209,7 @@ carp_strict_addr_chk(struct ifnet *ifp_a ifp_a->if_carpdevidx == ifp_b->if_carpdevidx)); } -struct mbuf*carp_input(struct ifnet *, struct mbuf *); +struct mbuf*carp_input(struct ifnet *, struct mbuf *, uint64_t); int carp_proto_input(struct mbuf **, int *, int, int); voidcarp_carpdev_state(void *); Index: net/if_ethersubr.c === RCS file: /cvs/src/sys/net/if_ethersubr.c,v retrieving revision 1.272 diff -u -p -r1.272 if_ethersubr.c --- net/if_ethersubr.c 5 Mar 2021 03:51:41 - 1.272 +++ net/if_ethersubr.c 5 Mar 2021 04:42:27 - @@ -460,7 +460,7 @@ ether_input(struct ifnet *ifp, struct mb */ if (ifp->if_type != IFT_CARP && !SRPL_EMPTY_LOCKED(>if_carp)) { - m = carp_input(ifp, m); + m = carp_input(ifp, m, dst); if (m == NULL) return;
Re: rpki-client: unchecked str(n)dup
On Thu, Mar 04, 2021 at 04:25:53PM +0100, Theo Buehler wrote: > On Thu, Mar 04, 2021 at 04:10:12PM +0100, Claudio Jeker wrote: > > On Thu, Mar 04, 2021 at 03:53:44PM +0100, Theo Buehler wrote: > > > The first two seem obvious oversights. The ones in rsync_base_uri() > > > would end up silently ignored: > > > queue_add_from_cert > > > repo_lookup > > > rsync_base_uri > > > > > > Index: http.c > > > === > > > RCS file: /cvs/src/usr.sbin/rpki-client/http.c,v > > > retrieving revision 1.4 > > > diff -u -p -r1.4 http.c > > > --- http.c4 Mar 2021 14:24:54 - 1.4 > > > +++ http.c4 Mar 2021 14:46:20 - > > > @@ -807,7 +807,8 @@ http_parse_header(struct http_connection > > > } else if (strncasecmp(cp, LAST_MODIFIED, > > > sizeof(LAST_MODIFIED) - 1) == 0) { > > > cp += sizeof(LAST_MODIFIED) - 1; > > > - conn->last_modified = strdup(cp); > > > + if ((conn->last_modified = strdup(cp)) == NULL) > > > + err(1, NULL); > > > > I did not make this a fatal error since the code works fine even if > > last_modified is NULL. This is just an extra data point to help pick up > > work on the next run (which is currently unused). > > > > I think both versions are correct. I don't mind if you commit this. > > Alright, if it's deliberate, I'll leave it as it is. I should probably add a comment there. On the other hand if that strdup() failed I doubt the program will make all that more progress before blowing up on some other malloc call. Lets just be consistent and do the check there. 
OK claudio@ > > > > > } > > > > > > return 1; > > > Index: main.c > > > === > > > RCS file: /cvs/src/usr.sbin/rpki-client/main.c,v > > > retrieving revision 1.112 > > > diff -u -p -r1.112 main.c > > > --- main.c4 Mar 2021 14:24:17 - 1.112 > > > +++ main.c4 Mar 2021 14:46:20 - > > > @@ -590,9 +590,10 @@ queue_add_tal(struct entityq *q, const c > > > buf = tal_read_file(file); > > > > > > /* Record tal for later reporting */ > > > - if (stats.talnames == NULL) > > > - stats.talnames = strdup(file); > > > - else { > > > + if (stats.talnames == NULL) { > > > + if ((stats.talnames = strdup(file)) == NULL) > > > + err(1, NULL); > > > + } else { > > > char *tmp; > > > if (asprintf(, "%s %s", stats.talnames, file) == -1) > > > err(1, NULL); > > > > Hmm, I thought the asprintf call below was also unchecked since this is > > again optional data. On the other hand if the code already fails here to > > allocate a few bytes then I don't expect it to actually get much further. > > > > Since the else case now checks the asprintf return value I think it is > > fine to do the same for the strdup case. OK claudio@ > > I'll do that one for consistency. > > > > > > Index: rsync.c > > > === > > > RCS file: /cvs/src/usr.sbin/rpki-client/rsync.c,v > > > retrieving revision 1.21 > > > diff -u -p -r1.21 rsync.c > > > --- rsync.c 4 Mar 2021 14:24:17 - 1.21 > > > +++ rsync.c 4 Mar 2021 14:46:20 - > > > @@ -55,6 +55,7 @@ char * > > > rsync_base_uri(const char *uri) > > > { > > > const char *host, *module, *rest; > > > + char *base_uri; > > > > > > /* Case-insensitive rsync URI. */ > > > if (strncasecmp(uri, "rsync://", 8) != 0) { > > > @@ -82,13 +83,18 @@ rsync_base_uri(const char *uri) > > > > > > /* The path component is optional. 
*/ > > > if ((rest = strchr(module, '/')) == NULL) { > > > - return strdup(uri); > > > + if ((base_uri = strdup(uri)) == NULL) > > > + err(1, NULL); > > > + return base_uri; > > > } else if (rest == module) { > > > warnx("%s: zero-length module", uri); > > > return NULL; > > > } > > > > > > - return strndup(uri, rest - uri); > > > + if ((base_uri = strndup(uri, rest - uri)) == NULL) > > > + err(1, NULL); > > > + > > > + return base_uri; > > > } > > > > > > static void > > > > > > > As you already noticed the NULL returns from strdup and strndup are > > handled by the caller. Do we really need this extra complexity? > > One might argue the complexity is higher without the checks... Also, > the comment /* bad repository URI */ in queue_add_from_cert is inacurrate. > I don't insist, as there is no actual harm, it's just odd. Yeah, it is one of those where I'm constantly changing my mind about it. As you noticed the NULL return does no harm. Maybe the comments need to be slightly adjusted. -- :wq Claudio
Re: rpki-client: unchecked str(n)dup
On Thu, Mar 04, 2021 at 04:10:12PM +0100, Claudio Jeker wrote: > On Thu, Mar 04, 2021 at 03:53:44PM +0100, Theo Buehler wrote: > > The first two seem obvious oversights. The ones in rsync_base_uri() > > would end up silently ignored: > > queue_add_from_cert > > repo_lookup > > rsync_base_uri > > > > Index: http.c > > === > > RCS file: /cvs/src/usr.sbin/rpki-client/http.c,v > > retrieving revision 1.4 > > diff -u -p -r1.4 http.c > > --- http.c 4 Mar 2021 14:24:54 - 1.4 > > +++ http.c 4 Mar 2021 14:46:20 - > > @@ -807,7 +807,8 @@ http_parse_header(struct http_connection > > } else if (strncasecmp(cp, LAST_MODIFIED, > > sizeof(LAST_MODIFIED) - 1) == 0) { > > cp += sizeof(LAST_MODIFIED) - 1; > > - conn->last_modified = strdup(cp); > > + if ((conn->last_modified = strdup(cp)) == NULL) > > + err(1, NULL); > > I did not make this a fatal error since the code works fine even if > last_modified is NULL. This is just an extra data point to help pick up > work on the next run (which is currently unused). > > I think both versions are correct. I don't mind if you commit this. Alright, if it's deliberate, I'll leave it as it is. > > > } > > > > return 1; > > Index: main.c > > === > > RCS file: /cvs/src/usr.sbin/rpki-client/main.c,v > > retrieving revision 1.112 > > diff -u -p -r1.112 main.c > > --- main.c 4 Mar 2021 14:24:17 - 1.112 > > +++ main.c 4 Mar 2021 14:46:20 - > > @@ -590,9 +590,10 @@ queue_add_tal(struct entityq *q, const c > > buf = tal_read_file(file); > > > > /* Record tal for later reporting */ > > - if (stats.talnames == NULL) > > - stats.talnames = strdup(file); > > - else { > > + if (stats.talnames == NULL) { > > + if ((stats.talnames = strdup(file)) == NULL) > > + err(1, NULL); > > + } else { > > char *tmp; > > if (asprintf(, "%s %s", stats.talnames, file) == -1) > > err(1, NULL); > > Hmm, I thought the asprintf call below was also unchecked since this is > again optional data. 
On the other hand if the code already fails here to > allocate a few bytes then I don't expect it to actually get much further. > > Since the else case now checks the asprintf return value I think it is > fine to do the same for the strdup case. OK claudio@ I'll do that one for consistency. > > > Index: rsync.c > > === > > RCS file: /cvs/src/usr.sbin/rpki-client/rsync.c,v > > retrieving revision 1.21 > > diff -u -p -r1.21 rsync.c > > --- rsync.c 4 Mar 2021 14:24:17 - 1.21 > > +++ rsync.c 4 Mar 2021 14:46:20 - > > @@ -55,6 +55,7 @@ char * > > rsync_base_uri(const char *uri) > > { > > const char *host, *module, *rest; > > + char *base_uri; > > > > /* Case-insensitive rsync URI. */ > > if (strncasecmp(uri, "rsync://", 8) != 0) { > > @@ -82,13 +83,18 @@ rsync_base_uri(const char *uri) > > > > /* The path component is optional. */ > > if ((rest = strchr(module, '/')) == NULL) { > > - return strdup(uri); > > + if ((base_uri = strdup(uri)) == NULL) > > + err(1, NULL); > > + return base_uri; > > } else if (rest == module) { > > warnx("%s: zero-length module", uri); > > return NULL; > > } > > > > - return strndup(uri, rest - uri); > > + if ((base_uri = strndup(uri, rest - uri)) == NULL) > > + err(1, NULL); > > + > > + return base_uri; > > } > > > > static void > > > > As you already noticed the NULL returns from strdup and strndup are > handled by the caller. Do we really need this extra complexity? One might argue the complexity is higher without the checks... Also, the comment /* bad repository URI */ in queue_add_from_cert is inacurrate. I don't insist, as there is no actual harm, it's just odd. > > -- > :wq Claudio
Re: rpki-client: unchecked str(n)dup
On Thu, Mar 04, 2021 at 03:53:44PM +0100, Theo Buehler wrote: > The first two seem obvious oversights. The ones in rsync_base_uri() > would end up silently ignored: > queue_add_from_cert > repo_lookup > rsync_base_uri > > Index: http.c > === > RCS file: /cvs/src/usr.sbin/rpki-client/http.c,v > retrieving revision 1.4 > diff -u -p -r1.4 http.c > --- http.c4 Mar 2021 14:24:54 - 1.4 > +++ http.c4 Mar 2021 14:46:20 - > @@ -807,7 +807,8 @@ http_parse_header(struct http_connection > } else if (strncasecmp(cp, LAST_MODIFIED, > sizeof(LAST_MODIFIED) - 1) == 0) { > cp += sizeof(LAST_MODIFIED) - 1; > - conn->last_modified = strdup(cp); > + if ((conn->last_modified = strdup(cp)) == NULL) > + err(1, NULL); I did not make this a fatal error since the code works fine even if last_modified is NULL. This is just an extra data point to help pick up work on the next run (which is currently unused). I think both versions are correct. I don't mind if you commit this. > } > > return 1; > Index: main.c > === > RCS file: /cvs/src/usr.sbin/rpki-client/main.c,v > retrieving revision 1.112 > diff -u -p -r1.112 main.c > --- main.c4 Mar 2021 14:24:17 - 1.112 > +++ main.c4 Mar 2021 14:46:20 - > @@ -590,9 +590,10 @@ queue_add_tal(struct entityq *q, const c > buf = tal_read_file(file); > > /* Record tal for later reporting */ > - if (stats.talnames == NULL) > - stats.talnames = strdup(file); > - else { > + if (stats.talnames == NULL) { > + if ((stats.talnames = strdup(file)) == NULL) > + err(1, NULL); > + } else { > char *tmp; > if (asprintf(, "%s %s", stats.talnames, file) == -1) > err(1, NULL); Hmm, I thought the asprintf call below was also unchecked since this is again optional data. On the other hand if the code already fails here to allocate a few bytes then I don't expect it to actually get much further. Since the else case now checks the asprintf return value I think it is fine to do the same for the strdup case. 
OK claudio@ > Index: rsync.c > === > RCS file: /cvs/src/usr.sbin/rpki-client/rsync.c,v > retrieving revision 1.21 > diff -u -p -r1.21 rsync.c > --- rsync.c 4 Mar 2021 14:24:17 - 1.21 > +++ rsync.c 4 Mar 2021 14:46:20 - > @@ -55,6 +55,7 @@ char * > rsync_base_uri(const char *uri) > { > const char *host, *module, *rest; > + char *base_uri; > > /* Case-insensitive rsync URI. */ > if (strncasecmp(uri, "rsync://", 8) != 0) { > @@ -82,13 +83,18 @@ rsync_base_uri(const char *uri) > > /* The path component is optional. */ > if ((rest = strchr(module, '/')) == NULL) { > - return strdup(uri); > + if ((base_uri = strdup(uri)) == NULL) > + err(1, NULL); > + return base_uri; > } else if (rest == module) { > warnx("%s: zero-length module", uri); > return NULL; > } > > - return strndup(uri, rest - uri); > + if ((base_uri = strndup(uri, rest - uri)) == NULL) > + err(1, NULL); > + > + return base_uri; > } > > static void > As you already noticed the NULL returns from strdup and strndup are handled by the caller. Do we really need this extra complexity? -- :wq Claudio
rpki-client: unchecked str(n)dup
The first two seem obvious oversights. The ones in rsync_base_uri() would end up silently ignored: queue_add_from_cert repo_lookup rsync_base_uri Index: http.c === RCS file: /cvs/src/usr.sbin/rpki-client/http.c,v retrieving revision 1.4 diff -u -p -r1.4 http.c --- http.c 4 Mar 2021 14:24:54 - 1.4 +++ http.c 4 Mar 2021 14:46:20 - @@ -807,7 +807,8 @@ http_parse_header(struct http_connection } else if (strncasecmp(cp, LAST_MODIFIED, sizeof(LAST_MODIFIED) - 1) == 0) { cp += sizeof(LAST_MODIFIED) - 1; - conn->last_modified = strdup(cp); + if ((conn->last_modified = strdup(cp)) == NULL) + err(1, NULL); } return 1; Index: main.c === RCS file: /cvs/src/usr.sbin/rpki-client/main.c,v retrieving revision 1.112 diff -u -p -r1.112 main.c --- main.c 4 Mar 2021 14:24:17 - 1.112 +++ main.c 4 Mar 2021 14:46:20 - @@ -590,9 +590,10 @@ queue_add_tal(struct entityq *q, const c buf = tal_read_file(file); /* Record tal for later reporting */ - if (stats.talnames == NULL) - stats.talnames = strdup(file); - else { + if (stats.talnames == NULL) { + if ((stats.talnames = strdup(file)) == NULL) + err(1, NULL); + } else { char *tmp; if (asprintf(, "%s %s", stats.talnames, file) == -1) err(1, NULL); Index: rsync.c === RCS file: /cvs/src/usr.sbin/rpki-client/rsync.c,v retrieving revision 1.21 diff -u -p -r1.21 rsync.c --- rsync.c 4 Mar 2021 14:24:17 - 1.21 +++ rsync.c 4 Mar 2021 14:46:20 - @@ -55,6 +55,7 @@ char * rsync_base_uri(const char *uri) { const char *host, *module, *rest; + char *base_uri; /* Case-insensitive rsync URI. */ if (strncasecmp(uri, "rsync://", 8) != 0) { @@ -82,13 +83,18 @@ rsync_base_uri(const char *uri) /* The path component is optional. 
*/ if ((rest = strchr(module, '/')) == NULL) { - return strdup(uri); + if ((base_uri = strdup(uri)) == NULL) + err(1, NULL); + return base_uri; } else if (rest == module) { warnx("%s: zero-length module", uri); return NULL; } - return strndup(uri, rest - uri); + if ((base_uri = strndup(uri, rest - uri)) == NULL) + err(1, NULL); + + return base_uri; } static void
Re: work with 64bit ethernet addresses in ether_input()
On Thu, Mar 04, 2021 at 10:06:24PM +1000, David Gwynne wrote: > this applies the tricks with addresses from veb and etherbridge > code to the normal ethernet input processing. it seems to make > things a bit faster. some tests have shown a 15% improvement in > forwarding performance with this diff. > > ive been running with it for the last week on sparc64 and amd64 > without issue. > > ok? Fine with me. Surprised about the preformance difference. I would assume that one memcmp() is about the same as one ether_addr_to_e64() call. > Index: if_ethersubr.c > === > RCS file: /cvs/src/sys/net/if_ethersubr.c,v > retrieving revision 1.271 > diff -u -p -r1.271 if_ethersubr.c > --- if_ethersubr.c26 Feb 2021 01:12:37 - 1.271 > +++ if_ethersubr.c27 Feb 2021 01:24:44 - > @@ -382,6 +382,7 @@ ether_input(struct ifnet *ifp, struct mb > struct arpcom *ac; > const struct ether_brport *eb; > unsigned int sdelim = 0; > + uint64_t dst, self; > > /* Drop short frames */ > if (m->m_len < ETHER_HDR_LEN) > @@ -450,7 +451,9 @@ ether_input(struct ifnet *ifp, struct mb >*/ > > eh = mtod(m, struct ether_header *); > - if (memcmp(ac->ac_enaddr, eh->ether_dhost, ETHER_ADDR_LEN) != 0) { > + dst = ether_addr_to_e64((struct ether_addr *)eh->ether_dhost); > + self = ether_addr_to_e64((struct ether_addr *)ac->ac_enaddr); > + if (dst != self) { > #if NCARP > 0 > /* >* If it's not for this port, it could be for carp(4). > @@ -468,7 +471,7 @@ ether_input(struct ifnet *ifp, struct mb > /* >* If not, it must be multicast or broadcast to go further. >*/ > - if (!ETHER_IS_MULTICAST(eh->ether_dhost)) > + if (!ETH64_IS_MULTICAST(dst)) > goto dropanyway; > > /* > @@ -476,15 +479,13 @@ ether_input(struct ifnet *ifp, struct mb >* if it came from us. 
>*/ > if ((ifp->if_flags & IFF_SIMPLEX) == 0) { > - if (memcmp(ac->ac_enaddr, eh->ether_shost, > - ETHER_ADDR_LEN) == 0) > + uint64_t src = ether_addr_to_e64( > + (struct ether_addr *)eh->ether_shost); > + if (self == src) > goto dropanyway; > } > > - if (ETHER_IS_BROADCAST(eh->ether_dhost)) > - m->m_flags |= M_BCAST; > - else > - m->m_flags |= M_MCAST; > + SET(m->m_flags, ETH64_IS_BROADCAST(dst) ? M_BCAST : M_MCAST); > ifp->if_imcasts++; > } > > -- :wq Claudio
Re: Kill SINGLE_PTRACE
On Thu, Mar 04, 2021 at 11:06:21AM +0100, Martin Pieuchot wrote: > On 04/03/21(Thu) 10:36, Claudio Jeker wrote: > > On Thu, Mar 04, 2021 at 10:28:50AM +0100, Martin Pieuchot wrote: > > > SINGLE_PTRACE has almost the same semantic as SINGLE_SUSPEND. The > > > difference is that there's no need to wait for other threads to be > > > parked. > > > > > > Diff below changes single_thread_set() to be explicit when waiting is > > > required. This allows us to get rid of SINGLE_PTRACE now and soon to > > > use SINGLE_SUSPEND around proc_stop(), even when the thread is not being > > > traced. > > > > > > ok? > > > > > > > > > > @@ -2000,14 +2000,12 @@ single_thread_check(struct proc *p, int > > > * where the other threads should stop: > > > * - SINGLE_SUSPEND: stop wherever they are, will later either be told > > > to exit > > > *(by setting to SINGLE_EXIT) or be released (via > > > single_thread_clear()) > > > - * - SINGLE_PTRACE: stop wherever they are, will wait for them to stop > > > - *later (via single_thread_wait()) and released as with > > > SINGLE_SUSPEND > > > * - SINGLE_UNWIND: just unwind to kernel boundary, will be told to exit > > > *or released as with SINGLE_SUSPEND > > > * - SINGLE_EXIT: unwind to kernel boundary and exit > > > */ > > > int > > > -single_thread_set(struct proc *p, enum single_thread_mode mode, int deep) > > > +single_thread_set(struct proc *p, enum single_thread_mode mode, int wait) > > > { > > > struct process *pr = p->p_p; > > > struct proc *q; > > > @@ -2016,7 +2014,7 @@ single_thread_set(struct proc *p, enum s > > > KASSERT(curproc == p); > > > > > > SCHED_LOCK(s); > > > - error = single_thread_check_locked(p, deep, s); > > > + error = single_thread_check_locked(p, (mode == SINGLE_UNWIND), s); > > > > Either the comment above or the code itself are not correct. > > SINGLE_EXIT is also supposed to unwind according to comment. 
> > The comment documents what sibling threads are supposed to do once the > current one has called single_thread_set() with a given SINGLE_* option. > > Sibling threads will continue to execute until the next parking point > where single_thread_check() are. Parking points are divided in two > categories. In the "deep" ones unwinding is preferred for UNWIND and > EXIT, in the others only context switching occurs. > > Every single_thread_set() call is in itself a parking point to prevent > races. The only "deep" parking point is the one in sys_execve() for > obvious reasons. Actually this is where I got confused. This is the place where "deep" is the wrong word. The fact that SINGLE_UNWIND will abort the single_thread_set() if another thread is in the process to run single threaded should probably be added as a comment here. In the end this is here to prevent a race between two threads calling execve() at the same time. > So maybe we should rename SINGLE_UNWIND into SINGLE_EXEC, would that be > clearer? If we go this road we might want to rename SINGLE_SUSPEND to > SINGLE_STOP to better describe the reason for parking sibling threads. No, the current names are OK. I prefer to know what the end result is (UNWIND vs EXIT vs SUSPEND) than from where it was called. Especially since sys_execve() uses both UNWIND and EXIT. -- :wq Claudio
Re: slaacd(8): Implement RFC 8981 (revised RFC 4941, IPv6 Temporary Address Extensions) (revised patch)
Works fine here, OK florian On Wed, Mar 03, 2021 at 08:50:59PM -0300, Fernando Gont wrote: > This revised patch adresses a minor issue pointed out by Florian (avoid > floating-point math). At this point this is unnecessary, since the > IPv6 temporary address lifetimes are not configurable. > > P.S.: Patch also available at: > https://www.gont.com.ar/files/fgont-patch-rfc8981-v0.3.diff > > Thanks, > Fernando > > > > > diff --git engine.c engine.c > index 4160d798261..3ddf0303dd9 100644 > --- engine.c > +++ engine.c > @@ -88,11 +88,15 @@ > #define RTR_SOLICITATION_INTERVAL 4 > #define MAX_RTR_SOLICITATIONS 3 > > -/* constants for RFC 4941 autoconf privacy extension */ > -#define PRIV_MAX_DESYNC_FACTOR 600 /* 10 minutes */ > +/* > + * Constants for RFC 8981 autoconf privacy extensions > + * > + * PRIV_PREFERRED_LIFETIME > (PRIV_MAX_DESYNC_FACTOR + PRIV_REGEN_ADVANCE) > + */ > #define PRIV_VALID_LIFETIME 172800 /* 2 days */ > #define PRIV_PREFERRED_LIFETIME 86400 /* 1 day */ > -#define PRIV_REGEN_ADVANCE 5 /* 5 seconds */ > +#define PRIV_MAX_DESYNC_FACTOR 34560 /* PRIV_PREFERRED_LIFETIME * > 0.4 */ > +#define PRIV_REGEN_ADVANCE 5 /* 5 seconds */ > > enum if_state { > IF_DOWN, > @@ -198,6 +202,7 @@ struct address_proposal { > uint8_t prefix_len; > uint32_t vltime; > uint32_t pltime; > + uint32_t desync_factor; > uint8_t soiikey[SLAACD_SOIIKEY_LEN]; > uint32_t mtu; > }; > @@ -327,8 +332,6 @@ static struct imsgev *iev_frontend; > static struct imsgev *iev_main; > int64_t proposal_id; > > -uint32_t desync_factor; > - > void > engine_sig_handler(int sig, short event, void *arg) > { > @@ -399,8 +402,6 @@ engine(int debug, int verbose) > > LIST_INIT(_interfaces); > > - desync_factor = arc4random_uniform(PRIV_MAX_DESYNC_FACTOR); > - > event_dispatch(); > > engine_shutdown(); > @@ -1858,14 +1859,18 @@ update_iface_ra_prefix(struct slaacd_iface *iface, > struct radv *ra, > > if (addr_proposal->privacy) { > struct timespec now; > - int64_t ltime; > + int64_t ltime, mtime; > > if 
(clock_gettime(CLOCK_MONOTONIC, )) > fatal("clock_gettime"); > > - ltime = MINIMUM(addr_proposal->created.tv_sec + > - PRIV_PREFERRED_LIFETIME - desync_factor, > - now.tv_sec + prefix->pltime) - now.tv_sec; > + mtime = addr_proposal->created.tv_sec + > + PRIV_PREFERRED_LIFETIME - > + addr_proposal->desync_factor; > + > + ltime = MINIMUM(mtime, now.tv_sec + prefix->pltime) - > + now.tv_sec; > + > pltime = ltime > 0 ? ltime : 0; > > ltime = MINIMUM(addr_proposal->created.tv_sec + > @@ -1873,7 +1878,7 @@ update_iface_ra_prefix(struct slaacd_iface *iface, > struct radv *ra, > now.tv_sec; > vltime = ltime > 0 ? ltime : 0; > > - if (pltime > PRIV_REGEN_ADVANCE) > + if ((mtime - now.tv_sec) > PRIV_REGEN_ADVANCE) > found_privacy = 1; > } else { > pltime = prefix->pltime; > @@ -1919,11 +1924,11 @@ update_iface_ra_prefix(struct slaacd_iface *iface, > struct radv *ra, > > /* privacy addresses do not depend on eui64 */ > if (!found_privacy && iface->autoconfprivacy) { > - if (prefix->pltime < desync_factor) { > + if (prefix->pltime < PRIV_REGEN_ADVANCE) { > log_warnx("%s: pltime from %s is too small: %d < %d; " > "not generating privacy address", __func__, > sin6_to_str(>from), prefix->pltime, > - desync_factor); > + PRIV_REGEN_ADVANCE); > } else > /* new privacy proposal */ > gen_address_proposal(iface, ra, prefix, 1); > @@ -2055,8 +2060,11 @@ gen_address_proposal(struct slaacd_iface *iface, > struct radv *ra, struct > if (privacy) { > addr_proposal->vltime = MINIMUM(prefix->vltime, > PRIV_VALID_LIFETIME); > + addr_proposal->desync_factor = > + arc4random_uniform(PRIV_MAX_DESYNC_FACTOR); > + > addr_proposal->pltime = MINIMUM(prefix->pltime, > - PRIV_PREFERRED_LIFETIME - desync_factor); > + PRIV_PREFERRED_LIFETIME - addr_proposal->desync_factor); > }
Re: Read `ps_single' once
> Date: Thu, 4 Mar 2021 11:19:23 +0100 > From: Martin Pieuchot > > On 04/03/21(Thu) 11:01, Mark Kettenis wrote: > > > Date: Thu, 4 Mar 2021 10:54:48 +0100 > > > From: Patrick Wildt > > > > > > Am Thu, Mar 04, 2021 at 10:42:24AM +0100 schrieb Mark Kettenis: > > > > > Date: Thu, 4 Mar 2021 10:34:24 +0100 > > > > > From: Martin Pieuchot > > > > > > > > > > Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can > > > > > change the value of `ps_single' while one of its siblings might be > > > > > dereferencing it. > > > > > > > > > > To prevent inconsistencies in the code executed by sibling thread, the > > > > > diff below makes sure `ps_single' is dereferenced only once in various > > > > > parts of the kernel. > > > > > > > > > > ok? > > > > > > > > I think that means that ps_single has to be declared "volatile". > > > > > > Isn't there the READ_ONCE(x) macro, that does exactly that? > > > > Not a big fan of READ_ONCE() and WRITE_ONCE(), but apparently those > > are needed to comply with the alpha memory model. At least in some > > cases... > > Updated diff using READ_ONCE(), ok? If you use READ_ONCE() you should also use WRITE_ONCE() everywhere you modify ps_single, shouldn't you? That is one of the reasons I dislike these macros... > Index: kern/kern_exit.c > === > RCS file: /cvs/src/sys/kern/kern_exit.c,v > retrieving revision 1.196 > diff -u -p -r1.196 kern_exit.c > --- kern/kern_exit.c 15 Feb 2021 09:35:59 - 1.196 > +++ kern/kern_exit.c 4 Mar 2021 10:15:22 - > @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi >*/ > if (qr->ps_flags & PS_TRACED && > !(qr->ps_flags & PS_EXITING)) { > + struct proc *st; > + > process_untrace(qr); > > /* > @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi >* direct the signal to the active >* thread to avoid deadlock. 
>*/ > - if (qr->ps_single) > - ptsignal(qr->ps_single, SIGKILL, > - STHREAD); > + st = READ_ONCE(qr->ps_single); > + if (st != NULL) > + ptsignal(st, SIGKILL, STHREAD); > else > prsignal(qr, SIGKILL); > } else { > @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * > { > int nfound; > struct process *pr; > - struct proc *p; > + struct proc *p, *st; > int error; > > if (pid == 0) > @@ -541,10 +543,11 @@ loop: > proc_finish_wait(q, p); > return (0); > } > + > + st = READ_ONCE(pr->ps_single); > if (pr->ps_flags & PS_TRACED && > - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && > - pr->ps_single->p_stat == SSTOP && > - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { > + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && > + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { > if (single_thread_wait(pr, 0)) > goto loop; > > Index: kern/sys_process.c > === > RCS file: /cvs/src/sys/kern/sys_process.c,v > retrieving revision 1.86 > diff -u -p -r1.86 sys_process.c > --- kern/sys_process.c8 Feb 2021 10:51:02 - 1.86 > +++ kern/sys_process.c4 Mar 2021 10:15:57 - > @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi > int > ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) > { > - struct proc *t; /* target thread */ > + struct proc *st, *t;/* target thread */ > struct process *tr; /* target process */ > int error = 0; > int s; > @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid >* from where it stopped." >*/ > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > - t = tr->ps_single; > + st = READ_ONCE(tr->ps_single); > + if (pid < THREAD_PID_OFFSET && st != NULL) > + t = st; > > /* If the address parameter is not (int *)1, set the pc. */ > if ((int *)addr != (int *)1) > @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid >* from where it stopped." >*/ > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > - t =
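For readers following the thread, here is a minimal user-space sketch of what a READ_ONCE()-style macro does and of the snapshot pattern the diff applies. The macro definition below is an illustration of the usual volatile-cast technique, not a copy of the kernel's header, and the struct fields are trimmed to the minimum.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative READ_ONCE(): going through a volatile-qualified
 * lvalue forces the compiler to emit exactly one load and forbids
 * it from re-reading the variable behind the code's back.
 */
#define READ_ONCE(x)	(*(volatile __typeof__(x) *)&(x))

struct proc	{ int p_stat; };
struct process	{ struct proc *ps_single; };

/*
 * The pattern the diff applies: snapshot ps_single into a local
 * once, then test and dereference only the snapshot, so a sibling
 * clearing ps_single concurrently cannot turn an already-checked
 * pointer into a NULL dereference between the test and the use.
 * The p_stat value 1 stands in for SSTOP and is illustrative.
 */
static int
single_is_stopped(struct process *pr)
{
	struct proc *st = READ_ONCE(pr->ps_single);

	return st != NULL && st->p_stat == 1;
}
```

Claudio's objection maps directly onto this sketch: once loads go through READ_ONCE(), every store to ps_single would want a matching WRITE_ONCE() so the compiler cannot tear or split the write either.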
Re: Read `ps_single' once
On 04/03/21(Thu) 11:01, Mark Kettenis wrote: > > Date: Thu, 4 Mar 2021 10:54:48 +0100 > > From: Patrick Wildt > > > > Am Thu, Mar 04, 2021 at 10:42:24AM +0100 schrieb Mark Kettenis: > > > > Date: Thu, 4 Mar 2021 10:34:24 +0100 > > > > From: Martin Pieuchot > > > > > > > > Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can > > > > change the value of `ps_single' while one of its siblings might be > > > > dereferencing it. > > > > > > > > To prevent inconsistencies in the code executed by sibling thread, the > > > > diff below makes sure `ps_single' is dereferenced only once in various > > > > parts of the kernel. > > > > > > > > ok? > > > > > > I think that means that ps_single has to be declared "volatile". > > > > Isn't there the READ_ONCE(x) macro, that does exactly that? > > Not a big fan of READ_ONCE() and WRITE_ONCE(), but apparently those > are needed to comply with the alpha memory model. At least in some > cases... Updated diff using READ_ONCE(), ok? Index: kern/kern_exit.c === RCS file: /cvs/src/sys/kern/kern_exit.c,v retrieving revision 1.196 diff -u -p -r1.196 kern_exit.c --- kern/kern_exit.c15 Feb 2021 09:35:59 - 1.196 +++ kern/kern_exit.c4 Mar 2021 10:15:22 - @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi */ if (qr->ps_flags & PS_TRACED && !(qr->ps_flags & PS_EXITING)) { + struct proc *st; + process_untrace(qr); /* @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi * direct the signal to the active * thread to avoid deadlock. 
*/ - if (qr->ps_single) - ptsignal(qr->ps_single, SIGKILL, - STHREAD); + st = READ_ONCE(qr->ps_single); + if (st != NULL) + ptsignal(st, SIGKILL, STHREAD); else prsignal(qr, SIGKILL); } else { @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * { int nfound; struct process *pr; - struct proc *p; + struct proc *p, *st; int error; if (pid == 0) @@ -541,10 +543,11 @@ loop: proc_finish_wait(q, p); return (0); } + + st = READ_ONCE(pr->ps_single); if (pr->ps_flags & PS_TRACED && - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && - pr->ps_single->p_stat == SSTOP && - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { if (single_thread_wait(pr, 0)) goto loop; Index: kern/sys_process.c === RCS file: /cvs/src/sys/kern/sys_process.c,v retrieving revision 1.86 diff -u -p -r1.86 sys_process.c --- kern/sys_process.c 8 Feb 2021 10:51:02 - 1.86 +++ kern/sys_process.c 4 Mar 2021 10:15:57 - @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi int ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) { - struct proc *t; /* target thread */ + struct proc *st, *t;/* target thread */ struct process *tr; /* target process */ int error = 0; int s; @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid * from where it stopped." */ - if (pid < THREAD_PID_OFFSET && tr->ps_single) - t = tr->ps_single; + st = READ_ONCE(tr->ps_single); + if (pid < THREAD_PID_OFFSET && st != NULL) + t = st; /* If the address parameter is not (int *)1, set the pc. */ if ((int *)addr != (int *)1) @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid * from where it stopped." 
*/ - if (pid < THREAD_PID_OFFSET && tr->ps_single) - t = tr->ps_single; + st = READ_ONCE(tr->ps_single); + if (pid < THREAD_PID_OFFSET && st != NULL) + t = st; #ifdef PT_STEP /* @@ -495,8 +497,9 @@ ptrace_ctrl(struct proc *p, int req, pid break; case PT_KILL: - if (pid < THREAD_PID_OFFSET &&
Re: Kill SINGLE_PTRACE
On 04/03/21(Thu) 10:36, Claudio Jeker wrote: > On Thu, Mar 04, 2021 at 10:28:50AM +0100, Martin Pieuchot wrote: > > SINGLE_PTRACE has almost the same semantic as SINGLE_SUSPEND. The > > difference is that there's no need to wait for other threads to be > > parked. > > > > Diff below changes single_thread_set() to be explicit when waiting is > > required. This allows us to get rid of SINGLE_PTRACE now and soon to > > use SINGLE_SUSPEND around proc_stop(), even when the thread is not being > > traced. > > > > ok? > > > > > > @@ -2000,14 +2000,12 @@ single_thread_check(struct proc *p, int > > * where the other threads should stop: > > * - SINGLE_SUSPEND: stop wherever they are, will later either be told to > > exit > > *(by setting to SINGLE_EXIT) or be released (via > > single_thread_clear()) > > - * - SINGLE_PTRACE: stop wherever they are, will wait for them to stop > > - *later (via single_thread_wait()) and released as with SINGLE_SUSPEND > > * - SINGLE_UNWIND: just unwind to kernel boundary, will be told to exit > > *or released as with SINGLE_SUSPEND > > * - SINGLE_EXIT: unwind to kernel boundary and exit > > */ > > int > > -single_thread_set(struct proc *p, enum single_thread_mode mode, int deep) > > +single_thread_set(struct proc *p, enum single_thread_mode mode, int wait) > > { > > struct process *pr = p->p_p; > > struct proc *q; > > @@ -2016,7 +2014,7 @@ single_thread_set(struct proc *p, enum s > > KASSERT(curproc == p); > > > > SCHED_LOCK(s); > > - error = single_thread_check_locked(p, deep, s); > > + error = single_thread_check_locked(p, (mode == SINGLE_UNWIND), s); > > Either the comment above or the code itself is not correct. > SINGLE_EXIT is also supposed to unwind according to the comment. The comment documents what sibling threads are supposed to do once the current one has called single_thread_set() with a given SINGLE_* option. Sibling threads will continue to execute until the next parking point, where the single_thread_check() calls are. 
Parking points are divided into two categories. In the "deep" ones unwinding is preferred for UNWIND and EXIT; in the others only context switching occurs. Every single_thread_set() call is in itself a parking point, to prevent races. The only "deep" parking point is the one in sys_execve(), for obvious reasons. So maybe we should rename SINGLE_UNWIND to SINGLE_EXEC, would that be clearer? If we go down this road we might also want to rename SINGLE_SUSPEND to SINGLE_STOP to better describe the reason for parking sibling threads.
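The two-category rule described above can be reduced to a tiny decision function. This is a sketch of the described semantics only; the enum and function names are invented for illustration and do not match the kernel's.

```c
#include <assert.h>

enum sg_mode   { SG_SUSPEND, SG_UNWIND, SG_EXIT };
enum sg_action { SG_PARK, SG_DO_UNWIND };

/*
 * At a "deep" parking point (the one in sys_execve()), UNWIND and
 * EXIT prefer unwinding to the kernel boundary; at every other
 * parking point the sibling thread merely context-switches in
 * place until it is released or told to exit.
 */
static enum sg_action
parking_action(enum sg_mode mode, int deep)
{
	if (deep && (mode == SG_UNWIND || mode == SG_EXIT))
		return SG_DO_UNWIND;
	return SG_PARK;
}
```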
Re: Read `ps_single' once
> Date: Thu, 4 Mar 2021 10:54:48 +0100 > From: Patrick Wildt > > Am Thu, Mar 04, 2021 at 10:42:24AM +0100 schrieb Mark Kettenis: > > > Date: Thu, 4 Mar 2021 10:34:24 +0100 > > > From: Martin Pieuchot > > > > > > Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can > > > change the value of `ps_single' while one of its siblings might be > > > dereferencing it. > > > > > > To prevent inconsistencies in the code executed by sibling thread, the > > > diff below makes sure `ps_single' is dereferenced only once in various > > > parts of the kernel. > > > > > > ok? > > > > I think that means that ps_single has to be declared "volatile". > > Isn't there the READ_ONCE(x) macro, that does exactly that? Not a big fan of READ_ONCE() and WRITE_ONCE(), but apparently those are needed to comply with the alpha memory model. At least in some cases... > > > Index: kern/kern_exit.c > > > === > > > RCS file: /cvs/src/sys/kern/kern_exit.c,v > > > retrieving revision 1.196 > > > diff -u -p -r1.196 kern_exit.c > > > --- kern/kern_exit.c 15 Feb 2021 09:35:59 - 1.196 > > > +++ kern/kern_exit.c 4 Mar 2021 09:29:27 - > > > @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi > > >*/ > > > if (qr->ps_flags & PS_TRACED && > > > !(qr->ps_flags & PS_EXITING)) { > > > + struct proc *st; > > > + > > > process_untrace(qr); > > > > > > /* > > > @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi > > >* direct the signal to the active > > >* thread to avoid deadlock. 
> > >*/ > > > - if (qr->ps_single) > > > - ptsignal(qr->ps_single, SIGKILL, > > > - STHREAD); > > > + st = qr->ps_single; > > > + if (st != NULL) > > > + ptsignal(st, SIGKILL, STHREAD); > > > else > > > prsignal(qr, SIGKILL); > > > } else { > > > @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * > > > { > > > int nfound; > > > struct process *pr; > > > - struct proc *p; > > > + struct proc *p, *st; > > > int error; > > > > > > if (pid == 0) > > > @@ -541,10 +543,11 @@ loop: > > > proc_finish_wait(q, p); > > > return (0); > > > } > > > + > > > + st = pr->ps_single; > > > if (pr->ps_flags & PS_TRACED && > > > - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && > > > - pr->ps_single->p_stat == SSTOP && > > > - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { > > > + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && > > > + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { > > > if (single_thread_wait(pr, 0)) > > > goto loop; > > > > > > Index: kern/sys_process.c > > > === > > > RCS file: /cvs/src/sys/kern/sys_process.c,v > > > retrieving revision 1.86 > > > diff -u -p -r1.86 sys_process.c > > > --- kern/sys_process.c8 Feb 2021 10:51:02 - 1.86 > > > +++ kern/sys_process.c4 Mar 2021 09:29:27 - > > > @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi > > > int > > > ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) > > > { > > > - struct proc *t; /* target thread */ > > > + struct proc *st, *t;/* target thread */ > > > struct process *tr; /* target process */ > > > int error = 0; > > > int s; > > > @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > >* from where it stopped." > > >*/ > > > > > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > > > - t = tr->ps_single; > > > + st = tr->ps_single; > > > + if (pid < THREAD_PID_OFFSET && st != NULL) > > > + t = st; > > > > > > /* If the address parameter is not (int *)1, set the pc. 
*/ > > > if ((int *)addr != (int *)1) > > > @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > >* from where it stopped." > > >*/ > > > > > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > > > - t = tr->ps_single; > > > + st = tr->ps_single; > > > + if (pid < THREAD_PID_OFFSET && st != NULL) > > > + t = st; > > > > > > #ifdef PT_STEP > > > /* > > > @@ -495,8 +497,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > > break; > > > > > > case
Re: Read `ps_single' once
Am Thu, Mar 04, 2021 at 10:42:24AM +0100 schrieb Mark Kettenis: > > Date: Thu, 4 Mar 2021 10:34:24 +0100 > > From: Martin Pieuchot > > > > Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can > > change the value of `ps_single' while one of its siblings might be > > dereferencing it. > > > > To prevent inconsistencies in the code executed by sibling thread, the > > diff below makes sure `ps_single' is dereferenced only once in various > > parts of the kernel. > > > > ok? > > I think that means that ps_single has to be declared "volatile". Isn't there the READ_ONCE(x) macro, that does exactly that? > > Index: kern/kern_exit.c > > === > > RCS file: /cvs/src/sys/kern/kern_exit.c,v > > retrieving revision 1.196 > > diff -u -p -r1.196 kern_exit.c > > --- kern/kern_exit.c15 Feb 2021 09:35:59 - 1.196 > > +++ kern/kern_exit.c4 Mar 2021 09:29:27 - > > @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi > > */ > > if (qr->ps_flags & PS_TRACED && > > !(qr->ps_flags & PS_EXITING)) { > > + struct proc *st; > > + > > process_untrace(qr); > > > > /* > > @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi > > * direct the signal to the active > > * thread to avoid deadlock. 
> > */ > > - if (qr->ps_single) > > - ptsignal(qr->ps_single, SIGKILL, > > - STHREAD); > > + st = qr->ps_single; > > + if (st != NULL) > > + ptsignal(st, SIGKILL, STHREAD); > > else > > prsignal(qr, SIGKILL); > > } else { > > @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * > > { > > int nfound; > > struct process *pr; > > - struct proc *p; > > + struct proc *p, *st; > > int error; > > > > if (pid == 0) > > @@ -541,10 +543,11 @@ loop: > > proc_finish_wait(q, p); > > return (0); > > } > > + > > + st = pr->ps_single; > > if (pr->ps_flags & PS_TRACED && > > - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && > > - pr->ps_single->p_stat == SSTOP && > > - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { > > + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && > > + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { > > if (single_thread_wait(pr, 0)) > > goto loop; > > > > Index: kern/sys_process.c > > === > > RCS file: /cvs/src/sys/kern/sys_process.c,v > > retrieving revision 1.86 > > diff -u -p -r1.86 sys_process.c > > --- kern/sys_process.c 8 Feb 2021 10:51:02 - 1.86 > > +++ kern/sys_process.c 4 Mar 2021 09:29:27 - > > @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi > > int > > ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) > > { > > - struct proc *t; /* target thread */ > > + struct proc *st, *t;/* target thread */ > > struct process *tr; /* target process */ > > int error = 0; > > int s; > > @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > * from where it stopped." > > */ > > > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > > - t = tr->ps_single; > > + st = tr->ps_single; > > + if (pid < THREAD_PID_OFFSET && st != NULL) > > + t = st; > > > > /* If the address parameter is not (int *)1, set the pc. */ > > if ((int *)addr != (int *)1) > > @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > * from where it stopped." 
> > */ > > > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > > - t = tr->ps_single; > > + st = tr->ps_single; > > + if (pid < THREAD_PID_OFFSET && st != NULL) > > + t = st; > > > > #ifdef PT_STEP > > /* > > @@ -495,8 +497,9 @@ ptrace_ctrl(struct proc *p, int req, pid > > break; > > > > case PT_KILL: > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > > - t = tr->ps_single; > > + st = tr->ps_single; > > + if (pid < THREAD_PID_OFFSET && st != NULL) > > + t = st; > > > > /* just send the process a KILL signal. */ > >
single_thread_clear() w/o KERNEL_LOCK()
single_thread_clear() manipulates the same data structures as single_thread_set() and, as such, doesn't need the KERNEL_LOCK(). However, cursig() does need some sort of serialization to ensure that per-process data structures like signals, flags and traced-signum stay consistent. So the diff below moves the assertion up in preparation for more mp work. ok? Index: kern/kern_sig.c === RCS file: /cvs/src/sys/kern/kern_sig.c,v retrieving revision 1.274 diff -u -p -r1.274 kern_sig.c --- kern/kern_sig.c 4 Mar 2021 09:02:37 - 1.274 +++ kern/kern_sig.c 4 Mar 2021 09:35:47 - @@ -1182,6 +1182,8 @@ cursig(struct proc *p) int dolock = (p->p_flag & P_SINTR) == 0; int s; + KERNEL_ASSERT_LOCKED(); + sigpending = (p->p_siglist | pr->ps_siglist); if (sigpending == 0) return 0; @@ -1225,11 +1227,7 @@ cursig(struct proc *p) if (dolock) SCHED_UNLOCK(s); - if (dolock) - KERNEL_LOCK(); single_thread_clear(p, 0); - if (dolock) - KERNEL_UNLOCK(); /* * If we are no longer being traced, or the parent @@ -2128,7 +2126,6 @@ single_thread_clear(struct proc *p, int KASSERT(pr->ps_single == p); KASSERT(curproc == p); - KERNEL_ASSERT_LOCKED(); SCHED_LOCK(s); pr->ps_single = NULL;
Re: Read `ps_single' once
> Date: Thu, 4 Mar 2021 10:34:24 +0100 > From: Martin Pieuchot > > Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can > change the value of `ps_single' while one of its siblings might be > dereferencing it. > > To prevent inconsistencies in the code executed by sibling thread, the > diff below makes sure `ps_single' is dereferenced only once in various > parts of the kernel. > > ok? I think that means that ps_single has to be declared "volatile". > Index: kern/kern_exit.c > === > RCS file: /cvs/src/sys/kern/kern_exit.c,v > retrieving revision 1.196 > diff -u -p -r1.196 kern_exit.c > --- kern/kern_exit.c 15 Feb 2021 09:35:59 - 1.196 > +++ kern/kern_exit.c 4 Mar 2021 09:29:27 - > @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi >*/ > if (qr->ps_flags & PS_TRACED && > !(qr->ps_flags & PS_EXITING)) { > + struct proc *st; > + > process_untrace(qr); > > /* > @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi >* direct the signal to the active >* thread to avoid deadlock. 
>*/ > - if (qr->ps_single) > - ptsignal(qr->ps_single, SIGKILL, > - STHREAD); > + st = qr->ps_single; > + if (st != NULL) > + ptsignal(st, SIGKILL, STHREAD); > else > prsignal(qr, SIGKILL); > } else { > @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * > { > int nfound; > struct process *pr; > - struct proc *p; > + struct proc *p, *st; > int error; > > if (pid == 0) > @@ -541,10 +543,11 @@ loop: > proc_finish_wait(q, p); > return (0); > } > + > + st = pr->ps_single; > if (pr->ps_flags & PS_TRACED && > - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && > - pr->ps_single->p_stat == SSTOP && > - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { > + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && > + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { > if (single_thread_wait(pr, 0)) > goto loop; > > Index: kern/sys_process.c > === > RCS file: /cvs/src/sys/kern/sys_process.c,v > retrieving revision 1.86 > diff -u -p -r1.86 sys_process.c > --- kern/sys_process.c8 Feb 2021 10:51:02 - 1.86 > +++ kern/sys_process.c4 Mar 2021 09:29:27 - > @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi > int > ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) > { > - struct proc *t; /* target thread */ > + struct proc *st, *t;/* target thread */ > struct process *tr; /* target process */ > int error = 0; > int s; > @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid >* from where it stopped." >*/ > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > - t = tr->ps_single; > + st = tr->ps_single; > + if (pid < THREAD_PID_OFFSET && st != NULL) > + t = st; > > /* If the address parameter is not (int *)1, set the pc. */ > if ((int *)addr != (int *)1) > @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid >* from where it stopped." 
>*/ > > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > - t = tr->ps_single; > + st = tr->ps_single; > + if (pid < THREAD_PID_OFFSET && st != NULL) > + t = st; > > #ifdef PT_STEP > /* > @@ -495,8 +497,9 @@ ptrace_ctrl(struct proc *p, int req, pid > break; > > case PT_KILL: > - if (pid < THREAD_PID_OFFSET && tr->ps_single) > - t = tr->ps_single; > + st = tr->ps_single; > + if (pid < THREAD_PID_OFFSET && st != NULL) > + t = st; > > /* just send the process a KILL signal. */ > data = SIGKILL; > @@ -536,6 +539,7 @@ int > ptrace_kstate(struct proc *p, int req, pid_t pid, void *addr) > { > struct process *tr; /* target process */ > + struct proc *st; > struct ptrace_event
Re: Kill SINGLE_PTRACE
On Thu, Mar 04, 2021 at 10:28:50AM +0100, Martin Pieuchot wrote: > SINGLE_PTRACE has almost the same semantic as SINGLE_SUSPEND. The > difference is that there's no need to wait for other threads to be > parked. > > Diff below changes single_thread_set() to be explicit when waiting is > required. This allows us to get rid of SINGLE_PTRACE now and soon to > use SINGLE_SUSPEND around proc_stop(), even when the thread is not being > traced. > > ok? > > @@ -2000,14 +2000,12 @@ single_thread_check(struct proc *p, int > * where the other threads should stop: > * - SINGLE_SUSPEND: stop wherever they are, will later either be told to > exit > *(by setting to SINGLE_EXIT) or be released (via single_thread_clear()) > - * - SINGLE_PTRACE: stop wherever they are, will wait for them to stop > - *later (via single_thread_wait()) and released as with SINGLE_SUSPEND > * - SINGLE_UNWIND: just unwind to kernel boundary, will be told to exit > *or released as with SINGLE_SUSPEND > * - SINGLE_EXIT: unwind to kernel boundary and exit > */ > int > -single_thread_set(struct proc *p, enum single_thread_mode mode, int deep) > +single_thread_set(struct proc *p, enum single_thread_mode mode, int wait) > { > struct process *pr = p->p_p; > struct proc *q; > @@ -2016,7 +2014,7 @@ single_thread_set(struct proc *p, enum s > KASSERT(curproc == p); > > SCHED_LOCK(s); > - error = single_thread_check_locked(p, deep, s); > + error = single_thread_check_locked(p, (mode == SINGLE_UNWIND), s); Either the comment above or the code itself is not correct. SINGLE_EXIT is also supposed to unwind according to the comment. -- :wq Claudio
Read `ps_single' once
Running t/rw/msleep(9) w/o KERNEL_LOCK() implies that a thread can change the value of `ps_single' while one of its siblings might be dereferencing it. To prevent inconsistencies in the code executed by sibling thread, the diff below makes sure `ps_single' is dereferenced only once in various parts of the kernel. ok? Index: kern/kern_exit.c === RCS file: /cvs/src/sys/kern/kern_exit.c,v retrieving revision 1.196 diff -u -p -r1.196 kern_exit.c --- kern/kern_exit.c15 Feb 2021 09:35:59 - 1.196 +++ kern/kern_exit.c4 Mar 2021 09:29:27 - @@ -274,6 +274,8 @@ exit1(struct proc *p, int xexit, int xsi */ if (qr->ps_flags & PS_TRACED && !(qr->ps_flags & PS_EXITING)) { + struct proc *st; + process_untrace(qr); /* @@ -281,9 +283,9 @@ exit1(struct proc *p, int xexit, int xsi * direct the signal to the active * thread to avoid deadlock. */ - if (qr->ps_single) - ptsignal(qr->ps_single, SIGKILL, - STHREAD); + st = qr->ps_single; + if (st != NULL) + ptsignal(st, SIGKILL, STHREAD); else prsignal(qr, SIGKILL); } else { @@ -510,7 +512,7 @@ dowait4(struct proc *q, pid_t pid, int * { int nfound; struct process *pr; - struct proc *p; + struct proc *p, *st; int error; if (pid == 0) @@ -541,10 +543,11 @@ loop: proc_finish_wait(q, p); return (0); } + + st = pr->ps_single; if (pr->ps_flags & PS_TRACED && - (pr->ps_flags & PS_WAITED) == 0 && pr->ps_single && - pr->ps_single->p_stat == SSTOP && - (pr->ps_single->p_flag & P_SUSPSINGLE) == 0) { + (pr->ps_flags & PS_WAITED) == 0 && st != NULL && + st->p_stat == SSTOP && (st->p_flag & P_SUSPSINGLE) == 0) { if (single_thread_wait(pr, 0)) goto loop; Index: kern/sys_process.c === RCS file: /cvs/src/sys/kern/sys_process.c,v retrieving revision 1.86 diff -u -p -r1.86 sys_process.c --- kern/sys_process.c 8 Feb 2021 10:51:02 - 1.86 +++ kern/sys_process.c 4 Mar 2021 09:29:27 - @@ -273,7 +273,7 @@ sys_ptrace(struct proc *p, void *v, regi int ptrace_ctrl(struct proc *p, int req, pid_t pid, caddr_t addr, int data) { - struct proc *t; /* target thread */ + 
struct proc *st, *t;/* target thread */ struct process *tr; /* target process */ int error = 0; int s; @@ -433,8 +433,9 @@ ptrace_ctrl(struct proc *p, int req, pid * from where it stopped." */ - if (pid < THREAD_PID_OFFSET && tr->ps_single) - t = tr->ps_single; + st = tr->ps_single; + if (pid < THREAD_PID_OFFSET && st != NULL) + t = st; /* If the address parameter is not (int *)1, set the pc. */ if ((int *)addr != (int *)1) @@ -464,8 +465,9 @@ ptrace_ctrl(struct proc *p, int req, pid * from where it stopped." */ - if (pid < THREAD_PID_OFFSET && tr->ps_single) - t = tr->ps_single; + st = tr->ps_single; + if (pid < THREAD_PID_OFFSET && st != NULL) + t = st; #ifdef PT_STEP /* @@ -495,8 +497,9 @@ ptrace_ctrl(struct proc *p, int req, pid break; case PT_KILL: - if (pid < THREAD_PID_OFFSET && tr->ps_single) - t = tr->ps_single; + st = tr->ps_single; + if (pid < THREAD_PID_OFFSET && st != NULL) + t = st; /* just send the process a KILL signal. */ data = SIGKILL; @@ -536,6 +539,7 @@ int ptrace_kstate(struct proc *p, int req, pid_t pid, void *addr) { struct process *tr; /* target process */ + struct proc *st; struct ptrace_event *pe = addr; int error; @@ -582,9 +586,9 @@ ptrace_kstate(struct proc *p, int req, p tr->ps_ptmask = pe->pe_set_event; break; case PT_GET_PROCESS_STATE: - if (tr->ps_single) -
Kill SINGLE_PTRACE
SINGLE_PTRACE has almost the same semantic as SINGLE_SUSPEND. The difference is that there's no need to wait for other threads to be parked. Diff below changes single_thread_set() to be explicit when waiting is required. This allows us to get rid of SINGLE_PTRACE now and soon to use SINGLE_SUSPEND around proc_stop(), even when the thread is not being traced. ok? Index: kern/kern_exec.c === RCS file: /cvs/src/sys/kern/kern_exec.c,v retrieving revision 1.219 diff -u -p -r1.219 kern_exec.c --- kern/kern_exec.c15 Oct 2020 16:31:11 - 1.219 +++ kern/kern_exec.c4 Mar 2021 09:17:09 - @@ -432,7 +432,7 @@ sys_execve(struct proc *p, void *v, regi * we're committed: any further errors will kill the process, so * kill the other threads now. */ - single_thread_set(p, SINGLE_EXIT, 0); + single_thread_set(p, SINGLE_EXIT, 1); /* * Prepare vmspace for remapping. Note that uvmspace_exec can replace Index: kern/kern_exit.c === RCS file: /cvs/src/sys/kern/kern_exit.c,v retrieving revision 1.196 diff -u -p -r1.196 kern_exit.c --- kern/kern_exit.c15 Feb 2021 09:35:59 - 1.196 +++ kern/kern_exit.c4 Mar 2021 09:17:10 - @@ -136,7 +136,7 @@ exit1(struct proc *p, int xexit, int xsi } else { /* nope, multi-threaded */ if (flags == EXIT_NORMAL) - single_thread_set(p, SINGLE_EXIT, 0); + single_thread_set(p, SINGLE_EXIT, 1); else if (flags == EXIT_THREAD) single_thread_check(p, 0); } Index: kern/kern_sig.c === RCS file: /cvs/src/sys/kern/kern_sig.c,v retrieving revision 1.274 diff -u -p -r1.274 kern_sig.c --- kern/kern_sig.c 4 Mar 2021 09:02:37 - 1.274 +++ kern/kern_sig.c 4 Mar 2021 09:17:10 - @@ -1490,7 +1490,7 @@ sigexit(struct proc *p, int signum) /* if there are other threads, pause them */ if (P_HASSIBLING(p)) - single_thread_set(p, SINGLE_SUSPEND, 0); + single_thread_set(p, SINGLE_SUSPEND, 1); if (coredump(p) == 0) signum |= WCOREFLAG; @@ -2000,14 +2000,12 @@ single_thread_check(struct proc *p, int * where the other threads should stop: * - SINGLE_SUSPEND: stop wherever they are, will later 
either be told to exit *(by setting to SINGLE_EXIT) or be released (via single_thread_clear()) - * - SINGLE_PTRACE: stop wherever they are, will wait for them to stop - *later (via single_thread_wait()) and released as with SINGLE_SUSPEND * - SINGLE_UNWIND: just unwind to kernel boundary, will be told to exit *or released as with SINGLE_SUSPEND * - SINGLE_EXIT: unwind to kernel boundary and exit */ int -single_thread_set(struct proc *p, enum single_thread_mode mode, int deep) +single_thread_set(struct proc *p, enum single_thread_mode mode, int wait) { struct process *pr = p->p_p; struct proc *q; @@ -2016,7 +2014,7 @@ single_thread_set(struct proc *p, enum s KASSERT(curproc == p); SCHED_LOCK(s); - error = single_thread_check_locked(p, deep, s); + error = single_thread_check_locked(p, (mode == SINGLE_UNWIND), s); if (error) { SCHED_UNLOCK(s); return error; @@ -2024,7 +2022,6 @@ single_thread_set(struct proc *p, enum s switch (mode) { case SINGLE_SUSPEND: - case SINGLE_PTRACE: break; case SINGLE_UNWIND: atomic_setbits_int(&pr->ps_flags, PS_SINGLEUNWIND); @@ -2063,8 +2060,7 @@ single_thread_set(struct proc *p, enum s /* if it's not interruptible, then just have to wait */ if (q->p_flag & P_SINTR) { /* merely need to suspend? just stop it */ - if (mode == SINGLE_SUSPEND || - mode == SINGLE_PTRACE) { + if (mode == SINGLE_SUSPEND) { q->p_stat = SSTOP; break; } @@ -2089,7 +2085,7 @@ single_thread_set(struct proc *p, enum s } SCHED_UNLOCK(s); - if (mode != SINGLE_PTRACE) + if (wait) single_thread_wait(pr, 1); return 0;
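The mechanical content of the refactor above can be summarized as a mapping from the old mode to a (mode, wait) pair: SINGLE_PTRACE was exactly SINGLE_SUSPEND without the synchronous wait, since the ptrace path calls single_thread_wait() later itself. A hedged sketch, with enum names invented for illustration:

```c
#include <assert.h>

enum old_mode { OLD_SUSPEND, OLD_PTRACE, OLD_UNWIND, OLD_EXIT };
enum new_mode { NEW_SUSPEND, NEW_UNWIND, NEW_EXIT };

struct new_call { enum new_mode mode; int wait; };

/*
 * SINGLE_PTRACE becomes SINGLE_SUSPEND with wait=0 (the caller
 * waits later via single_thread_wait()); every other old mode maps
 * to itself with wait=1, matching the callers updated in the diff
 * (sys_execve, exit1, sigexit).
 */
static struct new_call
map_single_mode(enum old_mode m)
{
	struct new_call c = { NEW_SUSPEND, 1 };

	switch (m) {
	case OLD_PTRACE:	c.wait = 0;		break;
	case OLD_SUSPEND:				break;
	case OLD_UNWIND:	c.mode = NEW_UNWIND;	break;
	case OLD_EXIT:		c.mode = NEW_EXIT;	break;
	}
	return c;
}
```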