rcctl(8): use modern ksh semantics
Hello, I've been perusing some of the ksh scripts within /usr/sbin/ and noticed some differences in coding style and shell syntax usage. Is there an "official" OpenBSD ksh style guide or a list of recommendations? For example, syspatch, sysupgrade and sysmerge all use double square brackets '[[' exclusively and the '((' arithmetic operator where appropriate. Conversely, /usr/sbin/rcctl seems to use double brackets only for pattern matching/comparisons, and uses the single square bracket '[' almost exclusively. Is there a specific reason for this? Are there external requirements/goals that I'm oblivious to? To test the waters, I've included a diff below that brings rcctl's usage of comparison operators in line with that of the other shell scripts in /usr/sbin/. This diff should apply cleanly against current; I grabbed my copy of rcctl.sh off of the github mirror a few hours ago. Does this diff look reasonable? Or should I stop tinkering? These changes passed my rudimentary testing and also got a clean bill of health from shellcheck. Regards, Jordan --- a/rcctl.sh Sat May 1 12:07:06 2021 +++ b/rcctl.sh Sat May 1 16:33:30 2021 @@ -40,7 +40,7 @@ needs_root() { - [ "$(id -u)" -ne 0 ] && _rc_err "${0##*/}: \"$*\" needs root privileges" + (($(id -u) != 0)) && _rc_err "${0##*/}: \"$*\" needs root privileges" } rcctl_err() @@ -55,17 +55,17 @@ cd /etc/rc.d && set -- * for _s; do [[ ${_s} == +([[:alnum:]_]) ]] || continue - [ ! -d "${_s}" ] && echo "${_s}" + [[ ! -d ${_s} ]] && echo "${_s}" done } pkg_scripts_append() { local _svc=$1 - [ -n "${_svc}" ] || return + [[ -n ${_svc} ]] || return rcconf_edit_begin - if [ -z "${pkg_scripts}" ]; then + if [[ -z ${pkg_scripts} ]]; then echo pkg_scripts="${_svc}" >>${_TMP_RCCONF} elif ! echo ${pkg_scripts} | grep -qw -- ${_svc}; then grep -v "^pkg_scripts.*=" /etc/rc.conf.local >${_TMP_RCCONF} @@ -77,7 +77,7 @@ pkg_scripts_order() { local _svcs="$*" - [ -n "${_svcs}" ] || return + [[ -n ${_svcs} ]] || return needs_root ${action} local _pkg_scripts _svc @@ -99,9 +99,9 @@ pkg_scripts_rm() { local _svc=$1 - [ -n "${_svc}" ] || return + [[ -n ${_svc} ]] || return - [ -z "${pkg_scripts}" ] && return + [[ -z ${pkg_scripts} ]] && return rcconf_edit_begin sed "/^pkg_scripts[[:>:]]/{s/[[:<:]]${_svc}[[:>:]]//g @@ -129,7 +129,7 @@ rcctl_err "cannot modify ${_TMP_RCCONF}" cat ${_TMP_RCCONF} >/etc/rc.conf.local || \ rcctl_err "cannot append to /etc/rc.conf.local" - if [ ! -s /etc/rc.conf.local ]; then + if [[ ! -s /etc/rc.conf.local ]]; then rm /etc/rc.conf.local || \ rcctl_err "cannot remove /etc/rc.conf.local" fi @@ -142,19 +142,19 @@ local _svc=$1 _rc_check_name "${_svc}" || return - [ -x "/etc/rc.d/${_svc}" ] && return + [[ -x /etc/rc.d/${_svc} ]] && return svc_is_special ${_svc} } svc_is_base() { local _svc=$1 - [ -n "${_svc}" ] || return + [[ -n ${_svc} ]] || return local _cached _ret _cached=$(eval echo \${cached_svc_is_base_${_svc}}) - [ "${_cached}" ] && return "${_cached}" + [[ -n ${_cached} ]] && return "${_cached}" grep -qw "^${_svc}_flags" /etc/rc.conf _ret=$? @@ -166,14 +166,14 @@ svc_is_meta() { local _svc=$1 - [ -n "${_svc}" ] || return + [[ -n ${_svc} ]] || return local _cached _ret _cached=$(eval echo \${cached_svc_is_meta_${_svc}}) - [ "${_cached}" ] && return "${_cached}" + [[ -n ${_cached} ]] && return "${_cached}" - [ -r "/etc/rc.d/${_svc}" ] && ! grep -qw "^rc_cmd" /etc/rc.d/${_svc} + [[ -r /etc/rc.d/${_svc} ]] && ! grep -qw "^rc_cmd" /etc/rc.d/${_svc} _ret=$? 
set -A cached_svc_is_meta_${_svc} -- ${_ret} @@ -183,12 +183,12 @@ svc_is_special() { local _svc=$1 - [ -n "${_svc}" ] || return + [[ -n ${_svc} ]] || return local _cached _ret _cached=$(eval echo \${cached_svc_is_special_${_svc}}) - [ "${_cached}" ] && return "${_cached}" + [[ -n ${_cached} ]] && return "${_cached}" echo ${_special_svcs} | grep -qw -- ${_svc} _ret=$? @@ -200,7 +200,7 @@ svc_ls() { local _lsarg=$1 - [ -n "${_lsarg}" ] || return + [[ -n ${_lsarg} ]] || return # we do not want to return the "status" nor the rc.d(8) script retcode local _ret=0 _on _svc _started @@ -222,8 +222,8 @@ off|on) for _svc in $(svc_ls all); do svc_get ${_svc} status && _on=1 - [ "${_lsarg}" = "on" -a -n "${_on}" ] || \ - [ "${_lsarg}" = "off" -a -z "${_on}" ] && \ + [[ ${_lsarg} == on && -n ${_on} ]] || \ + [[ ${_lsarg} == off && -z ${_on} ]] && \ echo ${_svc} unset _on done @@ -231,8 +231,8 @@ started|stopped)
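For anyone skimming, a minimal sketch of the semantic difference the diff relies on (the variable and values below are made up): '[[ ... ]]' and '(( ... ))' are parsed by ksh itself, so unquoted expansions are not field-split or globbed, while '[' is an ordinary command and needs the quotes.

    #!/bin/ksh
    svc="foo bar"                 # value containing whitespace

    # ksh keyword: the expansion stays one word even without quotes
    [[ -n ${svc} ]] && echo "double brackets are fine unquoted"

    # plain test(1) command: the unquoted expansion would split into
    # two arguments and error out, so the quotes are mandatory here
    [ -n "${svc}" ] && echo "single brackets need the quotes"

    # arithmetic form, as used for the id -u check in the diff
    (( $(id -u) != 0 )) && echo "not running as root"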
Re: Fix assigning array variables in ksh
On 2/21/21 11:04 AM, Vadim Zhukov wrote: > Hello all. > > This continues the 'Another potential ksh bug?' thread on misc@: > https://marc.info/?l=openbsd-misc&m=160736700220621&w=2 > This present is a bit too late for Christmas, but at least the Day of > Red Army is coming soon. I'm sure noone is against this patch then. > > The code in typeset() function, which is responsible for almost all > shell variable tweaking, contains a bug: in case of array it passes > "foo=var" instead of only variable name ("foo"), if the value was > ever given. This results in, e.g., a bug reported by Jordan Geoghegan. > Generally speaking, we had creating a (temporary) variable via global() > in vpbase, and of course it didn't get the read-only marker. > > This fix is to use 'tvar' variable, which always contains shell > variable name, without value. > > As a bonus, this fixes the ifs.t:IFS-null-1 test (previously failing). > I'm too lazy to check why. :-) > > Okay anyone? > > -- > WBR, > Vadim Zhukov > > > Index: regress/bin/ksh/obsd-regress.t > === > RCS file: /cvs/src/regress/bin/ksh/obsd-regress.t,v > retrieving revision 1.10 > diff -u -p -r1.10 obsd-regress.t > --- regress/bin/ksh/obsd-regress.t8 Dec 2018 21:03:51 - 1.10 > +++ regress/bin/ksh/obsd-regress.t21 Feb 2021 18:51:54 - > @@ -503,3 +503,16 @@ description: > stdin: > kill -s SIGINFO $$ > --- > + > +name: overwrite-ro-array > +description: > + do not allow to override first element of a read-only array > + via the non-array access. > +stdin: > + arr[0]=foo > + readonly arr > + arr=bar > +expected-exit: e == 1 > +expected-stderr-pattern: > + /: arr: is read only$/ > +--- > Index: bin/ksh/var.c > === > RCS file: /cvs/src/bin/ksh/var.c,v > retrieving revision 1.71 > diff -u -p -r1.71 var.c > --- bin/ksh/var.c 21 Feb 2020 18:21:23 - 1.71 > +++ bin/ksh/var.c 21 Feb 2021 18:51:54 - > @@ -644,7 +644,7 @@ typeset(const char *var, int set, int cl > global(tvar); > set &= ~(LOCAL|LOCAL_COPY); > > - vpbase = (vp->flag & ARRAY) ? global(arrayname(var)) : vp; > + vpbase = (vp->flag & ARRAY) ? global(arrayname(tvar)) : vp; > > /* only allow export flag to be set. at&t ksh allows any attribute to >* be changed, which means it can be truncated or modified > Hi Vadim, I just tried this patch out and it appears to fix the issue -- thanks! I tried running a few large shell scripts that make use of arrays and everything worked as expected. I don't have the authority to give you an ok, but I would certainly like to see this patch accepted. Regards, Jordan
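For anyone who wants to confirm the behaviour outside the regress framework, a standalone sketch along the lines of the test case above (names are arbitrary; the failing assignment runs in a subshell so the script can keep going):

    #!/bin/ksh
    set -A arr -- foo baz         # build the array with ksh's set -A
    readonly arr
    # with the patched var.c this assignment is rejected with "is read only"
    (arr=bar) 2>/dev/null || echo "scalar assignment rejected, as expected"
    echo "arr[0] is still: ${arr[0]}"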
Re: grep: add --null flag
On 1/26/21 12:13 PM, Stuart Henderson wrote: > On 2021/01/26 11:18, Jordan Geoghegan wrote: >> >> On 1/26/21 5:47 AM, Stuart Henderson wrote: >>> On 2021/01/25 00:53, Sebastian Benoit wrote: >>>> Sebastian Benoit(be...@openbsd.org) on 2021.01.25 00:27:05 +0100: >>>>> Theo de Raadt(dera...@openbsd.org) on 2021.01.24 16:01:32 -0700: >>>>>> Stuart Henderson wrote: >>>>>> >>>>>>> On 2021/01/24 12:10, Theo de Raadt wrote: >>>>>>>> I completely despise that the option is called "--null". >>>>>>>> >>>>>>>> Someone was a complete idiot. >>>>>>> gnu grep has both --null and -z for this (why do they do that?!). >>>>>>> If it's added as --null it should be added as -z too. >>>>>>> >>>>>>> Looking at Debian codesearch most things using it as --null use other >>>>>>> long options that we don't have. Maybe just adding as -z would be >>>>>>> enough. It does seem a useful and fairly widely supported feature. >>>>>> Yes, maybe just add -z. >>>>> Actually it's "-Z, --null". The lowercase -z in gnu grep is >>>>> >>>>>-z, --null-data >>>>> Treat input and output data as sequences of lines, each >>>>> terminated by a zero byte (the ASCII NUL character) instead of >>>>> a newline. Like the -Z or --null option, this option can be >>>>> used with commands like sort-z to process arbitrary file >>>>> names. >>>> And we already have -z for "force grep to behave as zgrep". >>> sigh.. OK I guess we are better skipping the short flag and just use --null >>> >>>> Diff below with tedu@ suggestion and changed manpage text. >>> ok with me. if I was committing I would whine about the stupid flag in >>> commit log ;) >>> >>> [snip] >> Why is a short flag not being included? '-z' and '-0' are both available in >> our grep. > So after sorting out the caps mixups in the quoted text above: > > ggrep:-z --null-data > -Z --null > grep: -Z zgrep mode > > i.e. the flag which is used by other implementations for the same feature is > already used in our grep for a different non-standard feature. Using a short > flag for a related but different feature is even worse. > > -0 would the obvious "nice" option but now I see why ggrep didn't use > it for the feature: it's not actually available. It currently specifies > "print 0 lines of context". That's the same as plain grep with ours, but > with ggrep it prints -- denoting skipped lines: > > $ printf ' 1\n 2\n 2\n 1\n 2\n 1\n' | ggrep -0 2 > 2 > 2 > -- > 2 > > While it doesn't seem entirely sensible that someone should rely on > that, it does currently "work" (I could almost understand someone setting > a variable for lines of context, setting it to 0 to disable context, and > using "| grep -$whatever"). And dumping NULs into some processing > pipeline that isn't prepared for them is also not sensible. > > Moral of the story: don't rely on extensions to standard tools, write > your shell scripts in perl instead! > Okay, thanks for explaining that Stuart, I guess I didn't realize the depths of GNU's depravity. I'm surprised somebody didn't stop and think to themselves when writing this tidbit in the ggrep manpage for the '-Z, --null' option: "This option (-Z, --null) can be used with commands like find -print0, perl -0, sort -z, and xargs -0 ..." One of those things is not like the others.
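For context, the point of the flag is to make file-name output safe to hand to xargs -0; a hedged example of how it would look with the proposed --null (the path is only an example):

    # print matching file names NUL-terminated so names containing
    # spaces or newlines survive the pipeline
    grep -lR --null 'pattern' /etc/examples | xargs -0 ls -l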
Re: grep: add --null flag
On 1/26/21 5:47 AM, Stuart Henderson wrote: > On 2021/01/25 00:53, Sebastian Benoit wrote: >> Sebastian Benoit(be...@openbsd.org) on 2021.01.25 00:27:05 +0100: >>> Theo de Raadt(dera...@openbsd.org) on 2021.01.24 16:01:32 -0700: Stuart Henderson wrote: > On 2021/01/24 12:10, Theo de Raadt wrote: >> I completely despise that the option is called "--null". >> >> Someone was a complete idiot. > gnu grep has both --null and -z for this (why do they do that?!). > If it's added as --null it should be added as -z too. > > Looking at Debian codesearch most things using it as --null use other > long options that we don't have. Maybe just adding as -z would be > enough. It does seem a useful and fairly widely supported feature. Yes, maybe just add -z. >>> Actually it's "-Z, --null". The lowercase -z in gnu grep is >>> >>>-z, --null-data >>> Treat input and output data as sequences of lines, each >>> terminated by a zero byte (the ASCII NUL character) instead of >>> a newline. Like the -Z or --null option, this option can be >>> used with commands like sort-z to process arbitrary file >>> names. >> And we already have -z for "force grep to behave as zgrep". > sigh.. OK I guess we are better skipping the short flag and just use --null > >> Diff below with tedu@ suggestion and changed manpage text. > ok with me. if I was committing I would whine about the stupid flag in > commit log ;) > > [snip] Why is a short flag not being included? '-z' and '-0' are both available in our grep. Jordan
Re: grep: add --null flag
On 1/24/21 3:53 PM, Sebastian Benoit wrote: > And we already have -z for "force grep to behave as zgrep". Maybe I'm misunderstanding what you're saying, but '-Z' (uppercase) appears to be the flag for "Force grep to behave as zgrep." The '-z' (lowercase) is currently unused/invalid in our grep. FWIW, I agree that the '--null' long option naming is annoying and gross. There does appear to be somewhat of a precedent for use of '-0' with grep-like tools. The ripgrep tool forgoes '-z/-Z' and instead opts for '-0' and the '--null' long option. Regards, Jordan
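For reference, a couple of hedged command lines showing how the existing and proposed spellings line up (file paths are placeholders, and ripgrep is assumed to be installed from packages):

    # OpenBSD grep: -Z already means "behave as zgrep", i.e. read gzip'd files
    grep -Z 'pattern' /var/log/daemon.0.gz

    # ripgrep's -0/--null, the same NUL-termination idea as the proposed flag
    rg -0 -l 'pattern' . | xargs -0 ls -l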
Re: [PATCH] octeon.html dead links
On 12/30/20 4:59 AM, Stuart Henderson wrote: > On 2020/12/30 00:12, Jordan Geoghegan wrote: >> Hello, >> >> I noticed some dead links on the octeon.html page for the Portwell and Rhino >> devices. >> >> The Portwell link has been pointed to archive.org and the Rhino links now >> point to correct URL after Rhino Labs changed their website around. > Thanks, committed. Thanks Stuart, my first OpenBSD patch! > >> I've attached the patch rather that inlining it as I'm almost certain that >> Thunderbird will mangle the long lines. > It's possible but annoying: > https://www.kernel.org/doc/html/latest/process/email-clients.html#thunderbird-gui Thanks for the link. I followed those instructions, so hopefully this mail I'm sending you doesn't mangle the long lines. Thunderbird keeps on insisting on reverting to "Western" text encoding, but I know I've sent fancy umlauts and accent aigu etc without issue. Not sure what that's about *shrug*. // > > >> Regards, >> >> Jordan >> diff --git a/octeon.html b/octeon.html >> index 7c812cae8..4926cc67c 100644 >> --- a/octeon.html >> +++ b/octeon.html >> @@ -59,14 +59,14 @@ local disk on machines lacking a CF slot. >> OpenBSD/octeon supports the following machines: >> >> >> -> href="http://www.portwell.com/products/detail.php?CUSTCHAR1=CAM-0100";> >> +> href="https://web.archive.org/web/20180202202403/http://www.portwell.com/products/detail.php?CUSTCHAR1=CAM-0100";> >> Portwell CAM-0100 >> D-Link >> > href="https://web.archive.org/web/20140803060102/http://www.dlink.com/us/en/business-solutions/security/services-routers/dsr-500-the-d-link-services-router";> >> DSR-500, >> > href="https://web.archive.org/web/20170422065451/http://us.dlink.com/us/en/business-solutions/security/services-routers/dsr-500n-wireless-n-unified-services-router.html";> >> DSR-500N >> >> -> href="http://www.rhinolabsinc.com/rhino-shasta-enterprise-grade-network-appliance/";>Rhino >> Labs Inc. SDNA Shasta >> -> href="http://rhinolabsinc.com/rhino-sdna7130-networking-appliance/";>Rhino >> Labs Inc. SDNA-7130 >> +http://www.rhinolabsinc.com/sdna-shasta/";>Rhino Labs Inc. SDNA >> Shasta >> +http://www.rhinolabsinc.com/sdna-7130/";>Rhino Labs Inc. >> SDNA-7130 >> https://www.ui.com/edgemax/edgerouter-lite/";>Ubiquiti Networks >> EdgeRouter LITE >> https://www.ui.com/edgemax/edgerouter-poe/";>Ubiquiti Networks
[PATCH] octeon.html dead links
Hello, I noticed some dead links on the octeon.html page for the Portwell and Rhino devices. The Portwell link has been pointed to archive.org and the Rhino links now point to the correct URLs after Rhino Labs changed their website around. I've attached the patch rather than inlining it as I'm almost certain that Thunderbird will mangle the long lines. Regards, Jordan diff --git a/octeon.html b/octeon.html index 7c812cae8..4926cc67c 100644 --- a/octeon.html +++ b/octeon.html @@ -59,14 +59,14 @@ local disk on machines lacking a CF slot. OpenBSD/octeon supports the following machines: -http://www.portwell.com/products/detail.php?CUSTCHAR1=CAM-0100";> +https://web.archive.org/web/20180202202403/http://www.portwell.com/products/detail.php?CUSTCHAR1=CAM-0100";> Portwell CAM-0100 D-Link https://web.archive.org/web/20140803060102/http://www.dlink.com/us/en/business-solutions/security/services-routers/dsr-500-the-d-link-services-router";> DSR-500, https://web.archive.org/web/20170422065451/http://us.dlink.com/us/en/business-solutions/security/services-routers/dsr-500n-wireless-n-unified-services-router.html";> DSR-500N -http://www.rhinolabsinc.com/rhino-shasta-enterprise-grade-network-appliance/";>Rhino Labs Inc. SDNA Shasta -http://rhinolabsinc.com/rhino-sdna7130-networking-appliance/";>Rhino Labs Inc. SDNA-7130 +http://www.rhinolabsinc.com/sdna-shasta/";>Rhino Labs Inc. SDNA Shasta +http://www.rhinolabsinc.com/sdna-7130/";>Rhino Labs Inc. SDNA-7130 https://www.ui.com/edgemax/edgerouter-lite/";>Ubiquiti Networks EdgeRouter LITE https://www.ui.com/edgemax/edgerouter-poe/";>Ubiquiti Networks
Re: dig vs ipv6 (scoped) addresses
On 12/18/20 5:04 PM, Jordan Geoghegan wrote: On 12/17/20 3:15 AM, Otto Moerbeek wrote: Hi, as noted on misc dig does not like to talk to local link addresses. This fixes that case. While investigating I also found another bug: selecting v6 or v4 addresses only from resolv.conf via the -4 or -6 command line argument does not work as expected. This fixes both. I did not test binding to a src address with this. This is likely as broken as it was before. -Otto Index: dig.c === RCS file: /cvs/src/usr.bin/dig/dig.c,v retrieving revision 1.18 diff -u -p -r1.18 dig.c --- dig.c 15 Sep 2020 11:47:42 - 1.18 +++ dig.c 17 Dec 2020 11:06:49 - @@ -1358,7 +1358,7 @@ dash_option(char *option, char *next, di } else srcport = 0; if (have_ipv6 && inet_pton(AF_INET6, value, &in6) == 1) - isc_sockaddr_fromin6(&bind_address, &in6, srcport); + isc_sockaddr_fromin6(&bind_address, &in6, srcport, 0); else if (have_ipv4 && inet_pton(AF_INET, value, &in4) == 1) isc_sockaddr_fromin(&bind_address, &in4, srcport); else Index: dighost.c === RCS file: /cvs/src/usr.bin/dig/dighost.c,v retrieving revision 1.34 diff -u -p -r1.34 dighost.c --- dighost.c 15 Sep 2020 11:47:42 - 1.34 +++ dighost.c 17 Dec 2020 11:06:49 - @@ -540,7 +540,7 @@ get_addresses(const char *hostname, in_p struct sockaddr_in6 *sin6; sin6 = (struct sockaddr_in6 *)tmpai->ai_addr; isc_sockaddr_fromin6(&addrs[i], &sin6->sin6_addr, - dstport); + dstport, sin6->sin6_scope_id); } i++; @@ -972,7 +972,7 @@ parse_netprefix(struct sockaddr_storage if (inet_pton(AF_INET6, buf, &in6) == 1) { parsed = 1; - isc_sockaddr_fromin6(sa, &in6, 0); + isc_sockaddr_fromin6(sa, &in6, 0, 0); if (prefix_length > 128) prefix_length = 128; } else if (inet_pton(AF_INET, buf, &in4) == 1) { Index: lib/isc/sockaddr.c === RCS file: /cvs/src/usr.bin/dig/lib/isc/sockaddr.c,v retrieving revision 1.14 diff -u -p -r1.14 sockaddr.c --- lib/isc/sockaddr.c 28 Nov 2020 06:33:55 - 1.14 +++ lib/isc/sockaddr.c 17 Dec 2020 11:06:49 - @@ -223,7 +223,7 @@ isc_sockaddr_anyofpf(struct sockaddr_sto void isc_sockaddr_fromin6(struct sockaddr_storage *sockaddr, const struct in6_addr *ina6, - in_port_t port) + in_port_t port, uint32_t scope) { struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) sockaddr; memset(sockaddr, 0, sizeof(*sockaddr)); @@ -231,6 +231,7 @@ isc_sockaddr_fromin6(struct sockaddr_sto sin6->sin6_len = sizeof(*sin6); sin6->sin6_addr = *ina6; sin6->sin6_port = htons(port); + sin6->sin6_scope_id = scope; } int Index: lib/isc/include/isc/sockaddr.h === RCS file: /cvs/src/usr.bin/dig/lib/isc/include/isc/sockaddr.h,v retrieving revision 1.7 diff -u -p -r1.7 sockaddr.h --- lib/isc/include/isc/sockaddr.h 15 Sep 2020 11:47:42 - 1.7 +++ lib/isc/include/isc/sockaddr.h 17 Dec 2020 11:06:49 - @@ -91,7 +91,7 @@ isc_sockaddr_fromin(struct sockaddr_stor void isc_sockaddr_fromin6(struct sockaddr_storage *sockaddr, const struct in6_addr *ina6, - in_port_t port); + in_port_t port, uint32_t scope); /*%< * Construct an struct sockaddr_storage from an IPv6 address and port. 
*/ Index: lib/lwres/lwconfig.c === RCS file: /cvs/src/usr.bin/dig/lib/lwres/lwconfig.c,v retrieving revision 1.6 diff -u -p -r1.6 lwconfig.c --- lib/lwres/lwconfig.c 25 Feb 2020 05:00:43 - 1.6 +++ lib/lwres/lwconfig.c 17 Dec 2020 11:06:49 - @@ -231,7 +231,7 @@ lwres_conf_parsenameserver(lwres_conf_t res = lwres_create_addr(word, &address, 1); use_ipv4 = confdata->flags & LWRES_USEIPV4; - use_ipv6 = confdata->flags & LWRES_USEIPV4; + use_ipv6 = confdata->flags & LWRES_USEIPV6; if (res == LWRES_R_SUCCESS && ((address.family == LWRES_ADDRTYPE_V4 && use_ipv4) || (address.family == LWRES_ADDRTYPE_V6 && use_ipv6))) { Hi Otto, I just gave this diff a go and everything seems to be working fine. Dig is now using the proper DNS set in my resolv.conf: $ ./dig openbsd.org ; <<>> dig 9.10.8-P1 <<>> openbsd.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3506 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY:
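For completeness, a hedged example of what the patch enables (the interface name and address are placeholders): querying a resolver on a link-local address now works when the scope is given, and -4/-6 now really restrict which resolv.conf nameservers get used.

    # query a link-local resolver reachable over em0
    dig @fe80::1%em0 openbsd.org A

    # only use the IPv6 nameservers listed in resolv.conf
    dig -6 openbsd.org AAAA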
Re: dig vs ipv6 (scoped) addresses
On 12/17/20 3:15 AM, Otto Moerbeek wrote: Hi, as noted on misc dig does not like to talk to local link addresses. This fixes that case. While investigating I also found another bug: selecting v6 or v4 addresses only from resolv.conf via the -4 or -6 command line argument does not work as expected. This fixes both. I did not test binding to a src address with this. This is likely as broken as it was before. -Otto Index: dig.c === RCS file: /cvs/src/usr.bin/dig/dig.c,v retrieving revision 1.18 diff -u -p -r1.18 dig.c --- dig.c 15 Sep 2020 11:47:42 - 1.18 +++ dig.c 17 Dec 2020 11:06:49 - @@ -1358,7 +1358,7 @@ dash_option(char *option, char *next, di } else srcport = 0; if (have_ipv6 && inet_pton(AF_INET6, value, &in6) == 1) - isc_sockaddr_fromin6(&bind_address, &in6, srcport); + isc_sockaddr_fromin6(&bind_address, &in6, srcport, 0); else if (have_ipv4 && inet_pton(AF_INET, value, &in4) == 1) isc_sockaddr_fromin(&bind_address, &in4, srcport); else Index: dighost.c === RCS file: /cvs/src/usr.bin/dig/dighost.c,v retrieving revision 1.34 diff -u -p -r1.34 dighost.c --- dighost.c 15 Sep 2020 11:47:42 - 1.34 +++ dighost.c 17 Dec 2020 11:06:49 - @@ -540,7 +540,7 @@ get_addresses(const char *hostname, in_p struct sockaddr_in6 *sin6; sin6 = (struct sockaddr_in6 *)tmpai->ai_addr; isc_sockaddr_fromin6(&addrs[i], &sin6->sin6_addr, -dstport); +dstport, sin6->sin6_scope_id); } i++; @@ -972,7 +972,7 @@ parse_netprefix(struct sockaddr_storage if (inet_pton(AF_INET6, buf, &in6) == 1) { parsed = 1; - isc_sockaddr_fromin6(sa, &in6, 0); + isc_sockaddr_fromin6(sa, &in6, 0, 0); if (prefix_length > 128) prefix_length = 128; } else if (inet_pton(AF_INET, buf, &in4) == 1) { Index: lib/isc/sockaddr.c === RCS file: /cvs/src/usr.bin/dig/lib/isc/sockaddr.c,v retrieving revision 1.14 diff -u -p -r1.14 sockaddr.c --- lib/isc/sockaddr.c 28 Nov 2020 06:33:55 - 1.14 +++ lib/isc/sockaddr.c 17 Dec 2020 11:06:49 - @@ -223,7 +223,7 @@ isc_sockaddr_anyofpf(struct sockaddr_sto void isc_sockaddr_fromin6(struct sockaddr_storage *sockaddr, const struct in6_addr *ina6, -in_port_t port) +in_port_t port, uint32_t scope) { struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) sockaddr; memset(sockaddr, 0, sizeof(*sockaddr)); @@ -231,6 +231,7 @@ isc_sockaddr_fromin6(struct sockaddr_sto sin6->sin6_len = sizeof(*sin6); sin6->sin6_addr = *ina6; sin6->sin6_port = htons(port); + sin6->sin6_scope_id = scope; } int Index: lib/isc/include/isc/sockaddr.h === RCS file: /cvs/src/usr.bin/dig/lib/isc/include/isc/sockaddr.h,v retrieving revision 1.7 diff -u -p -r1.7 sockaddr.h --- lib/isc/include/isc/sockaddr.h 15 Sep 2020 11:47:42 - 1.7 +++ lib/isc/include/isc/sockaddr.h 17 Dec 2020 11:06:49 - @@ -91,7 +91,7 @@ isc_sockaddr_fromin(struct sockaddr_stor void isc_sockaddr_fromin6(struct sockaddr_storage *sockaddr, const struct in6_addr *ina6, -in_port_t port); +in_port_t port, uint32_t scope); /*%< * Construct an struct sockaddr_storage from an IPv6 address and port. 
*/ Index: lib/lwres/lwconfig.c === RCS file: /cvs/src/usr.bin/dig/lib/lwres/lwconfig.c,v retrieving revision 1.6 diff -u -p -r1.6 lwconfig.c --- lib/lwres/lwconfig.c25 Feb 2020 05:00:43 - 1.6 +++ lib/lwres/lwconfig.c17 Dec 2020 11:06:49 - @@ -231,7 +231,7 @@ lwres_conf_parsenameserver(lwres_conf_t res = lwres_create_addr(word, &address, 1); use_ipv4 = confdata->flags & LWRES_USEIPV4; - use_ipv6 = confdata->flags & LWRES_USEIPV4; + use_ipv6 = confdata->flags & LWRES_USEIPV6; if (res == LWRES_R_SUCCESS && ((address.family == LWRES_ADDRTYPE_V4 && use_ipv4) || (address.family == LWRES_ADDRTYPE_V6 && use_ipv6))) { Hi Otto, I just gave this diff a go and everything seems to be working fine. Dig is now using the proper DNS set in my resolv.conf: $ ./dig openbsd.org ; <<>> dig 9.10.8-P1 <<>> openbsd.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3506 ;; flags: qr rd ra
Re: find -exec util {} arg + confusion
On 11/16/20 12:15 AM, Paul de Weerd wrote: Hi Andreas, On Mon, Nov 16, 2020 at 08:53:36AM +0100, Andreas Kusalananda Kähäri wrote: | On Thu, Nov 12, 2020 at 08:51:22PM +0100, Paul de Weerd wrote: | > Hi all, | > | > I misread find(1) and did: | > | > [weerdpom] $ find path/to/cam -name \*.JPG -exec cp {} path/to/store + | > find: -exec no terminating ";" or "+" | | Not really what you're asking for, but... | | What you seem to want to do can be done with | | find path/to/cam -name '*.JPG' -exec sh -c 'cp "$@" path/to/store' sh {} + Thanks, I've solved this in the past with small scripts in my homedir or going to `find | xargs -J`. I'll add your suggestion to my list. | Or, with GNU coreutils installed, | | find path/to/cam -name '*.JPG' -exec gcp -t path/to/store {} + Ugh, installing GNU stuff for something like this... :) Besides, the problem is more generic than just cp(1). Do all GNU tools that (by default) have the target argument as the last argument support -t? I mean, I know cp, mv and ln do, but do they all? | I quite like Alexander's proposed smaller diff. Making it more clear to the user that + must follow {} is the thing I'd like to achieve, so yeah :) No strong feelings over his vs my diff. Cheers, Paul Hi Paul, I've stumbled on this issue a number of times myself. The easiest workaround I've found is to do something like this: find /tmp/1 -type f -name "*" -print0 | xargs -0 -I{} cp {} /tmp/2/ No GNU stuff needed. Regards, Jordan
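Two more variations on the same theme, for the archives (paths as in Paul's original mail): find(1) itself only accepts '{}' immediately before '+', but the per-file ';' form and xargs -J both sidestep the destination-comes-last problem.

    # one cp per file, using find alone
    find path/to/cam -name '*.JPG' -exec cp {} path/to/store \;

    # batched copies via xargs -J, as mentioned upthread
    find path/to/cam -name '*.JPG' -print0 | xargs -0 -J % cp % path/to/store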
Re: Unbound 1.12.0
On 2020-10-13 07:25, Stuart Henderson wrote: On 2020/10/11 15:37, Renaud Allard wrote: On 10/10/2020 22:05, Stuart Henderson wrote: Here's an update to the recently released version of Unbound. Much of the additional code is for DoH and is unused here as it requires the nghttp2 library. nghttp2 seems to be MIT licensed. Could it be possible to import it in base? Personally I am not particularly interested in pushing a library that is already in ports into base.. At this point I would prefer to handle it as a simple update, then consider 1) whether there should be a separate "ports unbound" with things that can't (or can't easily) be handled in base - e.g. this, python, dnstap or 2) whether unbound should just move back to ports completely. Not that my opinion matters, but I really enjoy having a nice relatively "stripped down" unbound in base. I like knowing the OpenBSD folks have stripped out a lot of the cruft. I shouldn't need Python and an HTTP parsing library just to run a modest little local resolver -- Just my two cents. Regards, Jordan
Re: doas: improve error message
On 2020-10-08 16:39, Klemens Nanni wrote: On Thu, Oct 08, 2020 at 04:23:53PM -0700, Jordan Geoghegan wrote: This improved error message would have been useful a few months ago where I had a number of end-users of one of my scripts get confused due to the cryptic error messages spit out by doas. The diff does not change behaviour or output for end-users on the command line; instead it changes syslog messages which by default are only readable by root. As admin going through logs of multiple hosts (in a centralised place) the proposed change clarifies that whatever was TRIED to be executed never actually DID execute. Hi Klemens and Theo, Yes, I'm aware it was related to syslog output. The end-user situation I mentioned involved a script run via cron job and folks were getting confused by weird messages in their syslog due to using the script wrong. Regards, Jordan
Re: doas: improve error message
Hi Klemens, I'm not a dev, so I can't give you an OK, but I just wanted to say that I certainly support this change. This improved error message would have been useful a few months ago when I had a number of end-users of one of my scripts get confused due to the cryptic error messages spit out by doas. Regards, Jordan On 2020-10-08 16:09, Klemens Nanni wrote: In case `cmd' and `args' in doas.conf(5) do not match, the generated log message is unclear and might be read as if the command executed but failed, i.e. returned non-zero: # cat /etc/doas.conf permit nopass kn cmd echo args foo $ doas echo foo foo $ doas echo bar doas: Operation not permitted The corresponding syslog(3) messages from /var/log/secure: Oct 9 01:05:14 eru doas: kn ran command echo foo as root from /home/kn Oct 9 01:05:20 eru doas: failed command for kn: echo bar The following reads unambiguous and better matches the EPERM wording: Oct 9 01:05:20 eru doas: command not permitted for kn: echo bar Feedback? OK? Index: doas.c === RCS file: /cvs/src/usr.bin/doas/doas.c,v retrieving revision 1.82 diff -u -p -r1.82 doas.c --- doas.c 18 Oct 2019 17:15:45 - 1.82 +++ doas.c 8 Oct 2020 22:59:45 - @@ -396,7 +396,7 @@ main(int argc, char **argv) if (!permit(uid, groups, ngroups, &rule, target, cmd, (const char **)argv + 1)) { syslog(LOG_AUTHPRIV | LOG_NOTICE, - "failed command for %s: %s", mypw->pw_name, cmdline); + "command not permitted for %s: %s", mypw->pw_name, cmdline); errc(1, EPERM, NULL); }
Re: [Patch] Change httpd's handling of request "Host:" headers
On 2020-08-10 08:49, Jeremie Courreges-Anglas wrote: On Sun, Aug 09 2020, Ross L Richardson wrote: At present, if a request contains no "Host:" header [HTTP pre-1.1] or if the supplied header does not match any of the servers configured in httpd.conf, the request is directed to the first server. This isn't documented, AFAICT. For example, if httpd.conf has just one server server "www.example.com" then we currently get $ printf "HEAD / HTTP/1.0\r\nHost: www.openbsd.org\r\n\r\n" \ | nc www.example.com www | sed 1q HTTP/1.0 200 OK This behaviour strikes me as wrong (or at least sub-optimal) in the case of non-matching "Host:" headers. The simplistic patch below changes things to return a 404 status if no matching server is found. [If status code 400 (bad request) is preferred, "goto fail;" could be used.] Justification: - This seems more correct, and is consistent with the "fail closed" approach. - There is a net gain in functionality, as use of glob/patterns wildcards can easily re-establish the current behaviour. In contrast, there's no way at present to disable the implicit match-anything behaviour. The first server in my httpd config uses "root "/nonexistent". This results in proper 404 replies, so there is a way to disable the current behavior. I probably inferred this from the examples in the manpage. I've been using the following at the top of my httpd.conf for a while now and it seems to be working fine: server "default" { listen on * port 80 block return 400 } This should also prevent httpd from wasting cycles looking for non-existent locations on the filesystem. My gut feeling is that the existing behavior is useful (you can copy/paste the existing example in the manpage and serve files right away under multiple host names) and I see no reason to break it. Did you check whether this breaks existing mirrors? If this is adopted, it should be document in current.html A followup patch could merge this if statement with the one above it. Several other issues exist in "Host:" header handling. Ross -- Index: server_http.c === RCS file: /cvs/src/usr.sbin/httpd/server_http.c,v retrieving revision 1.140 diff -u -p -r1.140 server_http.c --- server_http.c 3 Aug 2020 10:59:53 - 1.140 +++ server_http.c 9 Aug 2020 04:37:08 - @@ -1200,7 +1200,7 @@ server_response(struct httpd *httpd, str struct server_config*srv_conf = &srv->srv_conf; struct kv *kv, key, *host; struct str_find sm; - int portval = -1, ret; + int hostmatch = 0, portval = -1, ret; char*hostval, *query; const char *errstr = NULL; @@ -1277,16 +1277,20 @@ server_response(struct httpd *httpd, str /* Replace host configuration */ clt->clt_srv_conf = srv_conf; srv_conf = NULL; + hostmatch = 1; break; } } } - if (srv_conf != NULL) { + if (host == NULL) { /* Use the actual server IP address */ if (server_http_host(&clt->clt_srv_ss, hostname, sizeof(hostname)) == NULL) goto fail; + } else if (!hostmatch) { + server_abort_http(clt, 404, "not found"); + return (-1); } else { /* Host header was valid and found */ if (strlcpy(hostname, host->kv_value, sizeof(hostname)) >=
mktestdata.sh make effective use of character classes
Hello, This is the first diff I've submitted here, so I imagine there are a dozen things I'm missing or clueless about. I've been making an effort to start browsing the tree and reading the code regularly. I came across this file; I know it's a regress test, so there may be some context/history I'm ignorant of. All I know is that it is cleaner to use '[:space:]' rather than ' \n' unless you are intentionally ignoring tabs, '\r', etc. I also know that '[[:alpha:]]' is a lot cleaner than '[a-zA-Z]' and could also avoid potential issues with localization (I'm not an expert on that, just what I was told). I know OpenSSH portable runs on some wacky systems that may not support character classes, but if this test is just for OpenBSD then all should be well. Index: regress/usr.bin/ssh/unittests/sshkey/mktestdata.sh === RCS file: /cvs/src/regress/usr.bin/ssh/unittests/sshkey/mktestdata.sh,v retrieving revision 1.11 diff -u -p -u -r1.11 mktestdata.sh --- regress/usr.bin/ssh/unittests/sshkey/mktestdata.sh 19 Jun 2020 03:48:49 - 1.11 +++ regress/usr.bin/ssh/unittests/sshkey/mktestdata.sh 14 Jul 2020 07:48:40 - @@ -9,13 +9,13 @@ rsa_params() { set -e openssl rsa -noout -text -in $_in | \ awk '/^modulus:$/,/^publicExponent:/' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.n + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.n openssl rsa -noout -text -in $_in | \ awk '/^prime1:$/,/^prime2:/' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.p + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.p openssl rsa -noout -text -in $_in | \ awk '/^prime2:$/,/^exponent1:/' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.q + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.q for x in n p q ; do echo "" >> ${_outbase}.$x echo ${_outbase}.$x @@ -30,13 +30,13 @@ dsa_params() { set -e openssl dsa -noout -text -in $_in | \ awk '/^priv:$/,/^pub:/' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.priv + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.priv openssl dsa -noout -text -in $_in | \ awk '/^pub:/,/^P:/' | #\ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.pub + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.pub openssl dsa -noout -text -in $_in | \ awk '/^G:/,0' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.g + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.g for x in priv pub g ; do echo "" >> ${_outbase}.$x echo ${_outbase}.$x @@ -51,10 +51,10 @@ ecdsa_params() { set -e openssl ec -noout -text -in $_in | \ awk '/^priv:$/,/^pub:/' | \ - grep -v '^[a-zA-Z]' | tr -d ' \n:' > ${_outbase}.priv + grep -v '^[[:alpha:]]' | tr -d '[:space:]:' > ${_outbase}.priv openssl ec -noout -text -in $_in | \ grep "ASN1 OID:" | \ sed 's/.*: //;s/ *$//' | tr -d '\n' > ${_outbase}.curve
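A quick hedged demonstration of the difference the change makes (the input strings are made up): the old deletion set ' \n:' only strips spaces, newlines and colons, while '[:space:]:' also catches tabs and carriage returns.

    printf 'de:ad\tbe:ef\r\n' | tr -d ' \n:' | od -c        # \t and \r survive
    printf 'de:ad\tbe:ef\r\n' | tr -d '[:space:]:' | od -c  # only "deadbeef" remains
    echo 'modulus:' | grep -v '^[[:alpha:]]'                # filtered out, as intended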
Re: unbound 1.10.0
On 2020-03-16 06:01, Renaud Allard wrote: On 3/15/20 9:53 PM, Stuart Henderson wrote: On 2020/03/15 19:05, Renaud Allard wrote: On 15/03/2020 17:36, Stuart Henderson wrote: Lots of churn again.. most of the new + are related to the new rpz and serve-stale support. I've been running it for a few days with my usual setup with no problems, haven't tried the new things yet. Anyone want to test? I have had a lot of stalling issues with unbound 1.9.4 being a DoT server. It went better with 1.9.6, but there are still stalling issues. I am now trying your patch on 6.6-stable to see if it solves the stalling issues. I would be interested to know if it helps, but I don't think all that much has changed for DoT in this release so I think it's unlikely. Honestly I would just put dnsdist in front if you want good DoT (or DoH) service, it is solid. From my time limited testing of about 15 hours, I got no stall at all with 1.10.0 on any of the 3 servers tested. Down from at least 10 times in this kind of timeframe with 1.9.4 and 4-5 with 1.9.6. I think the problem is more related to tcp sessions themselves than DoT. That's good to hear, I've had a number of issues with DoT stalling on 1.9.4 and 1.9.6. I've found having multiple entries in my forwarders section has somewhat mitigated the issue. I wonder if the stalling problem is related to this: https://github.com/NLnetLabs/unbound/issues/47
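For anyone else fighting DoT stalls, a hedged sketch of the "multiple forwarders" arrangement mentioned above (the upstream addresses are placeholders; tls-cert-bundle also needs to be set in the server section for certificate verification):

    forward-zone:
        name: "."
        forward-tls-upstream: yes
        forward-addr: 9.9.9.9@853#dns.quad9.net
        forward-addr: 149.112.112.112@853#dns.quad9.net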
Re: Stream Control Transmission Protocol SCTP RFC4960
On 5/23/19 2:35 PM, Job Snijders wrote: On Thu, May 23, 2019 at 19:50 Denis wrote: SCTP(4) present in FreeBSD 12.0 OpenBSD implementation planned? Nothing planned as far as I know. Out of curiosity - what is your use case? Do you really use it? It doesn’t seem to be a widely used protocol. Kind regards, Job It's so infrequently used that DragonFly removed it a while back. Only a handful of programs have ever made use of it.
Re: Removing PF
On 4/1/19 9:03 AM, Kevin Chadwick wrote: On 4/1/19 3:18 PM, Mateusz Guzik wrote: While I support pf removal, I don't think bpf is the way to go. FreeBSD just removed their pf [1] so the code is up for grabs and you can import it with one weird trick. [1] https://lists.freebsd.org/pipermail/svn-src-projects/2019-April/013336.html lol, did you read the link that you posted "pf in FreeBSD lags years behind OpenBSD's pf. Remove it. Users are advised to migrate to ipf." Why would they replace new pf with old or are you trying to suggest ipf instead of bpf? Realistically, we need to move to the one true firewall -- iptables! Ideally, OpenBSD needs a firewall that's 'web scale' and can be administered from a PHP web-based frontend that uses JSON message passing for clustering and failover.