urndis0: urndis_decap invalid buffer len 1 < minimum header 44
Hi, even after having recently updated the phone to a newer version of Android, I'm still spammed by urndis with the message in the subject. It doesn't really matter to me what you do to silence it, but something like the diffs below does work for me, and thanks in advance :)

-Artturi

...this...

diff --git a/sys/dev/usb/if_urndis.c b/sys/dev/usb/if_urndis.c
index 5d148da4ab5..7dc12573c0d 100644
--- a/sys/dev/usb/if_urndis.c
+++ b/sys/dev/usb/if_urndis.c
@@ -834,11 +834,11 @@ urndis_decap(struct urndis_softc *sc, struct urndis_chain *c, u_int32_t len)
 		    len));
 
 	if (len < sizeof(*msg)) {
-		printf("%s: urndis_decap invalid buffer len %u < "
+		DPRINTF(("%s: urndis_decap invalid buffer len %u < "
 		    "minimum header %zu\n", DEVNAME(sc), len,
-		    sizeof(*msg));
+		    sizeof(*msg)));
 		return;
 	}

...or this...

diff --git a/sys/dev/usb/if_urndis.c b/sys/dev/usb/if_urndis.c
index 5d148da4ab5..4b2c6e89ec9 100644
--- a/sys/dev/usb/if_urndis.c
+++ b/sys/dev/usb/if_urndis.c
@@ -834,6 +834,8 @@ urndis_decap(struct urndis_softc *sc, struct urndis_chain *c, u_int32_t len)
 		    len));
 
 	if (len < sizeof(*msg)) {
+		if (len == 1) /* workaround for spamming androids */
+			return;
 		printf("%s: urndis_decap invalid buffer len %u < "
 		    "minimum header %zu\n", DEVNAME(sc),
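The first diff relies on the usual kernel-driver DPRINTF pattern: debug output that compiles away unless a debug macro is defined, so per-packet noise never reaches the console in a release kernel. A minimal userland sketch of that pattern (the macro names and the `decap_len_ok()` helper are illustrative, not the actual driver code):

```c
#include <assert.h>
#include <stdio.h>

/*
 * Sketch of the DPRINTF idiom used in the patch above: the double
 * parentheses let a variadic-looking call collapse to nothing when
 * URNDIS_DEBUG is not defined.
 */
#ifdef URNDIS_DEBUG
#define DPRINTF(x)	do { printf x; } while (0)
#else
#define DPRINTF(x)
#endif

/* Returns 1 if the buffer is long enough for the RNDIS header,
 * logging (only in debug builds) when it is not. */
static int
decap_len_ok(unsigned int len, unsigned int hdrlen)
{
	if (len < hdrlen) {
		DPRINTF(("urndis_decap invalid buffer len %u < "
		    "minimum header %u\n", len, hdrlen));
		return 0;
	}
	return 1;
}
```

With the macro undefined, the 1-byte runt frames from the phone are still dropped, just silently.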
Re: Please test: kqueue & rwlock
On Tue, Sep 12, 2017 at 11:09:21AM +0200, Martin Pieuchot wrote:
> Test reports?  Comments?

I ran all the regression tests with it.  I did not see any regressions.

OK bluhm@
trunk (roundrobin) interface unexpected behavior
Removing an interface speeds up the trunk.

Snapshot + pkg_add iperf

# cat /etc/rc.conf.local
pflogd_flags=NO # add more flags, e.g. "-s 256"
smtpd_flags=NO
sndiod_flags=NO

Nothing else.

The network: configuration of device one

one# uname -a
OpenBSD beta.test 6.2 GENERIC.MP#89 amd64
one# for v in `ls /etc/hostname.*`; do echo file:$v; cat $v; done
file:/etc/hostname.bridge0
add em0
add em5
up
file:/etc/hostname.bridge1
add trunk0
add vether1
up
file:/etc/hostname.em0
down
file:/etc/hostname.em5
dhcp
file:/etc/hostname.em7
up
file:/etc/hostname.em8
up
file:/etc/hostname.trunk0
up
trunkport em7
trunkport em8
file:/etc/hostname.vether1
rdomain 1
inet 10.0.0.2 255.0.0.0

Device two is exactly the same, except:

file:/etc/hostname.vether1
rdomain 1
inet 10.0.0.1 255.0.0.0

two# ping -V 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=255 time=1.039 ms
two# route -T1 exec iperf -s

one# route -T1 exec iperf -c 10.0.0.1
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 17.0 KByte (default)
[  3] local 10.0.0.2 port 23507 connected with 10.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  70.4 MBytes  58.6 Mbits/sec

Wow, super slow :o

two# (iperf server output)
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 17665
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  97.0 MBytes  81.1 Mbits/sec
[  5] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 36997
[  5]  0.0-10.0 sec   781 MBytes   655 Mbits/sec

one# ifconfig trunk0 -trunkport em7
one# route -T1 exec iperf -c 10.0.0.1
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 17.0 KByte (default)
[  3] local 10.0.0.2 port 9762 connected with 10.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 9.9 sec   770 MBytes   650 Mbits/sec

Given more time the speed reaches almost the expected 1 Gb/s.

Afaik the rdomain/vether/bridge shenanigans is just my test glue, sorry about that.
With more interfaces in the trunk (I put 5 interfaces in it) the speed goes down to 200 Mb/s.  With these two it's terrible.

Because I expect some readers to ask for more:

pci0 at mainbus0 bus 0
ppb7 at pci0 dev 28 function 2 "Intel Bay Trail PCIE" rev 0x0e: msi
pci8 at ppb7 bus 8
ppb8 at pci8 dev 0 function 0 "Pericom PI7C9X2G608GP PCIE" rev 0x00
pci9 at ppb8 bus 9
ppb11 at pci9 dev 3 function 0 "Pericom PI7C9X2G608GP PCIE" rev 0x00: msi
ppb12 at pci9 dev 4 function 0 "Pericom PI7C9X2G608GP PCIE" rev 0x00: msi
pci13 at ppb12 bus 13
pci12 at ppb11 bus 12
em7 at pci12 dev 0 function 0 "Intel I210 Fiber" rev 0x03: msi, address 00:30:18:13:41:b2
em8 at pci13 dev 0 function 0 "Intel I210 Fiber" rev 0x03: msi, address 00:30:18:13:41:b3

and get 'amused' by all the [pci bridging].

Same behavior with em1 + em8:

pci0 at mainbus0 bus 0
ppb0 at pci0 dev 28 function 0 "Intel Bay Trail PCIE" rev 0x0e: msi
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 "Pericom PI7C9X2G608GP PCIE" rev 0x00
pci2 at ppb1 bus 2
ppb3 at pci2 dev 2 function 0 "Pericom PI7C9X2G608GP PCIE" rev 0x00: msi
pci4 at ppb3 bus 4
em1 at pci4 dev 0 function 0 "Intel 82574L" rev 0x00: msi, address 00:30:18:03:b8:c8

or em4:

pci0 at mainbus0 bus 0
ppb6 at pci0 dev 28 function 1 "Intel Bay Trail PCIE" rev 0x0e: msi
pci7 at ppb6 bus 7
em4 at pci7 dev 0 function 0 "Intel I211" rev 0x03: msi, address 00:30:18:04:3f:b3

which performs a tiny bit better:

# route -T1 exec iperf -c 10.0.0.1
Client connecting to 10.0.0.1, TCP port 5001
TCP window size: 17.0 KByte (default)
[  3] local 10.0.0.2 port 6334 connected with 10.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  97.4 MBytes  81.0 Mbits/sec

I hope this helps.  You can ask for more (I hope I'll be able to compile from this snap).

--
- Knowing is not enough; we must apply. Willing is not enough; we must do
Re: 6.2 beta snapshot , dhclient and bridge
On Tue, Sep 12, 2017 at 5:11 PM, sven falempin wrote:
> Following beta snaps
>
> same setup (one machine is a bridge for the next) still cannot recover
> DHCP OFFER back through the bridge
>
> (updated the bridge device)
>
> # uname -a
> OpenBSD bridgeandstuff.my.domain 6.2 GENERIC.MP#63 amd64
>
> # dhclient em5
> DHCPDISCOVER on em5 - interval 1
> DHCPDISCOVER on em5 - interval 1
> DHCPDISCOVER on em5 - interval 2
> DHCPDISCOVER on em5 - interval 4
> DHCPDISCOVER on em5 - interval 10
> DHCPDISCOVER on em5 - interval 10
> ^C
> # ifconfig em5
> em5: flags=8843 mtu 1500
>         lladdr 00:30:18:13:41:b0
>         index 6 priority 0 llprio 3
>         groups: egress
>         media: Ethernet autoselect (1000baseSX full-duplex)
>         status: active
>         inet 172.16.1.51 netmask 0x broadcast 172.16.255.255
> # ping 172.16.1.1
> PING 172.16.1.1 (172.16.1.1): 56 data bytes
> 64 bytes from 172.16.1.1: icmp_seq=0 ttl=255 time=0.504 ms
>
> Now updating the client
> (for trunk test on the last version, currently the more interfaces, the
> less the speed :s)

Confirmed on the last snap on both devices.

(Device one) em0 <---> em5 ( Device Two <--> bridge <--> ) em0 <--->

The interface em0 on device 'two' must be brought down then up for device 'one' to receive the OFFER:

one# reboot
two# reboot
one# dhclient em5
FAILED
two# ifconfig em0 down
two# ifconfig em0 up
one# dhclient em5
SUCCESS

--
- Knowing is not enough; we must apply. Willing is not enough; we must do
Re: 6.2 beta snapshot , dhclient and bridge
Following beta snaps

same setup (one machine is a bridge for the next) still cannot recover DHCP OFFER back through the bridge

(updated the bridge device)

# uname -a
OpenBSD bridgeandstuff.my.domain 6.2 GENERIC.MP#63 amd64

# dhclient em5
DHCPDISCOVER on em5 - interval 1
DHCPDISCOVER on em5 - interval 1
DHCPDISCOVER on em5 - interval 2
DHCPDISCOVER on em5 - interval 4
DHCPDISCOVER on em5 - interval 10
DHCPDISCOVER on em5 - interval 10
^C
# ifconfig em5
em5: flags=8843 mtu 1500
        lladdr 00:30:18:13:41:b0
        index 6 priority 0 llprio 3
        groups: egress
        media: Ethernet autoselect (1000baseSX full-duplex)
        status: active
        inet 172.16.1.51 netmask 0x broadcast 172.16.255.255
# ping 172.16.1.1
PING 172.16.1.1 (172.16.1.1): 56 data bytes
64 bytes from 172.16.1.1: icmp_seq=0 ttl=255 time=0.504 ms

Now updating the client
(for trunk test on the last version, currently the more interfaces, the less the speed :s)

On Fri, Sep 1, 2017 at 2:42 PM, sven falempin wrote:
> Unexpected behavior:
>
> GENERIC.MP#63 6.2 AMD64
>
> (Device one) em0 <---> em5 ( Device Two <--> bridge <--> ) em0 <---> DHCP SERVER
>
> two#: dhclient em0
> two#: ifconfig bridge0 create
> two#: ifconfig bridge0 add em5
> two#: ifconfig bridge0 add em0
> two#: ifconfig bridge0 up
>
> one#: dhclient em0
> FAILED (packet does not come back through the bridge)
>
> two#: ifconfig em0 down
> two#: ifconfig vether0 create
> two#: dhclient vether0
> FAILED
>
> two#: ifconfig em0 up
> two#: dhclient vether0
> FAILED
>
> two#: ifconfig em0 down
> two#: ifconfig em0 down
> two#: dhclient vether0
> success
>
> BUT IP on em0 and vether1
>
> one# dhclient em0
> SUCCESS

--
- Knowing is not enough; we must apply. Willing is not enough; we must do
fix blinkenlichten on TURBOchannel alpha
Blinkenlichten used to be disabled by default, and became enabled by default some releases ago.  However, the tc alpha blinkenlichten code was expecting to be triggered by a sysctl machdep.led_blink change, and would not start by default.

The following diff fixes this, and restores the balance of serenity and peace of mind, to some extent.

Index: sys/arch/alpha/tc/ioasic.c
===================================================================
RCS file: /OpenBSD/src/sys/arch/alpha/tc/ioasic.c,v
retrieving revision 1.17
diff -u -p -r1.17 ioasic.c
--- sys/arch/alpha/tc/ioasic.c	20 Sep 2010 06:33:46 -0000	1.17
+++ sys/arch/alpha/tc/ioasic.c	12 Sep 2017 20:37:09 -0000
@@ -91,6 +91,7 @@ struct cfdriver ioasic_cd = {
 
 int	ioasic_intr(void *);
 int	ioasic_intrnull(void *);
+void	ioasic_led_blink(void *);
 
 #define	C(x)	((void *)(u_long)(x))
 #define	KV(x)	(ALPHA_PHYS_TO_K0SEG(x))
@@ -207,6 +208,8 @@ ioasicattach(parent, self, aux)
 	 * Try to configure each device.
 	 */
 	ioasic_attach_devs(sc, ioasic_devs, ioasic_ndevs);
+
+	ioasic_led_blink(NULL);
 }
 
 void
@@ -348,7 +351,7 @@ static const uint8_t led_pattern8[] = {
 void
 ioasic_led_blink(void *unused)
 {
-	extern int alpha_led_blink;
+	extern int alpha_led_blink;	/* machdep.c */
 	vaddr_t rw_csr;
 	u_int32_t pattern;
 	int display_loadavg;
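The fix is to kick off the blink callback once at attach time instead of waiting for a sysctl write. The callback itself typically steps through a pattern table on each timeout tick, honoring the machdep.led_blink knob. A userland sketch of that loop body (the pattern table, variable names, and return convention here are made up for illustration, not the real driver's data):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the driver's state. */
static const unsigned char led_pattern[] = { 0x01, 0x02, 0x04, 0x08 };
static int alpha_led_blink = 1;	/* mirrors machdep.led_blink */
static size_t led_index;

/*
 * One tick of a blink loop: advance through the pattern table and
 * return the value to write to the LED CSR, or 0 (LEDs dark) when
 * blinking is disabled via the sysctl.
 */
static unsigned char
led_blink_step(void)
{
	if (!alpha_led_blink)
		return 0;
	led_index = (led_index + 1) % sizeof(led_pattern);
	return led_pattern[led_index];
}
```

Calling this once from attach (as the diff does via `ioasic_led_blink(NULL)`) is what primes the timeout chain so the lights run by default.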
Re: Open /dev/mem file failed when running as a root priviledge
On Tue, 12 Sep 2017 04:04:23 +0200, Ingo Schwarze wrote:
> Any OKs for the patch below?

Looks good to me.  OK millert@

 - todd
Re: sysctl_int(), sysctl_struct() & MP work
On 12.9.2017. 15:53, Martin Pieuchot wrote:
> Diff below reduces the scope of the NET_LOCK(), this time in sysctl
> path.  It is interesting for multiple reasons:
>
> - It reduces the contention on the NET_LOCK(), which should improve
>   the overall latency on the system when counters are frequently
>   queried.  Accesses to read-only operations and per-CPU counters
>   are no longer protected by the NET_LOCK().
>
> - It allows per-CPU counters to be accessed in parallel.  counters_read(9)
>   is now executed without holding the NET_LOCK().
>
> - sysctl_mq() is now MP-safe, by serializing access using the mq's
>   mutex.  However a dance is required to not hold a mutex around
>   copyin(9)/copyout(9).
>
> - The NET_LOCK() is now taken around all sysctl_int(), sysctl_struct()
>   and sysctl_int_arr().  This is not nice but it will allow people to
>   fix parts of the sysctl path independently.
>
> Note that all data structures currently protected in these sysctl paths
> do not necessarily need the NET_LOCK().  Take CARP's carp_opts[] for
> example.  Does it need the NET_LOCK()?  Is it at all the right primitive?
> Well, interested readers can answer with diffs and explanations to these
> questions

Hi all,

I'm running this and the "kqueue & rwlock" diff on a primary firewall with: carp, pf, pflow, pfsync, snmpd, dhcpd, dhcp sync, ipsec, remote pflog and syslog logging, and it seems that everything is working nicely ...
Re: vscsi.4, wsdisplay.4: add missing Dv tags to ioctl constants
Hi Scott,

Scott Cheloha wrote on Mon, Sep 11, 2017 at 11:57:13PM -0500:

> Constants without the Dv styling stick out quite a bit in a browser.
> This patch adds them to the ioctl constants in vscsi.4 and the two
> lone constants without them in wsdisplay.4.
>
> I also noticed that wsdisplay.4 is perhaps the only device driver
> page not using Fa for its ioctl arguments,

Not the only one, but one among very few.  I just fixed all the outliers.

> so it might make sense to
> sneak in the missing Dvs as part of a patch that makes the switch
> from Pq+Li to Fa (attached below with the vscsi.4 changes).

All committed.

Thanks,
  Ingo
sysctl_int(), sysctl_struct() & MP work
Diff below reduces the scope of the NET_LOCK(), this time in sysctl path.  It is interesting for multiple reasons:

- It reduces the contention on the NET_LOCK(), which should improve
  the overall latency on the system when counters are frequently
  queried.  Accesses to read-only operations and per-CPU counters
  are no longer protected by the NET_LOCK().

- It allows per-CPU counters to be accessed in parallel.  counters_read(9)
  is now executed without holding the NET_LOCK().

- sysctl_mq() is now MP-safe, by serializing access using the mq's
  mutex.  However a dance is required to not hold a mutex around
  copyin(9)/copyout(9).

- The NET_LOCK() is now taken around all sysctl_int(), sysctl_struct()
  and sysctl_int_arr().  This is not nice but it will allow people to
  fix parts of the sysctl path independently.

Note that all data structures currently protected in these sysctl paths do not necessarily need the NET_LOCK().  Take CARP's carp_opts[] for example.  Does it need the NET_LOCK()?  Is it at all the right primitive?  Well, interested readers can answer with diffs and explanations to these questions :)

Comments, oks?
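The "dance" around sysctl_mq() can be illustrated with a small userland sketch: snapshot the tunable without the lock, let the possibly-sleeping update (copyin/copyout in the kernel) work on the local copy, and take the mutex only for the final store. A pthread mutex stands in for the kernel mutex here, and `mbuf_queue` and the sysctl machinery are stubbed for illustration:

```c
#include <pthread.h>

/* Stub of the kernel's mbuf_queue, reduced to what the sketch needs. */
struct mbuf_queue {
	pthread_mutex_t	mq_mtx;
	unsigned int	mq_maxlen;
};

/*
 * Mirror of the sysctl_mq() IFQCTL_MAXLEN path from the diff: the
 * sleeping step operates on a stack copy, so the mutex is never
 * held across it; the mutex only serializes the final store.
 */
static int
update_maxlen(struct mbuf_queue *mq, unsigned int newlen)
{
	unsigned int maxlen = mq->mq_maxlen;	/* unlocked snapshot */

	/* ...in the kernel, sysctl_int() runs here and may sleep... */
	maxlen = newlen;

	if (maxlen != mq->mq_maxlen) {
		pthread_mutex_lock(&mq->mq_mtx);
		mq->mq_maxlen = maxlen;
		pthread_mutex_unlock(&mq->mq_mtx);
	}
	return 0;
}
```

The price of the dance is a benign race: two concurrent writers can interleave, but each store is still atomic with respect to readers that take the mutex.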
Index: kern/uipc_domain.c
===================================================================
RCS file: /cvs/src/sys/kern/uipc_domain.c,v
retrieving revision 1.52
diff -u -p -r1.52 uipc_domain.c
--- kern/uipc_domain.c	11 Aug 2017 21:24:19 -0000	1.52
+++ kern/uipc_domain.c	12 Sep 2017 13:20:42 -0000
@@ -207,10 +207,8 @@ net_sysctl(int *name, u_int namelen, voi
 	protocol = name[1];
 	for (pr = dp->dom_protosw; pr < dp->dom_protoswNPROTOSW; pr++)
 		if (pr->pr_protocol == protocol && pr->pr_sysctl) {
-			NET_LOCK();
 			error = (*pr->pr_sysctl)(name + 2, namelen - 2,
 			    oldp, oldlenp, newp, newlen);
-			NET_UNLOCK();
 			return (error);
 		}
 	return (ENOPROTOOPT);
Index: net/if.c
===================================================================
RCS file: /cvs/src/sys/net/if.c,v
retrieving revision 1.512
diff -u -p -r1.512 if.c
--- net/if.c	22 Aug 2017 15:02:34 -0000	1.512
+++ net/if.c	12 Sep 2017 13:44:41 -0000
@@ -2666,10 +2666,14 @@ ifpromisc(struct ifnet *ifp, int pswitch
 	return ((*ifp->if_ioctl)(ifp, SIOCSIFFLAGS, (caddr_t)&ifr));
 }
 
+/* XXX move to kern/uipc_mbuf.c */
 int
 sysctl_mq(int *name, u_int namelen, void *oldp, size_t *oldlenp,
     void *newp, size_t newlen, struct mbuf_queue *mq)
 {
+	unsigned int maxlen;
+	int error;
+
 	/* All sysctl names at this level are terminal. */
 	if (namelen != 1)
 		return (ENOTDIR);
 
@@ -2678,8 +2682,14 @@ sysctl_mq(int *name, u_int namelen, void
 	case IFQCTL_LEN:
 		return (sysctl_rdint(oldp, oldlenp, newp, mq_len(mq)));
 	case IFQCTL_MAXLEN:
-		return (sysctl_int(oldp, oldlenp, newp, newlen,
-		    &mq->mq_maxlen)); /* XXX directly accessing maxlen */
+		maxlen = mq->mq_maxlen;
+		error = sysctl_int(oldp, oldlenp, newp, newlen, &maxlen);
+		if (!error && maxlen != mq->mq_maxlen) {
+			mtx_enter(&mq->mq_mtx);
+			mq->mq_maxlen = maxlen;
+			mtx_leave(&mq->mq_mtx);
+		}
+		return (error);
 	case IFQCTL_DROPS:
 		return (sysctl_rdint(oldp, oldlenp, newp, mq_drops(mq)));
 	default:
Index: net/if_etherip.c
===================================================================
RCS file: /cvs/src/sys/net/if_etherip.c,v
retrieving revision 1.19
diff -u -p -r1.19 if_etherip.c
--- net/if_etherip.c	6 Jun 2017 11:51:13 -0000	1.19
+++ net/if_etherip.c	12 Sep 2017 12:38:46 -0000
@@ -662,18 +662,26 @@ int
 ip_etherip_sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp,
     void *newp, size_t newlen)
 {
+	int error;
+
 	/* All sysctl names at this level are terminal. */
 	if (namelen != 1)
 		return ENOTDIR;
 
 	switch (name[0]) {
 	case ETHERIPCTL_ALLOW:
-		return sysctl_int(oldp, oldlenp, newp, newlen, &etherip_allow);
+		NET_LOCK();
+		error = sysctl_int(oldp, oldlenp, newp, newlen, &etherip_allow);
+		NET_UNLOCK();
+		return (error);
	case ETHERIPCTL_STATS:
 		if (newp != NULL)
 			return EPERM;
-		return sysctl_struct(oldp, oldlenp, newp, newlen,
+		NET_LOCK();
+		error = sysctl_struct(oldp, oldlenp, newp, newlen,
 		    &etheripstat, sizeof(etheripstat));
+		NET_UNLOCK();
+		return (error);
 	default:
 		break;
 	}
Index: net/pfkeyv2.c
===================================================================
RCS file: /cvs/src/sys/net/pfkeyv2.c,v
retrieving revision 1.166
diff -u -p -r1.166 pfkeyv2.c
--- net/pfkeyv2.c
Re: syslogd close *:514 sockets
On 2017/09/11 21:27, Alexander Bluhm wrote:
> Hi,
>
> In the default configuration syslogd keeps two *:514 UDP sockets
> open.
>
> udp          0      0  *.514                 *.*
> udp6         0      0  *.514                 *.*
>
> Several people have asked me why they are in netstat output and
> whether it is a security risk.  These sockets are used for sending
> UDP packets if there is a UDP loghost in syslog.conf.  If syslogd
> is started with -u, they can receive packets, otherwise they are
> disabled with shutdown(SHUT_RD).
>
> In case we do neither send nor receive, we can close them after
> reading the config file.  This gives us a cleaner netstat output.
>
> ok?

ok with me.  I have tested that adding a new UDP loghost and reloading syslogd still works.
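The rule the patch implements boils down to a tiny predicate: keep a generic UDP fd only if it is used for receiving (syslogd started with -u) or for sending (a UDP loghost appeared while parsing syslog.conf). A sketch of just that decision, with hypothetical parameter names standing in for syslogd's flags:

```c
#include <assert.h>

/*
 * Sketch of the patch's keep-or-close decision. receiving mirrors
 * "-u was given"; send_udp mirrors the flag cfline() now sets when
 * a udp loghost is configured. Returns 1 to keep the fd bound
 * (visible as *.514 in netstat), 0 to close it.
 */
static int
keep_udp_fd(int receiving, int send_udp)
{
	return receiving || send_udp;
}
```

Everything else in the diff is bookkeeping so that cfline() can record the "sending" half of this predicate before the decision is made.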
Please test: kqueue & rwlock
My previous attempt to grab the NET_LOCK(), thus potentially sleeping, inside kqueue_scan() resulted in NULL dereferences:

https://marc.info/?l=openbsd-bugs=149935139022501=2

The problem is that the loop isn't ready to be entered by multiple threads at the same time.  By "at the same time", I mean that when a thread sleeps, another one can enter the loop.

The diff below addresses that by correcting kqueue's refcount and by using a per-thread marker.  I took this idea from DragonFly because I believe that we can extend it to make kevent(2) MPSAFE later.

Diff below also includes socket filter modifications grabbing the NET_LOCK() as well.

juanfra@ told me he couldn't reproduce the previous crash with this diff, however I'm looking for more testers.

Test reports?  Comments?

Index: kern/kern_event.c
===================================================================
RCS file: /cvs/src/sys/kern/kern_event.c,v
retrieving revision 1.79
diff -u -p -r1.79 kern_event.c
--- kern/kern_event.c	31 May 2017 14:52:05 -0000	1.79
+++ kern/kern_event.c	5 Sep 2017 14:24:50 -0000
@@ -757,25 +757,40 @@ start:
 		goto done;
 	}
 
-	TAILQ_INSERT_TAIL(&kq->kq_head, &marker, kn_tqe);
+	marker.kn_filter = EVFILT_MARKER;
+	TAILQ_INSERT_HEAD(&kq->kq_head, &marker, kn_tqe);
 	while (count) {
-		kn = TAILQ_FIRST(&kq->kq_head);
-		TAILQ_REMOVE(&kq->kq_head, kn, kn_tqe);
-		if (kn == &marker) {
+		kn = TAILQ_NEXT(&marker, kn_tqe);
+		if (kn == NULL) {
+			TAILQ_REMOVE(&kq->kq_head, &marker, kn_tqe);
 			splx(s);
 			if (count == maxevents)
 				goto retry;
 			goto done;
 		}
+		if (kn->kn_filter == EVFILT_MARKER) {
+			kn = TAILQ_NEXT(kn, kn_tqe);
+			if (kn == NULL) {
+				TAILQ_REMOVE(&kq->kq_head, &marker, kn_tqe);
+				splx(s);
+				if (count == maxevents)
+					goto retry;
+				goto done;
+			}
+			/* Move marker past some other threads marker */
+			TAILQ_REMOVE(&kq->kq_head, &marker, kn_tqe);
+			TAILQ_INSERT_BEFORE(kn, &marker, kn_tqe);
+			continue;
+		}
+		TAILQ_REMOVE(&kq->kq_head, kn, kn_tqe);
+		kq->kq_count--;
 		if (kn->kn_status & KN_DISABLED) {
 			kn->kn_status &= ~KN_QUEUED;
-			kq->kq_count--;
 			continue;
 		}
 		if ((kn->kn_flags & EV_ONESHOT) == 0 &&
 		    kn->kn_fop->f_event(kn, 0) == 0) {
 			kn->kn_status &= ~(KN_QUEUED | KN_ACTIVE);
-			kq->kq_count--;
 			continue;
 		}
 		*kevp = kn->kn_kevent;
@@ -783,7 +798,6 @@ start:
 		nkev++;
 		if (kn->kn_flags & EV_ONESHOT) {
 			kn->kn_status &= ~KN_QUEUED;
-			kq->kq_count--;
 			splx(s);
 			kn->kn_fop->f_detach(kn);
 			knote_drop(kn, p, p->p_fd);
@@ -796,8 +810,8 @@ start:
 			if (kn->kn_flags & EV_DISPATCH)
 				kn->kn_status |= KN_DISABLED;
 			kn->kn_status &= ~(KN_QUEUED | KN_ACTIVE);
-			kq->kq_count--;
 		} else {
+			kq->kq_count++;
 			TAILQ_INSERT_TAIL(&kq->kq_head, kn, kn_tqe);
 		}
 		count--;
Index: kern/uipc_socket.c
===================================================================
RCS file: /cvs/src/sys/kern/uipc_socket.c,v
retrieving revision 1.204
diff -u -p -r1.204 uipc_socket.c
--- kern/uipc_socket.c	11 Sep 2017 11:15:52 -0000	1.204
+++ kern/uipc_socket.c	12 Sep 2017 07:46:54 -0000
@@ -1960,8 +1960,10 @@ int
 filt_sowrite(struct knote *kn, long hint)
 {
 	struct socket *so = kn->kn_fp->f_data;
-	int rv;
+	int s, rv;
 
+	if (!(hint & NOTE_SUBMIT))
+		s = solock(so);
 	kn->kn_data = sbspace(so, &so->so_snd);
 	if (so->so_state & SS_CANTSENDMORE) {
 		kn->kn_flags |= EV_EOF;
@@ -1977,6 +1979,8 @@ filt_sowrite(struct knote *kn, long hint
 	} else {
 		rv = (kn->kn_data >= so->so_snd.sb_lowat);
 	}
+	if (!(hint & NOTE_SUBMIT))
+		sounlock(s);
 	return (rv);
 }
 
@@ -1985,8 +1989,13 @@ int
 filt_solisten(struct knote *kn, long hint)
 {
 	struct socket *so = kn->kn_fp->f_data;
+	int s;
 
+	if (!(hint & NOTE_SUBMIT))
+		s = solock(so);
 	kn->kn_data = so->so_qlen;
+	if (!(hint & NOTE_SUBMIT))
+		sounlock(s);
Re: syslogd close *:514 sockets
On Mon, Sep 11 2017, Alexander Bluhm wrote:
> Hi,
>
> In the default configuration syslogd keeps two *:514 UDP sockets
> open.
>
> udp          0      0  *.514                 *.*
> udp6         0      0  *.514                 *.*
>
> Several people have asked me why they are in netstat output and
> whether it is a security risk.  These sockets are used for sending
> UDP packets if there is a UDP loghost in syslog.conf.  If syslogd
> is started with -u, they can receive packets, otherwise they are
> disabled with shutdown(SHUT_RD).
>
> In case we do neither send nor receive, we can close them after
> reading the config file.  This gives us a cleaner netstat output.
>
> ok?

ok jca@

> bluhm
>
> Index: usr.sbin/syslogd/syslogd.c
> ===================================================================
> RCS file: /data/mirror/openbsd/cvs/src/usr.sbin/syslogd/syslogd.c,v
> retrieving revision 1.245
> diff -u -p -r1.245 syslogd.c
> --- usr.sbin/syslogd/syslogd.c	8 Aug 2017 14:23:23 -0000	1.245
> +++ usr.sbin/syslogd/syslogd.c	11 Sep 2017 21:25:39 -0000
> @@ -274,7 +274,7 @@ size_t	ctl_reply_offset = 0;	/* Number o
>  char	*linebuf;
>  int	 linesize;
>  
> -int		 fd_ctlconn, fd_udp, fd_udp6;
> +int		 fd_ctlconn, fd_udp, fd_udp6, send_udp, send_udp6;
>  struct event	*ev_ctlaccept, *ev_ctlread, *ev_ctlwrite;
>  
>  struct peer {
> @@ -825,6 +825,20 @@ main(int argc, char *argv[])
>  			event_add(ev_udp, NULL);
>  		if (fd_udp6 != -1)
>  			event_add(ev_udp6, NULL);
> +	} else {
> +		/*
> +		 * If generic UDP file descriptors are used neither
> +		 * for receiving nor for sending, close them.  Then
> +		 * there is no useless *.514 in netstat.
> +		 */
> +		if (fd_udp != -1 && !send_udp) {
> +			close(fd_udp);
> +			fd_udp = -1;
> +		}
> +		if (fd_udp6 != -1 && !send_udp6) {
> +			close(fd_udp6);
> +			fd_udp6 = -1;
> +		}
>  	}
>  	for (i = 0; i < nbind; i++)
>  		if (fd_bind[i] != -1)
> @@ -2659,9 +2673,11 @@ cfline(char *line, char *progblock, char
>  		if (strncmp(proto, "udp", 3) == 0) {
>  			switch (f->f_un.f_forw.f_addr.ss_family) {
>  			case AF_INET:
> +				send_udp = 1;
>  				f->f_file = fd_udp;
>  				break;
>  			case AF_INET6:
> +				send_udp6 = 1;
>  				f->f_file = fd_udp6;
>  				break;
>  			}

-- 
jca | PGP : 0x1524E7EE / 5135 92C1 AD36 5293 2BDF DDCC 0DFA 74AE 1524 E7EE