multi-package ports make
Can somebody tell me what I'm doing wrong here? When I run 'make' against this makefile it blows up with:

Fatal: WRKDIR ends with a slash: /usr/ports/pobj/ (in hush/hush-proxyctl)
Fatal: WRKDIST ends with a slash: /usr/ports/pobj/ (in hush/hush-proxyctl)
Fatal: WRKSRC ends with a slash: /usr/ports/pobj/ (in hush/hush-proxyctl)
Fatal: WRKCONF ends with a slash: /usr/ports/pobj/ (in hush/hush-proxyctl)
Fatal: WRKBUILD ends with a slash: /usr/ports/pobj/ (in hush/hush-proxyctl)
*** Error 1 in /usr/ports/hush/hush-proxyctl (/usr/ports/infrastructure/mk/bsd.port.mk:3885 '.BEGIN': @exit 1)

This is my first foray into MULTI_PACKAGE. This make template works for other non-MULTI_PACKAGE builds.

--lyndon

COMMENT-main=	DMZ proxy management and control
COMMENT-server=	DMZ proxy management daemon

MAINTAINER=	XXX

V=		1.0
PKGNAME-main=	hush-proxyctl-${V}
PKGNAME-server=	hush-proxyctld-${V}
REVISION-main=	0
REVISION-server=0

CATEGORIES=	hush
MULTI_PACKAGES=	-main -server
PERMIT_PACKAGE=	Yes
NO_TEST=	Yes

BUILD_DEPENDS=	lang/go

pre-configure:
	mkdir -p ${WRKSRC}; cd ${.CURDIR}/files && cp -R . ${WRKSRC}

do-build:
	cd ${WRKSRC}/proxyctl && go build proxyctl
	cd ${WRKSRC}/proxyctld && go build proxyctld

.include <bsd.port.mk>
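For the archives, in case someone hits this later: the errors suggest WRKDIR is being derived from an unset DISTNAME, leaving only the trailing slash. A sketch of a possible fix for the Makefile above (my guess, untested):

```make
# hypothetical: give bsd.port.mk a DISTNAME so WRKDIR/WRKSRC don't
# collapse to the bare pobj directory, and declare we have no distfile
DISTNAME =	hush-proxyctl-${V}
DISTFILES =
```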
Re: securelevel=2 and mount hardening
Stuart Henderson writes: > I think you'd need to disable mount completely, otherwise you can mount > a new writable filesystem (e.g. MFS) that doesn't have noexec. Yeah, I completely missed that vector. And really, that makes more sense. How often do you live mount filesystems on a firewall? Anyway, I'm going to go ahead and code this up so I can try it on a running production firewall. I'll add in a sysctl to control whether securelevel=2 mounts are allowed at all. --lyndon
Re: securelevel=2 and mount hardening
Omar Polo writes: > or they can just upload to /usr/local or /home, or mess with /etc, or... > I don't see how this would help. It's another layer to make things more difficult. If the writable filesystems are noexec and they can't take that away, uploads become less valuable. /etc is always going to be problematic. I've been experimenting to see if I can create a viable firewall config with a read-only root filesystem. --lyndon
securelevel=2 and mount hardening
I am curious to hear people's thoughts on adding some mount(2) hardening when the system is running at securelevel 2. Specifically:

* do not allow removing MNT_NODEV, MNT_NOEXEC, MNT_NOSUID, or MNT_RDONLY in conjunction with MNT_UPDATE
* do not allow MNT_WXALLOWED in conjunction with MNT_UPDATE

Currently, if someone does manage to get a root toehold on a host, they can remove noexec from /tmp as a possible springboard to upload nasties, and then change /usr from read-only to read-write and scribble all over your binaries. This somewhat follows from how securelevel 1 removes the ability to muck with the immutable and append only bits on files.

--lyndon
Re: pf nat64 rule not matching
Try changing ($wan:0) to ($wan) and see what happens.
Re: Automatic OS updates
Kevin Williams writes:
> The main use case I see for this is to manage a fleet of more than 10 or
> so machines/VMs/instances. rdist or a package such as Ansible could
> manage the crontab and possibly search announce@ on marc.info for
> keywords to hold off on the upgrade.

Blind updating out of cron is utter madness. If there are any merge errors in /etc (think sshd_config for starters), you can end up with a machine you cannot log in to, or one that's just acting out destructively.

At work I manage a herd of a dozen OpenBSD machines. We "upgrade" by performing a full network install. The process is pxe boot / fdisk / install / reboot / ansible (create the logins) / reboot / rdist / reboot / verify everything is running correctly (esp. pf) / reboot. The entire process takes 20 minutes per machine, so I can update the entire herd in < 1 day, although we typically spread it over two or three days.

All these machines run as carped a/b pairs, so we upgrade the b hosts first and run on them for a day or two to check for regressions, then upgrade the a machines and switch back to them.

The primary reason for installing from scratch is to verify we have not introduced any bugs into the network installation and configuration steps, as this is a core part of our disaster recovery process. It also ensures we launch out of the box with up-to-date packages.

And if the number of machines gets entirely out of hand, it should be simple enough to semi-automate a good part of the process using expect and some glue.

--lyndon
Re: unbound resolving 10.in-addr.arpa
Todd C. Miller writes: > local-zone: "1.1.10.in-addr.arpa." transparent That (well, a variant) was the answer. I was having a real problem wrapping my head around what 'transparent' did, so I was applying it incorrectly. Thanks for prodding me to revisit it! --lyndon
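For the archives, the shape of the config that ended up working looks roughly like this (the server addresses here are made up):

```
server:
	# stop unbound from intercepting the RFC 1918 reverse zone
	local-zone: "10.in-addr.arpa." transparent
	# assumption: the internal zone isn't DNSSEC-signed
	domain-insecure: "10.in-addr.arpa."

stub-zone:
	name: "10.in-addr.arpa."
	stub-addr: 10.0.0.53	# hypothetical internal auth servers
	stub-addr: 10.0.1.53
```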
unbound resolving 10.in-addr.arpa
I am at wit's end. I am trying to get unbound to serve up reverse DNS for our internal 1918 address space. I have been going hammer and tongs at unbound.conf to try to make it forward requests for '*.10.in-addr.arpa.' to our two internal nameservers that are authoritative for the 10.in-addr.arpa zone. Someone, *please*, show me the light. And no, static zone files are not an option at this point. I need unbound to forward the requests as described. I really don't want to have to install named just to get this functionality. Thanks! --lyndon
Re: squid replacement
Sean Kamath writes:
> Just which hosts and ports? No caching?

Sorry, I should have given a better description ...

We proxy http, https, and rsync. squid functions as a simple L7 relay for those protocols. The purpose of the proxy is to restrict 1) which internal hosts can establish outbound connections in the first place, and 2) which hosts they can connect to. E.g., our admin hosts that handle billing can only connect to our payment processor's services. The server that front-ends the internal help desk can only connect to hubscout. Etc. Pretty simple, we just don't want to make it easy for people to exfiltrate data if they do manage to get a foothold inside.

There's also the issue of most of our internal infrastructure servers running in 1918 address space. We don't NAT at the border, so the proxy is their only way out (again, by design).

> Kinda sounds like a pf.conf solution. . . Maybe with relay to relay
> everything through a firewall?

That's how we used to do it. The problem is upstream services change their IP addresses on a surprisingly frequent basis, and they don't always let people know this is happening. By using the proxy, I no longer have to hardwire and keep track of IP addresses. The squid ACLs serve as the L7 "firewall", and we have a single rule on the border firewall that allows the proxy host unfettered access to ports 80, 443, and 873.

--lyndon
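To make that concrete, the ACL scheme boils down to something like this in squid.conf (all names and addresses invented for illustration):

```
# which internal hosts may use the proxy at all
acl billing_hosts src 10.1.2.0/28
acl helpdesk_host src 10.1.3.10/32

# which upstream destinations each group may reach
acl payment_api dstdomain .payments.example.com
acl helpdesk_api dstdomain .hubscout.example.com

http_access allow billing_hosts payment_api
http_access allow helpdesk_host helpdesk_api
http_access deny all
```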
squid replacement
We've been running squid on OpenBSD for years, but it seems these days that any time it tries to proxy a file > 1MB, it just dies. This makes it impossible to do things like mirror the OpenBSD distributions. Does anyone know of another HTTP proxy that supports squid-style ACLs? That's a big part of why we chose it in the first place. We restrict which hosts can connect to the proxy, and further restrict which hosts they can connect to upstream. We don't need (or want) caching -- just connection pass through. I've been looking for a while but haven't found anything with equivalent ACL support. Anybody out there have suggestions for a likely candidate? Thanks, --lyndon
Re: No /etc/rpki/arin.tal?
Peter Hessler writes:
> On 2023 Sep 13 (Wed) at 14:45:37 -0700 (-0700), Lyndon Nerenberg (VE7TFX/VE6BBM) wrote:
> :This might be worth a note in the rpki-client manpage
>
> Please re-read my entire email.

Doh! Sorry, I didn't look at that part of the page as I already knew where the files were supposed to be.
Re: No /etc/rpki/arin.tal?
Peter Hessler writes: > Because ARIN insists on a completely ridiculous agreement for a public > key to verify their data. That's odd. I didn't have to agree to anything to download the file. This might be worth a note in the rpki-client manpage, as it certainly violates POLA. --lyndon
No /etc/rpki/arin.tal?
After some head bashing wondering why rpki-client wasn't finding our ROAs, I discovered the system doesn't ship with ARIN's TAL file. So great swaths of RPKI data aren't getting downloaded. Why is that? --lyndon
Re: Stacked MTUs
> dmesg | grep em
em0 at pci8 dev 0 function 0 "Intel I210" rev 0x03: msi, address 00:25:90:b8:82:b8
em1 at pci9 dev 0 function 0 "Intel I210" rev 0x03: msi, address 00:25:90:b8:82:b9
em2 at pci12 dev 0 function 0 "Intel I350" rev 0x01: msi, address 00:25:90:b8:82:ba
em3 at pci12 dev 0 function 1 "Intel I350" rev 0x01: msi, address 00:25:90:b8:82:bb
em4 at pci12 dev 0 function 2 "Intel I350" rev 0x01: msi, address 00:25:90:b8:82:bc
em5 at pci12 dev 0 function 3 "Intel I350" rev 0x01: msi, address 00:25:90:b8:82:bd

> ifconfig em1 hwfeatures
em1: flags=8b43 mtu 1500
	hwfeatures=10 hardmtu 9216
	lladdr fe:e1:ba:d0:1d:61
	index 4 priority 0 llprio 3
	trunk: trunkdev aggr0
	media: Ethernet autoselect (1000baseT full-duplex,master)
	status: active

> ifconfig em5 hwfeatures
em5: flags=8b43 mtu 1500
	hwfeatures=10 hardmtu 9216
	lladdr fe:e1:ba:d0:1d:61
	index 8 priority 0 llprio 3
	trunk: trunkdev aggr0
	media: Ethernet autoselect (1000baseT full-duplex,master)
	status: active
Stacked MTUs
I'm setting up jumbograms on a couple of vlans stacked on an aggr and I need a sanity check that I'm doing this right. The switches use a hardware MTU of 9192. We want an IP MTU of 9000 for the vlans. I'm assuming this will work?

ifconfig em1 mtu 9192
ifconfig em5 mtu 9192
ifconfig aggr0 mtu 9192		# em1+em5 lacp
ifconfig vlanX mtu 9000		# stacked on aggr0
ifconfig vlanY mtu 1500		# ditto

--lyndon
Re: pf state-table-induced instability
Gabor LENCSE writes: > If you are interested, you can find the results in Tables 18 - 20 of > this (open access) paper: https://doi.org/10.1016/j.comcom.2023.08.009 Thanks for the pointer -- that's a very interesting paper. After giving it a quick read through, one thing immediately jumps out. The paper mentions (section A.4) a boost in performance after increasing the state table size limit. I haven't looked at the relevant code, so I'm guessing here, but this is a classic indicator of a hashing algorithm falling apart when the table gets close to full. Could it be that simple? I need to go digging into the pf code for a closer look. You also describe how the performance degrades over time. This exactly matches the behaviour we see. Could the fix be as simple as cranking 'set limit states' up to, say, two million? There is one way to find out ... :-) --lyndon
pf state-table-induced instability
For over a year now we have been seeing instability on our firewalls that seems to kick in when our state tables approach 200K entries. The number varies, but it's a safe bet that once we cross the 180K threshold, the machines start getting cranky. At 200K+ performance visibly degrades, often leading to a complete lockup of the network stack, or a spontaneous reboot.

The symptoms are varied, but the early onset indication is that interactive response at the shell prompt gets stuttery. As it progresses, network traffic stops flowing and the network stack eventually just locks up. We also see the occasional:

pmap_unwire: wiring for pmap 0xfd8e8a946528 va 0xc000d4d000 didn't change!

logged on the console.

The machines are not hurting for resources:

load averages: 1.06, 1.12, 1.12   xxx 17:53:08
48 processes: 47 idle, 1 on processor   up 6:06
CPU0: 0.0% user, 0.0% nice, 22.0% sys, 0.8% spin, 5.8% intr, 71.5% idle
CPU1: 0.0% user, 0.0% nice, 27.7% sys, 1.2% spin, 5.2% intr, 65.9% idle
CPU2: 0.0% user, 0.0% nice, 40.5% sys, 0.6% spin, 4.4% intr, 54.5% idle
CPU3: 0.0% user, 0.0% nice, 1.4% sys, 0.0% spin, 6.8% intr, 91.8% idle
Memory: Real: 110M/1722M act/tot Free: 60G Cache: 851M Swap: 0K/21G

Our pf settings are pretty simple:

set optimization normal
set ruleset-optimization basic
set limit states 40
set limit src-nodes 10
set loginterface none
set skip on lo
set reassemble yes
# Reduce the number of state table entries in FIN_WAIT_2 state.
set timeout tcp.finwait 4

(Note that the limit states 40 is a hold over from the 6.x days, where the default value was too small to handle our load.)

vmstat reports this for pf state table memory usage:

pfstate	320	584171770	202558	135845	117730	18115	25210	0	80
pfstkey	112	584171770	179214	35152	29744	5408	7208	0	80
pfstitem	24	584171220	179214	6952	5811	1141	1520	0	80

At this moment we're running with 210K state table entries.
There seem to be an awful lot (>40%) of those in FIN_WAIT_2:FIN_WAIT_2 state -- I'm still trying to puzzle that one out. But my immediate (and only -- please do NOT start a bikeshed on ruleset design!) question is: Is there a practical limit on the number of states pf can handle? Our experience says there is, and the number is around 180K. Prior to release 7.1 we didn't see anything like this at all. This started happening with the 7.1 release, and we noticed a real escalation in instability in 7.2. Enough so that we rolled the affected firewalls back to 7.1. That worked around the problem, until last night, when the firewall rebooted itself (at the time of least traffic load?!). Because of all this we have been avoiding upgrading any of the firewalls beyond 7.1 as we cannot afford the resulting downtime. Even carp didn't save us. We've had a couple of incidents where one firewall panics, carp fails over, and then the second firewall locks up. And this points out another issue. When the network stack freezes, the carp interfaces do not flip. I haven't figured that one out yet, either. Okay, so what's the point of all this blathering? I guess there are two things I'm wondering: 1) are there known limitations in the pf code that would explain this? 2) has anyone else seen this sort of behaviour on their firewalls? Thanks! --lyndon
ip6-only ipsec tunnel over ip4
I need to set up an ipsec tunnel between a couple of ip6 networks, but I only have an ip4 path between the two gateways. I don't want any ip4 traffic inside the ipsec tunnel, so I'm a bit puzzled about how to set this up. Once I have the end-points up, can I just point the ip6 traffic and routes at enc0? All the examples I can find assume you're tunneling ip4 traffic through an ip4 tunnel. (Sorry, but after three decades of trying, I still can't make heads nor tails of ipsec :-P) --lyndon
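To show what I've been attempting, something along these lines in ipsec.conf (addresses are placeholders, and this is exactly the part I'm unsure about):

```
# IPv6 networks tunneled between IPv4 gateway endpoints
ike esp tunnel from 2001:db8:a::/64 to 2001:db8:b::/64 \
	local 192.0.2.1 peer 198.51.100.1
```

My (possibly wrong) understanding is that the flows isakmpd installs would then steer the matching ip6 traffic into the tunnel, rather than me pointing routes at enc0 by hand.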
BGP Router Hardware Suggestions
We are about to discover the joys of upstream BGP routing :-P The current plan is to use a pair of OpenBSD+bgpd hosts as the routers. Each host will require 4x10gig ports (SFP+). One of those links (to AWS) will be close to saturated, along with the downlink to our switches. The other two will only need to carry ~1Gb/s of traffic. We are pretty much a Supermicro shop, and I'm wondering if anyone out there is running a similar setup on SM hardware. My main concern is finding NICs that will let us squeeze every last drop of bandwidth on the 10gig links. I did run some brief ttcp tests on a pair of SM 1Us (don't have the model number handy, maybe 5018-FTN4s?) with add-in Intel cards (550s?) and was able to get 700 MBytes/s of throughput. This would have been circa the 6.7 or 6.8 releases. I'm hoping to get >70% of the theoretical bandwidth out of the new hardware, and my gut says it's the NIC that's constraining us. So, I'd be interested in hearing from anyone running a similar setup, or who has benchmarked any of the current crop of 10gig NICs and has good/bad things to say about specific models. Thanks, --lyndon
Re: carp flapping
Nick, spare yourself the pain and just designate one machine as the master. This is how we run all our proxy server pairs (nginx, squid, other stuff). For a pair fooa/foob, 'a' is the master, and gets advskew 100. The 'b' host gets 150. Make sure preemption is enabled. When it's upgrade time, upgrade the 'b' machine and reboot. If it looks stable, set its advskew to 50 and wait for it to pick up traffic. Now upgrade and reboot the 'a' host. When it looks happy, set 'b's advskew back to 150. This keeps everything in a known state. You are going to break connections no matter what -- even when you let the master float -- so you might as well do it under your own control. We schedule our updates for off-peak hours, and accept that the flip is going to interrupt traffic. You just have to live with it. We moved to this scheme on all our proxies and firewalls seven years ago and have never looked back. --lyndon
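In command form, the flip we do looks roughly like this (the carp interface name will vary with your setup):

```
# on foob, after its upgrade looks stable: win the election
ifconfig carp0 advskew 50

# upgrade and reboot fooa, then hand traffic back to it
ifconfig carp0 advskew 150
```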
Re: OpenBSD support for xattr on file systems other than UFS ?
Marcus MERIGHI writes: > > vfs = catia fruit streams_xattr > > I run a Samba server that does not have these options set - but > successfully serves iOS/macOS clients. You need those extra attributes if you want to use your Samba share for TimeMachine backups. --lyndon
Logitech C922 Video Issues
I have a C922 wired up to a mid-2014 Mac Mini. The system sees the camera, /dev/video responds as expected, but when I run video(1) I just get a window with a solid green background. The camera works with MacOS, so I know the hardware is good, and when I run the command the white "on the air" LEDs light up on the camera. I have monkeyed around with different size settings, tried poking the encoding settings, etc., to no avail. At this point, I'm stumped. Anyone have any ideas?

: lyn...@hqmac24.bitsea.ca:/home/lyndon; dmesg|grep video
acpivideo0 at acpi0: IGPU
acpivideo0 at acpi0: IGPU
uvideo0 at uhub0 port 8 configuration 1 interface 0 "Logitech C922 Pro Stream Webcam" rev 2.00/0.16 addr 10
video0 at uvideo0
: r...@hqmac24.bitsea.ca:/home/lyndon; video -v
video device /dev/video:
  encodings: yuy2
  frame sizes (width x height, in pixels) and rates (in frames per second):
    160x90: 30, 24, 20, 15, 10, 7, 5
    160x120: 30, 24, 20, 15, 10, 7, 5
    176x144: 30, 24, 20, 15, 10, 7, 5
    320x180: 30, 24, 20, 15, 10, 7, 5
    320x240: 30, 24, 20, 15, 10, 7, 5
    352x288: 30, 24, 20, 15, 10, 7, 5
    432x240: 30, 24, 20, 15, 10, 7, 5
    640x360: 30, 24, 20, 15, 10, 7, 5
    640x480: 30, 24, 20, 15, 10, 7, 5
    800x448: 30, 24, 20, 15, 10, 7, 5
    800x600: 24, 20, 15, 10, 7, 5
    864x480: 24, 20, 15, 10, 7, 5
    960x720: 15, 10, 7, 5
    1024x576: 15, 10, 7, 5
    1280x720: 10, 7, 5
    1600x896: 7, 5
    1920x1080: 5
  controls: brightness, contrast, saturation, gain, sharpness, white_balance_temperature, backlight_compensation
Xv adaptor 0, GLAMOR Textured Video:
  encodings: yv12
  max size: 3840x2160
using yuy2 encoding
using frame size 640x480 (614400 bytes)
using default frame rate
run time: 13.795492 seconds
frames grabbed: 209
frames played: 208
played fps: 15.004902
: r...@hqmac24.bitsea.ca:/home/lyndon;

--lyndon
Re: spurious synproxy warning from pfctl
Stuart Henderson writes: > "synproxy state" cannot work on outbound (for more details see > https://marc.info/?l=openbsd-tech&m=160686649524095&w=2). > > Because pfctl is doing something other than what you asked it to do, > IMO the warning makes sense. > > Alternatively it could be classed as an error but that won't be very > fun for people upgrading. I get that it doesn't work for 'out'. The point I was trying to make, but didn't explain clearly, is that the implicit 'in' matches the documented behaviour in the man page, and therefore shouldn't lead to a warning message. After reading the manpage, I think anyone would understand that that is the case. In the case of 'pass out' where the rule clearly won't apply to any inbound traffic, the warning is completely justified. --lyndon
spurious synproxy warning from pfctl
Given the rule

pass proto tcp from any to mail.example.com \
	port { 25 80 110 143 443 587 993 } synproxy state

pfctl barks:

/etc/pf.conf:586: warning: synproxy used for inbound rules only, ignored for outbound

It's pretty obvious from reading pf.conf(5) that the above is the default behaviour, and it seems perfectly reasonable to apply 'synproxy state' to a pass rule that implies 'in'. So I don't see the reason for pfctl to nag at me like that. It would be nice if simple pass rules like the above did not provoke that warning message.

--lyndon
Re: smtpd.conf: '... reject "message"' fails
Florian Obser writes:
> You need this one:
>
>     filter filter-name phase phase-name match conditions decision
>         Register a filter filter-name. A decision about what to do with
>         the mail is taken at phase phase-name when matching conditions.
>         Phases, matching conditions, and decisions are described in MAIL
>         FILTERING, below.
>
> i.e.
>
> filter dtag phase mail-from match rdns regex "\.t-online\.de$" reject "550 5.7.1 you don't accept our mail, so we don't accept yours."
> listen on egress filter dtag

Thanks Florian, that clears up a lot for me. So my reading of the grammar was right -- the form of reject I was using doesn't accept a following string. I got completely lost when I tried to parse the other reject forms in the grammar, and missed the other syntax you described above.

Now that I have that clear, it seems pretty straightforward to modify the grammar to allow a string in the other reject cases. I think I'll give that a go over the weekend.

Thanks for the help!

--lyndon
smtpd.conf: '... reject "message"' fails
My reading of smtpd.conf says that any reject action should be able to take a message parameter. Yet the following line is rejected with a syntax error message:

match mail-from rdns regex "\.t-online\.de$" reject "550 5.7.1 you don't accept our mail, so we don't accept yours."

The same line without the string after the reject keyword works. I spent some time digging in the grammar, but yacc just gives me migraines. Should this in fact work? Or is the manpage wrong?

--lyndon
Re: A minimal browser in base
Chris Bennett writes: > I would instead recommend a new package with the critical newbie > information included in text form. > FAQ, anoncvs and ftp addresses, etc. Long ago and far away, the Berkeley distributions used to ship an assortment of system documentation in /usr/share/doc, including a general-purpose system administrators manual. I guess people didn't want to update those, or maybe thought they were sacred relics, never to be touched. But all the *BSDs dropped them, years ago. I thought that was the wrong move; they should have been kept, along with a /usr/share/doc/README that noted they are historical, and therefore probably out of date. Although I'm sure the vi documentation stands up to this day. Regardless, if someone does write a new "intro to sysadmin" document, it should live in /usr/share/doc, and not an external package that the new sysadmin might need to read to know how to install the package that contains the documentation she needs to know how to install the documentation she needs to know how to ... [SIGSEGV -- stack overflow] --lyndon
Supermicro SYS-510T-MR PXE issues
We have one of the above (X12STH-SYS motherboard) that's refusing to PXE boot. It's connecting to DHCP and downloading the pxeboot file (according to tftpd), and the bios appears to be printing a message saying the boot image was successfully loaded, but it only stays on the screen for about 200ms before getting erased, so it's hard to be sure. Anyway, immediately after printing that message the system immediately moves on to the next NIC and tries to PXE boot from it. I'm curious if anyone has run into the same issue. We're trying to figure out "why," and at this stage all we can think of is some setting in the BIOS is upsetting the machine to the point where it won't run the image. But I'm stumped. I've never seen anything like this before. Anyone have any ideas? --lyndon
whither struct __kvm?
The first declaration in <kvm.h> is:

typedef struct __kvm kvm_t;

and yet 'grep -r __kvm /usr/include /sys' returns only the above line. What am I missing? --lyndon
Re: port builds with inline source
Marc Espie writes: > have DISTFILES be empty, put your sources under FILESDIR > and a bit of glue to ln/mv them into WRKDIR since you got to have a WRKDIR > for ports. That was hinted at by a few people, and it's working like a champ! --lyndon
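For anyone searching the archives later, a sketch of that glue (untested as written; the paths are my choices, adjust to taste):

```make
# no distfile -- the program source lives under ${FILESDIR}/src
DISTFILES =

post-extract:
	mkdir -p ${WRKSRC} && cp -R ${FILESDIR}/src/. ${WRKSRC}
```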
port builds with inline source
We have a number of in-house utilities that we push out as packages. Right now these are built using the standard make framework, with a bunch of hand-crafted glue to build and sign the packages before pushing them to our internal distribution server. I would really like to take advantage of the ports framework to automate as much of the packaging process as I can. The problem is that port builds assume you're obtaining the program source from external distribution files, whereas I want to build right out of the port directory itself, i.e. have the program source live under /usr/ports/foo/bar/src/. Has anyone come up with an idiomatic solution to this that doesn't involve surgery on /usr/share/mk/*port*? --lyndon
Re: calling all PFsync users for experience, gotchas, feedback, tips and tricks
Nick Holland writes: > Wrote a little script which, when run: Good grief, man! Just put the pf.conf in CVS and push it with rdist. We do that for all our carped firewall pairs and it works a treat. The following 'special' command in the Distfile will give you a failsafe reload of the pf rules: special files-hc1/etc/pf.conf " pfctl -f /etc/pf.conf || mv /etc/pf.conf.OLD /etc/pf.conf" ; --lyndon
Re: rc.daily missing diff markers
Ingo Schwarze writes: > That's not new, it has been like that for at least 14 years and likely > much longer: Heh :-) Filing a bug report about my horrible memory seems wrong. > I don't think adding the more characters to each line would be a good idea. > It would cause line wrapping in mail even more often than the long lines > already do now. Besides, there is no real ambiguity because the file > name in the last column makes the pairing obvious and the dates right > in front of that show the direction of the change. Sure. It just caught my attention today because this is probably the first time I've seen such a large batch of changes in one email like that. Which has no doubt happened endless times in the past, but I never noticed it until today for some reason. Mostly this jars my OCD by having some of it in diff format, and some not. I'll get over it. --lyndon
Re: 7.1 & nsd - failed writing to tcp: Permission denied
Laura, for a first step I would look at pflog(4). As Peter hinted, if you have an obscure pf rule blocking things after the connection sets up, this will point it out. (Make sure you have all the appropriate pflog bits enabled, of course.) If that doesn't work your next step is to fire up tcpdump and see what's actually happening on the wire. Look for a post-connection RST and work back from that. --lyndon
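Something along these lines (interface and service here are guesses at your setup):

```
# watch what pf is logging as blocked
tcpdump -n -e -ttt -i pflog0

# then watch the wire itself for a post-connection RST
tcpdump -n -i em0 port 53 and 'tcp[tcpflags] & tcp-rst != 0'
```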
rc.daily missing diff markers
In the output from the daily insecurity report run, the sections on setuid and block device changes are missing any diff markup. The remaining sections are fine. From this morning's post-7.1-upgrade run:

Setuid changes:
-r-sr-xr-x 2 root bin 355952 Sep 30 13:01:03 2021 /sbin/ping
-r-sr-xr-x 2 root bin 358736 Apr 11 16:46:17 2022 /sbin/ping
-r-sr-xr-x 2 root bin 355952 Sep 30 13:01:03 2021 /sbin/ping6
-r-sr-xr-x 2 root bin 358736 Apr 11 16:46:17 2022 /sbin/ping6
-r-sr-x--- 1 root operator 274936 Sep 30 13:01:04 2021 /sbin/shutdown
-r-sr-x--- 1 root operator 277592 Apr 11 16:46:17 2022 /sbin/shutdown
[...]

Block device changes:
brw-r----- 1 root operator 6, 0 Mar 4 14:48:06 2022 /dev/cd0a
brw-r----- 1 root operator 6, 0 Apr 21 12:19:45 2022 /dev/cd0a
brw-r----- 1 root operator 6, 2 Mar 4 14:48:06 2022 /dev/cd0c
brw-r----- 1 root operator 6, 2 Apr 21 12:19:45 2022 /dev/cd0c
brw-r----- 1 root operator 6, 16 Mar 4 14:48:01 2022 /dev/cd1a
brw-r----- 1 root operator 6, 16 Apr 21 12:19:40 2022 /dev/cd1a
[...]
Spurious errors from syspatch -c
After the 7.1 update syspatch -c started throwing errors due to a missing signatures file:

Patch check: syspatch: Error retrieving http://ftp.openbsd.org/pub/OpenBSD/syspatch/7.1/amd64/SHA256.sig: 404 Not Found

The error is valid. To suppress this message it would make sense to drop an empty placeholder SHA256.sig file whenever a new release comes out.

--lyndon
pf synproxy
I'm trying to get synproxy working on a firewall, using the following rule:

pass quick proto tcp from any to $front_smtp4 port 25 synproxy state

The firewall accepts the connection on the outside interface, but I don't see (tcpdump) any attempt to complete the connection on the inside interface. The state table shows a pair of entries with state PROXY:SRC and DST:PROXY which line up with the connection, but all I get is dead air.

This seems like it should 'just work'. Is there something obvious I'm missing? I can give more detailed info (pf rules, ifconfig) offline for anyone interested in helping out.

Thanks!

--lyndon
Re: samba macos epic fail
kasak writes: > The one thing you should know about, is fact, that OpenBSD doesn't > support extended attributes. > So, basically, you cannot use streams_xattr module. And that explains why this works on FreeBSD but not on OpenBSD. Thanks for clarifying this. --lyndon
samba macos epic fail
Somebody please tell me what the hell I am doing wrong here. OpenBSD 6.8, samba 4.9.18 via pkg_add, MacOS 10.15.7 fully patched.

My main goal is to get Time Machine backups running, but I keep getting all sorts of inscrutable errors about file permissions. The backup manages to create a few directories before it blows up:

: root@broken:/dump/tm; find . -ls
27641856 32 drwxr-xr-x  3 lyndon  wheel  512 Nov 22 13:27 .
27641857  0 -rwxr--r--  1 lyndon  wheel    0 Nov 22 13:27 ./.com.apple.timemachine.supported-d865743e-fb2-4a68-b0e7-10857c459e5c
27641858  0 -rwxr--r--  1 lyndon  wheel    0 Nov 22 13:27 ./.com.apple.timemachine.supported-64bebc6e-ed10-4c41-9f21-301de558be49
27641859 32 drwx------  3 lyndon  wheel  512 Nov 22 13:27 ./30228818-9C9E-5DBF-8F9B-36F186FA68BF.sparsebundle
27641860 32 -rw-r--r--  1 lyndon  wheel  502 Nov 22 13:27 ./30228818-9C9E-5DBF-8F9B-36F186FA68BF.sparsebundle/Info.plist
27641861 32 -rw-r--r--  1 lyndon  wheel  502 Nov 22 13:27 ./30228818-9C9E-5DBF-8F9B-36F186FA68BF.sparsebundle/Info.bckup
27641862 32 drwx------  2 lyndon  wheel  512 Nov 22 13:27 ./30228818-9C9E-5DBF-8F9B-36F186FA68BF.sparsebundle/bands
27641863  0 -rwx------  1 lyndon  wheel    0 Nov 22 13:27 ./30228818-9C9E-5DBF-8F9B-36F186FA68BF.sparsebundle/token
: root@broken:/dump/tm; ls -ld
drwxr-xr-x 3 lyndon wheel 512 Nov 22 13:27 .
: root@broken:/dump/tm;

There's nothing magic about the /dump mount:

fd71e51011d0eabf.c /dump ffs rw,softdep,nodev,nosuid 0 2

Below is my smbd.conf in full. I'm hoping somebody can point out the stupidly obvious mistake I'm making :-P Note that by now I have tried every sample smbd.conf that exists on the web, so I'd really like to hear from somebody who *actually has this working*.
--lyndon

---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---8<---

[global]
min protocol = SMB2
ea support = yes
inherit acls = yes
#create mask = 0640
#directory mask = 0750
workgroup = BITSEA
server role = standalone server
log file = /var/log/samba/smbd.%m
max log size = 200

# Shares

[homes]
comment = Home Directories
browseable = no
writable = yes

[public]
comment = Public Stuff
path = /pub
public = yes
writable = no
printable = no

# Time Machine

[timemachine]
comment = Time Machine Backups
path = /dump/tm
browseable = yes
writeable = yes
vfs objects = catia fruit streams_xattr
fruit:aapl = yes
fruit:time machine = yes
fruit:metadata = stream
fruit:model = MacSamba
fruit:posix_rename = yes
fruit:veto_appledouble = no
fruit:advertise_fullsync = true
Re: Shell account service providers
ibs...@ripsbusker.no.eu.org writes: > Aaron Mason writes: > > What are you looking for in such a service? > > Minimally, SSH login, 100GB disk space, and build tools arpnetworks.com
Re: Relayd with TLS and non-TLS backends - bug
Henry Bonath writes: > I would like to chime in here and confirm that I am seeing very > similar behavior with HAProxy on OpenBSD 6.7, > I was preparing to create my own post on this issue until I saw your thread. > I too believe this is a bug. We saw the same thing after upgrading our proxy host from 6.5 -> 6.7. Since our use case was such a simple one, we tossed haproxy overboard and moved things over to relayd. --lyndon
Re: rsync repo for firmwares
Comète writes: > is there any rsync mirror for firmwares ? Nope. But you can wget -nH -r http://firmware.openbsd.org/firmware/ instead.
Re: What do you use to generate invoices on OpenBSD?
tbl + troff -ms has always worked for me.
Re: experience with supermicro based Network Devices for 1Gb/s Ipsec throughput
> doing a project for a large client and I would like to know if anyone has > any issues running. > supermicro with SOC CPUS models > SYS-5018A-FTN4 If you have any of these, replace them. They have known buggy CPUs and will randomly fail without warning. We replaced about a dozen of them after >50% failed within the first year of installation. Note this isn't an OpenBSD problem -- the 5018As are just bad hardware. (They also have APIC interrupt issues, most likely due to a buggy ACPI implementation.) We replaced all our SYS-5018A-FTN4s with SYS-5018D-FN8Ts. I can't speak to the other models you mentioned. As for network throughput, we did test a pair of 5018As with 10-gig NIC cards. They were able to sustain a bit over 750 MB/s throughput on ttcp tests, so this class of Supermicro will certainly shovel the packets across the network. I don't know how much of a hit you will take with IPsec, but we ran our TLS-terminating load balancers on the 5018As before replacing them, and they had no trouble keeping up with a saturated 1-gig NIC worth of TLS connections. The replacement 5018Ds just loaf along. [ All the above gear was/is running the at-the-time current 'release' version of OpenBSD. ] --lyndon
Re: Full path in SYNOPSIS for /usr/libexec programs
Theo de Raadt writes: > Disagree on this. > > Those programs are intentionally not in the path, since you don't > run them by hand. That's what I was getting at. It's not clear they are 'libexec's. That's what confuses people. I just thought this might be a way to make it clear(er) that you don't run these directly, but that they are invoked by other things. I.e. it's easy to explain the concept of /usr/libexec to people once, so they recognize it when they see the path spelled out. But when we're bringing in people from Linux, their first reaction upon not finding the documented command is to start installing packages from hell to breakfast until something works. From an indoctrination standpoint, there's a lot less pain (and cleanup work) if they know right away that /usr/libexec/foo(8) is not something you run from the shell. I won't push this any further, but experience shows this change really helps guide people into the BSD filesystem hierarchy conventions. --lyndon
Full path in SYNOPSIS for /usr/libexec programs
For programs that live in /usr/libexec, those with manpages show just the bare program name in the SYNOPSIS section (when there is a SYNOPSIS section). There is a long-standing expectation that programs documented in section 8 of the manual can be run from a shell with /sbin:/usr/sbin in the $PATH. It's frustrating for newer BSD admins when foo(8) programs apparently aren't there, due to living in libexec. Sometimes it even trips up the older BSD admins who expect to find these things in *sbin from, e.g., older SunOS releases.

What I'd like to do is fully qualify the paths to those commands in the SYNOPSIS section, where that makes sense. E.g. for the mail.*(8) entries, 'mail.*' becomes '/usr/libexec/mail.*' under SYNOPSIS (only). For entries that have a *sbin command shadowing a /usr/libexec program, nothing changes.

If people think this makes sense I'll sendbug a patch.

--lyndon
Re: Postscript printer recommendations
> I am not familiar with Postsript printers. Thanks for correcting
> me. I want something that will work with Ghostscript and not
> depend on Printer Command Language (PCL).

Just search for a printer that supports Postscript. Many laser printers do.

I have an HP LaserJet M402dn. It supports Postscript level 3 and speaks USB and Ethernet (lpr at the least, maybe IPP as well). Low volume monochrome, prints duplex, reasonably inexpensive.
Re: Ansible install Re: Reboot and re-link
Frank Beuth writes:

> Yes, and being able to Ansible-manage even the re-installation would
> make the whole process that much nicer :)

I started writing a rebuttal to this, but it quickly turned into writing our design document for how we handle this internally across the data-centre. That's not something I can share. Suffice to say this is not as simple a process as you might think it is.

--lyndon
Re: Ansible install Re: Reboot and re-link
Daniel Jakots writes: > You can automate installation with autoinstall(8). You can also > automate upgrades with autoinstall(8) This works like a charm. On our load balancers we PXE install with a local rc.firsttime that installs python. After that we do all the system, haproxy, nginx, &c management via ansible. > and from 6.6 you'll be able to > use sysupgrade(8) as well. We are looking forward to that. *However*, there is a lot to be said for regularly re-installing your hosts from scratch. This ensures your installer scripts don't rot as host system "features" accrete over time. This is prone to happen when you Ansible- or Puppet-manage servers. Things get added, things get removed. Often you miss hidden dependencies that sneak in; you don't want to be discovering those when you're trying to reinstall a production host after a catastrophic failure. --lyndon
Re: HIPPA supported ciphers
Kihaguru Gathura writes:

[...]

> TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Non-compliant with HIPAA guidance
> TLS_RSA_WITH_CAMELLIA_128_CBC_SHA Non-compliant with HIPAA guidance

> Under what circumstances could these ciphers be not considered for
> HIPPA compliance?

These aren't known to the HIPAA standard, and it doesn't allow unknown ciphers. Just disable the Camellia ciphers and you'll pass the validation. You'll run into similar issues passing PCI-DSS. We use the following settings to make the various validators happy:

ssl_ciphers "HIGH:!DES:!3DES:!CHACHA20:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA";
ssl_prefer_server_ciphers on;

--lyndon
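Before pushing a cipher string like that at a compliance scanner, it helps to see exactly which suites it expands to; openssl's ciphers subcommand does that. A quick check using the string above (the exact output varies with your libssl version):

```shell
# Expand the cipher string from the nginx settings above into the actual
# suite list; -v adds protocol/kx/auth/enc detail per suite.
openssl ciphers -v 'HIGH:!DES:!3DES:!CHACHA20:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA'
```

Run it once with and once without the exclusions to see what a given `!TOKEN` actually removes.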
Re: LACP inquiry
> The panic indicated that there was no memory left and was in UFS
> region. Since this is the only change I did in the last few months I'm
> guessing there is a memory leak in the LACP routines, somewhere.

Seems unlikely. We run LACP trunks on all our firewalls and nginx load balancers. Each of those machines pushes a steady 150 Mb/s of traffic through the trunk interfaces, 24 hours a day.

Are you doing any NFS mounts? I've seen panics in the past due to stuck NFS servers causing clients to run out of mbufs. But that was a long time ago, so it's just a hint based on the panic being near the filesystem code ...

Seeing the actual panic traceback would help.

--lyndon
mirroring firmware.openbsd.org
Our firewalls can't connect to firmware.openbsd.org (by design). Is there a way to mirror the contents of firmware.openbsd.org? It would be nice if these files were available in the usual OpenBSD mirrors, since we already mirror those and could just point fw_update at our internal mirror host. But something like an rsync- or ftp-able firmware.openbsd.org source would be just fine.

--lyndon
Re: 6.5: rc.firsttime failed, how to restart?
> This could be improved for 6.6. Maybe you should set a marker in > the filesystem instead, indicating that rc.firsttime was already run. > The upgrade procedure could remove the marker. This is pretty common during new installs. I think in 6.5 fw_update is run automatically when the system boots, so it should just take care of itself. If not, just run 'doas fw_update' once you have network connectivity.
Re: 6.5 auto_install fails due to custom /var/tmp?
> Sadly, no :-( > > But I should be able to accomplish what I need using rc.firsttime and > a tiny bit of hackery. Sadly, no :-( What I was aiming for was to have the newly installed machines come up with a 2GB MFS /tmp and a ~20GB /var/tmp. But MFS /tmp really needs help in the system boot scripts. The critical part for us is that /var/tmp not overwhelm /var, and we can get that with the current scheme by sizing /tmp accordingly. --lyndon
Re: 6.5 auto_install fails due to custom /var/tmp?
Nick Holland writes: > normally, /var/tmp is a symlink to /tmp. > It can't make the link. No surprise. > Answer "Yes" to the "Continue anyway?" prompt, and all will be fine, I > believe. Sadly, no :-( But I should be able to accomplish what I need using rc.firsttime and a tiny bit of hackery. --lyndon
Re: Upgrading a CARP firewall cluster
mabi writes:

> Now I would first like to upgrade the cluster to 6.4 and then to 6.5
> and was wondering if it is possible to operate that cluster for a short
> amount of time having one node running 6.3 and the other node with 6.4
> and then the same for going to 6.4 to 6.5.

In general this is not a problem. We run several carp-ed firewall (and load balancer/proxy) pairs, and upgrade them in this manner. As was already mentioned, always read the release notes to look for carp or pfsync changes that might cause trouble.

On our systems, we run the 'a' machine as primary and the 'b' machine as backup. When upgrading, we do the 'b' machine first, since this doesn't disrupt the primary. After the 'b' machine is fully configured, monitor its state table to ensure it's consistent with the 'a' machine. Once you are convinced pf is staying in sync, demote the 'a' machine and upgrade it.

Make sure you have 'net.inet.carp.preempt=1' in /etc/sysctl.conf, and set advskew appropriately on each host in the pair.

--lyndon
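For the demotion step, the interface-group knob in ifconfig(8) is convenient; a sketch from memory (the counter value 50 is arbitrary), to be checked against your local setup:

```
# On the 'a' machine, bump the demotion counter for the whole carp
# interface group so the 'b' machine preempts:
ifconfig -g carp carpdemote 50

# ... upgrade, reboot, verify pfsync is back in sync, then restore:
ifconfig -g carp -carpdemote 50
```

This avoids touching advskew on each carp interface individually during the upgrade window.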
6.5 auto_install fails due to custom /var/tmp?
While trying to PXE install a 6.5 machine I was hit with this failure:

Installing bsd          100% |**| 15163 KB    00:00
Installing bsd.mp       100% |**| 15248 KB    00:00
Installing bsd.rd       100% |**|  9984 KB    00:00
Installing base65.tgz    99% |* |   189 MB    00:00 ETA
tar: Unable to remove directory ./var/tmp: Device busy
Installing base65.tgz   100% |**|   190 MB    00:14
Installation of base65.tgz failed. Continue anyway? [no] no

which I suspect is related to this:

/		1G
swap		4G-16G	10%
/tmp		2G
/usr		4G
/usr/local	2-6G	10%
/var		10-20G	20%
/var/tmp	10-20G	15%
/var/log	20-40G	30%
/u		1G-*

I've never run into this until today, when I tried to carve out an explicit /var/tmp. Autopartitioning should be able to handle /var/tmp, no?

--lyndon
Re: virtual colocation? Amazon/cloud?
For BSD virtual servers I've had no problems with Arp Networks (https://www.arpnetworks.com/), going back several years now. I use them for FreeBSD hosts of my own, and at $WORK we use them to host OpenBSD. They even worked with me to get a Plan 9 server running. Their tech support gang is wonderful to work with. My only complaint is that their support ticketing web interface is a pain to work with, but they'll happily deal with sales/support issues via email :-) --lyndon
Re: door opening sensor HW for OpenBSD?
By far the easiest way to do this is to connect a switch to the door that opens/closes as the door opens/closes. This assumes that when you say "the door moves" you really meant "is opened or closed". Whether the switch is normally open or normally closed doesn't matter. Wire the switch to a serial port; connect one side to DTR, the other to CD. Now you can write a program that runs a loop that (1) opens the blocking tty device, then (2) reads the descriptor, which will block until CD drops returning an error from read(2), at which point you close the descriptor and repeat the loop. (1) will complete when the switch closes. (2) will complete when the switch opens. After each event, you can perform whatever actions you want.
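The loop described above can be sketched in sh; the device name /dev/tty00 is an assumption, and this presumes the switch is wired between DTR and CD as described (the blocking tty device, not the cua callout device):

```shell
#!/bin/sh
# Hypothetical sketch of the door-watch loop.  Opening the blocking tty
# device raises DTR; with the switch closed, CD follows, so the open
# completes.  The subsequent read blocks until CD drops (switch opens).

wait_door_event() {
	# cat's open blocks until CD is asserted (door switch closed);
	# cat then blocks reading until CD drops (switch opened), at
	# which point the read returns and cat exits.
	cat "$1" > /dev/null
}

# Run with the port as an argument, e.g.: doorwatch /dev/tty00
dev=${1:-}
if [ -n "$dev" ]; then
	while :; do
		wait_door_event "$dev"
		logger -t doorwatch "door cycled on $dev"
	done
fi
```

The same structure translates directly to C with open(2)/read(2) if you need tighter control over the modem-control lines.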
Re: manual assistance
On 03/15/18 19:39, Edgar Pettijohn wrote:

> Is there a man page template somewhere that I can use to get started
> writing a manual?

No more so than there is a template somewhere that will get you started writing Shakespeare. The mdoc macros encourage consistency of layout. But the words come from the writer.

Read the OpenBSD manpages. All of them. And I mean "read" them, like a novel. Learn the style. The tightness of the prose. Buy and worship a copy of Strunk and White. (You should do that regardless of this project.)

The Plan 9 manpages are also a demonstration of eloquence. Although there are differing opinions about the layout :-P None of that changes the conciseness of the writing, and that's what matters.

--lyndon
Re: IPsec help: too much NAT!
NET-P GW-Q <-> internet <-> GW-H GW-V NET-V In the schematic above, '' represents a NAT translation point. '<->' is a regular router interconnect. Except for where I screwed up, of course. That should read: NET-P GW-Q <-> internet <-> GW-H GW-V <-> NET-V I.e. the GW-V <-> NET-V interface is native, not translated. Sorry for the screw-up :-P
IPsec help: too much NAT!
I have an IPsec conundrum I'm trying to solve. Yes, the scenario is somewhat absurd; it's also the problem I've been tasked with solving, so spare the peanut gallery comments, okay?

NET-P GW-Q <-> internet <-> GW-H GW-V NET-V

NET-P is 10.0.2.0/24
NET-V is 10.0.11.0/24

GW-Q is an OpenBSD host with fixed addresses 10.0.2.1 (inside) and 1.2.3.4 (internet).

GW-H is some random ISP cable/DSL modem that NATs everything behind it, with a random external address. (I.e., assume DHCP on the "internet" side.)

GW-V is an OpenBSD host. It has a variable upstream address obtained from the back end of GW-H (DHCP). On the other side, GW-V presents 10.0.11.1 to NET-V.

The goal here is to establish an IPsec tunnel that links NET-P and NET-V together, in the face of all the other nonsense in between. In the schematic above, '' represents a NAT translation point. '<->' is a regular router interconnect.

I have tried setting up an IKEv2 passive connection from GW-V to GW-Q (connections in the other direction are impossible), but I'll be damned if I can figure out how to specify the SA associations and ESP flows on GW-V, given the lack of fixed addresses on the upstream sides of GW-V and GW-H. (Or in the other direction, for that matter.) Is there any hope this can possibly work?

--lyndon
Re: IPMI still requires Java! I'm screwed.
We manage to deal with all our servers using the IPMI serial console redirect. You might need to set it up in the BIOS once, although we've not had to do that in ages. You do have to create the IPMI remote login/password, but you need that anyway if you're trying to use the web/java console. --lyndon
Re: logging in to joyent images
> Another option is, when writing the JSON descriptor, to have it inject
> an SSH key into the machine when provisioning. I've never done this
> myself, but I know there's a few examples floating around on the web
> somewhere.

That was the trick, although it took some digging to find the specific instructions. https://wiki.smartos.org/display/DOC/How+to+create+a+KVM+VM+%28+Hypervisor+virtualized+machine+%29+in+SmartOS has the details at the bottom of the frame. In a nutshell, add the following stanza to the VM definition:

"customer_metadata": {
  "root_authorized_keys": "ssh-rsa B3NzaC1yc2EBIwAAAQEA8aQRt2JAgq6jpQOT5nukO8gI0Vst+EmBtwBz6gnRjQ4Jw8pERLlMAsa7jxmr5yzRA7Ji8M/kxGLbMHJnINdw/TBP1mCBJ49TjDpobzztGO9icro3337oyvXo5unyPTXIv5pal4hfvl6oZrMW9ghjG3MbIFphAUztzqx8BdwCG31BHUWNBdefRgP7TykD+KyhKrBEa427kAi8VpHU0+M9VBd212mhh8Dcqurq1kC/jLtf6VZDO8tu+XalWAIJcMxN3F3002nFmMLj5qi9EwgRzicndJ3U4PtZrD43GocxlT9M5XKcIXO/rYG4zfrnzXbLKEfabctxPMezGK7iwaOY7w== wooyay@houpla"
}

Create the VM and you can ssh in as root using that key. I gather this only works with the Joyent-supplied OpenBSD VM images.

Note I also tried to VNC to the console and boot to single user mode, but the bootstrap redirects the console to com0 long before there's any chance of overriding that at the boot prompt.

--lyndon
Re: logging in to joyent images
I have only limited experience with SmartOS, but the quick fix is to login to the global zone and use zlogin to enter the VM (get the VM hash from 'vmadm list'). You'll then have a shell and can change the password, add users, and adjust the sshd config to your liking.

Not sure that will work on a VM (vs zone), but I'll give it a shot when I get back to the machine.

--lyndon
logging in to joyent images
I have installed one of the openbsd-6 SmartOS VM images, gotten the VM to boot, but I'll be damned if I can find out anywhere a login id and password that will actually let me log in to the bloody thing. Anybody been down this road and have an answer? I'm using the c1fce07e-663b-62b9-b766-aa35c2d231f0 image, if it matters. --lyndon
daemon(8)
The current daemons discussion prompts a vaguely related question. We have a small but growing collection of in-house daemons written in Go. Go's runtime isn't amenable to the fork/setsid dance you would normally do to push a daemon process into the background. As a workaround, I ported FreeBSD's daemon(8), which neatly solves the problem. The current FreeBSD daemon(8) has accumulated more features than I would like, versus the original version I knew. But it's a useful command in its original form (with just the -c and -f flags). Was this ever part of Open? If not, could it be, if we offered a stripped down version of the FreeBSD code? --lyndon
Re: Read sysctl from file
> On Jul 20, 2017, at 6:35 AM, BARDOU Pierre wrote:
>
> Hello,
>
> Is there a way to make sysctl re-read its conf file, or even another
> file, like sysctl -p does on linux systems ?
> Supporting this option would be nice, as it is used by the sysctl
> module of ansible.

Here's the script we call (ansible handler, or as an rdist 'special') whenever we push a new sysctl.conf. It's the same code the system runs at boot time, lifted out into a standalone script.

#!/bin/sh

# sysctlreload: apply sysctl.conf(5) settings.

# Strip in- and whole-line comments from a file.
# Strip leading and trailing whitespace if IFS is set.
# Usage: stripcom /path/to/file
stripcom() {
	local _file=$1 _line

	[[ -s $_file ]] || return

	while read _line ; do
		_line=${_line%%#*}
		[[ -n $_line ]] && print -r -- "$_line"
	done <$_file
}

stripcom /etc/sysctl.conf |
while read _line; do
	sysctl "$_line"
done
Re: LACP problem
> On Jun 10, 2017, at 10:44 AM, Charles Lecklider > wrote: > > Is there no other diagnostic information I can get from the OpenBSD side? Not really, other than running tcpdump on the two interfaces and examining the LACP protocol packets to try to discover why the negotiation is acting the way it is. Also, if you don't have the enable password, how did you configure LACP on the switch to begin with?
Re: LACP problem
> On Jun 8, 2017, at 7:47 PM, Charles Lecklider wrote:
>
> The trunk is there, seems to be configured the right way, but the second
> port doesn't come up. If I pull the cable on em0, em1 comes up, put the
> cable back, em0 doesn't join the trunk.

What you're showing looks fine. We run this all over the place in house. This points to the switch being confused about the configuration of the trunk.

> Have I botched the config somewhere? Or is there some incompatibility
> going on between OpenBSD and the switch? And if it's the latter, how do
> I get some diagnostic information to work out what's going on?

The first step is to have the switch display its idea of the LACP configuration and status. I haven't a clue how a TP-LINK does that, but on our Junipers it's 'show lacp interfaces'. E.g.:

> show lacp interfaces
Aggregated interface: ae0
    LACP state:      Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
      ge-0/0/0      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-0/0/0    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-1/0/0      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-1/0/0    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
    LACP protocol:    Receive State  Transmit State   Mux State
      ge-0/0/0            Current    Fast periodic    Collecting distributing
      ge-1/0/0            Current    Fast periodic    Collecting distributing

Aggregated interface: ae1
    LACP state:      Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
      ge-0/0/7      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-0/0/7    Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
      ge-1/0/7      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-1/0/7    Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
    LACP protocol:    Receive State  Transmit State   Mux State
      ge-0/0/7            Current    Slow periodic    Collecting distributing
      ge-1/0/7            Current    Slow periodic    Collecting distributing
[...]
Re: LACP problem
> On Jun 8, 2017, at 7:54 PM, Lyndon Nerenberg wrote: > > Why do em0 and em1 have the same MAC address? Oh shit, never mind - it's the trunk interface :-P Sorry ...
Re: LACP problem
> On Jun 8, 2017, at 7:47 PM, Charles Lecklider wrote:
>
> em0: flags=8b43 mtu 9000
>         lladdr 0c:c4:7a:d9:ea:d0
>         index 5 priority 0 llprio 3
>         trunk: trunkdev trunk0
>         media: Ethernet autoselect (1000baseT full-duplex,rxpause,txpause)
>         status: active
> em1: flags=8b43 mtu 9000
>         lladdr 0c:c4:7a:d9:ea:d0
>         index 6 priority 0 llprio 3
>         trunk: trunkdev trunk0
>         media: Ethernet autoselect (1000baseT full-duplex,rxpause,txpause)
>         status: active

Why do em0 and em1 have the same MAC address?
82599ES support
We're looking to buy some 10-gig SFP+ boards, and are eyeing up Supermicro's 2-port boards (listed as the 'Intel 82599ES - AOC-STGN-i2S'). ix(4) doesn't list the ES variant of the chip, and a quick grep through the driver source doesn't mention it explicitly, either. Are any of you running this board under >= 6.0 ? (We need to buy these boards as part of a single lease, so I'm constrained on what's available. Otherwise I'd just buy some X520s.) --lyndon
Re: Missing message-ID header in OpenSMTPD emails
> I don't use the submission port on either server, just port 25, but 5.9
> sends a message-id and 6.0 does not. What does "/if necessary/" mean
> for the 5.9 server? What is the deciding factor to make the header
> necessary? I would like the v6.0 server to send a message-id too, how
> do I make whatever-it-is necessary on this server? Or was the "if
> necessary" feature removed in 6.0 and replaced with "...submit port"?

The change in 6.0 brings smtpd into compliance with the SMTP specification. RFC 5321 section 6.4 contains the following text:

   The following changes to a message being processed MAY be applied
   when necessary by an originating SMTP server, or one used as the
   target of SMTP as an initial posting (message submission) protocol:

   o  Addition of a message-id field when none appears
   o  Addition of a date, time, or time zone when none appears
   o  Correction of addresses to proper FQDN format

   The less information the server has about the client, the less likely
   these changes are to be correct and the more caution and conservatism
   should be applied when considering whether or not to perform fixes
   and how.  These changes MUST NOT be applied by an SMTP server that
   provides an intermediate relay function.

Note the MUST NOT in the final paragraph. When a message is received on port 25, smtpd really has no way of knowing if it is the "originating" server, therefore it can't add the message ID. If it receives the message on port 587 (the submission service), it is - by definition - the originating SMTP server, and is allowed to add the message ID if it's missing.

--lyndon
relayd(8) relay: redirect based on URL paths
My relayd.conf fu is lame and needs help. Given the following config:

---8<---8<---
interval 60
timeout 2000

table { w1.example.com w2.example.com w3.example.com }

http protocol https {
	tcp { nodelay, sack }

	match request header append "X-Forwarded-For" value "$REMOTE_ADDR"
	match request header append "X-Forwarded-By" \
		value "$SERVER_ADDR:$SERVER_PORT"
	match request header set "Connection" value "close"
}

relay web {
	listen on 203.0.113.5 port 443 tls
	protocol https
	forward with tls to port https mode loadbalance \
		check https "/" code 200
}
---8<---8<---

I am trying to figure out how to intercept request paths beginning with "/xy/" so that I can forward them to a different port in the same server pool. I.e.:

https://host.example.com/xy/mumblebarge -> https://:/xy/mumblebarge
https://host.example.com/anything_else -> https:///anything_else

It seems this should be possible, but I just can't get my head around relayd.conf(5) :-(

--lyndon
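One direction worth testing, if I'm reading relayd.conf(5) right: protocol match rules can carry their own 'forward to' table, with the relay declaring a forward statement per table. Untested, and the table names and port 8443 below are inventions for illustration:

```
table <webhosts> { w1.example.com w2.example.com w3.example.com }
table <xyhosts>  { w1.example.com w2.example.com w3.example.com }

http protocol https {
	# ... existing header rules ...

	# Steer /xy/ requests at the pool listening on the alternate port.
	match request path "/xy/*" forward to <xyhosts>
}

relay web {
	listen on 203.0.113.5 port 443 tls
	protocol https
	forward with tls to <webhosts> port https mode loadbalance \
		check https "/" code 200
	forward with tls to <xyhosts> port 8443 mode loadbalance \
		check https "/" code 200
}
```

Everything that doesn't match the path rule should follow the relay's first forward statement.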
Re: Hardware recommendations for compact 1U firewall
> As promised in one of my earlier e-mails, OpenBSD 6.0 dmesg for
> SYS-5018A-FTN4

FWIW, we have six of these doing firewall duty (currently running 5.9) and they perform flawlessly. We run them in CARPed pairs, and LACP across redundant switches.

--lyndon
Re: Would you use OpenBSD on Power8, and if so what applications? (IBM asks! They're thinking about donating hw.)
> On Oct 18, 2016, at 10:48 AM, Jack J. Woehr wrote: > > The Power8 *needs* OpenBSD because they don't have a really good firewalling regimen at that level. I suspect anyone running Power8 gear is doing so behind dedicated firewall hardware, e.g. Juniper SRX. --lyndon
Re: OpenBSD 6.0 and emacs-24.5p2-gtk2
> On Sep 5, 2016, at 10:16 AM, Peter Fraser wrote:
>
> (emacs:17220): GLib-GIO-CRITICAL **: g_settings_schema_source_lookup:
> assertion 'source != NULL' failed
>
> The failed assertion does not seem to cause any trouble, and I expect
> gsettings is part of the answer, but I don't know what the answer is.

It's a (mostly) bogus error message from GTK. You'll see these from all sorts of GTK-based programs. If the program doesn't crash, just ignore them. You'll soon get in the habit of adding '2>/dev/null' when you run those from the command line.

--lyndon
Re: GPIO for P8 Expansion Header on Beaglebone Black
> Most hardware + firmware combinations provide insufficient detail
> to know what pins are used for what, reserved for what, or wired
> to an auto-destruct.

But that's by design. GPIO is simply an interface to a digital I/O pin on the CPU. Everything after that is up to the end-user. Especially so since they are the ones controlling what is connected to those pins.

I bit-bang the RPI all the time, and no two of them ever use the available pins in the same way. Because I'm prototyping, this changes all the time. That makes it *my* responsibility to know WTF I am doing (including which pins to stay away from on that specific device). As it should be.

Bottom line is if you are banging on low-level hardware interfaces like this, you better know what your hardware is doing. At this level, just like with device drivers, you have all the tools at your disposal to destroy everything in sight. This is why gpio device interfaces require (or at least should require) root perms to access in any way.

Of course, you're much more likely to destroy your device (or a good portion of those ports, at least) by plugging a 5V peripheral into a 3V port. No OS assistance required :-P
Re: ntpd tries to connect via ipv6
> On May 31, 2016, at 3:58 PM, Ted Unangst wrote: > > If we're talking about timeframes long enough for network connectivity to come > and go, that's long enough for IP addresses to come and go as well. This is an interesting problem, in general. In my MTA development days, we would cache the targets of the MX record(s) we found in queued message's metadata. For each host target, we included the absolute time the data would expire, based on the original MX lookup. Expired records were ignored, and when we ran out of hosts we would re-run the MX lookup and update the meta-data. This worked quite well, considering the underlying DNS data didn't change all that often. But SMTP sessions are not long-lived, so this just worked. These days I wish I had similar functionality in pf. And not for mobile hosts. E.g., at work we need to open up access to things like Paypal payment API hosts. For those rules we can either hardwire IP addresses, or use their hostnames. But they inevitably move their API hosts around. In the first case, our list of hardwired IP addresses gets stale. In the second, the addresses returned by the A(AAA) record lookup gets stale. I would really like to be able to say "build the rule from this hostname, but refresh the A(AAA) record results as the underlying data's TTL expires." pf isn't special - this is the same problem as the ntpd example. I've puzzled over how to deal with this, but I can't see a solution that doesn't involve some sort of proxy that isolates the process from the network changes. And even then, you're dealing with at least a TCP connection reset if an existing address vanishes. For some things, that's not an issue. For others, ... ?
Re: vi vs emacs, which one makes me look more smart in front of my friends?
> In all seriousness, Richard Stallman incurred a repetitive stress injury > from using emacs commands. Holding down Ctrl or Alt can be bad for your > health. That's why I generally use vi even though there are things I don't > like and wish there were a better choice by default. acme(1)
Re: vi vs emacs, which one makes me look more smart in front of my friends?
> acme(1) Or sam(1) if you are a purist.
Re: ntpd commandline expansion
On 2016-05-07 3:56 PM, Luke Small wrote:

> It is because I am saving the state in virtualbox, which is like
> putting it in hibernate, except instead of refreshing the time, the
> time remains the same as when it last ran, which can be some time ago.

Why are you running ntpd in a VM? Just have the VM pay attention to the hardware clock, and let ntpd on the host take care of things.
Re: implementing circular queue for tcpdump logging
> Has anyone done something like this with OpenBSD? I don't see anything
> obvious and was wondering what others might have done to accomplish
> this. Perhaps some kind of wrapper script ...

We had the same issue a couple of months ago. I just brought over the tcpdump source from FreeBSD and compiled that. It supports capture file rotation based on time or file size.
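For anyone who wants the wrapper-script route instead, the circular-queue half is simple enough in plain sh. A sketch (the file-naming scheme is an assumption) that keeps only the newest N capture files:

```shell
#!/bin/sh
# Sketch of a circular-queue cleaner: keep the $3 newest files whose
# names start with $2 in directory $1, and remove the rest.  Pair it
# with tcpdump writing timestamped files, e.g.:
#	tcpdump -w "caps/pflog.$(date +%s)" ...
#	prune_captures caps pflog. 24

prune_captures() {
	dir=$1 prefix=$2 keep=$3
	# ls -t sorts newest first; everything past line $keep goes.
	ls -t "$dir/$prefix"* 2>/dev/null | tail -n +"$((keep + 1))" |
	while read -r f; do
		rm -f "$f"
	done
}
```

Run it from cron, or after each capture file rolls over.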
Re: Find - Sillyness
> spider:/var/logtransfer/dc-fw1# find . -name pflog.*.gz -exec zcat {} | tcpdump -entttv -r - \;
> find: -exec: no terminating ";"

Find -exec invokes the command directly using exec(2). There's no shell underlying the command, so pipes are out (even if you had correctly escaped the '|').

The easiest way out of this is to put the compound command into a shell script and have find run that. E.g.:

cat > scanlog << '_HOOPY_FROOD'
#!/bin/sh
zcat "$1" | tcpdump -entttv -r -
_HOOPY_FROOD
chmod +x scanlog
find . -name 'pflog.*.gz' -exec ./scanlog '{}' \;

--lyndon

Our users will know fear and cower before our software! Ship it! Ship it and let them flee like the dogs they are!
Re: Relaying denied. Trying to do TLS+SMTP AUTH. Do I really need SASL?
> Well, that is exactly what I want to do. I use the system passwords for
> imap anyway, so why not? Of course, the channel must be protected by
> SSL/TLS when you do that.

Because there are a large number of IMAP clients that are not aware of LOGINDISABLED, and which will blindly attempt LOGIN or AUTH PLAIN in the absence of TLS (which they are not aware of, either). Many IMAP clients predate RFC3501.

So those passwords (with the matching authentication ids) are going to be flying around the Internet in the clear no matter what you do. Using the UNIX account password for IMAP (or POP) in this manner makes your system effectively password free.

--lyndon

Specifications are for the weak and timid!
Re: Relaying denied. Trying to do TLS+SMTP AUTH. Do I really need SASL?
> If someone sends a good patch: yes (see the website for the correct
> address where to sent patches). Note that this isn't as simple as it
> might seem: the problem is where you store the passwords for PLAIN. You
> certainly don't want to reuse the existing system passwords.

Put the authentication database behind a map; that way sendmail doesn't have to care.

--lyndon

We've heard that a million monkeys at a million keyboards could produce the Complete Works of Shakespeare; now, thanks to the Internet, we know this is not true. -- Robert Wilensky, University of California
Re: Volume Management
> I m not tied in anyway to OpenBSD, what i m trying to avoid is > multiplying the amount of different OS i m using hence the question > about OpenBSD, Okay, but it helps to know this info up front. > i think i will indeed take a look at GEOM for time being. Also, the Express releases of Solaris are shipping ZFS in addition to the traditional Solaris volume management tools. As a SAN storage engine, that's one of your better places to start. Use the right tool for the right job. OpenBSD isn't what you want for the SAN. But it is what you want to use to secure access to that SAN. --lyndon
Re: NOOP and Spamd
On Mar 19, 2007, at 7:17 PM, Timothy A. Napthali wrote:

> The only problem I can foresee is that I remember reading somewhere
> that some MTAs use NOOP as a kind of keep-alive at times.

You will also find the command sequence RSET+NOOP used to delimit transactions when an SMTP client reuses an established SMTP session to send multiple messages. Once upon a time the NOOP after RSET was required to work around protocol state machine bugs in some SMTP servers. While those servers are likely long gone, I still see this client behaviour on occasion.

--lyndon
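For illustration, a sketch of what such a reused session can look like on the wire (addresses and message content are made up):

```
MAIL FROM:<alice@example.com>
RCPT TO:<bob@example.net>
DATA
... first message ...
.
RSET                            <- clear any leftover transaction state
NOOP                            <- historical workaround for buggy
                                   server state machines
MAIL FROM:<alice@example.com>   <- next transaction, same connection
...
QUIT
```

A greylisting daemon watching for NOOP would trip over exactly this pattern.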
Re: stupid question re kernal build make install
> The chance on something like that happening during the mv is much
> smaller, because it takes much less time.

More importantly, mv (actually, rename(2)) is an atomic operation, which means there is no period of time during which /bsd does not exist. If the system dies while there is no /bsd, it won't have a kernel to load when it boots.

--lyndon
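The same write-then-rename idiom applies to any critical file, not just the kernel. A minimal Python sketch of the pattern (the function name and paths are illustrative, not from the original post):

```python
import os
import tempfile

def install_atomically(data: bytes, target: str) -> None:
    """Write data to a temp file in the same directory as target, then
    atomically rename it over target. At no point does target fail to
    exist with complete, valid contents."""
    dirname = os.path.dirname(target) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure the new file is on disk first
        os.replace(tmp, target)   # rename(2) under the hood: atomic
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file must live on the same filesystem as the target, otherwise the rename degrades to a copy-and-delete and loses its atomicity.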
Re: Which tools the OpenBSD developers are using?
On Nov 28, 2006, at 7:39 PM, Chris Kuethe wrote: if you're not careful about your date, you might find you have some unwanted growfs. you never know what's in swap space. That's why it's important to finger, first.
Re: Sun BlackBox
I haven't priced shipping containers lately, but I imagine this sort of setup could be useful in more rural areas instead of building out a facility. Plus, they're shipping containers so you could stack a bunch of them together. I'm thinking the Vancouver economy could take on a whole new look if we buried the docks in AMD64 ...
Re: What do you use for MIME email?
> Why would you want a MIME encoding solution in the default
> installation? I mean, really, what do a large majority of systems
> need MIME for?

1) Character set support. These days I suspect the number of Unix users who can live completely within the US-ASCII glyph set is in the minority.

2) PGP/MIME and S/MIME. Even without doing crypto processing, MIME lets the MUA display only the human-readable parts without contortions.

MIME has been around for 14 years. There's no excuse for any MUA not to be able to deal with it at least minimally. In the case of /usr/bin/Mail that means recognizing content types and only displaying text/* sections when printing to the screen. It doesn't *have* to be complicated.

--lyndon
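A sketch of that minimal behaviour, shown here in Python with the stdlib email package purely for illustration (Mail itself is C; the function name is made up):

```python
import email
from email import policy

def show_text_parts(raw: bytes) -> str:
    """Return only the human-readable text/* parts of a MIME message,
    skipping attachments and other content types -- the minimal
    MIME awareness suggested for a screen-oriented MUA."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    out = []
    for part in msg.walk():
        # walk() also yields multipart containers; maintype filters
        # those out along with non-text leaf parts.
        if part.get_content_maintype() == "text":
            out.append(part.get_content())
    return "\n".join(out)
```

Everything hard (boundary parsing, transfer decoding, charset conversion) is delegated to the MIME library; the MUA's own logic really is just a content-type check.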
Re: Sizing an IMAP Server on OpenBSD
First, about hardware requirements: what you're proposing is absolute overkill for such a small client load. You won't need to upgrade the hardware :-)

> About resource limits of _cyrus user and sysctl values, are there
> well known values? Should I increase kern.maxfiles for example? I
> wouldn't like to learn it at production time.

Again, given the minimal load from IMAP, the out-of-the-box defaults will do just fine.

> Well, this are my questions. May be the hardware is overkill for our
> load, but sizing hardware without prior experience it's always a
> difficult task, so if anybody wants to share their experience...

Cyrus has a very small CPU and memory footprint. All you need to ensure is that you have enough I/O bandwidth from the disk, through the imapd process, and out the network interface. From what you're describing, you have nothing to worry about.

Sendmail can want memory when delivering messages with large numbers of recipients (e.g. mailing list expansion), but again, it's doubtful your load will even begin to stress the hardware.

--lyndon
Re: sendmail causing high load
> My isp blocks traffic on port 25. So i decided to experiment on
> adding a listening port for sendmail.

While not an answer to your load problem, I suggest you read up on the Submission service (RFC 4409).

--lyndon
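For what it's worth, the stock sendmail m4 configuration already knows how to add a Submission (port 587) listener; a minimal sketch of the relevant line in a .mc file (assuming the standard cf/m4 build) looks like:

```
dnl Listen on the Submission port (587) as a Message Submission Agent
dnl in addition to port 25; M=Ea requires authentication before relaying.
DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
```

That keeps ad-hoc listener hacks out of the generated sendmail.cf entirely.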