Re: Files in /etc containing empty VCSId header

2021-06-09 Thread Michael Gmelin



On Wed, 09 Jun 2021 08:23:20 -0600
Ian Lepore  wrote:

> On Wed, 2021-06-09 at 18:54 +1000, Peter Jeremy via freebsd-current
> wrote:
> > On 2021-Jun-08 17:13:45 -0600, Ian Lepore  wrote:  
> > > On Tue, 2021-06-08 at 15:11 -0700, Rodney W. Grimes wrote:  
> > > > There is a command for that which does, or used to do, a pretty
> > > > decent job of it called whereis(1).  
> > 
> > Thanks.  That looks useful.
> >   
> > > revolution > whereis ntp.conf
> > > ntp.conf:
> > > revolution > whereis netif
> > > netif:
> > > revolution > whereis services
> > > services:
> > > 
> > > So how does that help me locate the origin of these files in the
> > > source
> > > tree?  
> > 
> > It works for me™:
> > server% whereis ntp.conf
> > ntp.conf: /usr/src/usr.sbin/ntp/ntpd/ntp.conf
> > server% whereis netif   
> > netif: /usr/src/libexec/rc/rc.d/netif
> > server% whereis services
> > services: /usr/src/contrib/unbound/services
> > 
> > Is your source tree somewhere other than /usr/src?
> >   
> 
> My /usr/src is a symlink to the actual source tree on a different
> filesystem (but it is the source tree the running system was built
> from).  It seems odd that that would make whereis(1) not work.
> 

whereis(1) falls back to using "locate" if it can't find the sources
directly, so e.g., in the case of `whereis -s ls', it will go through
the results of `locate '*'/ls' and see if they match "^/usr/src" (or
whatever you gave as the source dir using -S).

Therefore if

  locate '*'/ntp.conf | grep "^/usr/src"

gives you a result, then `whereis -s ntp.conf' will too.

See also
https://cgit.freebsd.org/src/tree/usr.bin/whereis/whereis.c#n607
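
In your case, the locate database is a likely culprit: it is only
rebuilt by the weekly periodic(8) job, and since the database is built
by walking the real filesystem, a tree reached through a /usr/src
symlink may only be indexed under its physical path, which never
matches "^/usr/src". A quick way to check and refresh (assuming the
stock locate setup):

  locate '*'/ntp.conf           # which paths does the database know?
  /usr/libexec/locate.updatedb  # rebuild the database right away

If the database only knows the physical path, pointing whereis at it
explicitly with -S should work regardless.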

Michael

(re-sent, as the previous mail bounced from the list)

-- 
Michael Gmelin



Re: drm-kmod kernel crash fatal trap 12

2021-06-09 Thread Hans Petter Selasky

On 6/9/21 4:43 PM, Thomas Laus wrote:

I updated my system this morning to main-n247260-dc318a4ffab June 9 2021
and the first boot after the kernel was loaded I received:

'fatal trap 12' fault virtual address = 0x0
fault code = supervisor write data, page not present
instruction pointer = 0x20:0x82fc3d1b
stack pointer = 0x28:0xfe011aea3330
frame pointer = 0x28:0xfe011aea3370
code segment = base 0x0 limit 0x, type 0x1b
DPL 0, pres 1, long 1, def 32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 1187 (kldload)
trap number = 12

I hand-copied the screen display since I was not able to generate a
crash dump to /var/crash on a zfs file system.

I am rebuilding the GENERIC kernel since the crash was using the NODEBUG
version.  This is 100 percent repeatable.

Tom



Make sure you also re-build the drm-kmod module.
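
One way to keep it in sync automatically (a sketch; assuming the port
name matches what you actually have installed, e.g.
graphics/drm-current-kmod) is to let the kernel build rebuild it via
PORTS_MODULES:

  # /etc/src.conf (or /etc/make.conf)
  PORTS_MODULES=graphics/drm-current-kmod

or rebuild it by hand against the new kernel sources:

  cd /usr/ports/graphics/drm-current-kmod
  make clean all deinstall reinstall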

--HPS



drm-kmod kernel crash fatal trap 12

2021-06-09 Thread Thomas Laus
I updated my system this morning to main-n247260-dc318a4ffab June 9 2021
and the first boot after the kernel was loaded I received:

'fatal trap 12' fault virtual address = 0x0
fault code = supervisor write data, page not present
instruction pointer = 0x20:0x82fc3d1b
stack pointer = 0x28:0xfe011aea3330
frame pointer = 0x28:0xfe011aea3370
code segment = base 0x0 limit 0x, type 0x1b
DPL 0, pres 1, long 1, def 32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 1187 (kldload)
trap number = 12

I hand-copied the screen display since I was not able to generate a
crash dump to /var/crash on a zfs file system.

I am rebuilding the GENERIC kernel since the crash was using the NODEBUG
version.  This is 100 percent repeatable.
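
In case it helps with capturing the panic next time: kernel dumps are
written to a raw dump device (typically the swap partition), and
savecore(8) then copies them into /var/crash on the next boot, so
/var/crash living on ZFS is not itself the obstacle. A minimal setup
sketch (the device name is just an example):

  # /etc/rc.conf
  dumpdev="AUTO"          # use the configured swap device

  # or set one immediately:
  dumpon /dev/ada0p3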

Tom

-- 
Public Keys:
PGP KeyID = 0x5F22FDC1
GnuPG KeyID = 0x620836CF



Re: Files in /etc containing empty VCSId header

2021-06-09 Thread Ian Lepore
On Wed, 2021-06-09 at 18:54 +1000, Peter Jeremy via freebsd-current
wrote:
> On 2021-Jun-08 17:13:45 -0600, Ian Lepore  wrote:
> > On Tue, 2021-06-08 at 15:11 -0700, Rodney W. Grimes wrote:
> > > There is a command for that which does, or used to do, a pretty
> > > decent job of it called whereis(1).
> 
> Thanks.  That looks useful.
> 
> > revolution > whereis ntp.conf
> > ntp.conf:
> > revolution > whereis netif
> > netif:
> > revolution > whereis services
> > services:
> > 
> > So how does that help me locate the origin of these files in the
> > source
> > tree?
> 
> It works for me™:
> server% whereis ntp.conf
> ntp.conf: /usr/src/usr.sbin/ntp/ntpd/ntp.conf
> server% whereis netif   
> netif: /usr/src/libexec/rc/rc.d/netif
> server% whereis services
> services: /usr/src/contrib/unbound/services
> 
> Is your source tree somewhere other than /usr/src?
> 

My /usr/src is a symlink to the actual source tree on a different
filesystem (but it is the source tree the running system was built
from).  It seems odd that that would make whereis(1) not work.

-- Ian




Re: ssh connections break with "Fssh_packet_write_wait" on 13 [SOLVED]

2021-06-09 Thread tuexen
> On 9. Jun 2021, at 08:57, Don Lewis  wrote:
> 
> On  8 Jun, Michael Gmelin wrote:
>> 
>> 
>> On Thu, 3 Jun 2021 15:09:06 +0200
>> Michael Gmelin  wrote:
>> 
>>> On Tue, 1 Jun 2021 13:47:47 +0200
>>> Michael Gmelin  wrote:
>>> 
>>>> Hi,
>>>> 
>>>> Since upgrading servers from 12.2 to 13.0, I get
>>>> 
>>>>   Fssh_packet_write_wait: Connection to 1.2.3.4 port 22: Broken pipe
>>>> 
>>>> consistently, usually after about 11 idle minutes, that's with and
>>>> without pf enabled. Client (11.4 in a VM) wasn't altered.
>>>> 
>>>> Verbose logging (client and server side) doesn't show anything
>>>> special when the connection breaks. In the past, QoS problems
>>>> caused these disconnects, but I didn't see anything apparent
>>>> changing between 12.2 and 13 in this respect.
>>>> 
>>>> I did a test on a newly commissioned server to rule out other
>>>> factors (so, same client connections, same routes, same
>>>> everything). On 12.2 before the update: Connection stays open for
>>>> hours. After the update (same server): connections break
>>>> consistently after < 15 minutes (this is with unaltered
>>>> configurations, no *AliveInterval configured on either side of the
>>>> connection).
>>> 
>>> I did a little bit more testing and realized that the problem goes
>>> away when I disable "Proportional Rate Reduction per RFC 6937" on the
>>> server side:
>>> 
>>> sysctl net.inet.tcp.do_prr=0
>>> 
>>> Keeping it on and enabling net.inet.tcp.do_prr_conservative doesn't
>>> fix the problem.
>>> 
>>> This seems to be specific to Parallels. After some more digging, I
>>> realized that Parallels Desktop's NAT daemon (prl_naptd) handles
>>> keep-alive between the VM and the external server on its own. There is
>>> no direct communication between the client and the server. This means:
>>> 
>>> - The NAT daemon starts sending keep-alive packets right away (not
>>> after the VM's net.inet.tcp.keepidle), every 75 seconds.
>>> - Keep-alive packets originating in the VM never reach the server.
>>> - Keep-alive packets originating on the server never reach the VM.
>>> - Client and server basically do keep-alive with the NAT daemon, not
>>> with each other.
>>> 
>>> It also seems like Parallels is filtering the tos field (so it's
>>> always 0x00), but that's unrelated.
>>> 
>>> I configured a bhyve VM running FreeBSD 11.4 on a separate laptop on
>>> the same network for comparison and it has no such issues.
>>> 
>>> Looking at tcpdump output on the server, this is what a keep-alive
>>> packet sent by Parallels looks like:
>>> 
>>> 10:14:42.449681 IP (tos 0x0, ttl 64, id 15689, offset 0, flags
>>> [none], proto TCP (6), length 40)
>>>   192.168.1.1.58222 > 192.168.1.2.22: Flags [.], cksum x (correct),
>>>   seq 2534, ack 3851, win 4096, length 0
>>> 
>>> While those originating from the bhyve VM (after lowering
>>> net.inet.tcp.keepidle) look like this:
>>> 
>>> 12:18:43.105460 IP (tos 0x0, ttl 62, id 0, offset 0, flags [DF],
>>>   proto TCP (6), length 52)
>>>   192.168.1.3.57555 > 192.168.1.2.22: Flags [.], cksum x
>>>   (correct), seq 1780337696, ack 45831723, win 1026, options
>>>   [nop,nop,TS val 3003646737 ecr 3331923346], length 0
>>> 
>>> As written above, once net.inet.tcp.do_prr is disabled, keepalive
>>> seems to be working just fine. Otherwise, Parallels' NAT daemon kills
>>> the connection, as its keep-alive requests are not answered (well,
>>> that's what I think is happening):
>>> 
>>> 10:19:43.614803 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
>>>   proto TCP (6), length 40)
>>>   192.168.1.1.58222 > 192.168.1.2.22: Flags [R.], cksum x (correct),
>>>   seq 2535, ack 3851, win 4096, length 0
>>> 
>>> The easiest way to work around the problem client-side is to
>>> configure ServerAliveInterval in ~/.ssh/config in the client VM.
>>> 
>>> I'm curious though if this is basically a Parallels problem that has
>>> only been exposed by PRR being more correct (which is what I suspect),
>>> or if this is actually a FreeBSD problem.
>>> 
>> 
>> So, PRR probably was a red herring; the real reason this is
>> happening is that FreeBSD (since version 13[0]) by default discards
>> packets without timestamps on connections that had formally
>> negotiated to use them. This new behavior seems to be in line with
>> RFC 7323, section 3.2[1]:
>> 
>>   "Once TSopt has been successfully negotiated, that is both  and
>>contain TSopt, the TSopt MUST be sent in every non-
>>   segment for the duration of the connection, and SHOULD be sent in an
>>segment (see Section 5.2 for details)."
>> 
>> As it turns out, macOS does exactly this: it sends keep-alive
>> packets without a timestamp on connections that negotiated to use
>> them.
> 
> I wonder if I'm running into this with ssh connections to freefall.  My
> outgoing IPv6 connections pass through an ipfw firewall that uses
> dynamic rules.  When the dynamic rule gets close to expiration, it
> generates keep alive packets that just seem to be ignored by freefall.
> Eventually the dynamic rule expires, then sometime later sshd on
> freefall sends a keepalive which gets dropped at my end.

Re: Files in /etc containing empty VCSId header

2021-06-09 Thread Rodney W. Grimes
> On 2021-Jun-08 17:13:45 -0600, Ian Lepore  wrote:
> >On Tue, 2021-06-08 at 15:11 -0700, Rodney W. Grimes wrote:
> >> There is a command for that which does, or used to do, a pretty
> >> decent job of it called whereis(1).
> 
> Thanks.  That looks useful.
> 
> >revolution > whereis ntp.conf
> >ntp.conf:
> >revolution > whereis netif
> >netif:
> >revolution > whereis services
> >services:
> >
> >So how does that help me locate the origin of these files in the source
> >tree?
> 
> It works for me™:
> server% whereis ntp.conf
> ntp.conf: /usr/src/usr.sbin/ntp/ntpd/ntp.conf
> server% whereis netif   
> netif: /usr/src/libexec/rc/rc.d/netif
> server% whereis services
> services: /usr/src/contrib/unbound/services
> 
> Is your source tree somewhere other than /usr/src?

And if the source is not located at /usr/src or /usr/ports, there is
the -S option.
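
For example (the path is just illustrative), to search a source tree
mounted elsewhere:

  whereis -s -S /data/freebsd/src -f ntp.conf

where -f terminates the directory list, per whereis(1).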

> Peter Jeremy
-- 
Rod Grimes rgri...@freebsd.org



Re: OpenZFS Encryption: Docs, and re Metadata Leaks

2021-06-09 Thread Michael Gmelin


> On 9. Jun 2021, at 04:17, grarpamp  wrote:
> 
> On 4/17/20, Ryan Moeller  wrote:
>> 
>> On Apr 17, 2020, at 4:56 PM, Pete Wright  wrote:
>>> 
>>> On 4/17/20 11:35 AM, Ryan Moeller wrote:
>>>> OpenZFS brings many exciting features to FreeBSD, including:
>>>> * native encryption
>>> Is there a good doc reference on available for using this?  I believe this
>>> is zfs filesystem level encryption and not a replacement for our existing
>>> full-disk-encryption scheme that currently works?
>> 

I found this to be a useful starting point:

https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/#What-s-encrypted
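
As a quick sketch of the dataset-level workflow described there (pool
and dataset names are made up, and the syntax is from the current
OpenZFS docs rather than from that post):

  zfs create -o encryption=on -o keyformat=passphrase tank/secure
  zfs get encryption,keystatus tank/secure
  # after a reboot, load the key and mount:
  zfs load-key tank/secure && zfs mount tank/secure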

-m


>> I’m not aware of a good current doc for this. If anyone finds/writes
>> something, please post it!
>> There are some old resources you can find with a quick search that do a
>> pretty good job of covering the basic ideas, but I think the exact syntax of
>> commands may be slightly changed in the final implementation.
>> 
>> The encryption is performed at a filesystem level (per-dataset).
> 
> 
> You can find some initial docs and videos about zfs encryption
> on openzfs.org and youtube, and in some commit logs.
> Therein it was mentioned...
> 
> People are needed to volunteer to expand documentation on the
> zfs crypto subject further in some document committed to the openzfs
> repo, since users and orgs will want to know it when considering use.
> Volunteers are also sought by openzfs to review the crypto itself.
> 
> Maybe there was already some central place made with further
> current documentation about the zfs encryption topics since then?
> 
> https://www.youtube.com/watch?v=frnLiXclAMo openzfs encryption
> https://drive.google.com/file/d/0B5hUzsxe4cdmU3ZTRXNxa2JIaDQ/view
> https://www.youtube.com/watch?v=kFuo5bDj8C0 openzfs encryption cloud
> https://drive.google.com/file/d/14uIZmJ48AfaQU4q69tED6MJf-RJhgwQR/view
> 
> It's dataset level, so GELI or GBDE etc are needed for full coverage,
> perhaps even those two may not have yet received much or formal
> review either, so there is always good volunteer opportunity there
> to start with review of a potentially simpler cryptosystem like those.
> 
> zfs list, dataset/snapshot names, properties, etc. are not covered.
> 
> zfs history is not covered; many sensitive strings will end up in
> there, including passwords cut-and-pasted into the command line by
> mistake, usernames, timestamps, etc...
> No tool exists to scrub/overwrite history extents with random data,
> no option exists to turn off the keeping of
> 'user initiated events' or 'long format',
> and ultimately no option exists to tell zpool create to
> disable history keeping from the very start entirely.
> So maybe users have to zero disks and pools along
> with it just to scrub that.
> 
> zfs also exposes a variety of path and device names,
> timestamps, etc in cleartext in on-disk structures in various
> places, including the configuration cachefile...
> Some of those could be NULLed or dummied
> with new zpool create options for more security-restricted
> use cases.
> 
> There are other meta things and tools left exposed
> such as potentially any plaintext meta in send/recv.
> 
> Another big metadata leak for environments and users that
> ship, sell, embed, clone, distribute, fileshare, and back up
> their raw disks, pools, and USBs around to untrusted third parties...
> is that zfs also puts hostnames and UUID-type unique
> static identifying metadata in cleartext on disk.
> zfs thus needs options to allow users to set and use a NULL,
> generic dummy default, random, or chosen
> "hostname" for those from the very first zpool create command.
> 
> Most applications users use, including zfs, can today
> consider ways in which metadata leaks could be removed
> entirely, or at least optioned out for use in
> high-security restricted environments.
> That could even involve trading off some
> extra features not actually required for a basic mode
> of functionality of the app.
> 
> 
> 
> (cc's for fyi inclusion about leaks, and as lists still haven't
> been configured to support discreet bcc for that purpose,
> which would also maintain nice headers for thread following.
> Gmail breaks threads. zfsonlinux topicbox people can't subscribe
> without a javabloat-broken website, so someone could forward
> this there. Drop non-relevant cc's from further replies.
> Parent thread from freebsd current and stable lists was
> Subject: OpenZFS port updated)
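
Regarding the point above about GELI being needed for full coverage: a
rough sketch of layering ZFS on a GELI provider (device and pool names
are illustrative, and the flags are just the common ones, not a
recommendation):

  geli init -s 4096 /dev/ada0p4    # initialize; prompts for passphrase
  geli attach /dev/ada0p4          # creates /dev/ada0p4.eli
  zpool create tank /dev/ada0p4.eli

This encrypts everything in the pool, including the metadata that
dataset-level encryption leaves in the clear.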




Re: Files in /etc containing empty VCSId header

2021-06-09 Thread Michael Gmelin



> On 9. Jun 2021, at 01:15, Ian Lepore  wrote:
> 
> On Tue, 2021-06-08 at 15:11 -0700, Rodney W. Grimes wrote:
>>> On Tue, 8 Jun 2021 09:41:34 +
>>> Mark Linimon  wrote:
>>> 
 On Mon, Jun 07, 2021 at 01:58:01PM -0600, Ian Lepore wrote:
> Sometimes it's a real interesting exercise to figure out where
> a
> file on your runtime system comes from in the source world.  
>> 
>> There is a command for that which does, or used to do, a pretty
>> decent job of it called whereis(1).
>> 
> 
> revolution > whereis ntp.conf
> ntp.conf:
> revolution > whereis netif
> netif:

That line might make it to a shirt one day:

> revolution > whereis services

;)
Michael





Re: Files in /etc containing empty VCSId header

2021-06-09 Thread Peter Jeremy via freebsd-current
On 2021-Jun-08 17:13:45 -0600, Ian Lepore  wrote:
>On Tue, 2021-06-08 at 15:11 -0700, Rodney W. Grimes wrote:
>> There is a command for that which does, or used to do, a pretty
>> decent job of it called whereis(1).

Thanks.  That looks useful.

>revolution > whereis ntp.conf
>ntp.conf:
>revolution > whereis netif
>netif:
>revolution > whereis services
>services:
>
>So how does that help me locate the origin of these files in the source
>tree?

It works for me™:
server% whereis ntp.conf
ntp.conf: /usr/src/usr.sbin/ntp/ntpd/ntp.conf
server% whereis netif   
netif: /usr/src/libexec/rc/rc.d/netif
server% whereis services
services: /usr/src/contrib/unbound/services

Is your source tree somewhere other than /usr/src?

-- 
Peter Jeremy




Re: ssh connections break with "Fssh_packet_write_wait" on 13 [SOLVED]

2021-06-09 Thread Rodney W. Grimes
> On  8 Jun, Michael Gmelin wrote:
> > 
> > 
> > On Thu, 3 Jun 2021 15:09:06 +0200
> > Michael Gmelin  wrote:
> > 
> >> On Tue, 1 Jun 2021 13:47:47 +0200
> >> Michael Gmelin  wrote:
> >> 
> >> > Hi,
> >> > 
> >> > Since upgrading servers from 12.2 to 13.0, I get
> >> > 
> >> >   Fssh_packet_write_wait: Connection to 1.2.3.4 port 22: Broken pipe
> >> > 
> >> > consistently, usually after about 11 idle minutes, that's with and
> >> > without pf enabled. Client (11.4 in a VM) wasn't altered.
> >> > 
> >> > Verbose logging (client and server side) doesn't show anything
> >> > special when the connection breaks. In the past, QoS problems
> >> > caused these disconnects, but I didn't see anything apparent
> >> > changing between 12.2 and 13 in this respect.
> >> > 
> >> > I did a test on a newly commissioned server to rule out other
> >> > factors (so, same client connections, same routes, same
> >> > everything). On 12.2 before the update: Connection stays open for
> >> > hours. After the update (same server): connections break
> >> > consistently after < 15 minutes (this is with unaltered
> >> > configurations, no *AliveInterval configured on either side of the
> >> > connection). 
> >> 
> >> I did a little bit more testing and realized that the problem goes
> >> away when I disable "Proportional Rate Reduction per RFC 6937" on the
> >> server side:
> >> 
> >>   sysctl net.inet.tcp.do_prr=0
> >> 
> >> Keeping it on and enabling net.inet.tcp.do_prr_conservative doesn't
> >> fix the problem.
> >> 
> >> This seems to be specific to Parallels. After some more digging, I
> >> realized that Parallels Desktop's NAT daemon (prl_naptd) handles
> >> keep-alive between the VM and the external server on its own. There is
> >> no direct communication between the client and the server. This means:
> >> 
> >> - The NAT daemon starts sending keep-alive packets right away (not
> >>   after the VM's net.inet.tcp.keepidle), every 75 seconds.
> >> - Keep-alive packets originating in the VM never reach the server.
> >> - Keep-alive packets originating on the server never reach the VM.
> >> - Client and server basically do keep-alive with the NAT daemon, not
> >>   with each other.
> >> 
> >> It also seems like Parallels is filtering the tos field (so it's
> >> always 0x00), but that's unrelated.
> >> 
> >> I configured a bhyve VM running FreeBSD 11.4 on a separate laptop on
> >> the same network for comparison and it has no such issues.
> >> 
> >> Looking at tcpdump output on the server, this is what a keep-alive
> >> packet sent by Parallels looks like:
> >> 
> >>   10:14:42.449681 IP (tos 0x0, ttl 64, id 15689, offset 0, flags
> >> [none], proto TCP (6), length 40)
> >> 192.168.1.1.58222 > 192.168.1.2.22: Flags [.], cksum x (correct),
> >> seq 2534, ack 3851, win 4096, length 0
> >> 
> >> While those originating from the bhyve VM (after lowering
> >> net.inet.tcp.keepidle) look like this:
> >> 
> >>   12:18:43.105460 IP (tos 0x0, ttl 62, id 0, offset 0, flags [DF],
> >> proto TCP (6), length 52)
> >> 192.168.1.3.57555 > 192.168.1.2.22: Flags [.], cksum x
> >> (correct), seq 1780337696, ack 45831723, win 1026, options
> >> [nop,nop,TS val 3003646737 ecr 3331923346], length 0
> >> 
> >> As written above, once net.inet.tcp.do_prr is disabled, keepalive
> >> seems to be working just fine. Otherwise, Parallels' NAT daemon kills
> >> the connection, as its keep-alive requests are not answered (well,
> >> that's what I think is happening):
> >> 
> >>   10:19:43.614803 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
> >> proto TCP (6), length 40)
> >> 192.168.1.1.58222 > 192.168.1.2.22: Flags [R.], cksum x (correct),
> >> seq 2535, ack 3851, win 4096, length 0
> >> 
> >> The easiest way to work around the problem client-side is to
> >> configure ServerAliveInterval in ~/.ssh/config in the client VM.
> >> 
> >> I'm curious though if this is basically a Parallels problem that has
> >> only been exposed by PRR being more correct (which is what I suspect),
> >> or if this is actually a FreeBSD problem.
> >> 
> > 
> > So, PRR probably was a red herring; the real reason this is
> > happening is that FreeBSD (since version 13[0]) by default discards
> > packets without timestamps on connections that had formally
> > negotiated to use them. This new behavior seems to be in line with
> > RFC 7323, section 3.2[1]:
> > 
> > "Once TSopt has been successfully negotiated, that is both  and
> >  contain TSopt, the TSopt MUST be sent in every non-
> > segment for the duration of the connection, and SHOULD be sent in an
> >  segment (see Section 5.2 for details)."
> > 
> > As it turns out, macOS does exactly this: it sends keep-alive
> > packets without a timestamp on connections that negotiated to use
> > them.
> 
> I wonder if I'm running into this with ssh connections to freefall.  My
> outgoing IPv6 connections pass through an ipfw firewall that uses
> dynamic rules.  When the dynamic rule gets close to expiration, it
> generates keep-alive packets that just seem to be ignored by freefall.
> Eventually the dynamic rule expires, then sometime later sshd on
> freefall sends a keepalive which gets dropped at my end.

Re: ssh connections break with "Fssh_packet_write_wait" on 13 [SOLVED]

2021-06-09 Thread Don Lewis
On  8 Jun, Michael Gmelin wrote:
> 
> 
> On Thu, 3 Jun 2021 15:09:06 +0200
> Michael Gmelin  wrote:
> 
>> On Tue, 1 Jun 2021 13:47:47 +0200
>> Michael Gmelin  wrote:
>> 
>> > Hi,
>> > 
>> > Since upgrading servers from 12.2 to 13.0, I get
>> > 
>> >   Fssh_packet_write_wait: Connection to 1.2.3.4 port 22: Broken pipe
>> > 
>> > consistently, usually after about 11 idle minutes, that's with and
>> > without pf enabled. Client (11.4 in a VM) wasn't altered.
>> > 
>> > Verbose logging (client and server side) doesn't show anything
>> > special when the connection breaks. In the past, QoS problems
>> > caused these disconnects, but I didn't see anything apparent
>> > changing between 12.2 and 13 in this respect.
>> > 
>> > I did a test on a newly commissioned server to rule out other
>> > factors (so, same client connections, same routes, same
>> > everything). On 12.2 before the update: Connection stays open for
>> > hours. After the update (same server): connections break
>> > consistently after < 15 minutes (this is with unaltered
>> > configurations, no *AliveInterval configured on either side of the
>> > connection). 
>> 
>> I did a little bit more testing and realized that the problem goes
>> away when I disable "Proportional Rate Reduction per RFC 6937" on the
>> server side:
>> 
>>   sysctl net.inet.tcp.do_prr=0
>> 
>> Keeping it on and enabling net.inet.tcp.do_prr_conservative doesn't
>> fix the problem.
>> 
>> This seems to be specific to Parallels. After some more digging, I
>> realized that Parallels Desktop's NAT daemon (prl_naptd) handles
>> keep-alive between the VM and the external server on its own. There is
>> no direct communication between the client and the server. This means:
>> 
>> - The NAT daemon starts sending keep-alive packets right away (not
>>   after the VM's net.inet.tcp.keepidle), every 75 seconds.
>> - Keep-alive packets originating in the VM never reach the server.
>> - Keep-alive packets originating on the server never reach the VM.
>> - Client and server basically do keep-alive with the NAT daemon, not
>>   with each other.
>> 
>> It also seems like Parallels is filtering the tos field (so it's
>> always 0x00), but that's unrelated.
>> 
>> I configured a bhyve VM running FreeBSD 11.4 on a separate laptop on
>> the same network for comparison and it has no such issues.
>> 
>> Looking at tcpdump output on the server, this is what a keep-alive
>> packet sent by Parallels looks like:
>> 
>>   10:14:42.449681 IP (tos 0x0, ttl 64, id 15689, offset 0, flags
>> [none], proto TCP (6), length 40)
>> 192.168.1.1.58222 > 192.168.1.2.22: Flags [.], cksum x (correct),
>> seq 2534, ack 3851, win 4096, length 0
>> 
>> While those originating from the bhyve VM (after lowering
>> net.inet.tcp.keepidle) look like this:
>> 
>>   12:18:43.105460 IP (tos 0x0, ttl 62, id 0, offset 0, flags [DF],
>> proto TCP (6), length 52)
>> 192.168.1.3.57555 > 192.168.1.2.22: Flags [.], cksum x
>> (correct), seq 1780337696, ack 45831723, win 1026, options
>> [nop,nop,TS val 3003646737 ecr 3331923346], length 0
>> 
>> As written above, once net.inet.tcp.do_prr is disabled, keepalive
>> seems to be working just fine. Otherwise, Parallels' NAT daemon kills
>> the connection, as its keep-alive requests are not answered (well,
>> that's what I think is happening):
>> 
>>   10:19:43.614803 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
>> proto TCP (6), length 40)
>> 192.168.1.1.58222 > 192.168.1.2.22: Flags [R.], cksum x (correct),
>> seq 2535, ack 3851, win 4096, length 0
>> 
>> The easiest way to work around the problem client-side is to
>> configure ServerAliveInterval in ~/.ssh/config in the client VM.
>> 
>> I'm curious though if this is basically a Parallels problem that has
>> only been exposed by PRR being more correct (which is what I suspect),
>> or if this is actually a FreeBSD problem.
>> 
> 
> So, PRR probably was a red herring; the real reason this is
> happening is that FreeBSD (since version 13[0]) by default discards
> packets without timestamps on connections that had formally
> negotiated to use them. This new behavior seems to be in line with
> RFC 7323, section 3.2[1]:
> 
> "Once TSopt has been successfully negotiated, that is both  and
>  contain TSopt, the TSopt MUST be sent in every non-
> segment for the duration of the connection, and SHOULD be sent in an
>  segment (see Section 5.2 for details)."
> 
> As it turns out, macOS does exactly this: it sends keep-alive
> packets without a timestamp on connections that negotiated to use
> them.

I wonder if I'm running into this with ssh connections to freefall.  My
outgoing IPv6 connections pass through an ipfw firewall that uses
dynamic rules.  When the dynamic rule gets close to expiration, it
generates keep-alive packets that just seem to be ignored by freefall.
Eventually the dynamic rule expires, then sometime later sshd on
freefall sends a keepalive which gets dropped at my end.
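
That would be consistent with the timestamp explanation: ipfw's
synthetic keepalives presumably don't carry the TS option the
connection negotiated, so the 13.x peer drops them. A few knobs worth
checking (a sketch; names are from the stock sysctl trees, verify on
your systems):

  # on the FreeBSD 13 peer: accept segments without TCP timestamps
  sysctl net.inet.tcp.tolerate_missing_ts=1

  # on the ipfw box: dynamic-rule keepalives and lifetime
  sysctl net.inet.ip.fw.dyn_keepalive
  sysctl net.inet.ip.fw.dyn_ack_lifetime

And on the client side, the workaround Michael describes is a couple
of lines in ~/.ssh/config:

  Host *
      ServerAliveInterval 60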