daily CVS update output

2022-09-16 Thread NetBSD source update


Updating src tree:
P src/bin/sh/sh.1
P src/sys/dev/pci/files.pci
P src/sys/dev/pci/if_aq.c
P src/sys/dev/pci/if_ixl.c
P src/sys/dev/pci/if_lii.c
P src/sys/dev/pci/if_vmx.c
P src/sys/dev/pci/ixgbe/ix_txrx.c
P src/sys/dev/pci/ixgbe/ixgbe_netbsd.h
P src/sys/dev/usb/if_ure.c
P src/sys/dev/usb/usb.h
P src/sys/external/bsd/acpica/dist/dispatcher/dswexec.c

Updating xsrc tree:
P xsrc/external/mit/libXft/dist/ChangeLog
P xsrc/external/mit/libXft/dist/NEWS
P xsrc/external/mit/libXft/dist/configure
P xsrc/external/mit/libXft/dist/configure.ac
P xsrc/external/mit/libXft/dist/include/X11/Xft/Xft.h
P xsrc/external/mit/libXft/dist/src/xftextent.c
P xsrc/external/mit/libXft/dist/src/xftrender.c


Killing core files:


Updating tar files:
src/top-level: collecting... replacing... done
src/bin: collecting... replacing... done
src/common: collecting... replacing... done
src/compat: collecting... replacing... done
src/crypto: collecting... replacing... done
src/dist: collecting... replacing... done
src/distrib: collecting... replacing... done
src/doc: collecting... replacing... done
src/etc: collecting... replacing... done
src/external: collecting... replacing... done
src/games: collecting... replacing... done
src/include: collecting... replacing... done
src/lib: collecting... replacing... done
src/libexec: collecting... replacing... done
src/regress: collecting... replacing... done
src/rescue: collecting... replacing... done
src/sbin: collecting... replacing... done
src/share: collecting... replacing... done
src/sys: collecting... replacing... done
src/tests: collecting... replacing... done
src/tools: collecting... replacing... done
src/usr.bin: collecting... replacing... done
src/usr.sbin: collecting... replacing... done
src/config: collecting... replacing... done
src: collecting... replacing... done
xsrc/top-level: collecting... replacing... done
xsrc/external: collecting... replacing... done
xsrc/local: collecting... replacing... done
xsrc: collecting... replacing... done



Updating release-8 src tree (netbsd-8):
U doc/CHANGES-8.3
P share/man/man4/mfii.4
P sys/arch/x86/x86/procfs_machdep.c
P sys/dev/ic/mfireg.h
P sys/dev/pci/mfii.c
P sys/dev/usb/xhci.c

Updating release-8 xsrc tree (netbsd-8):


Updating release-8 tar files:
src/top-level: collecting... replacing... done
src/bin: collecting... replacing... done
src/common: collecting... replacing... done
src/compat: collecting... replacing... done
src/crypto: collecting... replacing... done
src/dist: collecting... replacing... done
src/distrib: collecting... replacing... done
src/doc: collecting... replacing... done
src/etc: collecting... replacing... done
src/external: collecting... replacing... done
src/extsrc: collecting... replacing... done
src/games: collecting... replacing... done
src/include: collecting... replacing... done
src/lib: collecting... replacing... done
src/libexec: collecting... replacing... done
src/regress: collecting... replacing... done
src/rescue: collecting... replacing... done
src/sbin: collecting... replacing... done
src/share: collecting... replacing... done
src/sys: collecting... replacing... done
src/tests: collecting... replacing... done
src/tools: collecting... replacing... done
src/usr.bin: collecting... replacing... done
src/usr.sbin: collecting... replacing... done
src/config: collecting... replacing... done
src: collecting... replacing... done
xsrc/top-level: collecting... replacing... done
xsrc/external: collecting... replacing... done
xsrc/local: collecting... replacing... done
xsrc: collecting... replacing... done



Updating release-9 src tree (netbsd-9):
U doc/CHANGES-9.4
P share/man/man4/mfii.4
P sys/arch/x86/x86/procfs_machdep.c
P sys/dev/ic/mfireg.h
P sys/dev/pci/mfii.c
P sys/dev/usb/xhci.c

Updating release-9 xsrc tree (netbsd-9):


Updating release-9 tar files:
src/top-level: collecting... replacing... done
src/bin: collecting... replacing... done
src/common: collecting... replacing... done
src/compat: collecting... replacing... done
src/crypto: collecting... replacing... done
src/dist: collecting... replacing... done
src/distrib: collecting... replacing... done
src/doc: collecting... replacing... done
src/etc: collecting... replacing... done
src/external: collecting... replacing... done
src/extsrc: collecting... replacing... done
src/games: collecting... replacing... done
src/include: collecting... replacing... done
src/lib: collecting... replacing... done
src/libexec: collecting... replacing... done
src/regress: collecting... replacing... done
src/rescue: collecting... replacing... done
src/sbin: collecting... replacing... done
src/share: collecting... replacing... done
src/sys: collecting... replacing... done
src/tests: collecting... replacing... done
src/tools: collecting... replacing... done
src/usr.bin: collecting... replacing... done
src/usr.sbin: collecting... replacing... done
src/config: collecting... replacing... done
src: collecting... replacing... done
xsrc/top-level: collecting... replacing... done

Re: Any TCP VTW users?

2022-09-16 Thread Thor Lancelot Simon
On Thu, Sep 15, 2022 at 04:33:09PM +0900, Ryota Ozaki wrote:
> Hi,
> 
> Are there any users of TCP Vestigial Time-Wait (VTW)?
> The feature is disabled by default and has to be explicitly
> enabled via sysctl before it is used.
> 
> I just want to know if we should still maintain it.

Have you read the original discussion at
https://mail-index.netbsd.org/tech-net/2011/04/08/msg002566.html ?  I
believe the rationale for this feature is still valid, and I think it
is unfortunate it was never enabled by default, and thus seems to have
rotted.
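For anyone who wants to experiment, a minimal sketch of turning VTW on at
run time follows; the sysctl node name below is an assumption from memory,
so verify it against a kernel actually built with "options TCP_VTW":

```shell
# Sketch only: enabling TCP Vestigial Time-Wait on NetBSD.
# Requires a kernel configured with "options TCP_VTW"; the node name
# "net.inet.tcp.vtw.enable" is an assumption -- confirm the real
# name on your system with:
#   sysctl net.inet.tcp
sysctl -w net.inet.tcp.vtw.enable=1
# Persist across reboots via the standard boot-time config file:
echo 'net.inet.tcp.vtw.enable=1' >> /etc/sysctl.conf
```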

Thor


Re: Any TCP VTW users?

2022-09-16 Thread Thor Lancelot Simon
On Fri, Sep 16, 2022 at 10:33:31AM +0900, Ryota Ozaki wrote:
> 
> Thank you for the report!
> 
> Just curious. Does it improve performance? (or reduce CPU/memory usage?)

The Coyote Point loadbalancers couldn't survive our load testing (which
was modeled on traces of real world workloads) without it.  But Coyote
Point was bought by Fortinet and I believe the NetBSD-based firmware is
long gone.

Coyote Point did much more with the fat pointer stuff, but I don't think any
of it even got into the internal tree before they were purchased and
basically shut down.  A lot of it was actually aimed at connection placement
to improve concurrency within the stack - it is not surprising that
looking at only a small fragment of it makes that much _harder_.

Thor


Re: macppc system wedging under memory pressure

2022-09-16 Thread Paul Ripke
On Fri, Sep 16, 2022 at 08:02:07PM -0400, Michael wrote:
> Hello,
> 
> On Fri, 16 Sep 2022 19:41:44 +0100
> Mike Pumford  wrote:
> 
> > I've been running my build system (an 8-core amd64 system with 16GB of
> > RAM) with:
> > 
> > vm.filemax=10
> > vm.filemin=1
> > 
> > So it's not just SMALL systems that need better tuning.
> > 
> > Before I set those I found that the system would prioritise file cache
> > so much that any large process that ran for a long time would be forced
> > to swap out so much that it would then take ages to recover. In my
> > case that was the jenkins process that was managing the build, leading to
> > lots of failed builds as the jenkins process fell apart. Setting those
> > limits meant the file cache got evicted instead of the jenkins process.
> > 
> > I also found the same settings kept things like firefox from getting
> > swapped out during builds.
> 
> I've seen the same thing on a sparc64 with 12GB RAM - firefox and claws
> would get swapped out while the buffer cache would stay at 8GB or more,
> with a couple cc1plus instances fighting over the remaining RAM.

Yup; my amd64 16GiB general purpose system tends to run some RAM heavy
apps (builds, java, firefox, blender, prusaslicer, ...), so I've had these
tweaks for many years:

vm.anonmin=50
vm.filemin=5
vm.filemax=15
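For anyone copying these, a sketch of applying and persisting them with the
stock sysctl(8) mechanism; the values are percentages of physical RAM:

```shell
# Apply the tuning immediately:
sysctl -w vm.anonmin=50   # don't reclaim anonymous (process) pages while
                          # they hold less than 50% of RAM
sysctl -w vm.filemin=5    # but always keep a small floor of file cache
sysctl -w vm.filemax=15   # reclaim file-cache pages first once the cache
                          # exceeds 15% of RAM
# Persist across reboots; /etc/sysctl.conf is read at boot:
printf '%s\n' 'vm.anonmin=50' 'vm.filemin=5' 'vm.filemax=15' >> /etc/sysctl.conf
```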

Cheers,
-- 
Paul Ripke
"Great minds discuss ideas, average minds discuss events, small minds
 discuss people."
-- Disputed: Often attributed to Eleanor Roosevelt. 1948.


Re: macppc system wedging under memory pressure

2022-09-16 Thread Michael
Hello,

On Fri, 16 Sep 2022 19:41:44 +0100
Mike Pumford  wrote:

> I've been running my build system (an 8-core amd64 system with 16GB of
> RAM) with:
> 
> vm.filemax=10
> vm.filemin=1
> 
> So it's not just SMALL systems that need better tuning.
> 
> Before I set those I found that the system would prioritise file cache
> so much that any large process that ran for a long time would be forced
> to swap out so much that it would then take ages to recover. In my
> case that was the jenkins process that was managing the build, leading to
> lots of failed builds as the jenkins process fell apart. Setting those
> limits meant the file cache got evicted instead of the jenkins process.
> 
> I also found the same settings kept things like firefox from getting
> swapped out during builds.

I've seen the same thing on a sparc64 with 12GB RAM - firefox and claws
would get swapped out while the buffer cache would stay at 8GB or more,
with a couple cc1plus instances fighting over the remaining RAM.

have fun
Michael


Re: macppc system wedging under memory pressure

2022-09-16 Thread Mike Pumford




On 16/09/2022 06:14, Lloyd Parkes wrote:
> You aren't the first person to have problems with memory pressure. We
> really are going to have to get around to documenting the memory
> management algorithms and all the tuning knobs.
> 
> I used to use this page (https://imil.net/NetBSD/mirror/vm_tune.html),
> but I have no idea how current it is. Also, I haven't used my smaller
> systems for a while now.
> 
> In the past, I used to set vm.filemax to 5 because I never want a page
> that I can simply reread to force an anonymous page to be written out to
> swap.


I've been running my build system (an 8-core amd64 system with 16GB of
RAM) with:

vm.filemax=10
vm.filemin=1

So it's not just SMALL systems that need better tuning.

Before I set those I found that the system would prioritise file cache
so much that any large process that ran for a long time would be forced
to swap out so much that it would then take ages to recover. In my
case that was the jenkins process that was managing the build, leading to
lots of failed builds as the jenkins process fell apart. Setting those
limits meant the file cache got evicted instead of the jenkins process.

I also found the same settings kept things like firefox from getting
swapped out during builds.


This is all on 9.3 stable, and all other vm.* settings are at their
defaults.
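The full set of related knobs can be listed with sysctl; there are matching
min/max pairs for anonymous, executable and file-cache pages:

```shell
# Show the current page-replacement tuning (percentages of RAM):
sysctl vm.anonmin vm.anonmax vm.execmin vm.execmax vm.filemin vm.filemax
```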


Mike