daily CVS update output

2017-09-20 Thread NetBSD source update

Updating src tree:
P src/distrib/sets/lists/tests/mi
P src/share/misc/acronyms.comp
P src/sys/arch/arm/nvidia/tegra210_xusbpad.c
P src/sys/arch/macppc/stand/fixcoff/elf32_powerpc_merge.x
P src/sys/dev/i2c/adm1021.c
P src/tests/net/route/Makefile
U src/tests/net/route/t_rtcache.sh
P src/usr.bin/stat/stat.1
P src/usr.bin/stat/stat.c
P src/usr.bin/sys_info/sys_info.1
P src/usr.bin/sys_info/sys_info.sh

Updating xsrc tree:


Killing core files:




Updating file list:
-rw-rw-r--  1 srcmastr  netbsd  49145655 Sep 21 03:05 ls-lRA.gz


Re: Automated report: NetBSD-current/i386 test failure

2017-09-20 Thread Robert Elz
Date: Wed, 20 Sep 2017 21:06:51 +0000 (UTC)
From: NetBSD Test Fixture
Message-ID: <150594161124.25268.1441500354190307...@babylon5.netbsd.org>

  | This is an automatically generated notice of new failures of the
  | NetBSD test suite.
  | 
  | The newly failing test cases are:
  | 
  | bin/dd/t_dd:seek
  | fs/tmpfs/t_create:attrs

These should be fixed (fallout from the stat breakage).

kre



Re: ssh, HPN extension and TCP auto-tuning

2017-09-20 Thread Swift Griggs

On Wed, 20 Sep 2017, Havard Eidnes wrote:
> the OpenSSH in NetBSD has for quite a while had the "high-performance
> networking" patches applied.


I wasn't aware of this. That's good to know. I've often wondered what 
patch sets we apply to NetBSD's SSH implementation.


> However, when you copy "in the other direction", i.e. when the remote
> sshd is the one which is pushing the file across the network, we get an
> average of 8.4MB/s when copying a 143MB large file,


What is on the other side? NetBSD or something else? From the context, I'm 
assuming "something else" and that the something-else has the HPN patches 
also.


> and a tcpdump + tcptrace reveals that in this case the system's
> automatic tuning of the TCP window is indeed kicking into action.


This could be the result of some kind of router issue. Perhaps the side 
that's not working is the victim of some kind of misconfiguration. Some 
possibilities are:



* One of the routers/firewalls is doing some reassembly or "cleansing"
  of the TCP frames and preventing something like window scaling from
  working.

* Path MTU discovery is failing because something on the path is
  blocking the ICMP "fragmentation needed" messages, so your MTU is
  artificially smaller in one direction.

* The connection on one side is being proxied in such a way that doesn't
  allow the optimizations to kick in.

I'm sure there are a few I'm not thinking of, too.

> send space, performance improves, but again, TCP auto-tuning does not
> appear to be kicking in. Am I alone in seeing this?


I have to say that I've had a lot of weird issues with auto-tuning on 
NetBSD, but I've never had the right test case to submit a PR over it. 
It's just... squirrelly. I can't give enough details to be helpful, but 
no, you aren't alone in seeing it. I've noticed similar issues, but never 
narrowed it down enough to blame NetBSD.


-Swift


Automated report: NetBSD-current/i386 test failure

2017-09-20 Thread NetBSD Test Fixture
This is an automatically generated notice of new failures of the
NetBSD test suite.

The newly failing test cases are:

bin/dd/t_dd:seek
fs/tmpfs/t_create:attrs
fs/tmpfs/t_devices:basic
fs/tmpfs/t_link:basic
fs/tmpfs/t_link:subdirs
fs/tmpfs/t_mkdir:attrs
fs/tmpfs/t_mkdir:many
fs/tmpfs/t_mkdir:single
fs/tmpfs/t_mknod:block
fs/tmpfs/t_mknod:char
fs/tmpfs/t_mknod:pipe
fs/tmpfs/t_mount:attrs
fs/tmpfs/t_rename:dotdot
fs/tmpfs/t_rmdir:links
fs/tmpfs/t_rmdir:single
fs/tmpfs/t_setattr:chgrp
fs/tmpfs/t_setattr:chown
fs/tmpfs/t_setattr:chowngrp
fs/tmpfs/t_setattr:chtimes
fs/tmpfs/t_sizes:overwrite
fs/tmpfs/t_times:empty
fs/tmpfs/t_times:link
fs/tmpfs/t_times:non_empty
fs/tmpfs/t_times:rename

The above tests failed in each of the last 3 test runs, and passed in
at least 27 consecutive runs before that.

The following commits were made between the last successful test and
the failed test:

2017.09.19.20.45.09 jmcneill src/sys/arch/arm/nvidia/tegra210_car.c,v 1.2
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/Attic/tegra_xusbpadreg.h,v 1.3
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/files.tegra,v 1.42
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra124_xusbpad.c,v 1.1
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra124_xusbpadreg.h,v 1.1
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_ahcisata.c,v 1.11
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_var.h,v 1.40
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_xusb-fw.mk,v 1.2
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_xusb.c,v 1.7
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_xusbpad.c,v 1.6
2017.09.19.20.46.12 jmcneill src/sys/arch/arm/nvidia/tegra_xusbpad.h,v 1.1
2017.09.19.20.46.12 jmcneill src/sys/arch/evbarm/conf/TEGRA,v 1.28
2017.09.19.21.45.28 christos src/usr.bin/stat/stat.1,v 1.39
2017.09.19.21.45.28 christos src/usr.bin/stat/stat.c,v 1.39

Log files can be found at:


http://releng.NetBSD.org/b5reports/i386/commits-2017.09.html#2017.09.19.21.45.28


Re: ssh, HPN extension and TCP auto-tuning

2017-09-20 Thread Brian Buhrow
Hello.  I spent quite a bit of time looking at this under NetBSD-5 and
discovered that the default ssh settings, along with the default TCP
network settings, precluded the adaptive network tuning from working.
As a result, I've added the following lines to the ssh configs as well as
the sysctl.conf files on the NetBSD-5 hosts we manage.  As far as I've
been able to tell, we've realized good performance gains as a result
of these changes.  Unless there have been a lot of regressions, I have no
reason to believe that these settings won't yield similar performance
improvements under NetBSD-7 and NetBSD-8.


Here are the fixes I came up with.


# Improves TCP performance significantly with ssh.
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
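
The same tuning can be applied to a running system without a reboot; a
sketch (these are the sysctl names listed above, values in bytes):

```shell
# Apply the tuning to the running kernel with sysctl -w; keep the lines
# above in /etc/sysctl.conf so they persist across reboots.
sysctl -w net.inet.tcp.recvbuf_auto=1
sysctl -w net.inet.tcp.sendbuf_auto=1
sysctl -w net.inet.tcp.sendbuf_max=16777216   # 16 MB cap
sysctl -w net.inet.tcp.recvbuf_max=16777216
```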




# Put the following lines in both /etc/ssh/ssh_config
# and /etc/ssh/sshd_config

#Enable High Performance Networking options (BB 12/27/2010)

#Turn on HPN features
HPNDisabled no

#Allow 5MB of ssh window buffer
HPNBufferSize 5000 

#Enable dynamic window sizing of SSH buffers 
#You must have tcp autotuning turned on in the kernel for this to work
TcpRcvBufPoll yes


ssh, HPN extension and TCP auto-tuning

2017-09-20 Thread Havard Eidnes
Hi,

the OpenSSH in NetBSD has for quite a while had the "high-
performance networking" patches applied.

However, despite this, we are observing rather low performance
when copying files over a distance, e.g. we have a pair of hosts
running netbsd-7 code, placed some 14-15ms apart, where scp'ing a
file only manages to give around 2.6MB/s.

Doing a tcpdump and an analysis using tcptrace + looking at the
result with xplot reveals that the TCP window never climbs above
the default 32KB size.
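
For what it's worth, a 32KB window over a 14-15ms path is by itself
enough to explain a rate in that ballpark; a back-of-the-envelope check
(the 14.5ms RTT is an assumed mid-point of the stated range):

```shell
# TCP throughput is bounded by window / RTT when window-limited.
awk 'BEGIN {
  window = 32 * 1024        # bytes: the default, never-growing window
  rtt    = 0.0145           # seconds: mid-point of the 14-15 ms path
  printf "window-limited throughput: %.1f MB/s\n", window / rtt / 1e6
}'
# prints roughly 2.3 MB/s, close to the ~2.6 MB/s observed
```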

This is when the scp client is pushing the file to the remote
server.

However, when you copy "in the other direction", i.e. when the
remote sshd is the one which is pushing the file across the
network, we get an average of 8.4MB/s when copying a 143MB large
file, and a tcpdump + tcptrace reveals that in this case the
system's automatic tuning of the TCP window is indeed kicking
into action.

The same behaviour can be observed with the scp client from
8.0_BETA: pushing with scp is slow, pulling with scp from the
remote server is quite a bit faster.  I'm going to guess that
"pushing with scp" is the most often used mode, as you may get
file name completion in that case...

If, on the other hand, I bump the recvspace and sendspace on the
two involved hosts, so that the scp client has a larger default
send space, performance improves, but again, TCP auto-tuning does
not appear to be kicking in.
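
Conversely, the ~8.4MB/s seen in the auto-tuned direction implies the
window that auto-tuning reached, and hence roughly how large a static
send space would have to be to match it (same assumed 14.5ms RTT):

```shell
# A window of at least rate * RTT is needed to sustain a given rate.
awk 'BEGIN {
  rate = 8.4e6              # bytes/s: observed in the auto-tuned direction
  rtt  = 0.0145             # seconds
  printf "implied window: %.0f KB\n", rate * rtt / 1024
}'
# i.e. on the order of 120 KB, far above the 32 KB default
```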

Am I alone in seeing this?

I must say I'm puzzled by the result.

The configuration on both systems are pretty much "stock", and
the network is not the bottleneck in my case.

Admittedly, the OpenSSH in netbsd-7 is quite old, and the HPN
patches are probably of the same vintage, and I've not checked if
a newer combination on that front will improve matters; I may do
that next.

Regards,

- Håvard