I recently started seeing quite a lot of socket test failures on my
main Debian Linux machine. It turns out that an upgrade of the netbase
package included a new file:
/etc/sysctl.d/bindv6only.conf
which sets the system configuration:
net.ipv6.bindv6only = 1
which means that sockets bound to an IPv6 address only accept IPv6
connections and no longer accept IPv4 connections (via IPv4-mapped
addresses). The expectation is that daemons wanting to accept both IPv6
and IPv4 connections bind twice, once per protocol family.
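For anyone unfamiliar with the behaviour, here is a rough C sketch of
the "bind twice" pattern; the port number and helper name are just made
up for the illustration, this isn't code from our tree:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Open a wildcard listening socket for one protocol family. */
  static int listen_on(int family, unsigned short port)
  {
      struct sockaddr_storage ss;
      socklen_t len;
      int fd = socket(family, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;

      memset(&ss, 0, sizeof(ss));
      if (family == AF_INET6) {
          struct sockaddr_in6 *a6 = (struct sockaddr_in6 *)&ss;
          a6->sin6_family = AF_INET6;
          a6->sin6_addr = in6addr_any;             /* ::       */
          a6->sin6_port = htons(port);
          len = sizeof(*a6);
      } else {
          struct sockaddr_in *a4 = (struct sockaddr_in *)&ss;
          a4->sin_family = AF_INET;
          a4->sin_addr.s_addr = htonl(INADDR_ANY); /* 0.0.0.0  */
          a4->sin_port = htons(port);
          len = sizeof(*a4);
      }
      if (bind(fd, (struct sockaddr *)&ss, len) < 0 || listen(fd, 5) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }

  int main(void)
  {
      /* With bindv6only=0 the IPv6 wildcard socket also covers IPv4, so
       * the second bind typically fails with EADDRINUSE; with
       * bindv6only=1 both binds succeed and the daemon has to service
       * each family on its own socket. */
      int v6 = listen_on(AF_INET6, 8080);
      int v4 = listen_on(AF_INET, 8080);
      printf("v6 fd=%d, v4 fd=%d\n", v6, v4);
      return 0;
  }
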
Then I had a eureka moment and remembered that I had some FreeBSD
changes I had not committed yet because I was still getting socket test
failures even after fixing everything I thought was broken... and the
pattern of those failures was similar. Setting the equivalent FreeBSD
sysctl, net.inet6.ip6.v6only=0, seems to have fixed most of the
remaining issues. I'll tidy up these changes[0] and commit them shortly.
For the time being I've changed this back on my machine (by editing the
above file and replacing the 1 with 0). However, I wonder if we should
be handling this in the classlib implementation. If we don't, users
will see a change in behaviour unless they change the system default,
and changing the system default may have unintended consequences for
other applications.
I believe it is possible to set the IPV6_V6ONLY socket option to avoid
the change in behaviour, but for the moment it looks like the RI is not
doing this, so I guess we shouldn't either?
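For reference, the native-level version of that would look roughly like
the sketch below, i.e. clearing IPV6_V6ONLY on the socket before bind().
This is only an illustration of the option, not something the classlib
(or, as far as I can tell, the RI) does today:

  #include <netinet/in.h>
  #include <sys/socket.h>

  /* Make an AF_INET6 socket accept IPv4-mapped connections regardless of
   * the system-wide net.ipv6.bindv6only / net.inet6.ip6.v6only default.
   * Must be called before bind(). Returns 0 on success, -1 on error. */
  int make_dual_stack(int fd)
  {
      int off = 0;  /* 0 = also accept IPv4-mapped; 1 = IPv6 traffic only */
      return setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));
  }
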
Regards,
Mark.
[0] Mostly removing the many things that I tried that were not actually
necessary ;-)