Debian Release: 3.1 (sarge)
Architecture: i386
Kernel: custom 2.4.33.3
Package: bind9
Version: 1:9.2.4-1
I just started experiencing the same problem after upgrading from
woody (oldstable) to sarge (stable). After some testing, I believe I
have found the situation in which it happens, and a possible workaround
to avoid it.
Background:
On a default Linux system with a dual IPv4+IPv6 stack, an IPv6 TCP or
UDP socket listening on the wildcard address (the kind of IPv6 socket
bind9 uses) can also receive IPv4 packets through the IPv4-mapped
address (::ffff:a.b.c.d) translation mechanism, as long as no IPv4
socket is listening on the same port. When you launch bind9 with the
following options:
listen-on { any; };
listen-on-v6 { any; };
bind9 opens the following sockets on port 53:
zenith:~# netstat -nltu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 :::53 :::* LISTEN
udp 0 0 10.0.0.1:53 0.0.0.0:*
udp 0 0 192.168.0.1:53 0.0.0.0:*
udp 0 0 127.0.0.1:53 0.0.0.0:*
udp6 0 0 :::53 :::*
For TCP, a single IPv6 socket receives all IPv4 and IPv6 traffic,
because it is created first and bind9 then fails to create the IPv4
sockets. For UDP, bind9 opens a separate IPv4 socket for each local
interface's IPv4 address, plus a single IPv6 socket.
Now, here is what I observed:
- When you do a TCP query either on IPv4 or IPv6, it goes to the TCPv6
socket and everything goes fine.
- When you do a UDP query on IPv6, it goes to the UDPv6 socket and
everything goes fine too.
- When you do a UDP query to an IPv4 address on which bind9 is
listening, it goes to that UDPv4 socket and everything goes fine too.
- But when you do a UDP query to an IPv4 address on which bind9 is NOT
listening, it goes to the UDPv6 socket and "blocks" it. This is what
triggers the following message you saw in syslog:
named[11577]: client.c:1325: unexpected error:
named[11577]: failed to get request's destination: failure
From this point on, it looks like bind9 stops reading packets on the
UDPv6 socket, and they get stuck in its receive queue:
zenith:~# netstat -nltu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 :::53 :::* LISTEN
udp 0 0 10.0.0.1:53 0.0.0.0:*
udp 0 0 192.168.0.1:53 0.0.0.0:*
udp 0 0 127.0.0.1:53 0.0.0.0:*
udp6 4328 0 :::53 :::*
I have absolutely no clue why this is happening. It didn't happen with
the bind9 version included in woody.
The fourth situation can happen whenever bind9 is not listening on a
given local IPv4 address but receives a UDP query on that address. It's
easy to reproduce by sending a query to any loopback address other than
127.0.0.1, for example 127.1.2.3. It also happens when a PPP link goes
down and bind9 re-scans the interfaces before the link comes up again:
- the PPP interface goes down;
- bind9 scans the interfaces and stops listening on the PPP IPv4
address, so there is no longer a UDPv4 socket for that address;
- the PPP interface comes up again;
- if a UDP query to the PPP address is received before bind9 rescans
the interfaces, the packet goes to the UDPv6 socket, as there is no
UDPv4 socket for that address -> bug.
To restore UDP operation over IPv6, you can either stop and restart
bind9, or disable the listen-on-v6 option in named.conf and run "rndc
reconfig" to close the blocked IPv6 socket, then re-enable the option
and run "rndc reconfig" again to open a new socket.
The workaround I found is to set the kernel parameter
/proc/sys/net/ipv6/bindv6only to 1. This can be automated at system
startup by adding the following line to /etc/sysctl.conf:
net/ipv6/bindv6only=1
This restricts the use of IPv6 sockets to IPv6 communication only and
prevents the use of the IPv4-mapped translation mechanism, so the UDPv6
socket receives only "pure" IPv6 queries. A positive side effect is that
bind9 (and other services such as sshd) is able to open separate IPv4
and IPv6 sockets for TCP communications, so for IPv4 queries you will
see in the logs the usual IPv4 addresses in dotted-quad format instead
of ugly IPv4-mapped IPv6 addresses. However, I don't know whether there
are any negative side effects to this setting. Any feedback is welcome.
Sorry for being so long.