Re: socket.c:2135: unexpected error:

2018-06-05 Thread Bob Harold
On Mon, Jun 4, 2018 at 10:57 PM  wrote:

>
> Hi,
>
> After upgrading BIND from 9.9.9-P5 to 9.11.3, the following messages
> have been appearing continuously in /var/log/messages, as shown below.
>
> -
> May 29 02:36:50 dns01 nanny[5609]: debug start 1
> May 29 02:37:08 dns01 named[1679]: socket.c:2135: unexpected error:
> May 29 02:37:08 dns01 named[1679]: internal_send: [global IPv6
> address]#38306: Invalid argument
> May 29 02:37:20 dns01 nanny[5617]: debug start 1
> May 29 02:37:24 dns01 named[1679]: socket.c:2135: unexpected error:
> May 29 02:37:24 dns01 named[1679]: internal_send: [global IPv6
> address]#36987: Invalid argument
> May 29 02:37:47 dns01 named[1679]: socket.c:2135: unexpected error:
> May 29 02:37:47 dns01 named[1679]: internal_send: [global IPv6
> address]#35862: Invalid argument
> May 29 02:37:48 dns01 named[1679]: socket.c:2135: unexpected error:
> May 29 02:37:48 dns01 named[1679]: internal_send: [global IPv6
> address]#39895: Invalid argument
> May 29 02:37:50 dns01 nanny[5632]: debug start 1
> May 29 02:38:00 dns01 named[1679]: socket.c:2135: unexpected error:
> May 29 02:38:00 dns01 named[1679]: internal_send: [global IPv6
> address]#38979: Invalid argument
> :
> -
>
> OS  : Red Hat Enterprise Linux Server 6.5
>
> DNS service seems to be working fine, but I don't understand the cause
> of these errors or how to fix them.
>
> Any advice would be greatly appreciated.
>
>
> Regards,
> Hotta
>

Just guessing, but it sounds like "[global IPv6 address]" is either
malformed, or the send path is expecting an IPv4 address.
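If the guess above is right and the host's IPv6 uplink is broken, a common
workaround (a sketch, not a confirmed fix for this particular report) is to
keep named IPv4-only: start named with the -4 command-line flag, and disable
the v6 listener in named.conf:

```
options {
    // Sketch: stop named listening on IPv6.  Pair this with the
    // "-4" command-line flag so outgoing queries also stay on IPv4.
    listen-on-v6 { none; };
};
```

On RHEL-style systems the flag is typically added via OPTIONS="-4" in
/etc/sysconfig/named.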

-- 
Bob Harold
___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: [bind-users] Slow reply under heavy load (on a specific NIC ip)

2018-06-05 Thread Ict Security
Dear guys,

thank you for answering.
We are using a CentOS 7.2 distribution, x64 architecture.
We use the generic e1000 network driver; the virtual machine runs under VMware 5.5.

We use netfilter on the firewall, which is a separate machine, and we have
raised the "somaxconn" parameter.
We do not currently see any warnings about the conntrack table being full.
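Worth noting: somaxconn only caps the TCP listen backlog, while most DNS
query load is UDP, where the per-socket receive buffer is the usual
bottleneck. A hedged sketch of the sysctls that would apply (the values are
illustrative examples, not tuned for this setup):

```
# /etc/sysctl.d/99-dns-buffers.conf (sketch; example values only)
# Raise the ceiling and default for UDP socket receive buffers so
# bursts of queries get queued instead of dropped.
net.core.rmem_max = 8388608
net.core.rmem_default = 1048576
```

Apply with `sysctl --system` (or `sysctl -p` on older setups) and restart
named so its sockets pick up the new defaults.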

We are also testing resolution from the SAME machine where BIND 9.x
runs, to rule out firewall/NAT problems during testing.
The NIC's primary IP address shows the delay under heavy load even when
queried locally.
If I query an alias IP address instead - on the same NIC - everything is fast.
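One way to tell whether the slow primary IP is actually dropping UDP
datagrams at the socket layer is to watch the kernel's UDP counters. A
minimal sketch follows: the parse_udp_counters helper is hypothetical (not
part of BIND), but the field names are the standard "Udp:" columns of Linux
/proc/net/snmp.

```python
# Sketch only: check whether the "slow" primary IP correlates with
# UDP datagrams being dropped at the socket layer.

def parse_udp_counters(snmp_text):
    """Map the Udp: header fields of /proc/net/snmp to integer values."""
    udp_lines = [line for line in snmp_text.splitlines()
                 if line.startswith("Udp:")]
    header = udp_lines[0].split()[1:]   # field names
    values = [int(v) for v in udp_lines[1].split()[1:]]
    return dict(zip(header, values))

# On a live resolver you would pass open("/proc/net/snmp").read();
# a canned sample keeps this sketch self-contained:
sample = ("Udp: InDatagrams NoPorts InErrors OutDatagrams "
          "RcvbufErrors SndbufErrors\n"
          "Udp: 104392 12 37 98210 37 0\n")
counters = parse_udp_counters(sample)

# If RcvbufErrors climbs while the main IP is slow, the socket buffer
# is overflowing under the burst -- a kernel tuning issue, not a
# per-IP limit inside BIND.
print(counters["RcvbufErrors"])  # 37 in this sample
```

`netstat -su` shows the same counters in a human-readable form if you
prefer not to parse the file yourself.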

Thank you!!
F

2018-06-04 18:04 GMT+02:00 Ict Security :
> 2018-06-04 17:42 GMT+02:00 Jerry Kemp :
>> Can you please provide some specifics about your setup that is experiencing
>> the problem?
>>
>> HW - Sparc, PPC, Intel x86/x64, ARM ?
>>
>> OS - what OS is the problem occurring on?
>>
>> specific BIND version?
>>
>> anything about the NIC in question, possibly to include mfg && model number,
>> if relevant?
>>
>> Thanks
>>
>>
>>
>> On 04/06/18 07:20, Ict Security wrote:
>>>
>>> Hi guys,
>>>
>>> we are running a BIND 9.x server, and everything is generally fine.
>>> Under particularly heavy load, with some hundreds of concurrent
>>> queries coming in, BIND sometimes stops answering for a few seconds or
>>> answers with significant delays.
>>>
>>> But when I query the same server/same BIND on a NIC alias IP
>>> during congestion on the main IP, everything is fast!
>>>
>>> I changed some tunings in:
>>> max-connections in /proc
>>> txqueue in network
>>> ipv4_ports
>>>
>>> and that mitigated the problem somewhat, but it is not completely
>>> solved.
>>>
>>> Do you think BIND could have some per-IP limit on a NIC?
>>> Some ideas?
>>>
>>> Really thank you!
>>> Francesco