Anyone use NIST SP 800-81-2?

2015-12-15 Thread John W. Blue
I am specifically interested in hearing how Section 9.4 is implemented in your
environment.

Thanks!

John


RHEL, CentOS, Fedora RPMs for BIND 9.10.3-P2

2015-12-15 Thread Carl Byington
http://www.five-ten-sg.com/mapper/bind contains links to the source RPMs
and build instructions.
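
For anyone who has not rebuilt a source RPM before, the flow on RHEL/CentOS is
roughly as follows; the SRPM filename below is illustrative, so use whichever
file the page above actually links to:

# build tooling plus BIND's build dependencies (yum-builddep comes from yum-utils)
yum install rpm-build yum-utils
yum-builddep bind-9.10.3-P2.el7.src.rpm

# rebuild binary packages from the source RPM; output lands under ~/rpmbuild/RPMS/
rpmbuild --rebuild bind-9.10.3-P2.el7.src.rpm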




Best practices for IPv6

2015-12-15 Thread Elias Pereira
Hello guys,

I would like to know what best practices or recommended BIND configuration
methods exist for IPv6.
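
As a minimal sketch of what I mean (the addresses are documentation prefixes;
ACLs and interfaces would of course have to match your own network), enabling
IPv6 transport in named.conf seems to come down to something like:

options {
        // accept queries over IPv6 as well as IPv4
        listen-on     { any; };
        listen-on-v6  { any; };

        // example client networks only; replace with your own prefixes
        allow-query   { localhost; 192.0.2.0/24; 2001:db8::/32; };
};

Beyond the transport settings, I understand that publishing AAAA records for
the services themselves matters just as much, which is part of what I am
asking about.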

Thank you!

-- 
Elias Pereira

BIND 9.10.3 on CentOS 7.1 - Recv-Q on VMware

2015-12-15 Thread Rasmus Edgar

Hi bind-users,

A colleague recently posted a question to this list about latency and a full
Recv-Q on VMware with BIND 9.10.3, and we have since carried out some tests.
As I have only recently joined the list, I am starting this new thread.


We started noticing latency of more than one second on clients resolving
against the VMware guest at a load of around 6,000 qps.


Test setup:

1 x x86_64 VMware guest on ESX 5.5
  8 vCPUs
  8 GB RAM
  vmxnet3 10 Gb virtual interface
  CentOS 7.1
  BIND 9.10.3 resolver

1 x IBM x86_64 physical machine
  24 CPU cores
  16 GB RAM
  1 Gb interface
  CentOS 7.1
  BIND 9.10.3 resolver

Both BIND servers are on the same VLAN.

Both BIND servers have an identical BIND configuration.

The test client is on the same VLAN as both servers; it is virtual and runs
on the same hypervisor as the VMware guest.


Sysctl tuning:
/etc/sysctl.d/tuning.conf
# 32M receive buffer
net.core.rmem_max=33554432
# 32M send buffer
net.core.wmem_max=33554432
net.core.netdev_max_backlog=2000
net.ipv4.ip_local_port_range=1024 65000
net.netfilter.nf_conntrack_max=1048576

/etc/modprobe.d/nf_conntrack.conf
options nf_conntrack hashsize=262144
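
For completeness, these values can be applied without a reboot roughly as
follows (the conntrack hash size is also writable at runtime):

# re-read everything under /etc/sysctl.d/
sysctl --system

# change the conntrack hash size on the running kernel
echo 262144 > /sys/module/nf_conntrack/parameters/hashsize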

How to reproduce:

The tests were done with dnsperf using the following test data:

http://pkgs.fedoraproject.org/repo/pkgs/dnsperf/queryfile-example-10million-201202.bz2/0ff3de3eaf30a4ed94031fb89997369a/queryfile-example-10million-201202.bz2

And the following command:

./dnsperf -f inet -s <server address> -d queryfile-example-10million-201202 -l 30 -q 15000


The same dnsperf tests were run with PowerDNS as the resolver on the same two
servers, and no full Recv-Q was seen on either the physical or the virtual
machine. When testing with PowerDNS, performance on VMware was on par with
the physical machine.


We are having a hard time pinpointing why the Recv-Q fills up on VMware with
BIND 9.10.3.


I have attached some data illustrating how the Recv-Q fills up on the VMware
guest compared to the physical machine. The Recv-Q on the physical machine was
approximately half of what was seen on the virtual machine. The netstat
extracts netstat_*-2-3 are the most illustrative.
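
The extracts themselves are just periodic netstat samples; a loop along these
lines (an illustrative sketch, not the exact script behind the attachment) is
enough to watch the Recv-Q grow, and netstat -su additionally shows whether
the kernel is counting UDP receive-buffer errors:

# sample the UDP sockets on port 53 once a second while dnsperf runs
while true; do
    date
    netstat -uan | grep ':53 '
    netstat -su | grep -i error
    sleep 1
done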


Suggestions for further ways to troubleshoot the issue and possible 
solutions are welcome.


Br,
Rasmus

[Attachment: netstat-stat.tar.gz (gzip-compressed data)]

Testing DNS delegation using 2 Linux devices

2015-12-15 Thread Harshith Mulky
Hello,


Is it possible to test DNS delegation using two Linux devices running RHEL 6.1
and BIND 9.8.2?


What changes would be required in named.conf or the zone files in order to test this?
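
Concretely, what I imagine is something like the following (hypothetical names
and RFC 5737 documentation addresses), where host A at 192.0.2.1 is
authoritative for example.com and delegates lab.example.com to host B at
192.0.2.2:

named.conf on host A:

zone "example.com" {
        type master;
        file "example.com.zone";
};

example.com.zone on host A, including the delegation and glue:

$TTL 3600
@        IN SOA ns1.example.com. hostmaster.example.com. (
                 2015121501 3600 900 604800 300 )
         IN NS  ns1.example.com.
ns1      IN A   192.0.2.1
; delegate the subdomain to host B (glue record required below the cut)
lab      IN NS  ns1.lab.example.com.
ns1.lab  IN A   192.0.2.2

named.conf on host B:

zone "lab.example.com" {
        type master;
        file "lab.example.com.zone";
};

lab.example.com.zone on host B:

$TTL 3600
@        IN SOA ns1.lab.example.com. hostmaster.example.com. (
                 2015121501 3600 900 604800 300 )
         IN NS  ns1.lab.example.com.
ns1      IN A   192.0.2.2
www      IN A   192.0.2.10

If I understand correctly, querying host A directly for a name under the
subdomain, e.g. dig +norecurse @192.0.2.1 www.lab.example.com A, should then
return a referral pointing at host B, which is the behaviour I want to observe.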


P.S.: This is just for learning purposes, as I have not been able to understand
how the tiered architecture works.


Thanks

Harshith

Re: Best practices for IPv6

2015-12-15 Thread John W. Blue
I have found https://ipv6.he.net to be helpful.
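
Once a server is listening over v6, a quick sanity check is to force dig onto
IPv6 transport; the address below is a documentation prefix, so substitute your
own resolver's address:

dig -6 @2001:db8::53 www.example.com AAAA +short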

John


From: Elias Pereira 
Sent: Dec 15, 2015 9:03 PM
To: bind-users@lists.isc.org
Subject: Best practices for IPv6

Hello guys,

I would like to know what best practices or recommended BIND configuration
methods exist for IPv6.

Thank you!

--
Elias Pereira