Re: Performance Tuning RHEL 5 and Bind

2013-10-20 Thread Steven Carr
On 20 October 2013 02:34, brett smith <brett.s9...@gmail.com> wrote:
> When all the Windows PC's are switched to our resolver, bind stops responding.
> rndc querylog shows queries coming thru, I changed tcp-clients from
> 1000 to 1 but DNS seems lagging, so we switched back to the
> original Windows Domain resolver. Besides increasing open files
> tuning, what TCP / sysctl or named.conf settings can be set to
> optimize / speed up DNS queries? Because it seems that Windows clients
> use TCP instead of UDP when looking at netstat on the server.

Whether Windows uses UDP or is forced to switch to TCP will depend on
the type and size of the query (and on the configuration/structure of
the network in between).
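
A quick way to see the fallback in action from any client (a hedged
sketch; 10.0.0.1 is a placeholder for your resolver's address):

  # Ask without EDNS so the UDP payload limit is the classic 512 bytes;
  # +ignore tells dig NOT to retry over TCP, so the truncation stays visible.
  dig +noedns +ignore @10.0.0.1 isc.org ANY | grep flags
  # "flags: qr tc rd ra" - the tc bit is what pushes a client onto TCP.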

But the option you are probably looking for is recursive-clients; the
default is 1000, which is probably why the server stops responding to
some of your systems when they all query at once.
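
A minimal named.conf sketch of that change (the 10000 figure is just an
illustration; size it to your peak concurrent lookups):

  options {
      // default is 1000; each in-flight recursive lookup consumes a slot
      recursive-clients 10000;
  };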

Other than that it's a case of how much memory and CPU you have. Is it
a VM? If so, have you reserved enough resources for it? What data is it
serving? Caching only? Authoritative for any zones? Is query logging
enabled? (That's a big performance hit, as named has to write every
query to disk, so your disk is going to be a bottleneck.)
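
On a 9.8 server you can flip query logging at runtime to test the
difference, no restart needed (in that version rndc querylog is a
simple toggle with no on/off argument):

  rndc querylog   # toggles per-query logging
  rndc status     # the "query logging is ON/OFF" line confirms the state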

Tuning is not something where you can simply be told "do this"; a huge
number of factors will influence which parameters to tweak. But I'd
definitely look at the recursive-clients option for starters.

Steve


Re: Performance Tuning RHEL 5 and Bind

2013-10-20 Thread Alan Clegg

On Oct 19, 2013, at 9:34 PM, brett smith <brett.s9...@gmail.com> wrote:

> When all the Windows PC's are switched to our resolver, bind stops responding.

What does "stops responding" mean?  Any logs?

> rndc querylog shows queries coming thru, I changed tcp-clients from
> 1000 to 1 but DNS seems lagging, so we switched back to the
> original Windows Domain resolver.

Are you really getting that many TCP-based queries?  If so, something is
seriously broken.
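
One quick way to measure that on the server itself (hedged sketch; eth0
is an assumption, adjust to your interface):

  # TCP connections currently open on port 53, grouped by state
  netstat -ant | awk '$4 ~ /:53$/ {print $6}' | sort | uniq -c
  # Sample the wire: compare how fast each of these fills its 100 packets
  tcpdump -ni eth0 -c 100 'udp dst port 53'
  tcpdump -ni eth0 -c 100 'tcp dst port 53'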

> Besides increasing open files
> tuning, what TCP / sysctl or named.conf settings can be set to
> optimize / speed up DNS queries? Because it seems that Windows clients
> use TCP instead of UDP when looking at netstat on the server.

Fix your Windows clients.

AlanC
-- 
Alan Clegg | +1-919-355-8851 | a...@clegg.com




RE: Performance Tuning RHEL 5 and Bind

2013-10-20 Thread Stuart Browne


> -----Original Message-----
> From: bind-users-bounces+stuart.browne=ausregistry.com...@lists.isc.org
> [mailto:bind-users-bounces+stuart.browne=ausregistry.com...@lists.isc.org]
> On Behalf Of brett smith
> Sent: Sunday, 20 October 2013 12:35 PM
> To: sth...@nethelp.no
> Cc: bind-users@lists.isc.org
> Subject: Re: Performance Tuning RHEL 5 and Bind
 
> When all the Windows PC's are switched to our resolver, bind stops
> responding.
> rndc querylog shows queries coming thru, I changed tcp-clients from
> 1000 to 1 but DNS seems lagging, so we switched back to the
> original Windows Domain resolver. Besides increasing open files
> tuning, what TCP / sysctl or named.conf settings can be set to
> optimize / speed up DNS queries? Because it seems that Windows clients
> use TCP instead of UDP when looking at netstat on the server.
>
> Thanks. Brett.
>
> On Sat, Oct 19, 2013 at 3:20 AM, <sth...@nethelp.no> wrote:
>>> I need to build a pair of DNS cache servers to support 5000+ clients
>>> (PC's and Servers). I have been looking for some guides on tuning
>>> BIND and the OS for Enterprise performance rather than the defaults.
>>> The version of bind is bind-9.8.2.
>>
>> 5000 clients is such a low number that I don't think you need to worry
>> about tuning at all.
>>
>> Steinar Haug, Nethelp consulting, sth...@nethelp.no

If my experience with high throughput on a Red Hat system is anything to
go by, what you are probably hitting is the iptables conntrack bucket
limits.
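
You can check whether the conntrack table is actually filling up before
changing anything (sysctl names are kernel-dependent; the ip_conntrack_*
spellings below are the likely ones on a RHEL 5 kernel, nf_conntrack_*
on newer ones):

  # current entries vs. the ceiling
  sysctl net.ipv4.netfilter.ip_conntrack_count
  sysctl net.ipv4.netfilter.ip_conntrack_max
  # the classic symptom when the table overflows:
  dmesg | grep -i 'table full'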

The simplest way to avoid this is to bypass connection tracking.

You can do one of the following:

- Turn off iptables (probably not a good idea)
- Turn off conn-tracking and stop using the state module, rewriting all
rules (nasty)
- Tell iptables not to conntrack just udp/53 & tcp/53 (iptables -t raw
-A PREROUTING -p tcp --dport 53 -j NOTRACK, and likewise for udp; see
the fuller sketch below)

We use the 3rd method and it works beautifully.  Just ensure your 'filter'
rules don't force the use of conntrack for that traffic.  See the iptables
man page for more details.
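
For reference, a fuller sketch of that third option (the PREROUTING and
OUTPUT chain choice is the usual one for a server, but it's an assumption
here; verify against your own ruleset before deploying):

  # don't track queries arriving at the resolver...
  iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
  iptables -t raw -A PREROUTING -p tcp --dport 53 -j NOTRACK
  # ...or the answers named sends back out
  iptables -t raw -A OUTPUT -p udp --sport 53 -j NOTRACK
  iptables -t raw -A OUTPUT -p tcp --sport 53 -j NOTRACK
  # untracked packets never match --state ESTABLISHED,RELATED, so the
  # filter table needs explicit accepts for port 53:
  iptables -A INPUT -p udp --dport 53 -j ACCEPT
  iptables -A INPUT -p tcp --dport 53 -j ACCEPT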

Stuart