From: Raven ra...@vp44.net
I am currently trying to set up query logging with BIND on a Debian
server, but I don't seem to be able to.
logging {
    channel munin_log {
        file "/var/log/bind9/query.log" versions 30 size 15m;
        severity dynamic;
severity dynamic starts at 0 i.e.
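For reference, a complete version of that logging block might look like the sketch below. The channel and file names follow the poster's snippet; the category statement is the part most often missing, since without it the channel receives nothing. Query logging can also be toggled at runtime with rndc querylog.

```
logging {
    channel munin_log {
        // note: the file path must be quoted
        file "/var/log/bind9/query.log" versions 30 size 15m;
        severity dynamic;
        print-time yes;
    };
    // route query messages to the channel defined above
    category queries { munin_log; };
};
```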
From: Tony Finch d...@dotat.at
Does anyone know if there is a way to prevent the creation of certain
records - by name?
update-policy {
    deny * name internal.example.com;
    # ...
};
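Expanding on that fragment: update-policy rules are evaluated in order and the first match wins, so a deny rule placed before a broader grant blocks updates to that one name while still allowing the rest of the zone. A sketch (the zone file and key name "updater-key" are illustrative):

```
zone "example.com" {
    type master;
    file "example.com.db";
    update-policy {
        // first match wins: block any update to this exact name
        deny * name internal.example.com;
        // then allow the updater key to touch anything else in the zone
        grant updater-key zonesub ANY;
    };
};
```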
Hi,
I have quite a similar question but can't figure it out from
Hi,
I have a problem with the load on my BIND. Normally it's fine, but from time to
time there are clients which, through a misconfiguration or a failed
local service (not intentionally), cause a very high number of queries. After finding
and informing the responsible person this problem is
Original Message
Date: Mon, 16 Jan 2012 11:49:46 +0100
From: Roel Wagenaar r...@wagenaar.nu
Subject: Re: Defense against a client?
In this case iptables is your friend.
One of my solutions is partly based on this:
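One way to realize the iptables approach is a per-source rate limit on UDP port 53 using the hashlimit match. This is a sketch, not the poster's actual rules; the thresholds are illustrative and should sit well above any legitimate client's query rate:

```
# Drop DNS queries from any single source IP that exceeds
# 200 queries/second after a burst allowance of 500.
iptables -A INPUT -p udp --dport 53 \
    -m hashlimit --hashlimit-name dnslimit \
    --hashlimit-mode srcip \
    --hashlimit-above 200/second --hashlimit-burst 500 \
    -j DROP
```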
I have not the slightest clue why; I had suspected that rndc reconfig
would be much faster, especially if there is no alteration to the
config at all.
How are you testing this?
'time rndc reconfig'?
Yes.
Or do you stop answering queries and time that?
No.
How long do
Why not try the latest version, really? Pick a test host. Install
9.8.1+.
Time it again. Then let's talk.
Such things take time.
Did it now, but it didn't change anything.
It seems that the performance optimization for startup (which is mentioned in the
release notes) doesn't affect
I just updated a couple of my DNS servers from the rather old version
9.4.1 to a newer version 9.8.0-P4.
After this I have problems with outages. Looking into it, I found that
the time for an rndc reload has nearly doubled!
This has been pointed out to me before; do you really need
It is not clear from your question: are you using rndc reload or rndc
reload zone.name? The latter will be faster if you change only one or a
few zones in one pass of your updating script.
I generate the complete named.conf from my database, especially including new
zones, and then trigger an rndc
In this case rndc reconfig should be sufficient. This command tells
BIND to re-read the config file and load all new zones without touching
any previously loaded zones.
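The distinction between the three commands discussed in this thread can be summarized as follows (the zone name is illustrative; all commands require a running named):

```
# Full reload: re-reads named.conf and reloads every configured zone (slowest).
rndc reload

# Reload only the one zone that changed (fast).
rndc reload example.com

# Re-read named.conf, load zones that are new, leave existing zones untouched.
rndc reconfig
```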
This was my understanding (after reading the text from rndc) as well.
But to my surprise:
I tested rndc reload against rndc
The odd part is that both NS3 and NS4 weren't able to request IXFR
transfers.
Shouldn't allow-transfer cover these kinds of transfer requests as well?
First: Do you have the statements provide-ixfr yes; and request-ixfr yes; in your
config?
Second: To do an IXFR, a server first sends a query
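The two options mentioned above live in the options statement, and can also be set per peer in a server statement. A minimal sketch (the master address 192.0.2.1 is illustrative):

```
options {
    // on the master: offer incremental transfers to slaves
    provide-ixfr yes;
    // on the slave: request incremental transfers from the master
    request-ixfr yes;
};

// the same options can be overridden for a single peer:
server 192.0.2.1 {
    request-ixfr yes;
};
```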
Hi,
I just updated a couple of my DNS servers from the rather old version 9.4.1 to
a newer version 9.8.0-P4.
After this I have problems with outages. Looking into it, I found that the time
for an rndc reload has nearly doubled!
I made tests before the update and I still have a few old
In my case, dig is asking for the nameservers of the root-zone and is
getting the answer:
. IN NS root1
. IN NS root2
etc
Next dig is asking for the A record of root1. And here is the
difference:
If I do dig root1, dig asks exactly this; it is asking for the
dig +trace calls getaddrinfo() and that needs to be able to resolve
the hostname (without dots at the end). getaddrinfo() is called
so that we don't have to have a full blown iterative resolver in
dig.
I see. So no way to solve this one in dig itself.
The Internet moved from being a
I found the cause of my problem (and a solution):
dig +trace actually behaves differently from doing the trace manually step by
step with dig.
For a trace, dig asks for the NS records, then for the IP address of the
nameserver found, and then goes on asking that nameserver. Till the
What strikes me as odd is that the first query does return 4 (internal)
root servers, but no glue records?
I have no idea why this is this way.
Given those root name servers, do you have A-records for root[1234] in
your root zone ?
Yes, of course. From my root-zone:
. 10800 IN
Original Message
I believe what is missing is the root cache file.
The root server would have glue records pointing to the gTLDs, like this
Then the gTLDs would have glue records pointing to the nameservers of the
domain you are trying to trace.
What you are seeing is your
What strikes me as odd is that the first query does return 4 (internal)
root servers, but no glue records?
I have no idea why this is this way.
Because +trace only displays the answer section of the responses by
default.
Try dig +trace +additional.
Hi Chris,
you are right, thank
...@isc.org
To: Tom Schmitt tomschm...@gmx.de
CC: bind-us...@isc.org
Subject: Re: Views on different interfaces
match-destinations.
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
Hi,
I have a simple configuration question and can't find an answer in
doc/arm/Bv9ARM.ch06.html:
I have a nameserver with two different views which is listening on two
different network interfaces. Till now it's configured so that all queries coming
from a defined IP range are getting the
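A sketch of the match-destinations approach suggested in the reply: the view is selected by the interface address the query arrived on, rather than by the client's source range (the addresses and file paths are illustrative):

```
view "internal" {
    // queries that arrive on the internal interface's address
    match-destinations { 192.0.2.1; };
    zone "example.com" { type master; file "internal/example.com.db"; };
};

view "external" {
    // queries that arrive on the external interface's address
    match-destinations { 198.51.100.1; };
    zone "example.com" { type master; file "external/example.com.db"; };
};
```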
Hi,
I'm running BIND 9.6.1-P1 on a Solaris 10 box as a slave with different views
and it is running fine.
Now I want to use rndc. I don't have a rndc.conf, only a rndc.key.
If I try something like
rndc reload
rndc reconfig
rndc stop
rndc halt
it is working fine and does what I expect it to do.
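That works because rndc falls back to /etc/rndc.key when no rndc.conf exists. On the server side, the matching piece is a controls statement in named.conf, roughly as sketched below (the key name must match the one inside rndc.key):

```
// named.conf (sketch)
include "/etc/rndc.key";

controls {
    inet 127.0.0.1 port 953
        allow { 127.0.0.1; } keys { "rndc-key"; };
};
```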