RE: RE: what does dig +trace do?

2011-08-31 Thread Gary Gladney
I believe what is missing is the root cache file.  The root cache file would
produce something like this:

; <<>> DiG 9.7.4b1-RedHat-9.7.4-0.3.b1.fc14 <<>> +trace valhalla.stsci.edu
;; global options: +cmd
.   132693  IN  NS  c.root-servers.net.
.   132693  IN  NS  b.root-servers.net.
.   132693  IN  NS  j.root-servers.net.
.   132693  IN  NS  d.root-servers.net.
.   132693  IN  NS  f.root-servers.net.
.   132693  IN  NS  a.root-servers.net.
.   132693  IN  NS  i.root-servers.net.
.   132693  IN  NS  g.root-servers.net.
.   132693  IN  NS  h.root-servers.net.
.   132693  IN  NS  l.root-servers.net.
.   132693  IN  NS  e.root-servers.net.
.   132693  IN  NS  m.root-servers.net.
.   132693  IN  NS  k.root-servers.net.
;; Received 496 bytes from 192.168.0.1#53(192.168.0.1) in 266 ms

The root servers would have glue records pointing to the gTLDs, like this:
 
edu.172800  IN  NS  f.edu-servers.net.
edu.172800  IN  NS  a.edu-servers.net.
edu.172800  IN  NS  c.edu-servers.net.
edu.172800  IN  NS  g.edu-servers.net.
edu.172800  IN  NS  d.edu-servers.net.
edu.172800  IN  NS  l.edu-servers.net.
;; Received 271 bytes from 198.41.0.4#53(198.41.0.4) in 205 ms

Then the gTLDs would have glue records pointing to the nameservers of the domain you 
are trying to trace.

What you are seeing is your local nameservers. It seems to me that either they don't 
have access to the Internet, or a firewall is blocking some of the responses, or you 
don't have the root cache file to use as hints, or some combination of the above. Or 
it is some other issue that is not very clear, but the trace should start with the 
Internet root name servers.
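For reference, the root hints Gary describes are normally wired into named.conf with a hint zone along these lines (a sketch only; the file name and path vary by distribution and are not taken from the original post):

```
zone "." IN {
    type hint;
    file "named.ca";  // root hints file; a current copy is published by InterNIC
};
```

With this in place, the resolver learns the root server addresses at startup and dig +trace can begin at the real Internet roots.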

Gary


From: bind-users-bounces+gladney=stsci@lists.isc.org 
[bind-users-bounces+gladney=stsci@lists.isc.org] on behalf of Tom Schmitt 
[tomschm...@gmx.de]
Sent: Wednesday, August 31, 2011 2:18 AM
To: bind-users@lists.isc.org
Subject: Re: RE: what does dig +trace do?

>
> What strikes me as odd is that the first query does return 4 (internal)
> root servers, but no glue records ?

I have no idea why this is this way.

> Given those root name servers, do you have A-records for root[1234] in
> your root zone ?

Yes, of course. From my root-zone:


.  10800   IN  NS  root1.
.  10800   IN  NS  root2.
.  10800   IN  NS  root3.
.  10800   IN  NS  root4.
root1. 10800 IN A 10.111.111.111
root2. 10800 IN A 10.111.112.112
root3. 10800 IN A 10.111.113.113
root4. 10800 IN A 10.111.114.114
com. 10800 IN NS root3.
com. 10800 IN NS root4.


All these records I can query with dig without any problem, but dig +trace 
still fails. :-(


___
Please visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe 
from this list

bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: faster fail-over between multiple masters

2011-08-31 Thread Klaus Darilion
Hi Michael!

On 30.08.2011 20:33, Michael Graff wrote:
> On 2011-08-30 12:06 PM, Klaus Darilion wrote:
> 
>> Unfortunately I fail to find the options where I can configure the 
>> number of retransmissions, timeouts and number of transactions -
>> please give me some hints.
> 
> I don't believe there are external knobs for this behavior.
> 
> I can think of several possible fixes here:
> 
> (1)  if we get a notify during a SOA check, proceed as usual but flag
> this so we will just start another SOA check.  We may transfer the
> zone between these checks (and probably should.)
> 
> (2)  send all SOA requests in parallel, and use an overall max time to
> wait (perhaps 20 seconds) and re-send the SOA to servers which have
> not responded every 4 seconds.  This limits the total time an SOA
> check will take.
> 
> (3)  If any of the servers respond with better SOA serial numbers than
> we have, transfer from the masters as listed in the config file or
> whichever is better, depending on current behavior.
> 
> I do not know when we would be able to get to this change, but I'll
> put them on the back-log for future releases.
> 
> If you want to go code diving, you can likely find the timeouts and
> change the behavior for your servers.  However, you'll have to track
> this each time we do a release for the foreseeable future.

I'm not a coder, so I will wait until someone else improves it. :-)

Anyway, regardless of which option is implemented, I think it would
be good to make the retransmission parameters configurable, e.g.:
- query-timeout (currently 15 seconds)
- query-retransmissions (number of retransmissions with same
 transaction id, currently 2)
- query-attempts (number of transactions, currently 4)
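If BIND grew such knobs, the configuration might look like the following. This is purely illustrative: these option names are only Klaus's proposal from this thread and do not exist in any BIND release.

```
options {
    // Hypothetical knobs proposed in this thread -- NOT valid named.conf today.
    query-timeout 15;          // seconds before a transaction is abandoned
    query-retransmissions 2;   // retries using the same transaction id
    query-attempts 4;          // number of separate transactions
};
```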

Thanks
Klaus



Re: RE: what does dig +trace do?

2011-08-31 Thread Chris Thompson

On Aug 31 2011, Tom Schmitt wrote:

> > What strikes me as odd is that the first query does return 4 (internal)
> > root servers, but no glue records ?
>
> I have no idea why this is this way.


Because +trace only displays the answer section of the responses by default.
Try "dig +trace +additional".

--
Chris Thompson
Email: c...@cam.ac.uk


Re: slow non-cached quries

2011-08-31 Thread TMK
On Tue, Aug 30, 2011 at 9:26 AM, TMK  wrote:

>
> On Tue, Aug 30, 2011 at 6:55 AM, Mark Andrews  wrote:
>>
>> In message 
>> ,
>>  TMK writes:
>>> Dears,
>>>
>>> Probably this is the thousandth time you have gotten this question, but our
>>> BIND server has slow response times for non-cached entries.
>>>
>>> I have run dig with +trace option and below is the result
>>>
>>> ; <<>> DiG 9.8.0-P2 <<>> @127.0.0.1 www.google.com +trace
>>> ; (1 server found)
>>> ;; global options: +cmd
>>> . 2013 IN NS i.root-servers.net.
>>> . 2013 IN NS g.root-servers.net.
>>> . 2013 IN NS l.root-servers.net.
>>> . 2013 IN NS m.root-servers.net.
>>> . 2013 IN NS d.root-servers.net.
>>> . 2013 IN NS b.root-servers.net.
>>> . 2013 IN NS k.root-servers.net.
>>> . 2013 IN NS j.root-servers.net.
>>> . 2013 IN NS c.root-servers.net.
>>> . 2013 IN NS a.root-servers.net.
>>> . 2013 IN NS h.root-servers.net.
>>> . 2013 IN NS e.root-servers.net.
>>> . 2013 IN NS f.root-servers.net.
>>> ;; Received 228 bytes from 127.0.0.1#53(127.0.0.1) in 1 ms
>>>
>>> com. 172800 IN NS a.gtld-servers.net.
>>> com. 172800 IN NS b.gtld-servers.net.
>>> com. 172800 IN NS c.gtld-servers.net.
>>> com. 172800 IN NS d.gtld-servers.net.
>>> com. 172800 IN NS e.gtld-servers.net.
>>> com. 172800 IN NS f.gtld-servers.net.
>>> com. 172800 IN NS g.gtld-servers.net.
>>> com. 172800 IN NS h.gtld-servers.net.
>>> com. 172800 IN NS i.gtld-servers.net.
>>> com. 172800 IN NS j.gtld-servers.net.
>>> com. 172800 IN NS k.gtld-servers.net.
>>> com. 172800 IN NS l.gtld-servers.net.
>>> com. 172800 IN NS m.gtld-servers.net.
>>> ;; Received 492 bytes from 199.7.83.42#53(l.root-servers.net) in 175 ms
>>>
>>> google.com. 172800 IN NS ns2.google.com.
>>> google.com. 172800 IN NS ns1.google.com.
>>> google.com. 172800 IN NS ns3.google.com.
>>> google.com. 172800 IN NS ns4.google.com.
>>> ;; Received 168 bytes from 192.5.6.30#53(a.gtld-servers.net) in 250 ms
>>>
>>> www.google.com. 604800 IN CNAME www.l.google.com.
>>> www.l.google.com. 300 IN A 209.85.148.106
>>> www.l.google.com. 300 IN A 209.85.148.104
>>> www.l.google.com. 300 IN A 209.85.148.147
>>> www.l.google.com. 300 IN A 209.85.148.99
>>> www.l.google.com. 300 IN A 209.85.148.103
>>> www.l.google.com. 300 IN A 209.85.148.105
>>> ;; Received 148 bytes from 216.239.34.10#53(ns2.google.com) in 225 ms
>>>
>>>
>>>
>>> We are running BIND version "BIND 9.8.0-P2" on CentOS release 5.6 (Final).
>>>
>>> The process is running multithreaded and consuming a total of 60% of CPU
>>> utilization.
>>>
>>> Do we have a network issue or a performance bottleneck?
>>>
>>> engtmk
>>
>> To better match what a nameserver does, what does dig +trace +dnssec show?
>>
>>        dig +dnssec +trace www.google.com
>>
>> Mark
>> --
>> Mark Andrews, ISC
>> 1 Seymour St., Dundas Valley, NSW 2117, Australia
>> PHONE: +61 2 9871 4742                 INTERNET: ma...@isc.org
>>
>
> Hi Mark,
>
> here is the output of the command
>
> dig @127.0.0.1 www.google.com +trace +dnssec
>
> ; <<>> DiG 9.8.0-P2 <<>> @127.0.0.1 www.google.com +trace +dnssec
> ; (1 server found)
> ;; global options: +cmd
> .                       360 IN      NS      F.ROOT-SERVERS.NET.
> .                       360 IN      NS      A.ROOT-SERVERS.NET.
> .                       360 IN      NS      C.ROOT-SERVERS.NET.
> .                       360 IN      NS      J.ROOT-SERVERS.NET.
> .                       360 IN      NS      B.ROOT-SERVERS.NET.
> .                       360 IN      NS      K.ROOT-SERVERS.NET.
> .                       360 IN      NS      E.ROOT-SERVERS.NET.
> .                       360 IN      NS      D.ROOT-SERVERS.NET.
> .                       360 IN      NS      G.ROOT-SERVERS.NET.
> .                       360 IN      NS      L.ROOT-SERVERS.NET.
> .                       360 IN      NS      M.ROOT-SERVERS.NET.
> .                       360 IN      NS      I.ROOT-SERVERS.NET.
> .                       360 IN      NS      H.ROOT-SERVERS.NET.
> ;; Received 255 bytes from 127.0.0.1#53(127.0.0.1) in 0 ms
>
> com.                    172800  IN      NS      f.gtld-servers.net.
> com.                    172800  IN      NS      m.gtld-servers.net.
> com.                    172800  IN      NS      g.gtld-servers.net.
> com.                    172800  IN      NS      h.gtld-servers.net.
> com.                    172800  IN      NS      e.gtld-servers.net.
> com.                    172800  IN      NS      i.gtld-servers.net.
> com.                    172800  IN      NS      a.gtld-servers.net.
> com.                    172800  IN      NS      c.gtld-servers.net.
> com.                    172800  IN      NS      j.gtld-servers.net.
> com.                    172800  IN      NS      k.gtld-servers.net.
> com.                    172800  IN      NS      l.gtld-servers.net.
> com.                    172800  IN      NS      d.gtld-servers.net.
> com.                    172800  IN      NS      b.gtld-servers.net.
> com.                    86400   

RE: Seemingly random ServFail issues on a caching server

2011-08-31 Thread Florian CROUZAT
Florian CROUZAT wrote on 2011-08-25:

> Hi list,
>
> On a few domains (we'll consider only one domain for this example) I
> encounter sometimes (seemingly randoms) ServFails while resolving domain
> names. A client (192.168.147.2) asks my caching server (192.168.151.100)
> to resolve a target (www.leclercdrive.fr)
>
> Here are the relevant logs:
>
> Aug 24 17:14:19 ns named[24929]: 24-Aug-2011 17:14:19.377 queries: info:
> client 192.168.147.2#34502: view internal: query: www.leclercdrive.fr IN
> A + Aug 24 17:14:19 ns named[24929]: 24-Aug-2011 17:14:19.380 queries:
> info: client 192.168.147.2#34502: view internal: query:
> www.leclercdrive.fr IN A + Aug 24 17:14:19 ns named[24929]: 24-Aug-2011
> 17:14:19.382 queries: info: client 192.168.147.2#34502: view internal:
> query: www.leclercdrive.fr IN A +
>
>
> A tcpdump on the local side of the NS server shows the A request and the
> instant ServFail. A tcpdump on the external side of the NS server shows
> no traffic at all in this case meaning it fails internally and doesn't
> even try to forward the A request to the Internet.
>
> 17:14:19.377608 IP 192.168.147.2.34502 > 192.168.151.100.53: 26340+ A?
> www.leclercdrive.fr. (37) 17:14:19.378845 IP 192.168.151.100.53 >
> 192.168.147.2.34502: 26340 ServFail 0/0/0 (37) 17:14:19.380607 IP
> 192.168.147.2.34502 > 192.168.151.100.53: 52628+ A? www.leclercdrive.fr.
> (37) 17:14:19.381383 IP 192.168.151.100.53 > 192.168.147.2.34502: 52628
> ServFail 0/0/0 (37) 17:14:19.382605 IP 192.168.147.2.34502 >
> 192.168.151.100.53: 58933+ A? www.leclercdrive.fr. (37) 17:14:19.383406
> IP 192.168.151.100.53 > 192.168.147.2.34502: 58933 ServFail 0/0/0 (37)
>
> A few minutes before, or later, it worked just fine, see:
>
> 17:15:58.736177 IP 192.168.147.2.34502 > 192.168.151.100.53: 49610+ A?
> www.leclercdrive.fr. (37) 17:15:58.784470 IP 192.168.151.100.53 >
> 192.168.147.2.34502: 49610 3/3/6 CNAME[|domain]
>
> The TTL of the www.leclercdrive.fr entry is 300 - which seems short to
> me - maybe the ServFail happens when a request is treated at the exact
> time of the TTL reaching zero and the cache entry beeing flushed ? I
> tried flushing the cache using rndc but the first request after that
> worked just fine (of course...)
>
> Any ideas/hints are welcome.
>
> The DNS server runs 1:9.5.1.dfsg.P3-1+lenny1
> cat /etc/debian_version => 5.0.4
> (I have no control on the version of the tools)



I found in my logfiles a few other domains where the ServFails happen; their
respective TTLs are all different, from 300 seconds to 86400.
I still have no idea how to resolve this issue, and as far as I have
investigated, I haven't been able to identify a pattern in those ServFails.
I'm not even sure the TTL is involved, since I saw two ServFails separated in
time by less than the TTL value of the entry...

Florian






Re: Seemingly random ServFail issues on a caching server

2011-08-31 Thread Lyle Giese

On 8/31/2011 8:40 AM, Florian CROUZAT wrote:

Florian CROUZAT wrote on 2011-08-25:


Hi list,

On a few domains (we'll consider only one domain for this example) I
encounter sometimes (seemingly randoms) ServFails while resolving domain
names. A client (192.168.147.2) asks my caching server (192.168.151.100)
to resolve a target (www.leclercdrive.fr)

Here are the relevant logs:

Aug 24 17:14:19 ns named[24929]: 24-Aug-2011 17:14:19.377 queries: info:
client 192.168.147.2#34502: view internal: query: www.leclercdrive.fr IN
A + Aug 24 17:14:19 ns named[24929]: 24-Aug-2011 17:14:19.380 queries:
info: client 192.168.147.2#34502: view internal: query:
www.leclercdrive.fr IN A + Aug 24 17:14:19 ns named[24929]: 24-Aug-2011
17:14:19.382 queries: info: client 192.168.147.2#34502: view internal:
query: www.leclercdrive.fr IN A +


A tcpdump on the local side of the NS server shows the A request and the
instant ServFail. A tcpdump on the external side of the NS server shows
no traffic at all in this case meaning it fails internally and doesn't
even try to forward the A request to the Internet.

17:14:19.377608 IP 192.168.147.2.34502>  192.168.151.100.53: 26340+ A?
www.leclercdrive.fr. (37) 17:14:19.378845 IP 192.168.151.100.53>
192.168.147.2.34502: 26340 ServFail 0/0/0 (37) 17:14:19.380607 IP
192.168.147.2.34502>  192.168.151.100.53: 52628+ A? www.leclercdrive.fr.
(37) 17:14:19.381383 IP 192.168.151.100.53>  192.168.147.2.34502: 52628
ServFail 0/0/0 (37) 17:14:19.382605 IP 192.168.147.2.34502>
192.168.151.100.53: 58933+ A? www.leclercdrive.fr. (37) 17:14:19.383406
IP 192.168.151.100.53>  192.168.147.2.34502: 58933 ServFail 0/0/0 (37)

A few minutes before, or later, it worked just fine, see:

17:15:58.736177 IP 192.168.147.2.34502>  192.168.151.100.53: 49610+ A?
www.leclercdrive.fr. (37) 17:15:58.784470 IP 192.168.151.100.53>
192.168.147.2.34502: 49610 3/3/6 CNAME[|domain]

The TTL of the www.leclercdrive.fr entry is 300 - which seems short to
me - maybe the ServFail happens when a request is treated at the exact
time of the TTL reaching zero and the cache entry beeing flushed ? I
tried flushing the cache using rndc but the first request after that
worked just fine (of course...)

Any ideas/hints are welcome.

The DNS server runs 1:9.5.1.dfsg.P3-1+lenny1
cat /etc/debian_version =>  5.0.4
(I have no control on the version of the tools)




I found in my logfiles a few other domains where the ServFails happen, their
respective TTL are all different, from 300 sec to 86400.
I still have no idea at all how to resolve this issue and as far as I
investigated, I haven't been able to identify a pattern in those ServFails.
I'm not even sure the TTL is involved since I saw two ServFail separated in
time by less than the TTL value of the entry...

Florian



The authoritative name servers for leclercdrive.fr are a.dns.gandi.net, 
b.dns.gandi.net and c.dns.gandi.net.  I don't know how big gandi.net is, 
but traceroutes to those servers end up going through Level3 in 
Baltimore, MD from here.  They did have a hurricane go through there and 
I would not be surprised if traffic levels have been a bit high for the 
last few days.


Lyle


RE: Seemingly random ServFail issues on a caching server

2011-08-31 Thread Florian CROUZAT
Lyle Giese wrote on 2011-08-31:

> On 8/31/2011 8:40 AM, Florian CROUZAT wrote:
>> Florian CROUZAT wrote on 2011-08-25:
>>
>>> Hi list,
>>>
>>> On a few domains (we'll consider only one domain for this example) I
>>> encounter sometimes (seemingly randoms) ServFails while resolving
>>> domain names. A client (192.168.147.2) asks my caching server
>>> (192.168.151.100) to resolve a target (www.leclercdrive.fr)
>>>
>>> Here are the relevant logs:
>>>
>>> Aug 24 17:14:19 ns named[24929]: 24-Aug-2011 17:14:19.377 queries:
>>> info: client 192.168.147.2#34502: view internal: query:
>>> www.leclercdrive.fr IN A + Aug 24 17:14:19 ns named[24929]:
>>> 24-Aug-2011 17:14:19.380 queries: info: client 192.168.147.2#34502:
>>> view internal: query: www.leclercdrive.fr IN A + Aug 24 17:14:19 ns
>>> named[24929]: 24-Aug- 2011 17:14:19.382 queries: info: client
>>> 192.168.147.2#34502: view internal: query: www.leclercdrive.fr IN A +
>>>
>>>
>>> A tcpdump on the local side of the NS server shows the A request and
>>> the instant ServFail. A tcpdump on the external side of the NS server
>>> shows no traffic at all in this case meaning it fails internally and
>>> doesn't even try to forward the A request to the Internet.
>>>
>>> 17:14:19.377608 IP 192.168.147.2.34502>  192.168.151.100.53: 26340+ A?
>>> www.leclercdrive.fr. (37) 17:14:19.378845 IP 192.168.151.100.53>
>>> 192.168.147.2.34502: 26340 ServFail 0/0/0 (37) 17:14:19.380607 IP
>>> 192.168.147.2.34502>  192.168.151.100.53: 52628+ A?
>>> www.leclercdrive.fr. (37) 17:14:19.381383 IP 192.168.151.100.53>
>>> 192.168.147.2.34502: 52628 ServFail 0/0/0 (37) 17:14:19.382605 IP
>>> 192.168.147.2.34502> 192.168.151.100.53: 58933+ A?
>>> www.leclercdrive.fr. (37) 17:14:19.383406 IP 192.168.151.100.53>
>>> 192.168.147.2.34502: 58933 ServFail 0/0/0 (37)
>>>
>>> A few minutes before, or later, it worked just fine, see:
>>>
>>> 17:15:58.736177 IP 192.168.147.2.34502>  192.168.151.100.53: 49610+ A?
>>> www.leclercdrive.fr. (37) 17:15:58.784470 IP 192.168.151.100.53>
>>> 192.168.147.2.34502: 49610 3/3/6 CNAME[|domain]
>>>
>>> The TTL of the www.leclercdrive.fr entry is 300 - which seems short to
>>> me - maybe the ServFail happens when a request is treated at the exact
>>> time of the TTL reaching zero and the cache entry beeing flushed ? I
>>> tried flushing the cache using rndc but the first request after that
>>> worked just fine (of course...)
>>>
>>> Any ideas/hints are welcome.
>>>
>>> The DNS server runs 1:9.5.1.dfsg.P3-1+lenny1
>>> cat /etc/debian_version =>  5.0.4
>>> (I have no control on the version of the tools)
>>
>>
>>
>> I found in my logfiles a few other domains where the ServFails happen,
>> their respective TTL are all different, from 300 sec to 86400. I still
>> have no idea at all how to resolve this issue and as far as I
>> investigated, I haven't been able to identify a pattern in those
>> ServFails. I'm not even sure the TTL is involved since I saw two
>> ServFail separated in time by less than the TTL value of the entry...
>>
>> Florian
>>
>
> The authoritative name servers for leclercdrive.fr are a.dns.gandi.net,
> b.dns.gandi.net and c.dns.gandi.net.  I don't know how big gandi.net is,
> but traceroutes to those servers end up going through Level3 in
> Baltimore, MD from here.  They did have a hurricane go through there and
> I would not be surprised if traffic levels have been a bit high for the
> last few days.
>
> Lyle

Well, it's a French registrar, my servers are in France and my clients are
French too, so from here the traceroute is pretty clean.
Anyway, my problem isn't (apparently) Gandi-related, or even
www.leclercdrive.fr-related, since the ServFails happen internally and
instantly in my BIND, which doesn't even try to forward the A request.


Florian






Re: RE: RE: what does dig +trace do?

2011-08-31 Thread Tom Schmitt

 Original Message 

> I believe what is missing is the root cache file.
> 
> The root servers would have glue records pointing to the gTLDs, like this
> 
> Then the gTLDs would have glue records pointing to the nameservers of the
> domain you are trying to trace.
> 
> What you are seeing is your local nameservers, it seems to me they don't
> have access to the Internet or a firewall is blocking some of the 
> response or you don't have the root cache file to do hints or 
> combination of all the above. 

Hi Gary,

yes, all of the above. But this is no mistake, it's the intended architecture. 

My DNS server is an internal one without any connection to the Internet. There 
is no root hints file because I have an internal root zone of my own. And my root 
servers have the glue records in this root zone, and the NS records for the TLDs 
as well. 

So dig +trace should work. Or does the trace option have the IP addresses of the 
Internet root servers hardwired into the source code?






Re: RE: what does dig +trace do?

2011-08-31 Thread Tom Schmitt

> >> What strikes me as odd is that the first query does return 4 (internal)
> >> root servers, but no glue records ?
> >
> >I have no idea why this is this way.
> 
> Because +trace only displays the answer section of the responses by
> default.
> Try "dig +trace +additional".

Hi Chris,

you are right, thank you. With this I see the glue records:

; <<>> DiG 9.8.0-P4 <<>> +trace example.com
;; global options: +cmd
.   10800   IN  NS  root1.
.   10800   IN  NS  root2.
.   10800   IN  NS  root3.
.   10800   IN  NS  root4.
root1. 10800 IN A  10.111.111.111
root2. 10800 IN A  10.111.112.112  
root3. 10800 IN A  10.111.113.113
root4. 10800 IN A  10.111.114.114
;; Received 159 bytes from 127.0.0.1#53(127.0.0.1) in 1 ms

;; connection timed out; no servers could be reached


The main problem is still the same, though: the trace option fails with a 
timeout, even though I'm operating on the shell of one of the root servers 
itself (so there is not much network in between to cause trouble).


forward question

2011-08-31 Thread CT

We have a public DNS in our DMZ.

- Some of the servers in the DMZ provide certain services to services on the
inside.
- Currently, certain servers use the Internal AD DNS Servers for resolution
of an internal DNS domain to provide the services via firewall rules.

I would like all DMZ clients to use the Public DNS and "forward" the internal
DNS queries to the Internal AD DNS servers.

Zone transfer to the Public DNS from the Internal DNS is not an option.

*
zone "internal.zone" in {
type forward;
forwarders {
xxx.xxx.xxx.1;  // ad server 1
xxx.xxx.xxx.2; // ad server 2
};
};
*
Thx
CT




BIND 9.8.1 is now available

2011-08-31 Thread Mark Andrews


Introduction

   BIND 9.8.1 is the current production release of BIND 9.8.

   This document summarizes changes from BIND 9.8.0 to BIND 9.8.1. Please
   see the CHANGES file in the source code release for a complete list of
   all changes.

Download

   The latest versions of BIND 9 software can always be found on our web
   site at http://www.isc.org/downloads/all. There you will find
   additional information about each release, source code, and some
   pre-compiled versions for certain operating systems.

Support

   Product support information is available on
   http://www.isc.org/services/support for paid support options. Free
   support is provided by our user community via a mailing list.
   Information on all public email lists is available at
   https://lists.isc.org/mailman/listinfo.

New Features

9.8.1

 * Added a new include file with function typedefs for the DLZ
   "dlopen" driver. [RT #23629]
 * Added a tool able to generate malformed packets to allow testing of
   how named handles them. [RT #24096]
 * The root key is now provided in the file bind.keys allowing DNSSEC
   validation to be switched on at start up by adding
   "dnssec-validation auto;" to named.conf. If the root key provided
   has expired, named will log the expiration and validation will not
   work. More information and the most current copy of bind.keys can
   be found at http://www.isc.org/bind-keys. *Please note this feature
   was actually added in 9.8.0 but was not included in the 9.8.0
   release notes. [RT #21727]

Security Fixes

9.8.1

 * If named is configured with a response policy zone (RPZ) and a
   query of type RRSIG is received for a name configured for RRset
   replacement in that RPZ, it will trigger an INSIST and crash the
   server. [RT #24280]
 * named, set up to be a caching resolver, is vulnerable to a user
   querying a domain with very large resource record sets (RRSets)
   when trying to negatively cache the response. Due to an off-by-one
   error, caching the response could cause named to crash. [RT #24650]
   [CVE-2011-1910]
 * Querying a wildcard CNAME label with query type SIG/RRSIG against a
   Response Policy Zone (RPZ) can cause named to crash. The fix is
   query-type independent. [RT #24715]
 * Using a Response Policy Zone (RPZ) with DNAME records and querying a
   subdomain of that label can cause named to crash. named now logs that
   DNAME is not supported. [RT #24766]
 * Change #2912 populated the message section in replies to UPDATE
   requests, which some Windows clients wanted. This exposed a latent
   bug that allowed the response message to crash named. With this
   fix, change 2912 has been reduced to copy only the zone section to
   the reply. A more complete fix for the latent bug will be released
   later. [RT #24777]

Feature Changes

9.8.1

 * Merged in the NetBSD ATF test framework (currently version 0.12)
   for development of future unit tests. Use configure --with-atf to
   build ATF internally or configure --with-atf=prefix to use an
   external copy. [RT #23209]
 * Added more verbose error reporting from DLZ LDAP. [RT #23402]
 * The DLZ "dlopen" driver is now built by default, no longer
   requiring a configure option. To disable it, use "configure
   --without-dlopen". (Note: driver not supported on win32.) [RT
   #23467]
 * Replaced compile time constant with STDTIME_ON_32BITS. [RT #23587]
 * Make --with-gssapi default for ./configure. [RT #23738]
 * Improved the startup time for an authoritative server with a large
   number of zones by making the zone task table of variable size
   rather than fixed size. This means that authoritative servers with
   lots of zones will be serving that zone data much sooner. [RT
   #24406]
 * Per RFC 6303, RFC 1918 reverse zones are now part of the built-in
   list of empty zones. [RT #24990]

Bug Fixes

9.8.1

 * During RFC5011 processing some journal write errors were not
   detected. This could lead to managed-keys changes being committed
   but not recorded in the journal files, causing potential
   inconsistencies during later processing. [RT #20256]
 * A potential NULL pointer dereference in the DNS64 code could cause
   named to terminate unexpectedly. [RT #20256]
 * A state variable relating to DNSSEC could fail to be set during
   some infrequently-executed code paths, allowing it to be used
   while in an uninitialized state during cache updates, with
   unpredictable results. [RT #20256]
 * A potential NULL pointer dereference in DNSSEC signing code could
   cause named to terminate unexpectedly. [RT #20256]
 * Several cosmetic code changes were made to silence warnings
   generated by a static code analysis tool. [RT #20256]
 * When using the -x (sign with only KSK) o

about the additional section

2011-08-31 Thread 风河
Hello,

I found that some queries have got the response which has additional
section, but some haven't.
For example, this query with www.google.com got the answer with
additional section set:

$ dig www.google.com

; <<>> DiG 9.6.1-P2 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57737
;; flags: qr rd ra; QUERY: 1, ANSWER: 7, AUTHORITY: 4, ADDITIONAL: 1

;; QUESTION SECTION:
;www.google.com.IN  A

;; ANSWER SECTION:
www.google.com. 604672  IN  CNAME   www.l.google.com.
www.l.google.com.   172 IN  A   74.125.71.104
www.l.google.com.   172 IN  A   74.125.71.105
www.l.google.com.   172 IN  A   74.125.71.106
www.l.google.com.   172 IN  A   74.125.71.147
www.l.google.com.   172 IN  A   74.125.71.99
www.l.google.com.   172 IN  A   74.125.71.103

;; AUTHORITY SECTION:
google.com. 172672  IN  NS  ns4.google.com.
google.com. 172672  IN  NS  ns2.google.com.
google.com. 172672  IN  NS  ns3.google.com.
google.com. 172672  IN  NS  ns1.google.com.

;; ADDITIONAL SECTION:
ns1.google.com. 345563  IN  A   216.239.32.10

;; Query time: 0 msec
;; SERVER: 119.147.163.133#53(119.147.163.133)
;; WHEN: Thu Sep  1 10:50:45 2011
;; MSG SIZE  rcvd: 236



But this same query sometimes gets an answer without the additional section:
$ dig www.google.com

; <<>> DiG 9.6.1-P2 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33219
;; flags: qr rd ra; QUERY: 1, ANSWER: 7, AUTHORITY: 4, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.IN  A

;; ANSWER SECTION:
www.google.com. 604800  IN  CNAME   www.l.google.com.
www.l.google.com.   300 IN  A   74.125.71.106
www.l.google.com.   300 IN  A   74.125.71.147
www.l.google.com.   300 IN  A   74.125.71.99
www.l.google.com.   300 IN  A   74.125.71.103
www.l.google.com.   300 IN  A   74.125.71.104
www.l.google.com.   300 IN  A   74.125.71.105

;; AUTHORITY SECTION:
google.com. 172800  IN  NS  ns4.google.com.
google.com. 172800  IN  NS  ns3.google.com.
google.com. 172800  IN  NS  ns1.google.com.
google.com. 172800  IN  NS  ns2.google.com.

;; Query time: 399 msec
;; SERVER: 119.147.163.133#53(119.147.163.133)
;; WHEN: Thu Sep  1 10:48:36 2011
;; MSG SIZE  rcvd: 220


My question is: under what conditions does the nameserver answer the query
with the additional section set?

Thank you.


[Solved] was: what does dig +trace do?

2011-08-31 Thread Tom Schmitt

I think I found the reason why dig +trace always failed with a timeout.
From the announcement of BIND 9.8.1 from earlier today:

 * If the server has an IPv6 address but does not have IPv6
   connectivity to the internet, dig +trace could fail attempting to
   use IPv6 addresses. [RT #23297]

So I only have to update to the new version of named and dig +trace will work. 
:-)



 Original Message 
> Date: Wed, 31 Aug 2011 17:36:46 +0200
> From: "Tom Schmitt" 
> To: bind-users@lists.isc.org
> Subject: Re: RE: what does dig +trace do?

> 
> > >> What strikes me as odd is that the first query does return 4
> (internal)
> > >> root servers, but no glue records ?
> > >
> > >I have no idea why this is this way.
> > 
> > Because +trace only displays the answer section of the responses by
> > default.
> > Try "dig +trace +additional".
> 
> Hi Chris,
> 
> you are right, thank you. With this I see the glue records:
> 
> ; <<>> DiG 9.8.0-P4 <<>> +trace example.com
> ;; global options: +cmd
> .   10800   IN  NS  root1.
> .   10800   IN  NS  root2.
> .   10800   IN  NS  root3.
> .   10800   IN  NS  root4.
> root1. 10800 IN A  10.111.111.111
> root2. 10800 IN A  10.111.112.112  
> root3. 10800 IN A  10.111.113.113
> root4. 10800 IN A  10.111.114.114
> ;; Received 159 bytes from 127.0.0.1#53(127.0.0.1) in 1 ms
> 
> ;; connection timed out; no servers could be reached
> 
> 
> The main problem is still the same though. The trace option fails with a
> timeout. Even thought I'm operating on the shell of one of the root-servers
> itself (so there is not much network in between to cause trouble).
> 


RE: forward question

2011-08-31 Thread Marc Lampo
Hello,

Do add "forward only;" to this zone statement.

Is this name server available/visible to the Internet?
--> If so, add an "allow-query" statement to limit who can query for your
internal zone.
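Putting both suggestions together with the original zone statement gives something like the sketch below. The "internal-clients" ACL name and its address range are placeholders to adjust for your network; the xxx.xxx.xxx addresses are kept from the original post.

```
acl "internal-clients" { 192.168.0.0/16; };  // placeholder: your DMZ/internal ranges

zone "internal.zone" in {
    type forward;
    forward only;        // never fall back to iterative resolution for this zone
    forwarders {
        xxx.xxx.xxx.1;   // ad server 1
        xxx.xxx.xxx.2;   // ad server 2
    };
    allow-query { "internal-clients"; };  // keep the internal zone invisible to the Internet
};
```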

Kind regards,

Marc Lampo
Security Officer
EURid



-Original Message-
From: CT [mailto:gro...@obsd.us] 
Sent: 31 August 2011 11:17 PM
To: bind-users@lists.isc.org
Subject: forward question

We have a public DNS in our DMZ

- Some of the servers in the DMZ provide certain services to services on 
the
inside.
- Currently, certain servers use the Internal AD DNS Servers for
resolution
on a internal DNS domain to provide the services via firewall rules.

I would like all DMZ clients to use the Public DNS and "forward" the 
internal
DNS queries to the Internal AD DNS servers.

zone transfer to the Public DNS from Internal DNS is not an option..

*
zone "internal.zone" in {
 type forward;
 forwarders {
 xxx.xxx.xxx.1;  // ad server 1
 xxx.xxx.xxx.2; // ad server 2
 };
};
*
Thx
CT


