Re: request timeout

2009-04-28 Thread Jonathan Petersson
IIRC it's 3 seconds.

On Tue, Apr 28, 2009 at 12:42 AM, Jeff Pang hostmas...@duxieweb.com wrote:
 When one BIND server queries another BIND server to resolve a name, what's
 the timeout value for this request?
 I mean, if the peer server doesn't answer, after how many seconds will this
 BIND server give up on the query?

 Thanks.
 Regards.




Re: How to forward domain totally not using CNAME?

2009-04-28 Thread Chris Buxton

On Apr 28, 2009, at 2:39 AM, Larry wrote:

MontyRee wrote:


Hello, all.


I would like to create a CNAME like the one below.

example.com.    IN  CNAME   example2.com.


But I know that this is wrong.
Then, is there any way or solution to this problem?


I searched and found that the record below is a similar solution:


*               IN  CNAME   example2.com.

but in this case, only *.example.com works well,
and example.com itself doesn't.


Any comment?


Thanks in advance.



use

example.com.    IN  DNAME   example2.com.


That still doesn't cover example.com itself, only *.example.com (and  
forwards them to *.example2.com, not example2.com).


Replicate the example2.com records (A record, MX record, whatever) for  
example.com. It's the only thing you can do.
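
For instance, a minimal sketch of an example.com zone built that way, with
placeholder addresses (192.0.2.x) and assuming example2.com's apex has just
one A record and one MX record:

$TTL 3600
example.com.    IN  SOA   ns1.example.com. hostmaster.example.com. (
                          2009042801 3600 900 604800 3600 )
example.com.    IN  NS    ns1.example.com.
; copies of example2.com's apex data
example.com.    IN  A     192.0.2.10
example.com.    IN  MX    10 mail.example2.com.
; the DNAME suggested above only covers names below the apex
example.com.    IN  DNAME example2.com.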


Chris Buxton
Professional Services
Men & Mice



Re: approach on parsing the query-log file

2009-04-28 Thread Jonathan Petersson
The problem I'm seeing with this is that we'll get data that may be
inconsistent. Just because a query is sent to a server doesn't mean
that there's a name server there to answer it; I believe parsing the
log file one way or another would give a more accurate picture of load,
etc.

On Tue, Apr 28, 2009 at 2:33 AM, Chris Buxton cbux...@menandmice.com wrote:
 On Apr 28, 2009, at 5:26 AM, Jonathan Petersson wrote:

 Hi all,

 I'm thinking of writing a quick tool to archive the query-log in a
 database to allow for easier reports.

 If it were me, I would turn off query logging and use a packet sniffer.

 Chris Buxton
 Professional Services
 Men & Mice




Re: approach on parsing the query-log file

2009-04-28 Thread Gregory Hicks

 From: Jonathan Petersson jpeters...@garnser.se
 Date: Tue, 28 Apr 2009 08:13:25 -0700
 Subject: Re: approach on parsing the query-log file
 To: niall.orei...@ucd.ie
 Cc: Bind Mailing bind-users@lists.isc.org
 
 Yeah, I've thought about using tail, but I'm not sure how locking would
 be managed when logrotate kicks in. Does anyone know?

I use tail -f log-file

When the log rotates, the tail is still running against the rotated 
file.  I have to manually change to the current file. (^C-!! works)

A better way to do it might be to have the 'logfile' be a pipe and have 
the parsing intelligence on the other side of the pipe.  Have the log 
rotation smarts be on the other side of the pipe also.  (At one $JOB, 
I used this technique to separate out different log messages from 
simultaneously running SMTP processes.)
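
A rough sketch of the reading side of such a pipe, in Python, assuming the
log writer is pointed at a named pipe (the path and the per-line handling
are placeholders; named itself may not write straight to a FIFO, so a small
wrapper or syslog would usually sit in between):

import os

FIFO_PATH = "/var/run/named-query.fifo"   # placeholder path

if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

# Opening a FIFO for reading blocks until a writer attaches.
with open(FIFO_PATH) as fifo:
    for line in fifo:
        # Placeholder for the real parsing/archiving logic.
        print(line.rstrip())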

Regards,
Gregory Hicks
 
 On Tue, Apr 28, 2009 at 3:41 AM, Niall O'Reilly niall.orei...@ucd.ie 
wrote:
  On Mon, 2009-04-27 at 22:26 -0700, Jonathan Petersson wrote:
  The obvious question that occurs is: what's the best approach to do this?
 
         I've not used it, but a colleague is very keen on File::Tail
         (http://search.cpan.org/~mgrabnar/File-Tail-0.99.3/Tail.pm).
         Apparently, it looks after log-file roll-over and 'just works'.
 
         /Niall
 
 
 

-
Gregory Hicks   | Principal Systems Engineer
                | Direct: 408.569.7928

People sleep peaceably in their beds at night only because rough men
stand ready to do violence on their behalf -- George Orwell

The price of freedom is eternal vigilance.  -- Thomas Jefferson

The best we can hope for concerning the people at large is that they
be properly armed. --Alexander Hamilton



Re: approach on parsing the query-log file

2009-04-28 Thread Jonathan Petersson
I don't think the cost of having query logging enabled is that great;
running the same test using dnsperf there's a 43% performance increase
without it, but 70,000 queries per second is still acceptable with
query logging enabled.
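
For reference, a comparison like that can be run along these lines (the
server address, query file and duration are placeholders, and dnsperf
option names vary a little between versions); rndc querylog toggles query
logging between the two runs:

# run once with query logging on, then toggle it off and run again
rndc querylog
dnsperf -s 127.0.0.1 -d queries.txt -l 60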

/Jonathan

On Tue, Apr 28, 2009 at 10:05 AM, Alan Clegg alan_cl...@isc.org wrote:
 Jonathan Petersson wrote:
 So I gave tail a try in Perl, both via File::Tail and by putting tail
 -f in a pipe.

 As was stated previously in this thread, you are going down a bad path
 by using query-log for any purpose beyond short debugging sessions.

 The loss in performance is rather painful.

 The use of a network sniffing package is much preferable.

 [Just to see, try running your million queries with and without query
 logging turned on and see if you are happy with the results]

 But, if that's what you want to do, I wish you luck.

 AlanC





Re: approach on parsing the query-log file

2009-04-28 Thread JINMEI Tatuya / 神明達哉
At Tue, 28 Apr 2009 10:01:02 -0700,
Jonathan Petersson jpeters...@garnser.se wrote:

 So I gave tail a try in Perl, both via File::Tail and by putting tail
 -f in a pipe. Neither seems to handle log rotation well. In my case
 I'm running a test sending 1 million queries; half of those are picked
 up by File::Tail if you define how often it should re-read the file,
 but using tail -f directly, or File::Tail without arguments, just
 stops once the log has rotated, as it doesn't seem to figure out that
 it should continue onto the new file.

I've never tried it, but how about letting named dump log messages to
syslog, and letting syslogd forward all messages to a separate process
via a pipe (assuming your syslogd supports that)?
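
A minimal named.conf sketch of the named side of that idea, assuming the
local3 facility is free on this system (depending on the version, query
logging may also need to be switched on with "querylog yes;" or "rndc
querylog"):

logging {
    channel query_syslog {
        syslog local3;
        severity info;
    };
    category queries { query_syslog; };
};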

---
JINMEI, Tatuya
Internet Systems Consortium, Inc.


Re: approach on parsing the query-log file

2009-04-28 Thread Jonathan Petersson
I did try to run the following option:
syslog named;

but when matching on named.* in syslog.conf there's no output.

/Jonathan

2009/4/28 JINMEI Tatuya / 神明達哉 jinmei_tat...@isc.org:
 At Tue, 28 Apr 2009 10:01:02 -0700,
 Jonathan Petersson jpeters...@garnser.se wrote:

 So I gave tail a try in Perl, both via File::Tail and by putting tail
 -f in a pipe. Neither seems to handle log rotation well. In my case
 I'm running a test sending 1 million queries; half of those are picked
 up by File::Tail if you define how often it should re-read the file,
 but using tail -f directly, or File::Tail without arguments, just
 stops once the log has rotated, as it doesn't seem to figure out that
 it should continue onto the new file.

 I've never tried it, but how about letting named dump log messages to
 syslog, and letting syslogd forward all messages to a separate process
 via a pipe (assuming your syslogd supports that)?

 ---
 JINMEI, Tatuya
 Internet Systems Consortium, Inc.



Re: approach on parsing the query-log file

2009-04-28 Thread Jeremy C. Reed
On Tue, 28 Apr 2009, Jonathan Petersson wrote:

 I did try to run the following option:
 syslog named;

The argument to syslog should be a syslog facility.

Look in the openlog, syslog, and/or syslog.conf manual pages to see lists
of facilities. The ARM says: "The syslog destination clause directs the
channel to the system log. Its argument is a syslog facility as described
in the syslog man page. Known facilities are kern, user, mail, daemon,
auth, syslog, lpr, news, uucp, cron, authpriv, ftp, local0, local1,
local2, local3, local4, local5, local6 and local7, however not all
facilities are supported on all operating systems."

 but when matching on named.* in syslog.conf there's no output.
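
So with a channel that logs to, say, the local3 facility, the syslog.conf
selector has to match that facility rather than "named", along these lines
(the pipe form only works where the syslogd supports piping to a FIFO or
program):

local3.*        /var/log/named-queries.log
local3.*        |/var/run/named-query.fifo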


Re: approach on parsing the query-log file

2009-04-28 Thread Jonathan Petersson
Ah, i.e. I'm using an incorrect log facility... that would explain things.

Either way, I did try to parse tcpdump output for queries; the problem I'm
getting is that Perl isn't the best option for this, so I'm going to
look into whether things could be sped up with Python or something.
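
For what it's worth, a rough sketch of that kind of tcpdump-based capture in
Python; the interface name is a placeholder and tcpdump's text output varies
by version, so the regex is only illustrative:

import re
import subprocess

# Line-buffered, non-resolving tcpdump output of DNS queries.
cmd = ["tcpdump", "-l", "-n", "-i", "eth0", "udp", "dst", "port", "53"]

# Typical query lines look roughly like:
#   12:00:00.000 IP 192.0.2.1.53000 > 192.0.2.2.53: 1234+ A? www.example.com. (31)
query_re = re.compile(r"IP (\S+?)\.\d+ > \S+: \S+ ([A-Z]+)\? (\S+)")

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    m = query_re.search(line)
    if m:
        client, qtype, qname = m.groups()
        # Placeholder: insert into a database instead of printing.
        print(client, qtype, qname)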

/Jonathan

2009/4/28 Jeremy C. Reed jeremy_r...@isc.org:
 On Tue, 28 Apr 2009, Jonathan Petersson wrote:

 I did try to run the following option:
 syslog named;

 The argument to syslog should be a syslog facility.

 Look in the openlog, syslog, and/or syslog.conf manual pages to see lists
 of facilities. The ARM says: "The syslog destination clause directs the
 channel to the system log. Its argument is a syslog facility as described
 in the syslog man page. Known facilities are kern, user, mail, daemon,
 auth, syslog, lpr, news, uucp, cron, authpriv, ftp, local0, local1,
 local2, local3, local4, local5, local6 and local7, however not all
 facilities are supported on all operating systems."

 but when matching on named.* in syslog.conf there's no output.



Re: approach on parsing the query-log file

2009-04-28 Thread Scott Haneda
I have read the other posts here, and it looks like you are settling on
tail or a pipe, but log rotation is causing you headaches.


I have had to deal with things like this in the past, and took a  
different approach.  Here are some ideas to think about.


Since you mentioned below that you want this in real time, and that
parsing an old log file is out, what about setting up a second log in
named, with the same data, but not rotating it at all?


This gives you a log that you can run tail on.  It probably is going
to grow too large.  I solved this for a different server in the past,
by telling the log that was a clone to be limited in size.  In this
way, it was not rolled out, but rather, truncated.


I am not sure how named would do this.  If it will not truncate it,  
you can write a small script to do it for you.  Now that you have a  
log that is maintained at a fixed size that is manageable, you can do  
your tail business on it.


I also seem to remember that tail has some flags that may help you with
dealing with the log rotation issues.  I only remember them vaguely, as
they were not applicable to what I was doing at the time.


Hope this helps some.
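
(The flag is probably tail -F, which on systems that have it reopens the
file when it is rotated or truncated.)  A rough Python sketch of the same
idea, reopening whenever the file shrinks or its inode changes; the path
and the per-line handling are placeholders:

import os
import time

LOG_PATH = "/var/log/named-query.log"   # placeholder path

def follow(path):
    f = open(path)
    f.seek(0, os.SEEK_END)
    inode = os.fstat(f.fileno()).st_ino
    pos = f.tell()
    while True:
        line = f.readline()
        if line:
            pos = f.tell()
            yield line
            continue
        time.sleep(0.5)
        try:
            st = os.stat(path)
        except OSError:
            continue
        # Reopen if the file was rotated (new inode) or truncated (shrank).
        if st.st_ino != inode or st.st_size < pos:
            f.close()
            f = open(path)
            inode = os.fstat(f.fileno()).st_ino
            pos = 0

for line in follow(LOG_PATH):
    print(line.rstrip())   # placeholder for the real parsing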

On Apr 27, 2009, at 10:26 PM, Jonathan Petersson wrote:


Hi all,

I'm thinking of writing a quick tool to archive the query-log in a
database to allow for easier reports.

The obvious question that occurs is: what's the best approach to do this?

Running scripts that parse through the query log would cause locking,
essentially killing BIND on a heavily loaded server, and only parsing
archived files wouldn't allow real-time information; also, re-parsing
the same set of data over and over again until the log has rotated
would cause unnecessary I/O load. I'm guessing the best would be to
have BIND write directly to a script that dumps the data wherever it
makes sense to.

I've used BIND statistics and found it highly useful but then again it
doesn't allow me to make breakdowns based on host/query.

If anyone has done something like this, or has pointers on how it could
be achieved, any information is welcome!


--
Scott * If you contact me off list replace talklists@ with scott@ *



Re: approach on parsing the query-log file

2009-04-28 Thread Jonathan Petersson
After feedback and running some tests today, I've found that the most
cost-effective approach as far as performance goes is to use the
native querylog and rotate it often enough to have data that is as
live as possible.

Some quick notes (all tests done with Perl):
- Parsing the querylog, 500,000 queries: 3 seconds
- Parsing tcpdump output while running 1 million queries: 300k picked
up, the rest lost due to too high CPU load

I haven't tried to pipe the querylog through stderr, but it feels like
that could look a bit ugly; running something that is more layered is
favored.

At this point I'll have to sacrifice having real-time data; parsing the
querylog is the most efficient way as I see it, based on my tests.

Thanks for all the feedback on this; I'll publish my code once I'm finished.
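
As an illustration of that kind of parsing (not the code referred to above),
a minimal Python sketch that pulls the client, name and type out of query-log
lines; the exact line format varies between BIND versions, so the regex is an
assumption:

import re
import sys

# Lines typically contain something like:
#   client 192.0.2.1#53000: query: www.example.com IN A +
QUERY_RE = re.compile(
    r"client (?P<client>[0-9a-fA-F.:]+)#\d+.*? query: "
    r"(?P<name>\S+) (?P<class>\S+) (?P<type>\S+)"
)

for line in sys.stdin:
    m = QUERY_RE.search(line)
    if m:
        # Placeholder: write to a database instead of printing.
        print(m.group("client"), m.group("name"), m.group("type"))

Run it as, for example: python parse-querylog.py < query.log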

/Jonathan

On Tue, Apr 28, 2009 at 5:24 PM, Scott Haneda talkli...@newgeo.com wrote:
 I have read the other posts here, and it looks like you are settling on tail
 or a pipe, but log rotation is causing you headaches.

 I have had to deal with things like this in the past, and took a different
 approach.  Here are some ideas to think about.

 Since you mentioned below that you want this in real time, and that parsing an
 old log file is out, what about setting up a second log in named, with the
 same data, but not rotating it at all?

 This gives you a log that you can run tail on.  It probably is going to grow
 too large.  I solved this for a different server in the past, by telling the
 log that was a clone to be limited in size.  In this way, it was not
 rolled out, but rather, truncated.

 I am not sure how named would do this.  If it will not truncate it, you can
 write a small script to do it for you.  Now that you have a log that is
 maintained at a fixed size that is manageable, you can do your tail business
 on it.

 I also seem to remember that tail has some flags that may help you with dealing
 with the log rotation issues.  I only remember them vaguely, as they were not
 applicable to what I was doing at the time.

 Hope this helps some.

 On Apr 27, 2009, at 10:26 PM, Jonathan Petersson wrote:

 Hi all,

 I'm thinking of writing a quick tool to archive the query-log in a
 database to allow for easier reports.

 The obvious question that occurs is: what's the best approach to do this?

 Running scripts that parse through the query log would cause locking,
 essentially killing BIND on a heavily loaded server, and only parsing
 archived files wouldn't allow real-time information; also, re-parsing
 the same set of data over and over again until the log has rotated
 would cause unnecessary I/O load. I'm guessing the best would be to
 have BIND write directly to a script that dumps the data wherever it
 makes sense to.

 I've used BIND statistics and found it highly useful but then again it
 doesn't allow me to make breakdowns based on host/query.

 If anyone has done something like this, or has pointers on how it could
 be achieved, any information is welcome!

 --
 Scott * If you contact me off list replace talklists@ with scott@ *




stop zone transfers from coming in

2009-04-28 Thread Chris Henderson
My server works as a secondary for a zone. I asked the master server's
admin to stop the zone transfer; I didn't get any reply and thus
commented out the zone's section in my named.conf. But I'm still
getting zone files coming in to my server.

Here is what I have commented out:

#  zone example.com {
#      type slave;
#      file extra/example.com;
#      masters {
#          xxx.xxx.xx.xx;
#      };
#  };

I commented out the sections for some other zones as well and they have
stopped coming in, but not this one.
How do I stop this?

Thanks.


Re: stop zone transfers from coming in

2009-04-28 Thread Jonathan Petersson
I would honestly look for a typo, since you're saying that it does work
for some. Either way, unless the admin turns it off you will get
zone transfers; the question is whether your name server accepts
them and propagates them down.

Check the log for transfer or notification refusals, and make sure
that you don't have any global options that could cause issues.
/Jonathan

On Tue, Apr 28, 2009 at 9:38 PM, Chris Henderson henders...@gmail.com wrote:
 My server works as a secondary for a zone. I asked the master server's
 admin to stop the zone transfer; I didn't get any reply and thus
 commented out the zone's section in my named.conf. But I'm still
 getting zone files coming in to my server.

 Here is what I have commented out:

 #  zone example.com {
 #      type slave;
 #      file extra/example.com;
 #      masters {
 #          xxx.xxx.xx.xx;
 #      };
 #  };

 I commented out the sections for some other zones as well and they have
 stopped coming in, but not this one.
 How do I stop this?

 Thanks.
