At 11:38 AM 2/20/99 -0800, Kent Crispin wrote:
>On Sat, Feb 20, 1999 at 09:32:54AM -0800, Greg Skinner wrote:
>
>I am actually responding to Roeland's comment here...
>
>> Roeland Meyer wrote:
>> > Sure, but first you'll have to prove that there is a problem, Chicken
>> > Little. Show me a failure mode that I can repeat.
>
>There is no failure mode that you can repeat, because the failure 
>mode exists in the entire DNS, not in a particular node.  Caching 
>behavior is a global property of DNS, not just a property of a 
>single running copy of bind.  Your failure to comprehend this simple 
>fact is a fundamental error.

Read again, Kent: I said "mode", not "node". Of course I was talking
about the system. I've been working with meta-clusters for about
fourteen years now; I think I know what a "system" is. FYI, since we
are deploying a new root-server cluster, I had five dual Pentium 450
boxes available, each with two NICs, and a 100baseTX full-duplex
switch. Each NIC has four IP addresses. We also used a K6-200 so we
could test against a slower machine. In various configurations, that
gave us 28 virtual hosts to play with. We built a set of scripts that
hammered BIND pretty hard, instrumented various parts of BIND, and ran
the suite at maximum speed for about three weeks, testing various
configurations. However, we only tested BIND4 this way.
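
The gist of the hammer scripts, reconstructed from memory as a Python
sketch (the server address, the name list, and the one-minute run are
placeholders, not the real parameters; the actual scripts ran many
copies in parallel across the virtual hosts):

import random, socket, struct, time

SERVER = ("10.0.0.1", 53)  # placeholder: one box under test
NAMES = ["host%d.tld%04d" % (i, i % 1960) for i in range(10000)]

def build_query(qname, qid):
    # DNS header: id, flags (RD=1), 1 question, no other sections
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in qname.split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, CLASS=IN
    return header + question

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.5)
sent = got = 0
start = time.time()
while time.time() - start < 60:  # one query in flight at a time
    sock.sendto(build_query(random.choice(NAMES),
                            random.randrange(65536)), SERVER)
    sent += 1
    try:
        sock.recv(512)
        got += 1
    except socket.timeout:
        pass
print("sent %d queries, got %d replies" % (sent, got))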

That's all the equipment and time I was willing to put into the
effort. Was it a scientifically rigorous test? Nope. Did the servers
fail at any time? Nope. Hell, I couldn't even saturate all the links;
the caching worked too well. I called it quits when we hit 1960 TLDs,
because I figured we would not have that many in the near term, and by
the time we did, Merced would be out with 64-bit Linux.
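
The 1960 synthetic TLDs were stamped out by a generator along these
lines (a sketch, not the original script; the paths, addresses, and
zone contents here are made up). It writes a BIND4 named.boot plus one
minimal zone file per TLD:

NUM_TLDS = 1960

with open("named.boot", "w") as boot:
    boot.write("directory /var/named\n")
    for i in range(NUM_TLDS):
        tld = "tld%04d" % i
        boot.write("primary\t%s\t%s.zone\n" % (tld, tld))
        with open("%s.zone" % tld, "w") as z:
            # minimal SOA: serial refresh retry expire minimum
            z.write("@\tIN\tSOA\tns.%s. hostmaster.%s. "
                    "( 1 3600 900 604800 86400 )\n" % (tld, tld))
            z.write("\tIN\tNS\tns.%s.\n" % tld)
            z.write("ns\tIN\tA\t10.%d.%d.1\n" % (i // 256, i % 256))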

>> >  Point to code that shows
>> > the architectural flaw. Yes, there is one small section, in the caching
>> > code, that is slightly non-deterministic in certain conditions.
>> >  However, my
>> > personal examination did not yield any failure modes in the code. testing
>> > specifically, and generally, also did not reveal any flaws.
>
>Your personal examination of the code is largely irrelevant.  This 
>is a problem inherent in the global structure of DNS, not the bind 
>code. 

The global structure of the DNS is enabled by the code. If there were
a systemic flaw, it would have to show up somewhere in the behavior of
that code.

>Are you familiar with the smurf attack, where the attacker can cause
>thousands of pings per second to flood into a particular address?
>That's not a failure mode that is visible in the networking code that
>replies to broadcast addresses, or something you would detect by looking 
>at the source for "ping"; it's a failure mode in the overall
>system design.  There are, what, 43,000,000 nodes in the DNS, and
>they are all caching almost everything, so the vast majority of
>requests never go through the roots.  

Well, you were able to describe the smurf attack pretty well there.
How come you can't describe the expected DNS failure mode equally
well? So please, spell out the failure mode you are complaining about.
Where is the flaw in the overall system design?
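
For the record, the cache absorption he describes is the same effect
that kept me from saturating my own links. It is easy to put
back-of-envelope numbers on it (all four figures below are made up,
purely illustrative):

# Each distinct name costs the upstream servers roughly one query
# per TTL per resolver, no matter how hot the name is locally.
resolvers = 100000        # assumed number of caching resolvers
queries_per_sec = 50.0    # assumed client load on each resolver
distinct_names = 20000    # assumed working set of popular names
ttl = 86400.0             # assumed one-day TTL on cached records

total_client = resolvers * queries_per_sec
total_upstream = resolvers * (distinct_names / ttl)
print("client queries/sec:   %.0f" % total_client)
print("upstream queries/sec: %.0f" % total_upstream)
print("fraction upstream:    %.4f" % (total_upstream / total_client))

Under those assumptions less than half a percent of the client load
ever leaks upstream; move the TTLs or the working set and the numbers
shift, but the absorption stays enormous.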

>It is well-known that a small change in caching characteristics can
>sometimes cause a drastic change in performance -- it's a non-linear
>response.  One example of this phenomenon is how network performance
>degrades precipitously past a certain threshold, when retries start
>to clog the system.  Another example is disk thrashing when the
>working set of a program is significantly bigger than the page cache. 

Nice generalities, but where's the beef?
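
Sure, the knee is real; it's textbook queueing. A toy M/M/1 model
makes it concrete (the service rate is a made-up figure):

# M/M/1 queue: mean time in system W = 1/(mu - lambda).  Latency is
# nearly flat until utilization approaches 1, then it blows up --
# the "precipitous" degradation in question.
mu = 1000.0  # assumed service rate, queries/sec
for util in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = util * mu
    print("utilization %.2f -> mean latency %6.1f ms"
          % (util, 1000.0 / (mu - lam)))

But showing that knees exist in general is not the same as showing
where, or whether, the DNS hits one.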

>Note that DNS traffic is already a significant fraction of the
>traffic on the backbones.

Do you have access to a backbone router?

___________________________________________________ 
Roeland M.J. Meyer - 
e-mail:                                      mailto:[EMAIL PROTECTED]
Internet phone:                                hawk.lvrmr.mhsc.com
Personal web pages:             http://staff.mhsc.com/~rmeyer
Company web-site:                           http://www.mhsc.com
___________________________________________________ 
                       KISS ... gotta love it!
