Well, for a moment I did have to wonder whether this draft was a joke or not. But the acronym was perfect. :)

On 7/3/2013 1:30 PM, Warren Kumari wrote:

On Jul 2, 2013, at 1:58 AM, Matthijs Mekking <matth...@nlnetlabs.nl> wrote:

This sounds a lot like prefetch in unbound, and the configuration option
gives some analysis on increased traffic.
prefetch: <yes or no>
     If yes, message cache elements are prefetched before they expire
     to  keep  the  cache  up to date.  Default is no.  Turning it on
     gives about 10 percent more traffic and load on the machine, but
     popular items do not expire from the cache.
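
For readers who have not looked at how a prefetch like this works, here is a rough
Python sketch of the general idea: answer from cache as usual, but refresh a popular
entry shortly before it expires. The 10%-of-original-TTL refresh window and all the
names below are illustrative assumptions, not taken from Unbound's code.

import time

# Toy cache keyed by (qname, qtype); each entry holds the rrset, its absolute
# expiry time, and the original TTL. Purely illustrative.
cache = {}

def lookup(qname, qtype, resolve, prefetch_window=0.10):
    """Answer from cache, but refetch an entry that is close to expiry
    (here: within the last 10% of its original TTL) so that popular
    names rarely fall out of the cache.
    resolve(qname, qtype) is assumed to return (rrset, ttl)."""
    key = (qname, qtype)
    now = time.time()
    entry = cache.get(key)
    if entry is None or entry["expires"] <= now:
        rrset, ttl = resolve(qname, qtype)      # ordinary cache miss
        entry = {"rrset": rrset, "expires": now + ttl, "orig_ttl": ttl}
        cache[key] = entry
    elif entry["expires"] - now < prefetch_window * entry["orig_ttl"]:
        # Still valid but about to expire: refresh it now. A real resolver
        # would do this asynchronously and keep answering from the old copy.
        rrset, ttl = resolve(qname, qtype)
        entry = {"rrset": rrset, "expires": now + ttl, "orig_ttl": ttl}
        cache[key] = entry
    return entry["rrset"]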

Doh, sorry, I was not aware that Unbound did this. I'd like to recognize this
in the draft; any idea who actually suggested it / added it to the code?


Also, if the original TTL of the RR is less than STOP * HAMMER_TIME, then the
cache entry cannot be used anymore and the resolver should "Break it down".


I got a few off-list questions asking about the odd naming (and I also remember 
the IETF Diversity thread on Discuss).
For folk who have no idea what the hell we are on about:
http://www.youtube.com/watch?v=GbKAaSf6e10
http://www.youtube.com/watch?v=otCpCn0l4Wo



Best regards,
Matthijs

On 07/02/2013 03:44 AM, John Levine wrote:
We would like to draw your attention to a new draft.

It looks like it should work, assuming that your cache uses the
existing logic to remember queries in progress so it doesn't hammer a
record that's already in the process of being refetched.

Hmm. Probably worth mentioning.
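
The "queries in progress" logic mentioned above is essentially an in-flight table
keyed by (qname, qtype). A minimal sketch of that idea, assuming a simple threaded
resolver (all names below are made up for illustration, not taken from any
particular implementation):

import threading

# (qname, qtype) tuples that already have a refetch outstanding.
_in_flight = set()
_lock = threading.Lock()

def maybe_refetch(qname, qtype, refetch):
    """Start a cache-fill query unless one is already outstanding, so an
    expiring popular record is refetched once instead of being hammered.
    refetch(qname, qtype) is assumed to do the actual upstream query."""
    key = (qname, qtype)
    with _lock:
        if key in _in_flight:
            return                     # a refetch is already under way
        _in_flight.add(key)

    def _worker():
        try:
            refetch(qname, qtype)
        finally:
            with _lock:
                _in_flight.discard(key)

    threading.Thread(target=_worker, daemon=True).start()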


My main observation is that I have no idea what the tradeoffs are
between the increased DNS traffic and faster responses to clients.
Have you done simulations or live experiments?
Nope.

This doesn't really change the average very much, but it should help decrease 
the jitter / spikes for the unlucky few who hit the resolver just as the cache 
entry expires.
The draft specified HAMMER_TIME as a time, not a percentage because:
A: I wanted to use the phrase HAMMER_TIME (and STOP HAMMER_TIME!) :-P
B: The main "issue" that the draft tries to solve is increased resolution times for the 
unlucky few who hit the resolver when the record has just expired. In order to decrease the added 
load caused by this, I chose a default that is somewhat longer than a "bad" resolution 
experience with a few failures along the way.
C: STOP was added to prevent doing a cache-fill request on every recursive request 
for records that have a TTL of less than HAMMER_TIME.
D: I was (tilting at windmills and) trying to get folk not to use TTLs of less than a 
few seconds in their records :-)

From reading the unbound prefetch thing, I suspect maybe a better solution 
would be:
HAMMER_TIME defaults to 2 (or 1 or something) seconds
If the original TTL is < HAMMER_TIME, then only do the cache-fill when the remaining 
TTL reaches 10% of the original TTL.
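
A minimal sketch of that modified rule, assuming HAMMER_TIME is in seconds and that
"remaining_ttl" is the time left on the cached entry (the parameter names are mine,
not the draft's):

HAMMER_TIME = 2.0   # seconds; the default suggested above
FRACTION = 0.10     # fallback threshold for short-TTL records

def should_cache_fill(remaining_ttl, original_ttl,
                      hammer_time=HAMMER_TIME, fraction=FRACTION):
    """Return True when a background cache-fill should be started."""
    if original_ttl < hammer_time:
        # Short-lived records: only refetch in the last 10% of the
        # original TTL, so we don't cache-fill on every query.
        return remaining_ttl <= fraction * original_ttl
    # Normal case: refetch once we are within HAMMER_TIME of expiry.
    return remaining_ttl <= hammer_time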

This means that:
if HAMMER_TIME is 2 and the original TTL is 600 seconds -- the auth servers 
will see approximately 0.3% additional traffic.
if HAMMER_TIME is 2 and the original TTL is 60 seconds -- the auth servers will 
see approximately 3.3% additional traffic.
if HAMMER_TIME is 2 and the original TTL is 1 second -- the auth servers will 
see approximately 10% additional traffic.
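
Those figures work out to roughly the refetch window divided by the original TTL,
i.e. about one extra upstream query per TTL interval for a continuously popular
name. A quick back-of-the-envelope check, using the same assumptions as the sketch
above:

def extra_traffic(original_ttl, hammer_time=2.0, fraction=0.10):
    """Approximate fraction of additional authoritative queries for a
    record popular enough to be refetched every TTL interval."""
    if original_ttl < hammer_time:
        window = fraction * original_ttl
    else:
        window = hammer_time
    return window / original_ttl

for ttl in (600, 60, 1):
    print(ttl, "%.1f%%" % (100 * extra_traffic(ttl)))   # ~0.3%, ~3.3%, ~10.0%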


W



R's,
John




--
"Go on, prove me wrong. Destroy the fabric of the universe. See if I care."  -- 
Terry Prachett





_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
