Hi All, just wanted to add a few things to what Ashish said.  Christian 
Biesinger wrote:

> I think this is something we need to be really careful about, because this 
> would effectively double (or triple, etc) the load on the DNS servers we 
> use, I am not sure that the owners of those servers would be happy. 

As Patrick noted, we are explicitly trading bandwidth for latency.  This is no 
different from, say, DNS prefetching -- you're using extra bandwidth that might 
have been unnecessary but that on average saves time.  At a high level, 
bandwidth is cheap and latency is expensive so this ends up being a good 
tradeoff.  (In fact if you work out the numbers, it seems to be a good tradeoff 
even if you're paying for bandwidth on a cell connection; that's what we tried 
to work out in the short paper Ashish linked to.)  Many of the owners of the 
DNS servers would be happy because it makes their sites faster.
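To make the "bandwidth is cheap" intuition concrete, here is a back-of-envelope 
sketch.  Every number below is a hypothetical round figure for illustration 
only, not a measurement from the paper:

```python
# All inputs are hypothetical round numbers, chosen only to illustrate
# the order of magnitude -- not measurements from the paper.
queries_per_day = 1000       # DNS lookups a busy browser might issue daily
extra_copies = 2             # duplicate each query to 2 extra servers
bytes_per_exchange = 150     # rough size of one DNS query + response

extra_bytes_per_day = queries_per_day * extra_copies * bytes_per_exchange
extra_mb_per_month = extra_bytes_per_day * 30 / 1e6

# Even on a metered cell plan at, say, $10/GB, the monthly cost of the
# extra queries is on the order of a dime.
cost_per_month_usd = extra_mb_per_month / 1000 * 10
```

Under these assumptions the overhead works out to about 9 MB/month, i.e. 
well under a dollar even at cell-data prices -- which is the shape of the 
calculation in the paper, with real numbers in place of these placeholders.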

As an aside, one advantage of having browsers do redundant DNS queries (as 
opposed to the OS doing it) is that the browser can decide when redundancy is 
worthwhile (i.e. there's a human waiting) and when it might be OK just to send 
a single query (e.g. DNS prefetching, when you might have a few seconds to 
spare).

> There are two obvious scenarios you see improvement from - 1 is just 
> identifying a faster path, but the other is in having a parallel query 
> going when one encounters a drop and has to retry.. just a few of those 
> retries could seriously change your mean. Do you have data to tease these 
> things apart? Retries could conceivably also be addressed with aggressive 
> timers. 

Yes, redundant queries are certainly useful to protect against both drops and 
slow lookups.

You could use an aggressive timer.  But if the timer is set to less than the 
typical DNS resolution time, you might as well have sent multiple queries to 
begin with.  So the best you could do is set the timer equal to the typical 
resolution time, which means you always pay that extra delay whenever your 
first query ends up slow or dropped.  And you still spend some extra 
bandwidth, because with an aggressive timer the first query will sometimes 
succeed just after you send the second.  So aggressive timers will not 
achieve as good latency, and presumably don't have as good a 
bandwidth/latency tradeoff either; but we have not explicitly experimented 
with that.  Have you experimented with timer aggressiveness in Firefox?  I'd 
be very curious to see the results.
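The difference between the two strategies is easy to see in a small 
simulation.  This is a minimal sketch, not real DNS code: the "resolvers" 
below are stand-in functions that just sleep and return a made-up address, 
standing in for UDP queries to actual servers.

```python
import concurrent.futures as cf
import time

POOL = cf.ThreadPoolExecutor(max_workers=8)

def race(resolvers, name):
    """Send the same query to every resolver at once; take the first answer."""
    futures = [POOL.submit(r, name) for r in resolvers]
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    return next(iter(done)).result()

def retry_with_timer(resolvers, name, timeout):
    """Query one resolver; start the next only after `timeout` elapses.

    If `timeout` is at least the typical resolution time, a slow or dropped
    first query always costs that full delay before the retry even starts."""
    pending = []
    for r in resolvers:
        pending.append(POOL.submit(r, name))
        done, _ = cf.wait(pending, timeout=timeout,
                          return_when=cf.FIRST_COMPLETED)
        if done:
            return next(iter(done)).result()
    # Timer expired on every resolver: take whichever finishes first.
    done, _ = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
    return next(iter(done)).result()

def make_resolver(addr, delay):
    """Stand-in for a DNS lookup: sleep `delay` seconds, return `addr`."""
    def resolve(name):
        time.sleep(delay)
        return addr
    return resolve

slow = make_resolver("192.0.2.1", 0.30)  # slow or lossy first server
fast = make_resolver("192.0.2.2", 0.05)

t0 = time.monotonic()
raced_addr = race([slow, fast], "example.com")
raced_time = time.monotonic() - t0        # ~0.05 s: fast server wins outright

t0 = time.monotonic()
timed_addr = retry_with_timer([slow, fast], "example.com", timeout=0.20)
timed_time = time.monotonic() - t0        # ~0.25 s: we waited out the timer first
```

Both strategies end up with the same answer, but the timer approach pays the 
full timeout before the second query is even sent, which is the point above.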

All that said, in Ashish's experiments there's a cliff after 1 second, so 
there are certainly timer effects.  See Figure 15:
http://conferences.sigcomm.org/co-next/2013/program/p283.pdf

>> One of my concerns is that, while I wish it weren't true, there really is 
>> more than 1 DNS root on the Internet and the host resolver doesn't 
necessarily have insight into that - corporate split horizon DNS is a 
>> definite thing. So silently adding more resolvers to that list will result 
>> in inconsistent views. 
> Agreed, that is an issue.  One, more limited, implementation that would still 
> be feasible would be to see if the OS has multiple DNS servers configured, 
> and if yes only replicate queries to those servers.  Of course, this is quite 
> different from the scenario we tested and would require careful evaluation to 
> see if there’s any benefit. 

This would also allow the user to configure more DNS servers if they choose, 
using the same mechanism they already use on their OS.
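As a sketch of that more limited implementation: on Unix-likes the 
OS-configured servers live in /etc/resolv.conf, so the browser could parse 
that and only replicate queries when two or more nameservers are listed.  
This is a hypothetical illustration (Windows would need a different source, 
such as the adapter configuration APIs, not shown here):

```python
def nameservers(resolv_conf_text):
    """Extract nameserver addresses from resolv.conf-style text."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers

# Sample resolv.conf contents (made-up addresses, for illustration).
sample = """\
# Generated by NetworkManager
search example.internal
nameserver 10.0.0.53
nameserver 10.0.0.54
"""

servers = nameservers(sample)
replicate = len(servers) > 1   # only race queries if the OS lists 2+ servers
```

With only one configured server, `replicate` stays false and behavior is 
unchanged, which keeps the split-horizon concern contained to setups the 
user (or their admin) already created.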

Both the issue of which DNS servers are acceptable to use, and the issue of 
measuring performance improvement in more realistic use cases, are important.  
Does Firefox have mechanisms for testing experimental technology like this in 
realistic environments?

Thanks,
~Brighten


_______________________________________________
dev-tech-network mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-network