On 13/12/2010 11:08 AM, W.C.A. Wijngaards wrote:
> Hi Peter,
>
> On 12/13/2010 04:03 PM, Peter Koch wrote:
>> On Tue, Nov 30, 2010 at 04:20:28PM +0100, W.C.A. Wijngaards wrote:
>>> Yes. It caches what the authority server sends. For speed reasons it
>>> does not (try to) remove duplicates. Except in special corner cases
>>> where it does remove duplicates (where it tries to make sense of RRSIGs
>>> that are in the wrong section of the message, and when it thus adjusts
>>> the message it removes duplicates).
>> This is another challenge for the robustness principle, but RFC 2181
>> introduced the "RRSet" and deprecated (even recommended removing)
>> duplicate RRs. This was later confirmed (in a DNSSEC context, though)
>> by section 6.3 of RFC 4034. More importantly, it appears more
>> consumer/application friendly to me to suppress the duplicates. YMMV.
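[Editor's note: the duplicate suppression Peter suggests is typically done by sorting the RRset and dropping adjacent equal RRs, which is the O(n log n) approach Wouter mentions below. A minimal sketch, using a hypothetical simplified `struct rr` that holds only rdata (not Unbound's actual data structures), with a comparator that only approximates the RFC 4034 section 6.3 canonical ordering:]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical minimal RR: owner, type, class, and TTL omitted; rdata only. */
struct rr { size_t len; const unsigned char *data; };

/* Bytewise comparison, shorter rdata first on a common prefix --
   a simplification of the RFC 4034 section 6.3 canonical ordering. */
static int rr_cmp(const void *pa, const void *pb) {
    const struct rr *a = pa, *b = pb;
    size_t min = a->len < b->len ? a->len : b->len;
    int c = memcmp(a->data, b->data, min);
    if (c != 0) return c;
    return (a->len > b->len) - (a->len < b->len);
}

/* Sort the RRset, then keep only the first of each run of equal RRs:
   O(n log n) for the sort plus one linear pass. */
static size_t suppress_duplicates(struct rr *set, size_t n) {
    if (n == 0) return 0;
    qsort(set, n, sizeof *set, rr_cmp);
    size_t out = 1;
    for (size_t i = 1; i < n; i++)
        if (rr_cmp(&set[i], &set[out - 1]) != 0)
            set[out++] = set[i];
    return out;
}
```

The trade-off Wouter raises is that this work lands on the hot parse path for every RRset, duplicated or not.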
> So, Unbound does not introduce duplicates itself. It does transmit the
> upstream duplicates to clients. As a feature it could suppress the
> duplicates; is that really worth it? It makes RR parsing O(n^2) in the
> number of RRs in an RRset, and even the O(n log n) solutions carry
> noticeable overhead, so I think performance would suffer. I figured
> an authority server that sends out duplicates can then have duplicates
> for its domain, and the issues ...
>
> Best regards,
>    Wouter
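[Editor's note: the O(n^2) cost Wouter describes comes from checking each incoming RR against every RR already accepted during message parsing. A minimal sketch of that membership check, again with a hypothetical simplified `struct rr` rather than Unbound's real structures:]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical minimal RR: rdata only. */
struct rr { size_t len; const unsigned char *data; };

/* Duplicate check on the parse path: the candidate RR is compared
   against every RR accepted so far, so building an n-RR set this way
   performs O(n^2) comparisons in total. */
static int already_seen(const struct rr *set, size_t n, const struct rr *cand) {
    for (size_t i = 0; i < n; i++)
        if (set[i].len == cand->len &&
            memcmp(set[i].data, cand->data, cand->len) == 0)
            return 1;
    return 0;
}
```

For typical small RRsets n is tiny and either approach is cheap; the argument is about not paying any per-RR cost on a resolver's hot path for a condition that well-behaved authority servers never produce.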
I think Unbound is doing the right thing. Authoritative servers sending
duplicate records should be exposed to the end systems.
A validator that "fails" to remove duplicates before verifying may or may
not fail validation of the RRset, as it is just as likely that the set was
signed with the duplicates in it.
The principle: no DNS protocol element should change RRsets that
originate at another protocol element.
Olafur
_______________________________________________
Unbound-users mailing list
[email protected]
http://unbound.nlnetlabs.nl/mailman/listinfo/unbound-users