> Even RFC 1034 says "Bound the amount of work", but explicitly
> prescribed numbers in an RFC may end up being arbitrary.
> 
> When there is a difference between something working well and not
> working well, it may either be due to too much work (too much NS
> lookup indirection for example) or implementation inefficiency (use
> of data structures that are inefficient, or resource limits on that
> particular platform such as amount of memory available).

That is a different issue. The problem is that recursive resolvers
implement arbitrary default limits to bound the work they do. Any zone
that exceeds those limits is in trouble.
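
To make that concrete, here is a minimal sketch (not any real
resolver's code; the names Resolver, MAX_INDIRECTIONS and the limit
values are made up for illustration) of how a compiled-in default bound
makes a protocol-valid zone resolve with one build and fail with
another:

    class ResolutionLimitExceeded(Exception):
        """Raised when a query needs more work than the resolver allows."""


    class Resolver:
        # An arbitrary compiled-in default; a later release could lower it.
        MAX_INDIRECTIONS = 16

        def __init__(self, limit=None):
            self.limit = limit if limit is not None else self.MAX_INDIRECTIONS

        def resolve(self, qname, referral_chain):
            """Follow a (toy, pre-computed) chain of NS referrals for qname."""
            steps = 0
            for _ns in referral_chain:
                steps += 1
                if steps > self.limit:
                    # Real resolvers typically answer SERVFAIL at this point,
                    # so the zone "breaks" for clients even though it is
                    # protocol-valid.
                    raise ResolutionLimitExceeded(
                        f"{qname}: needs {len(referral_chain)} referrals, "
                        f"limit is {self.limit}")
            return f"answer for {qname}"


    # A zone needing 20 indirections works with one resolver build but
    # not with another that ships a lower default.
    zone_chain = [f"ns{i}.example" for i in range(20)]
    print(Resolver(limit=25).resolve("www.example", zone_chain))
    try:
        Resolver(limit=16).resolve("www.example", zone_chain)
    except ResolutionLimitExceeded as exc:
        print("SERVFAIL:", exc)

The zone owner did nothing wrong in either case; only the resolver's
default changed.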

At any moment a popular resolver can release a new version of the software
with lower limits, breaking zones that worked before.

That is a very unstable situation, and it can mostly be avoided by only
ever increasing limits.

But when new attacks suggest that limits need to be lowered, this becomes
a real risk.

> So a limit on section counts may be appropriate, but the limit that
> an implementation is able to perform well with is best decided by
> its implementors. There may be implementations that may perform
> well with thousands of RRs on commodity hardware and support very
> large RRsets.

How does that help a zone owner? A zone that only works with some resolvers?
