Thanks a lot for your pertinent comments. :-)

Ted Lemon <mel...@fugue.com> wrote on Thursday, August 17, 2017 at 9:56 PM:

> On 17 Aug 2017, at 0:09, Lanlan Pan <abby...@gmail.com> wrote:
>
> We can use SWILD to optimize it: no detection is needed, just remove the
> items that SWILD marks, to save cost.
>
>
> So, can you talk about how your proposal saves cost over using a heuristic?
>
It can be used together with a cache aging heuristic.
The heuristic reads in aaa/bbb/ccc.foo.com, expires them and moves them out;
then reads in xxx/yyy/zzz.foo.com, expires them and moves them out; and so on
in a loop...
=> If aaa/bbb/ccc/xxx/yyy/zzz.foo.com are mapped to *.foo.com when the
heuristic reads them in, the load of moving entries in and out is reduced.
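To make that concrete, here is a toy sketch (not from the draft; SWILD is
only a proposed record type, and names like SWILD_SUFFIXES and cache_key are
invented for illustration) of how a resolver cache could collapse sibling
subdomains into one wildcard entry:

# Toy model of a resolver cache that maps sibling subdomains of a
# SWILD-marked domain onto a single wildcard entry.  Hypothetical names,
# not an implementation of any real resolver.

SWILD_SUFFIXES = {"foo.com."}     # domains marked as subdomain wildcards
cache = {}                        # qname -> rrset (toy cache)

def cache_key(qname: str) -> str:
    """Map xxx.foo.com. to *.foo.com. when a SWILD hint exists."""
    parts = qname.split(".", 1)
    if len(parts) == 2 and parts[1] in SWILD_SUFFIXES:
        return "*." + parts[1]
    return qname

def cache_store(qname: str, rrset) -> None:
    # aaa/bbb/ccc.foo.com. all share one slot, so the aging heuristic
    # moves one entry in and out instead of thousands.
    cache[cache_key(qname)] = rrset

def cache_lookup(qname: str):
    return cache.get(cache_key(qname))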

>
> 2) cache miss
> All temporary subdomain wildcards will encounter cache misses.
> Query xxx.foo.com, then query yyy.foo.com, zzz.foo.com, ...
> We can use SWILD to optimize this: only query xxx.foo.com the first time
> and get the SWILD record, avoiding sending yyy/zzz.foo.com queries to the
> authoritative server.
>
>
> Can you characterize why sending these queries to the authoritative server
> is a problem?
>

OK, this is similar to RFC 8198 section 6
<https://datatracker.ietf.org/doc/html/rfc8198#section-6>.
It is a benefit rather than a problem: answers are returned directly from the
cache, avoiding sending queries to the authoritative server and waiting for
the responses, which reduces latency.
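Continuing the same toy sketch from above, only the first query under a
SWILD-marked domain would go upstream; later sibling names would be answered
from the shared wildcard entry, in the spirit of RFC 8198 aggressive use.
query_authoritative() is only a stand-in for the real upstream lookup:

def query_authoritative(qname: str):
    """Placeholder for the real query to the authoritative server."""
    return "192.0.2.1"            # hypothetical wildcard A record

def resolve(qname: str):
    rrset = cache_lookup(qname)
    if rrset is not None:
        return rrset              # yyy/zzz.foo.com: answered from cache
    rrset = query_authoritative(qname)   # xxx.foo.com: the one upstream query
    cache_store(qname, rrset)
    return rrset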

> 3) DDoS risk
> The botnet DDoS risk and defense are like those of NSEC aggressive
> wildcard use, or unsigned NSEC.
> For example, [0-9]+.qzone.qq.com is a popular SNS website in China, like
> Facebook. If botnets send "popular website wildcard" queries to the
> recursive, the cache size of the recursive will rise, and the recursive
> cannot simply remove them as it could in some other random-label attack.
> We would prefer the recursive to directly return the IP of the subdomain
> wildcard, without growing the recursive cache or sending repeated queries
> to the authoritative.
>
>
> Why do you prefer this?   Just saying "we prefer ..." is not a reason for
> the IETF to standardize something.
>

Sorry, I expressed that poorly.

More details:
1) All of the attacking botnet clients were customers of the ISP and sent
queries to the ISP recursive at a low rate, so all of the client IP
addresses were "legitimate" and could not simply be filtered with an ACL.
2) Normal users also visit [0-9]+.qzone.qq.com, so all of the random query
names also seemed "legitimate".
=> The client IP addresses and the random subdomains were all effectively in
the whitelist, not in the blacklist.
3) The ISP did not have any DNS firewall equipment (a very sad situation,
but true) to take over the responses for "*.qzone.qq.com".

In this weaker scenario, it would be better to give the recursive more
information so that it can answer these queries directly from cache, and does
not have to send and cache many subdomain queries/responses.
Of course, we can defend against the attack with professional operations and
solve the problem well. But there are also many weaker recursives that only
run BIND, without any protection...
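With the same toy cache, a random-label flood against a SWILD-marked domain
keeps landing on the single wildcard entry, so the cache does not grow and
nothing more is forwarded upstream after the first query (purely
illustrative, not measured data):

import random

SWILD_SUFFIXES.add("qzone.qq.com.")
cache_store("12345.qzone.qq.com.", "192.0.2.2")   # learned from one real query

for _ in range(100_000):                          # simulated random-label flood
    qname = f"{random.randint(0, 10**9)}.qzone.qq.com."
    assert cache_lookup(qname) is not None        # served from the wildcard entry

print([k for k in cache if k.endswith("qzone.qq.com.")])   # ['*.qzone.qq.com.']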


> There are a bunch of problems with your proposal, as I'm sure others have
> remarked before.   It breaks DNSSEC validation for stub resolvers that
> aren't aware of SWILD.  In the absence of DNSSEC validation, it creates a
> new and very effective spoofing attack (poison the cache with bogus SWILD
> records).   Etc.
>


> So you need to clearly explain why it is that you prefer this approach,
> and not just say that it's something you like.   Are you using it in
> production?   Do you have data on what it does?   Do you have data on the
> behavior of real-world caches that you can cite that shows that SWILD would
> produce more of an improvement than just using a better cache aging
> heuristic?
>

I will reconsider these problems with the proposal and do the improvement
analysis on real-world caches before the next step.
-- 
致礼  Best Regards

潘蓝兰  Pan Lanlan
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
