> Section 7.5:
>
> One might even do something
> inverse-exponentially, sending reports for each of the first ten
> incidents, then every tenth incident up to 100, then every 100th
> incident up to 1000, etc. until some period of relative quiet after
> which the limitation resets.
I've been wondering about such a period. If we use the record's
Time-To-Live (TTL), we can specify stateless reporting like so:
On diagnosing a failure, the agent generates a random number R in the
interval [0, 1] (or sets R = 0.5). It then computes a value P using
numeric values from the retrieved RR and a formula to be specified in
authfailure-report. If P >= R, the agent generates and sends the
report; otherwise it does nothing.
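A minimal sketch of that decision rule in Python (the function name is illustrative, and how P is obtained is left abstract, since the actual formula is to be specified in authfailure-report):

```python
import random

def should_report(p_value: float) -> bool:
    """Stateless reporting decision: draw R uniformly in [0, 1]
    and send a report iff the computed probability P >= R."""
    r = random.random()  # an agent may instead fix R = 0.5
    return p_value >= r
```

With P = 1 a report is always sent, with P = 0 essentially never, and no per-domain state needs to be kept between failures.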
P may be computed so as to be near 1 for newly retrieved records and
then decrease more or less rapidly, according to the value of ri. The
following gnuplot snippet displays suitably scaled bump functions
(Gaussian-like in shape) for a few values of ri:
-----8<----------8<-------cut-here-------8<----------8<-----
#! /usr/bin/gnuplot -p
reset
set title 'P = p(ri*(1 - TTL/TTLMAX)) for TTLMAX=86400'
set xrange [0:86400]
set xlabel 'TTL (seconds)'
set key left
e=exp(1)
p(x) = abs(x)<1? e*exp(-1/(1-x**2)): 0
plot p(1*(1 - x/86400)) title 'ri=1', \
     p(2*(1 - x/86400)) title 'ri=2', \
     p(3*(1 - x/86400)) title 'ri=3', \
     p(4*(1 - x/86400)) title 'ri=4', \
     p(5*(1 - x/86400)) title 'ri=5'
-----8<----------8<-------cut-here-------8<----------8<-----
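For reference, the same curve family can be evaluated directly rather than plotted; a sketch in Python mirroring the snippet above (report_probability is an illustrative name; TTLMAX=86400 as in the plot):

```python
import math

def p(x: float) -> float:
    """Scaled bump function: p(0) = 1, p(x) = 0 for |x| >= 1."""
    if abs(x) < 1.0:
        return math.e * math.exp(-1.0 / (1.0 - x * x))
    return 0.0

def report_probability(ttl: float, ri: float, ttl_max: float = 86400) -> float:
    """P = p(ri * (1 - TTL/TTLMAX)): near 1 for a freshly retrieved
    record (TTL close to TTLMAX), falling toward 0 as the cached
    TTL counts down."""
    return p(ri * (1.0 - ttl / ttl_max))
```

Note that for ri >= 1 the probability reaches exactly 0 once 1 - TTL/TTLMAX >= 1/ri, i.e. the cutoff arrives sooner for larger ri, which is what the plotted curves show.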
pros:
* supports stateless agents,
* probability decreases exponentially,
* automatic period reset, and
* leverages the DNS cache, thus favoring reports to "new" (not yet
  cached) domains, i.e. from infrequently targeted agents whose A-Rs
  are more likely to be still unknown.
cons:
* issuers might need a different TTL for reasons independent of
  failure reporting,
* agents using a resolver that may deliver second-hand (cached) RRs
  would seldom see a TTL near TTLMAX, and
* TTLMAX (or a similar value) has to be explicitly written in the RR.
Thoughts?
_______________________________________________
marf mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/marf