> I've been wondering about such a period. If we use the record's Time-To-Live
> (TTL) we can specify stateless reporting like so:
Piggybacking on the TTL field seems like a bad idea. A big system might be
loafing along at X reports per second, while the same load could kill a small
system or saturate a smaller link. So you have to distribute something like a
scale factor. I'm assuming that would be done over DNS, and at that point you
might as well distribute the real data.

--------

> On diagnosing a failure, the agent generates a random number R in the
> interval [0, 1] (or sets R=0.5). It then computes a value P using numeric
> values from the retrieved RR and a formula to be specified in
> authfailure-report. If P >= R, then the agent generates and sends the
> report; otherwise it does nothing.
>
> P may be computed so as to be near 1 for newly retrieved records and then
> decrease more or less rapidly, according to the value of ri. The
> following gnuplot snippet displays suitably scaled Gaussian curves (aka bump
> functions) for a few values of ri.

What are the goals of this section? I assume the main idea is to avoid
overloading (DoS-ing) the receiving system. There are two parts to that: how
many reports are coming from each system, and how many systems are
contributing to the overall load.

I like the idea of an exponential backoff. What are the appropriate
parameters? What data would the sending system need in order to do the right
thing? Should this type of reporting be moved to a separate socket or a
separate IP address, so that a TCP-level reject or timeout can be used to
trigger the backoff?

---------

Would it help to batch the data (at the report stage)? If you are the
receiving system, what fraction of your CPU (or whatever resource) is spent
processing the connection versus processing the data for a "report"
transmitted over that connection? If I have 100 reports per hour, would you
rather get them batched in one message than in 100 separate messages?

--
These are my opinions, not necessarily my employer's.  I hate spam.
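For concreteness, the quoted P >= R scheme might look like the sketch below. The decay formula and the meaning of ri are assumptions on my part, since authfailure-report has yet to specify them; a Gaussian bump keyed to the record's age, scaled by ri, stands in for whatever formula the draft ends up defining.

```python
import math
import random

def should_report(record_age_seconds, ri, rng=random.random):
    """Decide whether to send a failure report.

    P is near 1 for a freshly retrieved record and falls off as the
    record ages; ri (assumed here to be a per-record time scale, in
    seconds) controls how quickly. The Gaussian "bump" below is only a
    placeholder for the formula authfailure-report would specify.
    """
    r = rng()  # R in [0, 1); the quoted text also allows a fixed R=0.5
    p = math.exp(-(record_age_seconds / ri) ** 2)  # bump function, P in (0, 1]
    return p >= r
```

A fresh record (age 0) gives P = 1, so a report is always sent; once the age grows well past ri, P collapses toward 0 and reports become rare.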
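On the backoff question: a minimal sketch of the TCP-level trigger I have in mind follows. The base delay, cap, and retry count are made-up parameters (exactly the open question above), and `send` raising ConnectionError stands in for the reject/timeout a dedicated socket or address would make visible.

```python
import time

def send_with_backoff(send, report, base_delay=1.0, cap=3600.0, max_tries=8):
    """Retry sending one report with exponential backoff.

    `send` is assumed to raise ConnectionError on a TCP-level reject or
    timeout -- the signal available if reporting moves to a separate
    socket or IP address. base_delay, cap, and max_tries are
    illustrative values, not anything the draft specifies.
    """
    delay = base_delay
    for _ in range(max_tries):
        try:
            send(report)
            return True
        except ConnectionError:
            time.sleep(delay)
            delay = min(delay * 2, cap)  # double the wait, but never past cap
    return False  # give up and drop the report
```

The data the sender needs is exactly what these parameters encode, which is why distributing them (over DNS or otherwise) comes back into the picture.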
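The batching question amounts to amortizing the per-connection cost over N reports. A hypothetical sketch, with the flush threshold and the newline framing invented purely for illustration:

```python
class ReportBatcher:
    """Accumulate reports and deliver them as one message.

    flush_size and the message framing are invented for illustration;
    the point is only that one connection carrying N reports spreads
    the per-connection overhead over N of them.
    """

    def __init__(self, deliver, flush_size=100):
        self.deliver = deliver        # callable taking one batched message
        self.flush_size = flush_size
        self.pending = []

    def add(self, report):
        self.pending.append(report)
        if len(self.pending) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.deliver("\n".join(self.pending))  # one message, many reports
            self.pending = []
```

With flush_size=100, the 100-reports-per-hour example above turns into one message per hour instead of 100 connections.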
_______________________________________________
marf mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/marf
