Mark Seger wrote:
>> Sampling every second does not occasionally give you an invalid
>> value as you suggest - the value it gives is 100% valid, just
>> unexpected! Just like a lot of 'amateur statistics' manage to come
>> to invalid conclusions with valid data.
> I guess I have to differ on your conclusion. When one has a tool
> that is reporting bytes/sec and it occasionally reports an invalid
> number like 200MB/sec on a 1G link, they at least owe an explanation
> to their users why this is the case.
Which tools? It's unclear from your previous postings what tools you
are using to produce the figures. Have you reported the issue to the
package maintainers?

> Since many people do not monitor at that fine grained of a level -
> and believe me, they have no idea how much they're losing by not
> doing so - I suspect very few people even notice. I guess that's
> why I have a problem with any data sampled at 1 or even 5 minute
> intervals - it really doesn't tell me anything about what my system
> is really doing.

Personally I cannot see what is useful about such fine-grained data
(for most people and most systems). Even on what might normally be
considered a 'steady' data flow, actual data rates will fluctuate
wildly at that level of inspection.

Very few network topologies are deterministic - Ethernet certainly is
not. Transit delays through routers are even less deterministic, not
to mention all the other circuits a packet must pass through. Oh yes,
did I omit to mention the task scheduler queue, the disk I/O queue,
the network output queue, ...? All these things conspire to add
randomness to your output, with many variables in play - even an NTP
update will have an effect as the task wakes up, sends a packet,
waits for a response, and updates the status files on disk.

I would reasonably expect the output of almost any real-world system
to appear pseudo-random!
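If you want to see that scheduler jitter for yourself, a trivial
sleep loop is enough. A quick sketch of my own, in Python - nothing
tool-specific assumed:

    #!/usr/bin/env python3
    # Ask for a 1-second sleep and print how long we actually slept.
    # The difference is the scheduler/timer jitter discussed above.

    import time

    prev = time.monotonic()
    for _ in range(10):
        time.sleep(1.0)
        now = time.monotonic()
        print(f"asked for 1.000s, got {now - prev:.4f}s")
        prev = now

On an idle box the overshoot may only be a few milliseconds, but put
some load on the machine and the numbers wander noticeably from one
iteration to the next.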
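And that same jitter is one plausible way a sampler ends up printing
an 'impossible' figure like 200MB/sec on a 1G link: if the tool
divides the counter delta by its nominal interval instead of the time
that actually elapsed between samples, a late wake-up inflates the
apparent rate. This is only my speculation about what such a tool
might do - a hypothetical simulation, not anyone's actual code:

    #!/usr/bin/env python3
    # Simulate sampling an interface byte counter once per second on a
    # fully loaded 1 Gbit/s link. The sampler's wake-up time jitters,
    # and we assume (hypothetically) that the tool divides the counter
    # delta by the nominal 1-second period rather than by the elapsed
    # time - which is where the over-line-rate spikes come from.

    import random

    LINE_RATE = 125_000_000        # 1 Gbit/s expressed in bytes/sec
    NOMINAL_INTERVAL = 1.0         # the tool's configured sample period

    counter = 0                    # interface byte counter
    prev_counter, prev_time = 0, 0.0
    now = 0.0

    for _ in range(10):
        # Scheduler jitter: the sampler rarely wakes up exactly on time.
        elapsed = NOMINAL_INTERVAL + random.uniform(-0.3, 0.3)
        now += elapsed
        counter += int(LINE_RATE * elapsed)   # link runs flat out

        delta = counter - prev_counter
        naive_rate = delta / NOMINAL_INTERVAL   # what a naive tool prints
        true_rate = delta / (now - prev_time)   # divide by actual elapsed
        print(f"naive: {naive_rate / 1e6:7.1f} MB/s   "
              f"true: {true_rate / 1e6:7.1f} MB/s")

        prev_counter, prev_time = counter, now

The 'true' column never exceeds line rate; the naive column does
whenever the sampler wakes up late - a valid counter delta turned
into exactly the sort of unexpected number being discussed.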