Clarifying on (4), the units problem - that refers to the software industry as a whole, not just AMQ.
Art

On Wed, Dec 22, 2021 at 12:24 PM Arthur Naseef <[email protected]> wrote:

> Thoughts that come to mind (not advocating anything, just putting down
> thoughts):
>
>    1. Semver + breaking-changes = new major number
>    2. Exposing a new metric while leaving the original unchanged can
>    eliminate the backward-compatibility problem at the expense of
>    additional complexity/tech-debt going forward
>    3. A configuration setting to switch the metric between original and
>    new units could do the job, but - again - at the cost of additional
>    complexity/tech-debt
>    4. The units problem hasn't really been solved well; the best I've
>    seen is to standardize on one unit (such as milliseconds)
>    5. Is there a need here being addressed, or is the intent for cleanup?
>    6. At the very least, a breaking change here warrants very clear
>    communication (the changes, once-upon-a-time, to the MBean naming
>    unexpectedly hit me for a few hours of work - and that was just one
>    person who actually wasn't that heavily invested in it)
>
> Art
>
>
> On Wed, Dec 22, 2021 at 12:06 PM Matt Pavlovich <[email protected]> wrote:
>
>> Hi Art-
>>
>> The metric is arguably broken right now, so my thought is that “fixing”
>> it in 5.17.0 should be the default moving forward.
>>
>> What else would you suggest?
>>
>> -Matt Pavlovich
>>
>> > On Dec 22, 2021, at 11:25 AM, Arthur Naseef <[email protected]> wrote:
>> >
>> > Hmm, what about the impact to all the consumers of that metric today?
>> >
>> > That's potentially a huge amount of change.
>> >
>> > Any thoughts on mitigating the problems for users?
>> >
>> > Art
>> >
>> >
>> > On Wed, Dec 22, 2021 at 7:32 AM Matt Pavlovich <[email protected]> wrote:
>> >
>> >> Using nanos would eliminate the math division. Might be worth it to
>> >> cut out a math operation on longs.
>> >>
>> >> Checking for overflow risk: Java Long.MAX_VALUE in nanoseconds is 292
>> >> years.
>> >>
>> >> We should be good with nanos as default vs microseconds.
>> >>
>> >> -Matt
>> >>
>> >>> On Dec 22, 2021, at 6:52 AM, Christopher Shannon
>> >>> <[email protected]> wrote:
>> >>>
>> >>> +1, I'm not sure if it makes sense to keep the default as millis or
>> >>> to make the new default nanoseconds.
>> >>>
>> >>> On Wed, Dec 22, 2021 at 2:09 AM Jean-Baptiste Onofre <[email protected]>
>> >>> wrote:
>> >>>
>> >>>> +1
>> >>>>
>> >>>> It makes sense.
>> >>>>
>> >>>> Regards
>> >>>> JB
>> >>>>
>> >>>>> On Dec 20, 2021, at 4:44 PM, Matt Pavlovich <[email protected]> wrote:
>> >>>>>
>> >>>>> Currently, KahaDB stats are in ms and we get invalid rollup values
>> >>>>> for minTime, averageTime, and totalTime, since a large number of
>> >>>>> operations take < 1ms on modern hardware. I propose we convert the
>> >>>>> units to microseconds to provide better granularity and
>> >>>>> correctness. I have created a JIRA to track this change:
>> >>>>> https://issues.apache.org/jira/browse/AMQ-8414
>> >>>>>
>> >>>>> For comparison, Apache CXF also uses microseconds for its service
>> >>>>> operation metrics.
>> >>>>>
>> >>>>> Sample:
>> >>>>> Broker uptimeMillis: 835951271     <-- 9 days
>> >>>>> KahaDB "totalTime":  62568920797   <-- 724.177324039352 days
>> >>>>>
>> >>>>> {
>> >>>>>   "writeTime": {
>> >>>>>     "maxTime": 5812,
>> >>>>>     "averageTime": 16.418624299081607,
>> >>>>>     "minTime": 0,
>> >>>>>     "totalTime": 62568920797,
>> >>>>>     "count": 3810850389,
>> >>>>>     "averagePerSecond": 60.906442694832606,
>> >>>>>     "averagePerSecondExMinMax": 60.9064483204415,
>> >>>>>     "averageTimeExMinMax": 16.418622782579472
>> >>>>>   },
>> >>>>>   "readTime": {
>> >>>>>     "maxTime": 517,
>> >>>>>     "averageTime": 0.27722760803497465,
>> >>>>>     "minTime": 0,
>> >>>>>     "totalTime": 264011084,
>> >>>>>     "count": 952326090,
>> >>>>>     "averagePerSecond": 3607.144350045546,
>> >>>>>     "averagePerSecondExMinMax": 3607.1514061783746,
>> >>>>>     "averageTimeExMinMax": 0.2772270657359121
>> >>>>>   }
>> >>>>> }
>> >>>>>
>> >>>>> Thanks,
>> >>>>> Matt Pavlovich
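
To make the granularity and overflow points above concrete, here is a
minimal, self-contained Java sketch - not ActiveMQ code, and the class and
helper names (TimingGranularityDemo, busyWaitMicros) are made up for
illustration. It shows that millisecond timing reports 0 for a
sub-millisecond operation, which is what skews the minTime/averageTime
rollups in the sample above, and that Long.MAX_VALUE nanoseconds works out
to roughly 292 years of headroom.

import java.util.concurrent.TimeUnit;

public class TimingGranularityDemo {

    public static void main(String[] args) {
        // Overflow headroom: Long.MAX_VALUE nanoseconds expressed in years.
        long maxDays = TimeUnit.NANOSECONDS.toDays(Long.MAX_VALUE);
        System.out.println("Long.MAX_VALUE nanos ~= " + (maxDays / 365) + " years"); // ~292

        // Granularity: time a ~50 microsecond "operation" both ways.
        long startNanos = System.nanoTime();
        long startMillis = System.currentTimeMillis();
        busyWaitMicros(50);
        long elapsedNanos = System.nanoTime() - startNanos;
        long elapsedMillis = System.currentTimeMillis() - startMillis;

        // Millisecond timing usually prints 0 here, so rollups built from
        // such samples (minTime, averageTime, totalTime) understate reality.
        System.out.println("elapsed millis: " + elapsedMillis);
        System.out.println("elapsed micros: " + elapsedNanos / 1_000);
        System.out.println("elapsed nanos:  " + elapsedNanos);
    }

    // Hypothetical helper: spin until roughly the requested microseconds elapse.
    private static void busyWaitMicros(long micros) {
        long deadline = System.nanoTime() + micros * 1_000;
        while (System.nanoTime() < deadline) {
            // busy wait
        }
    }
}

With nanoseconds (or microseconds) as the stored unit, rollups like the
writeTime/readTime totals above would carry real sub-millisecond detail
instead of collapsing many operations to 0.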
