On Sun, Jan 3, 2021 at 4:10 PM Drew Stephens <[email protected]> wrote:

> Agreed that visualization is the hard part and that the existing Jenkins
> options aren’t great.
>
> I’ll start by getting the benchmarks project set up to run automatically, a
> system (probably Jenkins) to do that running (probably just nightly on
> master & 2.12…I still haven’t run the whole thing to see how long it takes).
>

Yes, I hope some of the settings in "results-pojo-2.12-home.txt" (and
others) help. Warmup times of ~5 seconds and a total runtime per test of
something like 30-60 seconds (I think most are 5-second runs with 10
repeats, i.e. 50 seconds) seem to produce stable enough results.
I'm sure there is also some trade-off between low variability (longer runs)
and frequent full test-suite runs (shorter ones).
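For reference, settings along those lines map onto JMH run annotations roughly like this (just a sketch, assuming the JMH annotation API; the class and benchmark names here are made up, not the actual benchmark code):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// ~5s of warmup, then 10 measured iterations of 5s each: ~50s per test
@Warmup(iterations = 1, time = 5, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 10, time = 5, timeUnit = TimeUnit.SECONDS)
@Fork(1)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class ExampleReadBenchmark {
    @Benchmark
    public Object readPojo() {
        return null; // placeholder: a real test would deserialize MediaItem here
    }
}
```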

The test suite actually has quite a few tests that are not included in the
set I use. So if the "main test" (the one with "MediaItem") can be automated
easily enough, it'd be possible to consider adding more alternate tests.

General test variations currently included are:

* Read / write (deser/ser)
* Different models (POJO, TreeNode ["Node"], Object ["Untyped"])
* Different formats (for some formats, only POJO)
* Afterburner / regular ("vanilla")

and then some limited variations just for JSON:

* "Wasteful" read/write: discard ObjectMapper after every iteration
* Alternate input sources -- DataInput, String (regular test always runs
from byte[])
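As a rough illustration of what the model variations above exercise (a sketch only, not the actual benchmark code; it assumes only jackson-databind, and uses a stand-in Item type instead of MediaItem):

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ReadModels {
    // Shared mapper: the normal case. The "wasteful" variant would instead
    // create (and discard) a new ObjectMapper on every iteration.
    static final ObjectMapper MAPPER = new ObjectMapper();

    // Stand-in for the benchmark's MediaItem
    public static class Item {
        public String name;
        public int size;
    }

    // POJO model: bind directly to a typed Java object
    static Item readPojo(byte[] json) throws Exception {
        return MAPPER.readValue(json, Item.class);
    }

    // TreeNode ("Node") model: generic JSON tree
    static JsonNode readNode(byte[] json) throws Exception {
        return MAPPER.readTree(json);
    }

    // "Untyped" model: plain Maps, Lists, Strings, Numbers
    static Object readUntyped(byte[] json) throws Exception {
        return MAPPER.readValue(json, Object.class);
    }

    public static void main(String[] args) throws Exception {
        byte[] json = "{\"name\":\"clip\",\"size\":128}"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(readPojo(json).name);                          // clip
        System.out.println(readNode(json).get("size").asInt());          // 128
        System.out.println(((Map<?, ?>) readUntyped(json)).get("name")); // clip
    }
}
```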

Some aspects that would be good to cover but aren't yet:

* Non-Java variants: Kotlin and Scala modules (with / without Afterburner)
    - note: it is possible to run individual tests in a profiler; I do this
quite frequently myself -- it could help find optimization targets
* Blackbird (replacement for Afterburner)
* (for JSON) with/without indentation?
* Tests for various annotations: the basic tests use minimal annotations,
and none (for example) use constructors for deserialization
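For the Afterburner/Blackbird axis, mapper construction would differ roughly as follows (a configuration sketch, assuming the jackson-module-afterburner and jackson-module-blackbird artifacts are on the classpath):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.fasterxml.jackson.module.afterburner.AfterburnerModule;
import com.fasterxml.jackson.module.blackbird.BlackbirdModule;

public class MapperVariants {
    // "Vanilla": no bytecode-generation acceleration
    static ObjectMapper vanilla() {
        return new ObjectMapper();
    }

    // Afterburner: the classic accelerator module
    static ObjectMapper afterburner() {
        return JsonMapper.builder()
                .addModule(new AfterburnerModule())
                .build();
    }

    // Blackbird: the newer replacement for Afterburner
    static ObjectMapper blackbird() {
        return JsonMapper.builder()
                .addModule(new BlackbirdModule())
                .build();
    }
}
```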


> If I get all that sorted, we can have some ongoing results to figure out
> how to make some graphs from.  A gnuplot graph of total runtime over time
> should be easy enough to generate, and we could make drill-downs for each
> test suite or some other simple dimensions that would be useful.
> Thereafter we can figure out how to present the many
> other dimensions, because you’re definitely right that we’ll want those to
> be able to really figure out where things have changed.
>

That makes sense.

Another thing, related to trends: I'm not sure if it is practical, but
since the performance of released versions should not change
much after release (except perhaps across JDK versions), it might make
sense to have separate runs for snapshots/branches and for releases:

1. For snapshots, frequent but shorter runs, to give a general idea, but
also trends over time to help spot performance changes
2. For released versions, longer runs trying to get stable "official"
numbers after release? (In theory, also: multiple full runs, then try to
merge? Or pick the fastest run per test type.)
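The "pick the fastest run per test type" idea could be as simple as a max-merge over per-benchmark scores. A sketch over already-parsed numbers (extracting the scores from JMH's `-rf json` output is left out, and the benchmark names are made up):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MergeRuns {
    // Merge several runs' throughput scores (ops/s), keeping the highest
    // score seen for each benchmark: the assumption is that slower runs
    // reflect external noise rather than the code under test.
    static Map<String, Double> mergeBest(List<Map<String, Double>> runs) {
        Map<String, Double> best = new HashMap<>();
        for (Map<String, Double> run : runs) {
            for (Map.Entry<String, Double> e : run.entrySet()) {
                best.merge(e.getKey(), e.getValue(), Math::max);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> run1 = Map.of("readPojo", 1200.0, "writePojo", 950.0);
        Map<String, Double> run2 = Map.of("readPojo", 1180.0, "writePojo", 990.0);
        // Keeps readPojo from run1 and writePojo from run2
        System.out.println(mergeBest(List.of(run1, run2)));
    }
}
```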

These are just general ideas that may or may not make sense, but they are
ones I've had over time.

Also: please let me know if and how I can help! This is one area that I am
very excited about, and where automation could help a lot.

-+ Tatu +-



>
> -Drew
>
> On January 3, 2021 at 6:44:37 PM, Tatu Saloranta ([email protected])
> wrote:
>
> This is something I have quite often thought about as something that'd be
> really cool, but never figured out exactly how to go about it. Would love
> to see something in this space.
>
> Getting the tests to run is probably not super difficult (any CI system
> could trigger it), and it could also be limited to specific
> branches/versions for practical purposes.
> There would no doubt be some challenges in this part too; the possible
> number of tests is actually huge (even for a single version), across
> formats, possible test cases, read/write, Afterburner/none, String/byte
> source/target.
> And having dedicated CPU resources would be a must for stable results.
>
> To me, the big challenges seemed to be result processing and
> visualization: how to group test runs and so on.
> Jenkins plug-ins tend to be pretty bad (just IMO) at displaying meaningful
> breakdowns and trends; it is easy to create something to impress a project
> manager, but less so to produce something that shows the important actual
> trends.
> But even without trends, it'd be essential to be able to compare more than
> one result set to see diffs between specific versions.
>
> And of course, it would also be great not to require local resources but
> to use cloud platforms, if they could provide fully static CPU resources
> (the tests fortunately do not use much I/O, network, or even memory).
>
> -+ Tatu +-
>
>
>
>
> On Sun, Jan 3, 2021 at 12:30 PM [email protected] <
> [email protected]> wrote:
>
>> On Sunday, January 3, 2021 at 1:10:55 PM UTC-5 marshall wrote:
>>
>>> SGTM. Arm64 will produce _different_ results than x64, but the point for
>>> performance regressions is simply to know if things change relative to
>>> yesterday’s test, so I think a Pi 4 is reasonable as long as it’s in a case
>>> with a hefty heat sink so it doesn’t downclock when it gets hot.
>>
>>
>> Indeed, RPi4s really need cooling to maintain their highest clock speed.
>> It would probably be good to check whether any throttling occurred during
>> the test run.
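For that throttling check, the Pi firmware reports cumulative flags via `vcgencmd get_throttled`; a sketch of decoding its output (the bit positions follow the commonly documented Raspberry Pi firmware layout and are worth double-checking against the firmware version in use):

```java
public class ThrottleCheck {
    // Bits of the vcgencmd "throttled" value (per Raspberry Pi firmware docs):
    // bit 2  = currently throttled
    // bit 18 = throttling has occurred at some point since boot
    static final long CURRENTLY_THROTTLED = 1L << 2;
    static final long THROTTLED_SINCE_BOOT = 1L << 18;

    // Parse output like "throttled=0x50000" into the raw flag value
    static long parseFlags(String vcgencmdOutput) {
        String hex = vcgencmdOutput.trim().substring("throttled=0x".length());
        return Long.parseLong(hex, 16);
    }

    // True if any throttling happened since boot, i.e. the benchmark
    // run may have been affected even if the Pi is cool again now
    static boolean throttledSinceBoot(String vcgencmdOutput) {
        return (parseFlags(vcgencmdOutput) & THROTTLED_SINCE_BOOT) != 0;
    }

    public static void main(String[] args) {
        System.out.println(throttledSinceBoot("throttled=0x0"));     // false
        System.out.println(throttledSinceBoot("throttled=0x50000")); // true
    }
}
```

A nightly job could run `vcgencmd get_throttled` after the benchmark finishes and discard (or flag) results when throttling is reported.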
>>
>> -Drew
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "jackson-dev" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/jackson-dev/d0285396-5e9f-4788-8c43-e06095d4bbfbn%40googlegroups.com
>> <https://groups.google.com/d/msgid/jackson-dev/d0285396-5e9f-4788-8c43-e06095d4bbfbn%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
