> * I modified the benchmark program not to report 'time per op' but rather 'cumulative time per N iterations'
> * changed the table design
> * the sentence 'smaller values are better' is present
> * embedded a small CSS fragment at the top of the page
> * linked to the original baseline and benchmark `.txt` files
> * everything is created in the build directory
Nice, thanks! Now the next problem: for the same commit IDs, I see percentage differences of up to 47% in your HTML file! This essentially means that the delivered numbers are still completely meaningless – the differences must be at most a few percent or even smaller, given that the tests are run on exactly the same machine. Please investigate how to improve that, probably by modifying the benchmark test options, or even by implementing per-test options so that the individual tests can be fine-tuned. Perhaps you should do some internet research to find out how other, similar benchmark suites are constructed to get meaningful numbers.

Werner
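[Editor's note: to make the request concrete, a common way similar benchmark suites reduce run-to-run variance is to add warm-up passes and then report the median of several timed repetitions instead of a single measurement. The following is only a minimal sketch of that idea, assuming a C benchmark; `run_test`, `WARMUP_RUNS` and `NUM_RUNS` are hypothetical names and not part of the actual benchmark program.]

```c
/* Hypothetical sketch: warm up, time several repetitions, report the
 * median.  The median is much less sensitive to scheduler noise and
 * one-off outliers than a single run or the mean. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WARMUP_RUNS 2
#define NUM_RUNS    9   /* odd count so the median is a single sample */

/* Placeholder for the real per-test body; the iteration count would
 * come from a per-test option so individual tests can be tuned.  */
static void
run_test (unsigned long iterations)
{
  volatile unsigned long sink = 0;
  for (unsigned long i = 0; i < iterations; i++)
    sink += i;
}

static int
cmp_double (const void *a, const void *b)
{
  double x = *(const double *)a, y = *(const double *)b;
  return (x > y) - (x < y);
}

int
main (void)
{
  unsigned long iterations = 1000000;
  double samples[NUM_RUNS];
  struct timespec t0, t1;
  int i;

  /* Warm-up passes: let caches, branch prediction and CPU frequency
   * scaling settle before any timing is recorded.  */
  for (i = 0; i < WARMUP_RUNS; i++)
    run_test (iterations);

  for (i = 0; i < NUM_RUNS; i++)
    {
      clock_gettime (CLOCK_MONOTONIC, &t0);
      run_test (iterations);
      clock_gettime (CLOCK_MONOTONIC, &t1);
      samples[i] = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

  qsort (samples, NUM_RUNS, sizeof samples[0], cmp_double);
  printf ("cumulative time for %lu iterations: %.6f s (median of %d runs)\n",
          iterations, samples[NUM_RUNS / 2], NUM_RUNS);
  return 0;
}
```

[The same idea can usually be expressed through benchmark options alone, e.g. more repetitions per test and discarding outliers, without changing the timing code itself.]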