Benjamin-Philip commented on issue #5801:
URL: https://github.com/apache/couchdb/issues/5801#issuecomment-3605472706

   Looking through the built-in benchmark, I noticed a few flaws:
   
   First, the overhead of generating the random bytes, and some of the 
benchmarking logic itself, is included in the measured time. The Readme 
does mention this.
   
   Now, the overhead itself is fine if it is constant - the difference in bytes 
per second then still reflects the real speed difference. However, `:crypto:rand` 
takes longer as `N` increases. And even if `N` were constant, we don't know 
whether the *deviation* in the time it takes is significant. This is the second 
flaw.
   
   Finally, we're benchmarking the two implementations with two different 
random inputs. Is that still an apples-to-apples comparison? Maybe a better 
approach would be to benchmark both with a common random input?
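   
   For illustration, here's a minimal sketch of that common-input approach 
(in Python, since the point is language-agnostic). `impl_a` and `impl_b` are 
hypothetical stand-ins for the two implementations being compared, not the 
actual CouchDB code:
   
   ```python
   import os
   import statistics
   import time

   def bench(impl, data, runs=10):
       """Time `impl` on a pre-generated input, excluding input generation."""
       timings = []
       for _ in range(runs):
           start = time.perf_counter()
           impl(data)
           timings.append(time.perf_counter() - start)
       # Report the deviation alongside the mean so we can judge whether
       # the spread between runs is significant.
       return statistics.mean(timings), statistics.stdev(timings)

   # Generate the random input ONCE, outside the timed region, and feed
   # the *same* bytes to both implementations.
   payload = os.urandom(1 << 20)

   # Hypothetical stand-ins for the two implementations under test.
   impl_a = bytes.upper
   impl_b = bytes.lower

   mean_a, dev_a = bench(impl_a, payload)
   mean_b, dev_b = bench(impl_b, payload)
   ```
   
   This addresses all three points at once: input generation sits outside the 
timed region, both implementations see identical bytes, and the reported 
deviation tells us whether the difference in means is meaningful.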
   
   I'm not sure whether my own benchmark suffers from some of the same flaws, 
but maybe moving from handwritten benchmarks to a benchmarking framework would 
be better?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.