Another benchmarking rule: the operating system may be dynamically 
adjusting performance for power saving or other reasons (CPU frequency 
scaling, for example). Be sure to disable any such features before running 
the benchmark.
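
A quick way to sanity-check this on Linux is to read the CPU frequency 
governor before benchmarking. This is a minimal sketch; the sysfs path and 
the "performance" value are Linux cpufreq specifics, and checking cpu0 
alone is a simplification:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Warn if cpu0 is not pinned to the "performance" governor.
        const path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("could not read governor:", err)
            return
        }
        if g := strings.TrimSpace(string(data)); g != "performance" {
            fmt.Printf("warning: governor is %q; frequency scaling may skew results\n", g)
        }
    }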

Matt

On Monday, January 15, 2018 at 9:04:11 AM UTC-6, Jesper Louis Andersen 
wrote:
>
> General rules:
>
> * If possible, use the benchmark system inside the "testing" package. It 
> will eliminate some common mistakes in benchmarking, but keep your head 
> straight while doing it, so you don't accidentally measure setup time or 
> something else outside the thing you are trying to test (see the 
> benchmark sketch after this list).
>
> * A fast benchmark is sensitive to variance on the machine. The "testing" 
> package smooths it out by running the test more than once, but another 
> trick is to compute outlier variance through bootstrapping methods and 
> figure out if the result is likely to be skewed by outliers (a bootstrap 
> sketch follows the list).
>
> * Variance can occur from multiple sources:
>   - Virtualized servers compete for resources with neighboring systems.
>   - On desktop computers, your benchmark may compete with e.g. your music 
> player or browser while the tests are running.
>
> * When you report numbers, beware of the average. At least report the 
> median as well, and if it differs from the average, you should be worried: 
> the data set is unlikely to be normally distributed.
>
> * Prefer visual methods: compute a kernel density plot of the data to see 
> how it is distributed, and look out for odd shapes (a small KDE sketch 
> also follows the list).
>
> * If your data sets have the same variance, follow a normal distribution, 
> and are independent, you can compare them with e.g. Student's t-test. 
> FreeBSD has the ministat(1) tool by Poul-Henning Kamp for this, and I know 
> it is easily ported. But a bit of R or Python also does the trick (a 
> t-statistic sketch closes out the list below).
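>
> To make the first point concrete, here is a minimal benchmark sketch 
> (buildInput and Process are hypothetical stand-ins for the code under 
> test; put this in a _test.go file):
>
>     package mypkg
>
>     import "testing"
>
>     // buildInput and Process stand in for the real code under test.
>     func buildInput() []byte { return make([]byte, 1<<20) }
>
>     func Process(in []byte) {
>         for i := range in {
>             in[i]++
>         }
>     }
>
>     func BenchmarkProcess(b *testing.B) {
>         input := buildInput() // expensive setup we do NOT want to time
>         b.ResetTimer()        // excludes the setup above from the result
>         for i := 0; i < b.N; i++ {
>             Process(input)
>         }
>     }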
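>
> For the outlier point, a minimal bootstrap sketch; the sample values are 
> made up for illustration:
>
>     package main
>
>     import (
>         "fmt"
>         "math/rand"
>         "sort"
>     )
>
>     // median returns the median of xs without modifying it.
>     func median(xs []float64) float64 {
>         s := append([]float64(nil), xs...)
>         sort.Float64s(s)
>         n := len(s)
>         if n%2 == 1 {
>             return s[n/2]
>         }
>         return (s[n/2-1] + s[n/2]) / 2
>     }
>
>     func main() {
>         samples := []float64{20, 22, 21, 25, 19, 24, 936, 23}
>         const B = 1000 // number of bootstrap resamples
>         medians := make([]float64, B)
>         for i := range medians {
>             r := make([]float64, len(samples))
>             for j := range r {
>                 r[j] = samples[rand.Intn(len(samples))] // with replacement
>             }
>             medians[i] = median(r)
>         }
>         sort.Float64s(medians)
>         // Indices 25 and 974 are the 2.5th and 97.5th percentiles; a wide
>         // interval suggests the estimate is dominated by outliers.
>         fmt.Printf("median %.1f, 95%% interval [%.1f, %.1f]\n",
>             median(samples), medians[25], medians[974])
>     }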
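>
> For the visual point, a Gaussian kernel density estimate is only a few 
> lines if no plotting tool is at hand; print the values and eyeball the 
> shape (the data and bandwidth below are made up):
>
>     package main
>
>     import (
>         "fmt"
>         "math"
>     )
>
>     // kde evaluates a Gaussian KDE of samples at x with bandwidth h.
>     func kde(samples []float64, x, h float64) float64 {
>         sum := 0.0
>         for _, s := range samples {
>             u := (x - s) / h
>             sum += math.Exp(-0.5*u*u) / math.Sqrt(2*math.Pi)
>         }
>         return sum / (float64(len(samples)) * h)
>     }
>
>     func main() {
>         latencies := []float64{20, 22, 21, 25, 19, 24, 936, 23}
>         for x := 10.0; x <= 60; x += 5 {
>             fmt.Printf("%4.0f us: %.4f\n", x, kde(latencies, x, 3))
>         }
>     }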
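>
> Finally, the t statistic itself is easy to compute by hand. A sketch under 
> the equal-variance assumption above (the two data sets are made up):
>
>     package main
>
>     import (
>         "fmt"
>         "math"
>     )
>
>     // meanVar returns the sample mean and unbiased sample variance.
>     func meanVar(xs []float64) (m, v float64) {
>         for _, x := range xs {
>             m += x
>         }
>         m /= float64(len(xs))
>         for _, x := range xs {
>             v += (x - m) * (x - m)
>         }
>         v /= float64(len(xs) - 1)
>         return
>     }
>
>     func main() {
>         before := []float64{20, 22, 21, 25, 23}
>         after := []float64{18, 19, 21, 20, 22}
>         m1, v1 := meanVar(before)
>         m2, v2 := meanVar(after)
>         n1, n2 := float64(len(before)), float64(len(after))
>         sp := ((n1-1)*v1 + (n2-1)*v2) / (n1 + n2 - 2) // pooled variance
>         t := (m1 - m2) / math.Sqrt(sp*(1/n1+1/n2))
>         // Compare |t| against the Student's t critical value for
>         // n1+n2-2 degrees of freedom.
>         fmt.Printf("t = %.3f, df = %.0f\n", t, n1+n2-2)
>     }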
>
>
>
> On Mon, Jan 15, 2018 at 3:07 AM <asaa...@gmail.com> wrote:
>
>> All:
>>
>> I am testing an application that has a latency on the order of a few to 
>> several microseconds. I wrote a benchmark and measured the performance. I 
>> also computed the elapsed time using time.Now() and time.Since(). (I will 
>> change this to time.Nanoseconds and try shortly.)
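>>
>> Roughly, the measurement loop looks like this (doWork is a stand-in for 
>> the actual call being measured):
>>
>>     package main
>>
>>     import (
>>         "fmt"
>>         "time"
>>     )
>>
>>     func doWork() { time.Sleep(25 * time.Microsecond) } // stand-in
>>
>>     func main() {
>>         var samples []int64
>>         for i := 0; i < 1000; i++ {
>>             start := time.Now()
>>             doWork()
>>             // One latency sample, converted to microseconds.
>>             samples = append(samples, time.Since(start).Nanoseconds()/1000)
>>         }
>>         fmt.Println("collected", len(samples), "samples")
>>     }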
>>
>> I found the computed latencies had a large variation.
>>
>> Set size: 22024
>>
>> Min: 20, Max: 936, Average: 32, SD: 43 (all in microseconds)
>>
>> It's skewed towards the lower end of the spectrum. go test itself 
>> reported 56 microseconds as the response time.
>>
>>    1. Is this variation due to garbage collection?
>>    2. If so, can the variation be somehow reduced by tuning the garbage 
>>       collector?
>>    
>>
>> I would appreciate any references or thoughts.
>>
>> With regards,
>>
>
