On Tue, Mar 20, 2018 at 08:59:21AM +0100, Dominique Martinet wrote:
> Hi,
> 
> Girjesh Rajoria wrote on Mon, Mar 19, 2018 at 10:47:09PM +0530:
> >> + tail -21 ../dbenchTestLog.txt
> >>
> >>  Operation                Count    AvgLat    MaxLat
> >>  --------------------------------------------------
> >>  Deltree                    102     9.799    27.590
> >>  Flush                   284316     1.637   203.259
> >>  Close                  2979801     0.007     0.330
> >>  LockX                    13208     0.007     0.079
> >>  Mkdir                       51     0.011     0.059
> >>  Rename                  171774     0.073     0.463
> >>  ReadX                  6358865     0.010    38.319
> >>  WriteX                 2022375     0.048    40.888
> >>  Unlink                  819204     0.090    38.363
> >>  UnlockX                  13208     0.006     0.063
> >>  FIND_FIRST             1421549     0.044    38.320
> >>  SET_FILE_INFORMATION    330438     0.024     0.310
> >>  QUERY_FILE_INFORMATION  644319     0.004     0.242
> >>  QUERY_PATH_INFORMATION 3676827     0.015    40.851
> >>  QUERY_FS_INFORMATION    674193     0.010    37.783
> >>  NTCreateX              4056560     0.049   122.097
> >>
> >>
> >> Where are the iozone results from ../ioZoneLog.txt?
> >
> > The iozone suite doesn't produce result output the way dbench does. The
> > iozone test only checks for successful completion and prints a success
> > message in the log. If the test fails, the error that caused the failure
> > is printed from ../ioZoneLog.txt.
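
(As a side note, a minimal sketch of such a pass/fail check, assuming
iozone's usual "iozone test complete." marker on success; the path and
marker string here are only illustrative:

    if grep -q "iozone test complete" ../ioZoneLog.txt; then
        echo "iozone: PASS"
    else
        echo "iozone: FAIL, last lines of the log:"
        tail -20 ../ioZoneLog.txt
    fi
)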
> 
> I think it's great to have these kinds of dbench stats, and it would be
> awesome if we could have some raw figures from iozone as well (I think
> it can output the results in CSV format at least?)
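
For what it's worth, iozone's -R option prints Excel-style report tables
(plain text) to stdout, and -b additionally writes a spreadsheet file, so
converting the results to CSV should be straightforward. A possible
invocation, with the file names only as examples:

    iozone -a -R -b ../ioZoneResults.xls | tee ../ioZoneLog.txt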
> 
> 
> Jenkins can also collect performance metrics from jobs, and we could have
> graphs of the performance over time if it keeps these metrics a bit
> longer than the actual jobs (for example with the performance plugin[1],
> but there might be other ways).
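
As a rough sketch, the dbench summary table above could be turned into a
CSV that such a plugin (or any plotting tool) can consume; the output
file name is just an example:

    tail -21 ../dbenchTestLog.txt | \
        awk 'NF == 4 { print $1 "," $2 "," $3 "," $4 }' > dbenchMetrics.csv

The NF == 4 filter keeps the four-column header and data rows while
dropping the separator line and any blank lines.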
> 
> On an individual basis, since the tests run on VMs with varying loads,
> the results will probably flicker a bit, but on the whole we should be
> able to identify after the fact which week(s) introduced slowdowns or
> speedups quite nicely if we can achieve that! :)

Note that the tests in the CentOS CI run on different physical hosts.
When a test is started, one or more machines are requested, and the
scheduler (called Duffy) just returns a random system. This means that
the performance results might differ quite a bit between runs, even for
the same change-set.

See https://wiki.centos.org/QaWiki/PubHardware for details about the
hardware.

So in addition to the performance results, it may be useful to gather
some details about the hardware that was used.
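
A minimal sketch of what gathering that could look like in the test job,
with the output file name only as an example:

    { hostname; uname -r; lscpu; free -h; lsblk; } > ../hardwareInfo.txt 2>&1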

Niels
