Hi,

while doing some tests to compare performance, I've found some weird results. I've
seen this in different tests, but probably the clearest and easiest one to
reproduce uses the smallfile tool to create files.

The test command is:

# python smallfile_cli.py --operation create --files-per-dir 100 \
    --file-size 32768 --threads 16 --files 256 --top <mountpoint> --stonewall no


I've run this test 5 times sequentially, starting from the same initial
conditions each time (at least as far as I can tell): bricks cleared, all
gluster processes stopped, volume destroyed and recreated, caches emptied.
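
To be explicit, the reset between runs is roughly this (volume name and brick
path are just examples; the brick wipe, cache drop and daemon restart happen on
every node, and the volume recreation itself is shown further below):

# gluster --mode=script volume stop testvol
# gluster --mode=script volume delete testvol
# systemctl stop glusterd
# rm -rf /bricks/brick1 && mkdir -p /bricks/brick1
# sync && echo 3 > /proc/sys/vm/drop_caches
# systemctl start glusterd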

This is the data I've obtained for each execution:

Time    us     sy     ni     id      wa      hi     si     st     read     write        use
 435    1.80   3.70   0.00   81.62   11.06   0.00   0.00   0.00   32.931   608715.575   97.632
 450    1.67   3.62   0.00   80.67   12.19   0.00   0.00   0.00   30.989   589078.308   97.714
 425    1.74   3.75   0.00   81.85   10.76   0.00   0.00   0.00   37.588   622034.812   97.706
 320    2.47   5.06   0.00   82.84    7.75   0.00   0.00   0.00   46.406   828637.359   96.891
 365    2.19   4.44   0.00   84.45    7.12   0.00   0.00   0.00   45.822   734566.685   97.466


Time is in seconds. us, sy, ni, id, wa, hi, si and st are the CPU time
percentages, as reported by top. read and write are the disk throughput in
KiB/s, and use is the disk utilization percentage.
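
In case it helps to reproduce, numbers equivalent to these can be captured with
something like this while the test runs (device name and sample count are
placeholders for my real ones):

# top -b -d 1 -n 450 | grep '%Cpu'
# iostat -x -k nvme0n1 1 450

The CPU columns come from the %Cpu line, and the disk columns from rkB/s,
wkB/s and %util.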

Based on this we can see that there's a big difference between the best and
the worst cases. What seems even more relevant is that in the runs that
performed better, disk utilization and CPU wait time were actually a bit lower.

The disk is an NVMe and I used a recent commit from master (2b86da69). The
volume is a replica 3 with 3 bricks.
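
For reference, the volume is created more or less like this (hostnames, volume
name and brick paths are placeholders for my real ones):

# gluster volume create testvol replica 3 \
      node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1
# gluster volume start testvol
# mount -t glusterfs node1:/testvol <mountpoint>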

I'm not sure what could be causing this. Any ideas? Could anyone try to
reproduce it to see whether it's a problem in my environment or a more
general issue?

Thanks,

Xavi