Hi Amar,

Thanks for rolling this back up. Actually I have done some more
benchmarking and fiddled with the config to finally reach a performance
figure I could live with. I can now squeeze about 3GB/s out of that
server, which seems to be close to what I can get out of its network
uplink (using …
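
As an aside: whatever tool was used there, one quick way to sanity-check
the raw uplink independently of gluster is iperf3. A sketch, assuming
iperf3 is installed on both ends and "server" is a placeholder hostname:

# on the gluster server
iperf3 -s

# on one client: 4 parallel streams for 30 seconds
iperf3 -c server -P 4 -t 30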
Hi Pascal,

Sorry for the long delay on this one, and thanks for testing out the
different scenarios. A few questions before others can have a look and
advise you:

1. What is the `gluster volume info` output?
2. Do you see any concerning logs in the glusterfs log files?
3. Please use `gluster volume profile` while running the tests; a sketch
   follows below.
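
For point 3, the sequence would look roughly like this, with VOLNAME as
a placeholder for the actual volume name:

gluster volume info VOLNAME

# start collecting per-brick statistics
gluster volume profile VOLNAME start

# ... run the iozone benchmark ...

# dump per-FOP latency and byte counters gathered so far
gluster volume profile VOLNAME info

# stop profiling once done
gluster volume profile VOLNAME stop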
I continued my testing with 5 clients, all attached over 100Gbit/s
Omni-Path via IP over IB. When I run the same iozone benchmark across
all 5 clients, with gluster mounted using the glusterfs client, I get
an aggregated write throughput of only about 400MB/s and an aggregated
read throughput of …
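
For reference, this kind of run can be driven from a single node with
iozone's cluster mode (-+m); a sketch, assuming passwordless ssh, the
same mount point and iozone path on every client, and placeholder
hostnames:

# clients.txt: hostname, working directory, path to the iozone binary
client1 /mnt/gluster/storage /usr/bin/iozone
client2 /mnt/gluster/storage /usr/bin/iozone
client3 /mnt/gluster/storage /usr/bin/iozone
client4 /mnt/gluster/storage /usr/bin/iozone
client5 /mnt/gluster/storage /usr/bin/iozone

export RSH=ssh
./iozone -+m clients.txt -t 5 -i 0 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k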
I just noticed I left the most important parameters out :)
Here's the write command with file size and record size in it as well:

./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w -+S 0 -s 200G -r 16384k
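
For reference, the main flags: -i 0 selects the write/rewrite test,
-t 1 runs a single thread, -F names the test file, -I opens it with
O_DIRECT to bypass the page cache, -e and -c include flush and close
times in the timing, -+n skips the retest passes, -C reports per-child
byte counts, and -w keeps the test files around for a later read pass.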
Also, I ran the benchmark without direct_io, which resulted in an even …
Hi all,

I am currently testing gluster on a single server. I have three bricks,
each a hardware RAID6 volume with thin-provisioned LVM that was aligned
to the RAID and then formatted with XFS.
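
For completeness, a sketch of that kind of brick preparation; the device
name, sizes and stripe geometry (here 10 data disks with a 256k stripe
unit) are placeholder assumptions, not the actual values:

# physical volume aligned to the full RAID stripe (10 x 256k)
pvcreate --dataalignment 2560k /dev/sdb
vgcreate vg_brick1 /dev/sdb

# thin pool plus a thin volume on top of it
lvcreate --size 10T --thinpool pool vg_brick1
lvcreate --virtualsize 10T --thin --name brick1 vg_brick1/pool

# XFS aligned to the same stripe (su = stripe unit, sw = data disks)
mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/vg_brick1/brick1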
I've created a distributed volume so that entire files get distributed
across my three bricks.
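
A plain distribute volume over the three bricks would be created with
something like this; the volume name and brick paths are placeholders:

gluster volume create bigvol \
    server1:/bricks/brick1/data \
    server1:/bricks/brick2/data \
    server1:/bricks/brick3/data
gluster volume start bigvol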
fir…