[ceph-users] Re: Odd CephFS Performance

2020-04-03 Thread Gabryel Mason-Williams
Hi Mark,

Sorry for the delay, I didn't see your response. Yes, the pools are all using 1x replication. I have tried changing the numjobs and iodepth, to no avail. This is using kernel CephFS.

Gabryel
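For anyone trying to reproduce this kind of benchmark, a minimal fio job file exercising a kernel CephFS mount might look like the sketch below. The mount point, sizes, and parameter values are assumptions for illustration, not taken from the thread; `numjobs` and `iodepth` are the two knobs mentioned above.

```ini
; Hypothetical fio job for a kernel CephFS mount -- adjust paths/values.
[global]
directory=/mnt/cephfs   ; assumed CephFS kernel mount point
size=1G
direct=1
ioengine=libaio

[seq-write]
rw=write
bs=4M
numjobs=4      ; one of the knobs varied in the thread
iodepth=16     ; likewise varied, reportedly with no effect
```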

[ceph-users] Re: Unable to increase PG numbers

2020-02-24 Thread Gabryel Mason-Williams
Have you tried making smaller increments instead of jumping straight from 8 to 128? That is quite a big leap.

___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
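The suggestion above can be sketched as stepping `pg_num` up in powers of two rather than in one jump. The helper below is illustrative only; the pool name and the `ceph` CLI call in the comment are placeholders.

```python
def pg_num_steps(current, target):
    """Return intermediate pg_num values from current up to target,
    doubling at each step (Ceph prefers powers of two)."""
    steps = []
    while current < target:
        current = min(current * 2, target)
        steps.append(current)
    return steps

# For the 8 -> 128 case from the thread:
print(pg_num_steps(8, 128))  # [16, 32, 64, 128]
# Each value would then be applied with something like:
#   ceph osd pool set <pool> pg_num <value>
# waiting for the cluster to settle between steps.
```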

[ceph-users] Re: minimum osd size?

2019-10-25 Thread gabryel . mason-williams
10G should be fine. On BlueStore the smallest size you can have is about 2GB, since LVM takes up about 1GB of space; at that size most of the disk is taken up by LVM. I have seen/recorded performance benefits in some cases when using small OSD sizes on BlueStore instead of
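The arithmetic behind that ~2GB floor can be made explicit. Treating the LVM overhead as a fixed ~1GB cost (an approximation based on the figure in the post), the usable fraction of a small OSD shrinks quickly:

```python
LVM_OVERHEAD_GB = 1.0  # approximate fixed LVM cost, per the post

def usable_fraction(osd_size_gb):
    """Fraction of an OSD left for data after the fixed LVM overhead."""
    return max(osd_size_gb - LVM_OVERHEAD_GB, 0.0) / osd_size_gb

print(f"{usable_fraction(2):.0%}")   # 50% usable at the ~2GB floor
print(f"{usable_fraction(10):.0%}")  # 90% usable at 10GB
```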

[ceph-users] RDMA

2019-10-14 Thread gabryel . mason-williams
Hello,

I was wondering what your experience has been with using Ceph over RDMA?
- How did you set it up?
- What documentation did you use to set it up?
- Any known issues when using it?
- Do you still use it?

Kind regards
Gabryel Mason-Williams
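For context on the setup question: RDMA in Ceph is enabled through the async messenger. A minimal ceph.conf sketch might look like the fragment below; the device name is an assumption for your hardware, and exact option support varies by Ceph release, so treat this as a starting point rather than a verified configuration.

```ini
# Hypothetical ceph.conf fragment enabling the RDMA messenger.
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0   ; RDMA-capable NIC; adjust to your hardware
```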