Hi Mark,
Sorry for the delay; I didn't see your response.
Yes, the pools are all using 1x replication. I have tried changing numjobs
and iodepth, to no avail. This is using the kernel cephfs client.
Gabryel
Have you tried smaller increments instead of jumping straight from 8 to 128?
That is quite a big leap.
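If it helps, here is a minimal fio job sketch for stepping through intermediate queue depths; the directory, size, and runtime are placeholders to adjust for your mount, and iodepth would be varied per run (e.g. 8, 16, 32, 64, then 128):

```ini
; hypothetical fio job for a kernel cephfs mount -- adjust directory/size to your setup
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=1G
directory=/mnt/cephfs/fio-test
runtime=60
time_based

[qd-sweep]
numjobs=4
iodepth=16   ; step through 8, 16, 32, 64 before jumping to 128
```

Comparing results at each step should show where throughput stops scaling, rather than only the two endpoints.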
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
10G should be fine on bluestore. The smallest size you can have is about 2GB,
since LVM takes up about 1GB of space at that size; at that point most of
the disk is taken up by LVM. I have seen/recorded performance benefits in
some cases when using small OSD sizes on bluestore instead of
Hello,
I was wondering what people's experience has been with running Ceph over RDMA?
- How did you set it up?
- What documentation did you use to set it up?
- What known issues did you run into when using it?
- Do you still use it?
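For context, a minimal sketch of the ceph.conf settings typically involved, assuming the async+rdma messenger; the device name below is a placeholder for whatever RDMA-capable NIC is present:

```ini
# sketch only -- assumes the async+rdma messenger; device name is a placeholder
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0
```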
Kind regards
Gabryel Mason-Williams