[ceph-users] RBD Object Size for BlueStore OSD

2019-09-29 Thread Lazuardi Nasution
Hi, Is the 4MB default RBD object size still relevant for BlueStore OSDs? Is there any guideline for the best RBD object size on BlueStore OSDs, especially on high-performance media (SSD, NVMe)? Best regards,
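
As a rough sketch, the object size can be set per image at creation time and verified afterwards (the pool and image names below are placeholders):

    rbd create --size 100G --object-size 4M rbd/testimage   # 4M is the current default
    rbd info rbd/testimage                                   # the object size is reported in the "order" line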

[ceph-users] PG is stuck in remapped and degraded

2019-09-29 Thread 展荣臻(信泰)
Hi all, which mail server should I send to: ceph-users@lists.ceph.com or ceph-us...@ceph.io? I sent the same mail to ceph-us...@ceph.io yesterday, but I can't find it in the mail sent to me from the list today, so I am sending it again. We use OpenStack + Ceph (Hammer) in production. There are 22 OSDs on a host
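
For a PG stuck in remapped+degraded, a first pass usually starts with something like the following (the PG id is a placeholder):

    ceph health detail
    ceph pg dump_stuck unclean
    ceph pg 1.2f query        # detailed state and peering history of one stuck PG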

Re: [ceph-users] KVM userspace-rbd hung_task_timeout on 3rd disk

2019-09-29 Thread ceph
I guess this depends on your cluster setup... Do you have slow requests as well? - Mehmet On 11 September 2019 12:22:08 MESZ, Ansgar Jazdzewski wrote: >Hi, > >we are running ceph version 13.2.4 and qemu 2.10; we figured out that >on VMs with more than three disks, IO fails with a hung task timeout, >we
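
To check for slow requests and ops stuck in flight on a suspect OSD, something along these lines is typical (osd.0 is a placeholder id):

    ceph health detail
    ceph daemon osd.0 dump_ops_in_flight
    ceph daemon osd.0 dump_historic_ops   # recently completed, slowest ops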

[ceph-users] Nautilus Ceph Status Pools & Usage

2019-09-29 Thread Lazuardi Nasution
Hi, I'm starting with Nautilus and have created and deleted some pools. When I check with "ceph status", I find something weird with the "pools" number after all pools have been deleted. Is the meaning of the "pools" number different than in Luminous? As there is no pool and no PG, why is there usage on "ceph status"
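
A few commands that help cross-check what "ceph status" reports (no assumptions beyond a running Nautilus cluster):

    ceph df detail            # per-pool and raw usage
    ceph osd pool ls detail   # pools that still exist, with pg_num and flags
    ceph osd df               # per-OSD usage; BlueStore overhead can show up here even with no pools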

[ceph-users] Commit and Apply latency on nautilus

2019-09-29 Thread Alex Litvak
Hello everyone, I am running a number of parallel benchmark tests against a cluster that should be ready to go into production. I enabled Prometheus to monitor various information, and while the cluster stays healthy throughout the tests with no errors or slow requests, I noticed an apply / commit late
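
Commit and apply latency can also be pulled per OSD without Prometheus, as a quick sanity check (osd.0 is a placeholder id):

    ceph osd perf                   # commit_latency(ms) / apply_latency(ms) for every OSD
    ceph daemon osd.0 perf dump     # full counter dump from one OSD's admin socket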

[ceph-users] How to limit radosgw user privilege to read only mode?

2019-09-29 Thread Charles Alva
Hi Cephalopods, I'm in the process of migrating a radosgw Erasure Code pool from an old cluster to a Replica pool on a new cluster. To prevent users from writing new objects to the old pool, I want to set the radosgw user privilege to read only. Could you guys please share how to limit radosgw user privilege to read onl
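
One commonly used mechanism for this is the per-user op mask in radosgw-admin; whether it covers every case here is worth verifying, and the uid below is a placeholder:

    radosgw-admin user modify --uid=someuser --op-mask=read                  # read-only
    radosgw-admin user modify --uid=someuser --op-mask="read,write,delete"   # restore full access later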