On 20/7/19 11:53 pm, Marc Roos wrote:
> Reverting back to filestore is quite a lot of work and time again. Maybe
> see first if with some tuning of the vms you can get better results?
None of the VMs are particularly disk-intensive. There are two users
accessing the system over a WiFi network for
Thank you gentlemen. I will give this a shot and reply with what worked.
On Jul 19, 2019, at 11:11 AM, Tarek Zegar <tze...@us.ibm.com> wrote:
On the host with the osd run:
ceph-volume lvm list
"☣Adam" ---07/18/2019 03:25:05 PM---The block device can be found
in /var/lib/ceph/osd
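
For reference, a minimal sketch of the two suggestions above; the OSD id in the path is a placeholder for whatever your host actually shows:

# List LVM-backed OSDs on this host; the output includes the underlying
# block device (and DB/WAL devices, if any) for each OSD.
ceph-volume lvm list

# For BlueStore OSDs the data directory also carries a "block" symlink
# pointing at the same device. "ceph-0" below is a placeholder id.
ls -l /var/lib/ceph/osd/ceph-0/block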
I have two queries.
1) I have an RBD mirroring setup with two clusters, primary and secondary, as
peers, and I have enabled image mode. In this setup I create an RBD image with
journaling enabled.
But whenever I enable mirroring on the image, I get an error in osd.log.
Primary osd log: failed to get o
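
For anyone following along, this is roughly the usual image-mode sequence on the primary cluster. It is only a sketch; the pool and image names (rbd/vm-disk1) are invented for illustration:

# image-mode mirroring only replicates images that have the journaling feature
rbd feature enable rbd/vm-disk1 exclusive-lock journaling
# enable mirroring for this specific image (the pool's mirror mode must be "image")
rbd mirror image enable rbd/vm-disk1
# check what the cluster reports for the image
rbd mirror image status rbd/vm-disk1
rbd info rbd/vm-disk1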
Hi ceph users:
I was doing a write benchmark and found that some IO is blocked for a
very long time. The following log shows one op; it seems to be waiting for
a replica to finish. My ceph version is 12.2.4, and the pool is 3+2 EC.
Can anyone give me some advice about how I should debug this next?
{
"ops":
Reverting back to filestore is quite a lot of work and time again. Maybe
first see if you can get better results with some tuning of the VMs?
What you can also try for IO-intensive VMs is adding an SSD pool. I moved
some Exchange servers onto one. I also tuned down the logging, because that is
writing
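
As a rough sketch of that SSD-pool idea; the rule name, pool name, and PG counts below are invented for illustration, and this assumes the SSDs already carry the "ssd" device class:

# replicated CRUSH rule that only selects OSDs with device class "ssd"
ceph osd crush rule create-replicated ssd-only default host ssd
# new pool for the IO-intensive VM images, placed by that rule
ceph osd pool create vm-ssd 128 128 replicated ssd-only
ceph osd pool application enable vm-ssd rbd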