Re: [ceph-users] RBD image has no active watchers while OpenStack KVM VM is running

2017-11-29 Thread Logan Kuhn
We've seen this. Our environment isn't identical, though; we use oVirt and connect to Ceph (11.2.1) via Cinder (9.2.1). It's so very rare that we've never had any luck pinpointing it, and we have far fewer VMs (<300). Regards, Logan - On Nov 29, 2017, at 7:48 AM, Wido den Hollander
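
A quick way to double-check the symptom from an admin node is to ask for the image's watcher list directly. A minimal sketch in Python, shelling out to the rbd CLI; the pool and image names here (volumes, volume-1234) are placeholders for the Cinder volume in question:

    import json
    import subprocess

    POOL = "volumes"        # placeholder pool name
    IMAGE = "volume-1234"   # placeholder image name

    # 'rbd status' lists the clients currently watching the image header.
    # A running KVM VM attached via librbd should normally show up here.
    out = subprocess.check_output(
        ["rbd", "status", "%s/%s" % (POOL, IMAGE), "--format", "json"]
    )
    status = json.loads(out)

    watchers = status.get("watchers", [])
    if not watchers:
        print("no active watchers on %s/%s -- matches the symptom above" % (POOL, IMAGE))
    for w in watchers:
        # exact field names can vary slightly between Ceph releases
        print("watcher:", w)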

Re: [ceph-users] Prioritise recovery on specific PGs/OSDs?

2017-06-20 Thread Logan Kuhn
Is there a way to prioritize specific pools during recovery? I know there are issues open for it, but I wasn't aware it was implemented yet... Regards, Logan - On Jun 20, 2017, at 8:20 AM, Sam Wouters wrote: | Hi, | Are they all in the same pool? Otherwise you could
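
For what it's worth, this did land in later releases: Luminous added a per-pool recovery_priority option and an explicit force-recovery for individual PGs. A minimal sketch of driving both from Python via the ceph CLI; the pool name and PG id below are hypothetical:

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its text output."""
        return subprocess.check_output(("ceph",) + args, text=True)

    # Bias the recovery scheduler toward one pool's PGs
    # (pool option added in Luminous; pool name is hypothetical).
    print(ceph("osd", "pool", "set", "important-pool", "recovery_priority", "5"))

    # Jump the queue for a specific PG (the PG id is hypothetical).
    print(ceph("pg", "force-recovery", "1.2f"))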

Re: [ceph-users] Brainstorming ideas for Python-CRUSH

2017-03-21 Thread Logan Kuhn
I like the idea. Being able to play around with different configuration options, use this tool as a sanity checker, or have it show what will change and whether the changes could cause HEALTH_WARN or HEALTH_ERR would be valuable. For example, if I were to change the replication level of a pool,
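
As a sketch of the kind of dry run that would be handy, python-crush can remap a sample of objects before and after a replication change entirely offline. This assumes the Crush.parse()/Crush.map() API as I recall it from the python-crush docs, and the toy crushmap below is illustrative, not a real cluster's:

    from crush import Crush  # pip install crush (python-crush)

    # Toy map: one root, three single-device hosts. Schema and the
    # fixed-point weights (65536 == 1.0) are per the python-crush docs,
    # from memory -- treat as an assumption.
    crushmap = {
        "trees": [{
            "type": "root", "name": "dc1", "id": -1,
            "children": [
                {"type": "host", "name": "host%d" % i, "id": -(i + 2),
                 "children": [{"id": i, "name": "device%d" % i, "weight": 65536}]}
                for i in range(3)
            ],
        }],
        "rules": {
            "data": [
                ["take", "dc1"],
                ["chooseleaf", "firstn", 0, "type", "host"],
                ["emit"],
            ]
        },
    }

    c = Crush()
    c.parse(crushmap)

    # Compare placements for a handful of object hashes at size 2 vs 3:
    # every object whose device set changes implies data movement.
    for value in range(5):
        before = c.map(rule="data", value=value, replication_count=2)
        after = c.map(rule="data", value=value, replication_count=3)
        print(value, before, "->", after)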

Re: [ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Logan Kuhn
We had a very similar configuration at one point. I was fairly new when we started to move away from it, but what happened to us is that anytime a directory needed to be stat'd, backed up, ls'd, rsync'd, etc., it would take minutes to return, and while it was waiting, CPU load would spike due to iowait.
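
The pain there is the stat-per-entry pattern: a bare directory listing is one metadata stream, but ls -l, rsync, and most backup tools also stat every entry, which at this scale turns into a very long run of per-file metadata round trips. A small illustration in Python, with a hypothetical path:

    import os

    BIG_DIR = "/mnt/cephfs/bigdir"  # hypothetical directory with huge entry counts

    # Cheap: a single readdir stream, no per-entry stat
    # (roughly what a bare 'ls -f' does).
    names = [entry.name for entry in os.scandir(BIG_DIR)]

    # Expensive at this scale: entry.stat() forces a stat call per file,
    # which is the pattern behind the minutes-long ls/rsync/backup runs.
    total = 0
    for entry in os.scandir(BIG_DIR):
        total += entry.stat().st_size
    print(len(names), "entries,", total, "bytes")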