It seems we are hitting the qemu file descriptor limit. Thanks for
pointing that out.
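If the descriptor limit is the culprit, one common fix (assuming here that the guests run under libvirt, which the thread does not state) is to raise the limit in libvirt's qemu driver config rather than per-shell with ulimit:

```
# /etc/libvirt/qemu.conf -- raise RLIMIT_NOFILE for qemu processes.
# Each RBD volume holds connections to many OSDs, so descriptor use
# grows with disk count and cluster size. The value is illustrative.
max_files = 32768
```

This takes effect for guests started after libvirtd is restarted; already-running VMs keep their old limit.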
Thanks,
Jeyaganesh.
On 7/15/15, 2:12 AM, "Jan Schermer" wrote:
We are getting the same log message as you, but not too often, and from what I
gathered it is normal to see that.
Not sure how often you are seeing those.
How many disks are connected to that VM? Take a look in /proc/pid/fd and count
the file descriptors, then compare that to /proc/pid/limits.
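The check above can be sketched in a couple of shell commands. PID should be the qemu process id; the shell's own pid ($$) is used below only so the snippet runs as-is:

```shell
# Count open file descriptors for a process and show its soft limit.
# Replace PID with the qemu process id of the affected VM.
PID=$$
open=$(ls /proc/"$PID"/fd | wc -l)
# Column 4 of the "Max open files" row is the soft limit.
limit=$(awk '/Max open files/ {print $4}' /proc/"$PID"/limits)
echo "open=$open limit=$limit"
```

If `open` is at or near `limit`, new connections (and hence I/O to some OSDs) will start failing until descriptors are freed or the limit is raised.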
On 7/15/15, 12:17 AM, "Jan Schermer" wrote:
>Do you have a comparison of the same workload on a different storage than
>CEPH?
I don't have direct comparison data with a different storage backend. I did run a
similar workload with a single VM booted off a local disk and
I didn't see any issues.
Do you have a comparison of the same workload on a different storage than CEPH?
I am asking because those messages indicate slow requests - basically some
operation took 120s (which would look insanely high to a storage administrator,
but is sort of expected in a cloud environment). And even on a
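For reference, a slow request surfaces in the OSD log roughly like the sample line below; the exact wording is an assumption from memory of Giant-era logs, so adjust the pattern to what your OSDs actually emit. The sed extracts the request age in seconds:

```shell
# Self-contained sample; on a real cluster you would grep the actual
# OSD logs, e.g. /var/log/ceph/ceph-osd.*.log, instead of this printf.
printf '2015-07-15 01:17:00 osd.3 [WRN] slow request 120.184543 seconds old\n' \
  | sed -n 's/.*slow request \([0-9.]*\) seconds old.*/\1/p'
```

Counting and bucketing those ages per OSD is a quick way to see whether the slowness is cluster-wide or concentrated on a few OSDs.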
On 7/14/15, 4:56 PM, "ceph-users on behalf of Wido den Hollander" wrote:
On 07/15/2015 01:17 AM, Jeya Ganesh Babu Jegatheesan wrote:
> Hi,
>
> We have an OpenStack + Ceph cluster based on the Giant release. We use Ceph for
> the VMs' volumes, including the boot volumes. Under load, we see write
> access to the volumes get stuck from within the VM. The same would work after
Hi,
We have an OpenStack + Ceph cluster based on the Giant release. We use Ceph for the
VMs' volumes, including the boot volumes. Under load, we see write access to
the volumes get stuck from within the VM. The same would work after a VM reboot.
The issue is seen with and without rbd cache. Let me know
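Since the report mentions testing with and without rbd cache: that behavior is controlled from the client side of ceph.conf, so a stanza like the one below (option names are the standard RBD cache settings; values are illustrative, not a recommendation) is what would differ between the two runs:

```ini
[client]
    rbd cache = true
    # Stay in writethrough mode until the guest issues its first flush,
    # which is safer for guests that never send flushes.
    rbd cache writethrough until flush = true
```

Note the cache only changes client-side behavior; it would not explain OSD-side slow-request warnings, which is consistent with the issue appearing in both configurations.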