On 03/25/17 23:01, Nick Fisk wrote:
>
>> I think I owe you another graph later when I put all my VMs on there
>> (probably finally fixed my rbd snapshot hanging VM issue ...worked around it
>> by disabling exclusive-lock,object-map,fast-diff). The bandwidth hungry ones
>> (which hung the most
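For reference, the workaround described above can be sketched as below. The pool and image names are hypothetical; `rbd feature disable` is the real rbd(8) subcommand, and the feature list is the one quoted in the message (fast-diff depends on object-map, which in turn depends on exclusive-lock, so the three go together):

```shell
# Hypothetical pool/image names, for illustration only.
POOL=rbd
IMG=vm-disk-1

# On a live cluster you would first check which features are enabled:
#   rbd info ${POOL}/${IMG} | grep features

# The workaround from the message: disable the three interdependent features.
# (Shown via echo here; against a real cluster you would run the command itself.)
CMD="rbd feature disable ${POOL}/${IMG} exclusive-lock,object-map,fast-diff"
echo "$CMD"
```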
Thanks for your response Peter, comments inline
> -----Original Message-----
> From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
> Sent: 23 March 2017 22:45
> To: n...@fisk.me.uk; 'ceph-users' <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Preconditioning an RBD image
On Wed, Mar 22, 2017 at 6:05 AM Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
> Does iostat (eg. iostat -xmy 1 /dev/sd[a-z]) show high util% or await
> during these problems?
>
It does, from watching atop.
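A rough way to pick the saturated devices out of that iostat output is sketched below. The sample lines are made up for illustration, and real sysstat column layouts vary between versions, which is why the awk locates the await and %util columns by header name rather than position:

```shell
# Fabricated sample of `iostat -x` output (column layout varies by sysstat version).
iostat_sample='Device:  rrqm/s wrqm/s r/s  w/s  await %util
sda      0.00   3.10   1.2  85.0 210.5 98.7
sdb      0.00   0.40   0.3  12.0 4.2   7.9'

# Flag any device whose await exceeds 50 ms or whose %util exceeds 90.
echo "$iostat_sample" | awk '
NR == 1 { for (i = 1; i <= NF; i++) { if ($i == "await") a = i; if ($i == "%util") u = i }; next }
$a > 50 || $u > 90 { print $1, "await=" $a, "util=" $u }'
```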
>
> Ceph filestore requires lots of metadata writing (directory splitting
> for example), xattrs, leveldb, etc. [...]
triple write overhead?
Nick
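If the triple write here refers to a pool with size=3 replication, then combined with filestore's journal double-write (journal first, then the data file) each client byte can hit disk roughly six times. A back-of-envelope sketch, with all numbers illustrative rather than measured:

```shell
# Illustrative write-amplification arithmetic for a filestore pool,
# assuming size=3 replication and the journal double-write per OSD.
awk 'BEGIN {
  replicas  = 3      # pool size (replication factor)
  journal   = 2      # journal write + data write on each OSD
  client_mb = 100    # hypothetical client write, in MB
  printf "amplification=%dx, disk_mb=%d\n", replicas * journal, client_mb * replicas * journal
}'
```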
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Peter Maloney
Sent: 22 March 2017 10:06
To: Alex Gorbachev <a...@iss-integration.com>; ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Preconditioning an RBD image
Does iostat (eg. iostat -xmy 1 /dev/sd[a-z]) show high util% or await
during these problems?
Ceph filestore requires lots of metadata writing (directory splitting
for example), xattrs, leveldb, etc. which are small sync writes that
HDDs are bad at (100-300 iops), and SSDs are good at (cheapo
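To put rough numbers on why those small sync writes hurt, here is a sketch comparing assumed HDD and SSD sync-write rates; both the burst size and the iops figures are illustrative assumptions, not benchmarks:

```shell
# Time to drain a burst of small sync metadata writes (e.g. from a
# directory split), at an assumed 150 iops for an HDD versus an assumed
# 20000 iops for a datacentre SSD.
awk 'BEGIN {
  ops = 30000   # hypothetical burst of small sync writes
  printf "hdd_seconds=%d\n", ops / 150
  printf "ssd_seconds=%.1f\n", ops / 20000
}'
```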
I wanted to share a recent experience, in which a few RBD volumes,
formatted as XFS and exported via Ubuntu nfs-kernel-server, performed
poorly and even generated "out of space" warnings on a nearly empty
filesystem. I tried a variety of hacks and fixes to no effect, until
things started