Could it be this bug?
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html
In most of the OSDs, buffer_anon is high:
"buffer_anon": {
    "items": 268443,
    "bytes": 1421912265
},
Karun Josy
On Sun, Feb 4, 2018 at 7:03 AM, Karun Josy wrote:
We can also see this in the error log:
Feb 2 16:41:28 ceph-las1-a4-osd kernel: bstore_kv_sync: page allocation stalls for 14188ms, order:0, mode:0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null)
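To see how often these stalls happen, here is a minimal sketch that counts them per day (assuming a syslog-style kernel log at /var/log/messages; the path is an assumption and differs per distribution):

#!/usr/bin/env python
# Minimal sketch: count kernel 'page allocation stalls' events per day.
import collections

stalls = collections.Counter()
with open('/var/log/messages') as log:
    for line in log:
        if 'page allocation stalls' in line:
            # Syslog lines start with "Mon DD HH:MM:SS host ...";
            # bucket by month and day.
            stalls[' '.join(line.split()[:2])] += 1

for day, count in sorted(stalls.items()):
    print(day, count)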
Karun Josy
On Sun, Feb 4, 2018 at 6:19 AM, Karun Josy wrote:
Hi,
We are using an EC profile in our cluster.
We are seeing very high RAM usage on one OSD server.
Sometimes free memory drops so low that the server hangs. We have to restart the daemons, which frees up the memory, but it gets used up again in a very short time.
Memory usage of the daemons on the affected server:
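This is roughly how the per-daemon resident memory can be sampled; a minimal sketch that reads /proc directly (Linux-only; matching daemons by the ceph- name prefix is an assumption):

#!/usr/bin/env python
# Minimal sketch: print resident memory (RSS) of every running ceph daemon.
import os

PAGE_SIZE = os.sysconf('SC_PAGE_SIZE')

for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open('/proc/%s/comm' % pid) as f:
            comm = f.read().strip()
        if not comm.startswith('ceph-'):
            continue
        # The second field of /proc/<pid>/statm is the resident set size,
        # counted in pages (see proc(5)).
        with open('/proc/%s/statm' % pid) as f:
            rss_pages = int(f.read().split()[1])
        print('%s (pid %s): %d MiB' % (comm, pid, rss_pages * PAGE_SIZE // 2 ** 20))
    except (IOError, OSError):
        continue  # the process exited while we were reading it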
On Sat, 3 Feb 2018, Wido den Hollander wrote:
Hi,
I just wanted to inform people about the fact that Monitor databases can
grow quite big when you have a large cluster which is performing a very
long rebalance.
I'm posting this on ceph-users and ceph-large as it applies to both, but
you'll see this sooner on a cluster with a lot of OSDs.
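A simple way to keep an eye on this is the on-disk size of the monitor's store.db. A minimal sketch, assuming the default data path /var/lib/ceph/mon/<cluster>-<id>/store.db:

#!/usr/bin/env python
# Minimal sketch: report the on-disk size of each local monitor database.
import glob
import os

for store in glob.glob('/var/lib/ceph/mon/*/store.db'):
    total = 0
    for root, _dirs, files in os.walk(store):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    print('%s: %.1f GiB' % (store, total / float(2 ** 30)))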
The migration completed flawlessly, without any issues or slow requests.
Thanks.
k
Good morning,
after another disk failure, we currently have 7 inactive PGs [1], which
are stalling IO for the affected VMs.
It seems that Ceph, when rebuilding, does not focus on repairing the
inactive PGs first, which surprised us quite a lot: inactive and active
PGs are mixed in the recovery queue.
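For anyone in the same spot: one way to push those PGs to the front of the queue is 'ceph pg force-recovery', available since Luminous. A minimal sketch, assuming the JSON shape of recent releases; please sanity-check the PG list before running it:

#!/usr/bin/env python
# Minimal sketch: ask Ceph to prioritise recovery of stuck-inactive PGs.
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'pg', 'dump_stuck', 'inactive', '--format', 'json'])
stuck = json.loads(out.decode())
# Some releases print a bare list, others wrap it in 'stuck_pg_stats'.
entries = stuck if isinstance(stuck, list) else stuck.get('stuck_pg_stats', [])

pgids = [e['pgid'] for e in entries]
if pgids:
    # Move these PGs to the front of the recovery queue.
    subprocess.check_call(['ceph', 'pg', 'force-recovery'] + pgids)
    print('forced recovery of: ' + ', '.join(pgids))
else:
    print('no stuck-inactive PGs found')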