I had a similar problem with some relatively underpowered servers (2x
E5-2603, 6 cores at 1.7 GHz, no HT, 12-14 2 TB OSDs per server, 32 GB RAM).
There was a process on a couple of the servers that would hang and chew up
all available CPU. When that happened, I started getting scrub errors on
those
On 5 March 2018 at 14:45, Jan Marquardt wrote:
On 05.03.18 at 13:13, Ronny Aasen wrote:
> I had some similar issues when I started my proof of concept; especially
> the snapshot deletion I remember well.
>
> The rule of thumb for filestore, which I assume you are running, is 1 GB of RAM
> per TB of OSD, so with 8 x 4 TB OSDs you are looking at 32 GB
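
To put that rule of thumb into numbers, here is a minimal sketch (plain
Python; the OSD sizes are just the figures mentioned in this thread, not
anything measured) that estimates the per-host filestore RAM budget:

# Rough filestore sizing per the rule of thumb above: ~1 GB of RAM per TB of OSD.

def filestore_ram_estimate_gb(osd_sizes_tb):
    """Return an approximate RAM budget in GB for one host's OSDs (sizes in TB)."""
    return sum(osd_sizes_tb)

# Setup assumed in the quote: 8 x 4 TB OSDs -> roughly 32 GB of RAM.
print(filestore_ram_estimate_gb([4] * 8))    # 32

# The first poster's hosts: 12-14 x 2 TB OSDs -> roughly 24-28 GB,
# which is close to the 32 GB they have installed.
print(filestore_ram_estimate_gb([2] * 14))   # 28
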
On 5 March 2018 at 11:21, Jan Marquardt wrote:
Hi,
we are relatively new to Ceph and are observing some issues, and
I'd like to know how likely they are to happen when operating a
Ceph cluster.
Currently our setup consists of three servers which act as both
OSDs and MONs. Each server has two Intel Xeon L5420 CPUs (yes, I know,
it's not state