On 13 January 2016 at 12:26, Magnus Hagdorn wrote:
> Hi there,
> we recently had a problem with two OSDs failing because of I/O errors of the
> underlying disks. We run a small ceph cluster with 3 nodes and 18 OSDs in
> total. All 3 nodes are Dell PowerEdge R515 servers with PERC H700 (MegaRAID
> SAS 2108) RAID controllers.
So let me get this straight!
You have 3 hosts with 6 drives each in RAID 0, so you have set 3 OSDs in the
CRUSH map, right?
You said the replication level is 2, so you have 2 copies of the original data.
So the pool size is 3, right?
You said 2 out of 3 OSDs are down, so you are left with only one copy of the
data.
Hi there,
we recently had a problem with two OSDs failing because of I/O errors of
the underlying disks. We run a small ceph cluster with 3 nodes and 18
OSDs in total. All 3 nodes are Dell PowerEdge R515 servers with PERC
H700 (MegaRAID SAS 2108) RAID controllers. All disks are configured as
single-drive RAID 0 volumes.