The error found in that thread, iirc, is that the block size of the disk
does not match the block size of the filesystem, so the kernel ends up
trying to access the remainder of a partial block at the end of the
device. I also remember that the error didn't cause any actual problems.
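
If you want to double-check that on your own nodes, something along
these lines should show whether the device size is an even multiple of
the XFS block size (the device path and mount point here are just
placeholders for your setup):

    blockdev --getsize64 /dev/sdb    # device size in bytes
    blockdev --getpbsz /dev/sdb      # physical sector size
    xfs_info /var/lib/ceph/osd/ceph-0 | grep bsize   # XFS block size

If the device size isn't a clean multiple of bsize, that would be
consistent with the "access beyond end of device" message for the
trailing partial block.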

Why RAID 6? While rebuilding a RAID 6 after a dead drive, your cluster
would likely see worse degraded performance than if you ran individual
OSDs and simply lost one drive. I suppose the RAID keeps the cluster from
ever seeing degraded objects/PGs, so if your use case needs that, it
makes sense. From an overall architecture standpoint, though, it doesn't.
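
Either way, it's easy to see what the cluster itself thinks about
degraded data; on a monitor node something like this (standard Ceph CLI,
nothing specific to your setup) shows it:

    ceph -s                                # overall health, degraded PG count
    ceph health detail | grep -i degraded  # which PGs are degraded
    ceph osd tree                          # how OSDs map to hosts/disks

With one OSD per disk you'd see degraded PGs there during recovery after
a failed drive; with the RAID 6 underneath, Ceph never notices the
failure at all.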

On Tue, Aug 15, 2017, 5:39 AM Hauke Homburg <hhomb...@w3-creative.de> wrote:

> Hello,
>
>
> I found some errors in the cluster with dmesg -T:
>
> attempt to access beyond end of device
>
> I found the following Post:
>
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39101.html
>
> Is this a problem with the size of the filesystem itself or "only" a
> driver bug? I ask because we have 8 HDDs in each node running on a
> hardware RAID 6. On this RAID we have the XFS partition.
>
> So we have one big filesystem on 1 OSD in each server instead of 1
> filesystem per HDD across the 8 HDDs in each server.
>
> greetings
>
> Hauke
>
>
> --
> www.w3-creative.de
>
> www.westchat.de
>
