That message indicates that the checksum of a message between your kernel
client and an OSD didn't match, i.e. the message was corrupted somewhere in
transit. It could be an actual physical transmission error, but if you don't
see other issues it isn't fatal; the client and OSD can recover from it.
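
If you want to rule out a flaky NIC or cable, a quick sanity check on both
the client and the OSD hosts is something like the below (purely a sketch;
replace eth0 with your actual interface name):

    # how often has the kernel client hit this so far?
    dmesg | grep -c 'bad crc'

    # look for wire-level CRC/frame errors and drops on the interface
    ip -s link show eth0
    ethtool -S eth0 | grep -iE 'crc|err|drop'

If those counters stay clean and the cluster stays HEALTH_OK, the messenger
layer just discards the corrupted message and resends it, so the occasional
one of these is harmless.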

On Wed, Oct 4, 2017 at 8:52 AM Josy <j...@colossuscloudtech.com> wrote:

> Hi,
>
> We have set up a cluster with 8 OSD servers (31 disks).
>
> Ceph health is OK.
> --------------
> [root@las1-1-44 ~]# ceph -s
>    cluster:
>      id:     de296604-d85c-46ab-a3af-add3367f0e6d
>      health: HEALTH_OK
>
>    services:
>      mon: 3 daemons, quorum ceph-las-mon-a1,ceph-las-mon-a2,ceph-las-mon-a3
>      mgr: ceph-las-mon-a1(active), standbys: ceph-las-mon-a2
>      osd: 31 osds: 31 up, 31 in
>
>    data:
>      pools:   4 pools, 510 pgs
>      objects: 459k objects, 1800 GB
>      usage:   5288 GB used, 24461 GB / 29749 GB avail
>      pgs:     510 active+clean
> --------------
>
> We created a pool and mounted it as an RBD image on one of the client
> servers. While adding data to it, we see the errors below:
>
> ========
> [939656.039750] libceph: osd20 10.255.0.9:6808 bad crc/signature
> [939656.041079] libceph: osd16 10.255.0.8:6816 bad crc/signature
> [939735.627456] libceph: osd11 10.255.0.7:6800 bad crc/signature
> [939735.628293] libceph: osd30 10.255.0.11:6804 bad crc/signature
>
> ========
>
> Can anyone explain what this is and whether I can fix it?
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
