Hello,

Are you running 10.0.2.5 or 10.2.5?


If you are running 10.2, you can read this documentation:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent


'rados list-inconsistent-obj' will give you the reason for this scrub error.
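For example (a sketch only; the pool name is a placeholder, and PG 1.129 is taken from the log line quoted below):

```shell
# List the PGs in a pool that have inconsistencies (pool name is a placeholder).
rados list-inconsistent-pg <pool-name>

# Show why PG 1.129 is inconsistent; json-pretty output is easier to read.
rados list-inconsistent-obj 1.129 --format=json-pretty
```

The output names the affected object, the shard, and the error (read error, digest mismatch, etc.), which tells you whether a repair is safe.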


And I would not use Ceph on top of RAID6.

Your data should already be safe with Ceph.


Etienne


________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Hauke Homburg 
<hhomb...@w3-creative.de>
Sent: Tuesday, July 4, 2017 17:41
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Cluster with Deep Scrub Error

On 02.07.2017 at 13:23, Hauke Homburg wrote:
Hello,

I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and Ceph
10.0.2.5. All OSDs run on RAID6.
In this cluster I have a deep scrub error:
/var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1 
log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors

This line is the only line I can find with the error.

I tried to repair with ceph osd deep-scrub and ceph pg repair.
Neither fixed the error.
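Concretely, the commands were along these lines (a sketch; osd.6 and PG 1.129 are taken from the log line above):

```shell
# Re-run a deep scrub on the OSD that reported the error
# (osd.6, from the log file name above).
ceph osd deep-scrub 6

# Ask Ceph to repair the inconsistent PG (1.129, from the log line).
ceph pg repair 1.129
```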

What can I do to repair the error?

Regards

Hauke

--
www.w3-creative.de

www.westchat.de



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Hello,

Today I ran ceph osd scrub <OSD NUM>. After some hours I had a running
Ceph again.

I wonder why the run takes such a long time for one OSD. Does Ceph queue
the scrubs at this point?

regards

Hauke

--
www.w3-creative.de

www.westchat.de
