Re: [ceph-users] Ceph Cluster with Deep Scrub Error

2017-08-14 Thread Hauke Homburg
On 04.07.2017 at 17:58, Etienne Menguy wrote: > rados list-inconsistent-ob ... Hello, sorry for my late reply. We installed some new servers and now we have osd pool default size = 3. At this point I tried again to repair the PG with ceph pg repair and ceph pg deep-scrub. I tried again to delete the rados ...
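The repair attempt described above usually comes down to the commands below; PG 1.129 is taken from the error log quoted in this thread, and the pool name "rbd" is only a placeholder for whatever pool the PG belongs to, so this is a sketch rather than a transcript of what was run:

    # confirm the pool really keeps three replicas now ("rbd" is a placeholder name)
    ceph osd pool get rbd size
    # ask the primary OSD to rewrite the bad copies from a healthy replica
    ceph pg repair 1.129
    # verify with a fresh deep scrub, then re-check cluster health
    ceph pg deep-scrub 1.129
    ceph health detail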

Re: [ceph-users] Ceph Cluster with Deep Scrub Error

2017-07-04 Thread Etienne Menguy
Subject: Re: [ceph-users] Ceph Cluster with Deep Scrub Error On 02.07.2017 at 13:23, Hauke Homburg wrote: Hello, I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and ceph 10.0.2.5. All OSDs run on a RAID6. In this cluster I have a deep scrub error: /var/log/ceph/ceph-osd.6.log-201...
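Etienne's suggested command is cut off in the preview above; since Jewel it reads as follows, with 1.129 taken from the error log (a hedged sketch, not a quote from the mail):

    # list the objects that deep scrub flagged as inconsistent in PG 1.129
    rados list-inconsistent-obj 1.129 --format=json-pretty
    # the companion command for snapshot-set inconsistencies
    rados list-inconsistent-snapset 1.129 --format=json-pretty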

Re: [ceph-users] Ceph Cluster with Deep Scrub Error

2017-07-04 Thread Hauke Homburg
On 02.07.2017 at 13:23, Hauke Homburg wrote: > Hello, > > I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 > and ceph 10.0.2.5. All OSDs run on a RAID6. > In this cluster I have a deep scrub error: > /var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1 > ...
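To reproduce the error on demand rather than waiting for the next scheduled scrub, one would typically re-run the deep scrub and watch the cluster log; a minimal sketch, assuming the same PG 1.129 from the log line above:

    # trigger an immediate deep scrub of the affected PG
    ceph pg deep-scrub 1.129
    # watch cluster log messages (scrub start/finish and any errors) as they arrive
    ceph -w
    # inspect the PG's detailed state, including the last deep scrub stamp
    ceph pg 1.129 query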

[ceph-users] Ceph Cluster with Deep Scrub Error

2017-07-02 Thread Hauke Homburg
Hello, I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and ceph 10.0.2.5. All OSDs run on a RAID6. In this cluster I have a deep scrub error: /var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1 log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors This ...
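For context, the usual first step with a log line like the one above is to identify the inconsistent PG and the OSDs that hold it; this is a hedged sketch using the PG id 1.129 and the osd.6 log path from the message, not commands taken from the thread:

    # list inconsistent PGs and the health warnings that go with them
    ceph health detail | grep -i inconsistent
    # map PG 1.129 to its up/acting OSD set
    ceph pg map 1.129
    # follow the log of the OSD that reported the scrub error
    tail -f /var/log/ceph/ceph-osd.6.log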