I have updated a test cluster by just updating the rpms and issuing
ceph osd require-osd-release, because it was mentioned in the status. Is
there anything more that needs to be done?


- update the packages on all nodes
sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
yum update
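
After the update you can check that the new version is actually
installed on the node (12.2.x is the Luminous series):

ceph --version
# should now report something like: ceph version 12.2.x (...) luminous (stable)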

- then on each node first restart the monitor
systemctl restart ceph-mon@X
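
Before moving on to the next node, it is worth checking that the
monitor rejoined the quorum, e.g.:

ceph -s
# all mons should be back in the quorum line before you restart the next one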

- then on each node restart the osds
ceph osd tree
systemctl restart ceph-osd@X
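
(ceph osd tree shows which osd ids live on that node.) A minimal sketch
of the per-osd loop, with osd id 1 just as an example:

systemctl restart ceph-osd@1
ceph -s
# wait until all pgs are active+clean again before restarting the next osd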

- then on each node restart the mds
systemctl restart ceph-mds@X
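
You can confirm the mds came back with:

ceph mds stat
# should show the mds as active again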

ceph osd require-osd-release luminous
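
Afterwards you can verify that the flag is set and that all daemons
really run luminous (ceph versions is new in Luminous):

ceph osd dump | grep require_osd_release
ceph versions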



-----Original Message-----
From: Hauke Homburg [mailto:hhomb...@w3-creative.de] 
Sent: Sunday 2 July 2017 13:24
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph Cluster with Deep Scrub Error

Hello,

I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2 and
ceph 10.0.2.5. All OSDs run on RAID6.
In this cluster I have a deep scrub error:
/var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1 
log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors

This line is the only line I can find with the error.

I tried to repair it with ceph osd deep-scrub on the osd and with ceph
pg repair. Neither fixed the error.
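
For reference, a rough sketch of the usual commands for this case (pg
id 1.129 taken from the log line above):

ceph health detail
# lists the inconsistent pg(s)
rados list-inconsistent-obj 1.129 --format=json-pretty
# shows which objects/shards the deep-scrub flagged
ceph pg repair 1.129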

What can I do to repair the error?

Regards

Hauke

--
www.w3-creative.de

www.westchat.de


