Re: [ceph-users] howto delete a pg
Hi,

thanks for the advice. That option only works when more replicas are
intact than broken. In my case the numbers are equal (replica count 2),
so it does not apply. I will definitely have to decide which copy to
drop; I am just unsure how to drop it properly.

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107

Am 16.04.2016 um 06:39 schrieb huang jun:
> The cluster warning means that some objects in that PG are inconsistent
> between the primary and its replicas, so you can try
> 'ceph pg repair $PGID'.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
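Rather than deleting the PG directories by hand, the usual approach is to stop the affected OSD and remove the PG copy with ceph-objectstore-tool, exporting it first as a safety net, then let backfill restore it from the surviving replica. A dry-run sketch that only prints the commands (the tool flags and paths here are assumptions and should be checked against your Ceph version before running anything for real):

```shell
# Dry-run sketch: prints the commands instead of executing them.
# Swap run() for direct execution only after verifying each step.
PGID=0.e6
OSD=12
run() { echo "+ $*"; }

run systemctl stop ceph-osd@${OSD}                    # quiesce the object store
run ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${OSD} \
    --pgid ${PGID} --op export --file /root/${PGID}.export   # keep a backup first
run ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${OSD} \
    --pgid ${PGID} --op remove                        # drop this copy of the PG
run systemctl start ceph-osd@${OSD}                   # recovery refills it from the peer
```

The point of the export step is that with size=2 there is no second good copy to fall back on if the "bad" replica turns out to be the better one.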
Re: [ceph-users] howto delete a pg
The cluster warning means that some objects in that PG are inconsistent
between the primary and its replicas, so you can try 'ceph pg repair $PGID'.

2016-04-16 9:04 GMT+08:00 Oliver Dzombic:
> Hi,
>
> i meant of course
>
> 0.e6_head
> 0.e6_TEMP
>
> in
>
> /var/lib/ceph/osd/ceph-12/current

--
Thank you!
HuangJun
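As a small sketch of the suggested repair step: the PG id can be pulled out of the deep-scrub error line and fed to ceph pg repair. The sed pattern below is an assumption based on the log line quoted in this thread, not a general parser for all OSD log formats:

```shell
# Extract the PG id from the deep-scrub [ERR] line and print the
# repair command to issue against the cluster.
LOG_LINE='2016-04-16 01:08:40.058585 7f4f6bc70700 -1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch'
PGID=$(printf '%s\n' "$LOG_LINE" | sed -n 's/.*\[ERR\] : \([^ ]*\) deep-scrub.*/\1/p')
echo "ceph pg repair $PGID"   # -> ceph pg repair 0.e6
```

Note that repair only helps when enough good copies remain to vote on; with two replicas disagreeing on object counts, it may not resolve the mismatch, which is what the rest of the thread is about.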
Re: [ceph-users] howto delete a pg
Hi,

I meant, of course,

0.e6_head
0.e6_TEMP

in

/var/lib/ceph/osd/ceph-12/current

Sorry...

--
Oliver Dzombic
IP-Interactive

Am 16.04.2016 um 03:03 schrieb Oliver Dzombic:
> pg 0.e6 is active+clean+inconsistent, acting [12,7]
[ceph-users] howto delete a pg
Hi,

pg 0.e6 is active+clean+inconsistent, acting [12,7]

/var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
-1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
hit_set_archive, 0/0 whiteouts, 18467422208/18471616512 bytes, 0/0
hit_set_archive bytes.

I tried to follow

https://ceph.com/planet/ceph-manually-repair-object/

but it did not really work for me.

How do I remove this PG completely from osd.12?

Can I simply delete

0.6_head
0.6_TEMP

in

/var/lib/ceph/osd/ceph-12/current

and will Ceph take the other copy and replicate it again, so all is fine?
Or would that be the start of the end? ^^;

Thank you!

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive