Anyone have any thoughts on this?  It looks like I may have to wipe
out the affected OSDs and rebuild them, but I'm afraid that may result
in data loss because the old OSD-first CRUSH map is still in place :(.
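
For reference, here is roughly how I plan to sanity-check the CRUSH
rule and PG state before removing anything. This is only a sketch and
the paths are placeholders for my setup:

    # Extract and decompile the current CRUSH map
    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

    # In the rule section, "chooseleaf ... type host" spreads replicas
    # across hosts; "type osd" means replicas can share a host, so
    # wiping several OSDs on one box is riskier
    grep -A8 '^rule ' /tmp/crushmap.txt

    # See which OSDs/hosts hold data and whether any PGs are already
    # degraded or incomplete before taking the affected OSDs out
    ceph osd tree
    ceph pg dump | grep -E 'degraded|incomplete|down'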

On Fri, Feb 8, 2013 at 1:36 PM, Mandell Degerness
<mand...@pistoncloud.com> wrote:
> We ran into an error which appears very much like a bug fixed in 0.44.
>
> This cluster is running version:
>
> ceph version 0.48.1argonaut (commit:a7ad701b9bd479f20429f19e6fea7373ca6bba7c)
>
> The error line is:
>
> Feb  8 18:50:07 192.168.8.14 ceph-osd: 2013-02-08 18:50:07.545682
> 7f40f9f08700  0 filestore(/mnt/osd97)  error (17) File exists not
> handled on operation 20 (11279344.0.0, or op 0, counting from 0)
>
> A more complete log is attached.
>
> First question: is this a known bug fixed in more recent versions?
>
> Second question: is there any hope of recovery?