Ah, scratch that, my first paragraph about replicated pools is actually 
incorrect. If it’s a replicated pool and it shows incomplete, it means the most 
recent copy of the PG is missing. So ideally you would recover the PG from the 
dead OSDs in any case, if possible.
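
For the archives, a rough sketch of what that recovery can look like with 
ceph-objectstore-tool. The OSD IDs, PG ID, paths and file name below are only 
examples, adjust them to your cluster, and only run this against stopped OSDs:

    # On the host of the dead OSD, with the daemon stopped:
    # export the PG shard if the disk is still readable.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 2.1a --op export --file /tmp/pg.2.1a.export

    # Import it into a healthy, stopped OSD, then start that OSD again.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --op import --file /tmp/pg.2.1a.export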

Matthias Grandl
Head Storage Engineer
matthias.gra...@croit.io

> On 17. Jun 2024, at 16:56, Matthias Grandl <matthias.gra...@croit.io> wrote:
> 
> Hi Pablo,
> 
> It depends. If it’s a replicated setup, it might be as easy as marking the dead 
> OSDs as lost to get the PGs to recover. In that case it basically just means 
> that you are below the pool's min_size.
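> 
> For example, something along these lines. The OSD ID and PG ID are 
> placeholders, and note that marking an OSD as lost is irreversible:
> 
>    ceph osd lost 12 --yes-i-really-mean-it
>    ceph pg 2.1a query    # check "recovery_state" to see if peering unblocks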
> 
> If it is an EC setup, it might be quite a bit more painful, depending on what 
> happened to the dead OSDs and whether they are at all recoverable.
> 
> 
> Matthias Grandl
> Head Storage Engineer
> matthias.gra...@croit.io
> 
>> On 17. Jun 2024, at 16:46, David C. <david.cas...@aevoo.fr> wrote:
>> 
>> Hi Pablo,
>> 
>> Could you tell us a little more about how that happened?
>> 
>> Do you have a min_size >= 2 (or the EC equivalent)?
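>> 
>> You can check with, for example (the pool name is a placeholder):
>> 
>>    ceph osd pool get mypool size
>>    ceph osd pool get mypool min_size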
>> ________________________________________________________
>> 
>> Regards,
>> 
>> *David CASIER*
>> 
>> ________________________________________________________
>> 
>> On Mon, 17 Jun 2024 at 16:26, cellosof...@gmail.com <cellosof...@gmail.com>
>> wrote:
>> 
>>> Hi community!
>>> 
>>> Recently we had a major outage in production, and after running the
>>> automated Ceph recovery, some PGs remain in the "incomplete" state and IO
>>> operations are blocked.
>>> 
>>> Searching the documentation, forums, and this mailing list archive, I
>>> haven't yet found whether this means the data is recoverable or not. We
>>> don't have any "unknown" objects or PGs, so I believe this is some
>>> intermediate stage where we have to tell Ceph which version of the objects
>>> to recover from.
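>>> 
>>> For reference, the affected PGs and their peering state can be inspected
>>> with something like this (the PG ID below is just an example):
>>> 
>>>    ceph pg ls incomplete
>>>    ceph pg 2.1a query    # "recovery_state" shows what blocks peering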
>>> 
>>> We are willing to work with a Ceph consultant specialist, because the data
>>> at stake is very critical, so if you're interested please let me know
>>> off-list to discuss the details.
>>> 
>>> Thanks in advance
>>> 
>>> Best Regards
>>> Pablo
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
