> First of all, I'd still recommend using the orchestrator to deploy OSDs.
> Building OSDs manually and then adopting them is redundant. Or do you have
> issues with the drivegroups?
I am having to do it this way because I couldn't find any documentation on how
to specify a separate DB/WAL device when deploying OSDs through the
orchestrator.
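For reference, an OSD service spec (drivegroup) can place the DB/WAL on separate devices via the `db_devices` filter. A minimal sketch, assuming the common layout where rotational disks hold data and SSDs hold the DB (service_id and filters here are illustrative, not from this thread):

```yaml
service_type: osd
service_id: osd_hdd_with_ssd_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1   # HDDs take the data
  db_devices:
    rotational: 0   # SSDs take the DB (WAL co-locates with the DB unless wal_devices is set)
```

Applied with `ceph orch apply -i <spec>.yaml`, this lets the orchestrator build the OSDs directly instead of adopting manually built ones.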
Martin Conway wrote:
> I find that backfilling and possibly scrubbing often comes to a halt for no
> apparent reason. If I put a server into maintenance mode or kill and restart
> OSDs it bursts back into life again.
>
> Not sure how to diagnose why the recovery process stalls.
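For stalled backfill, a few stock Ceph CLI calls (general-purpose commands, not specific to this thread) help narrow down where recovery is stuck:

```shell
ceph -s                                # overall cluster and recovery status
ceph health detail                     # which PGs/OSDs the warnings point at
ceph pg dump_stuck unclean             # PGs stuck backfilling/recovering
ceph config get osd osd_max_backfills  # current backfill throttle
```

If the stuck PGs all map to the same OSDs, that points at those daemons rather than at a cluster-wide throttle.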
> The issue related to the container image tag that Eugen filed has also been
> fixed on reef. Thanks for filing that.
>
> Martin, you may want to retry things after the next reef release.
> Unfortunately, I don't know when that is planned, but I think it's soonish.
I just had another look through the issue tracker and found this bug already
listed: https://tracker.ceph.com/issues/59428
I need to go back to the other issues I am having and figure out whether they
are related or something different.
Hi
I wrote before about issues I was having with cephadm.
b4e",
line 217, in __getattr__
return super().__getattribute__(name)
AttributeError: 'CephadmContext' object has no attribute 'fsid'
I am running into other issues as well, but I think they may point back to this
issue of "'CephadmContext' object has no attribute 'fsid'".
quay.io/ceph/ceph:v18:v18.2.0
ceph orch upgrade start quay.io/ceph/ceph:v18.2.0
does work as expected.
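As a general note (these are stock `ceph orch` subcommands, not quoted from this thread), passing the full image reference via `--image` avoids ambiguity in how the tag is parsed, and the status subcommand tracks progress:

```shell
# Pin the exact image reference rather than letting the tag be inferred:
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.0

# Watch the upgrade progress:
ceph orch upgrade status
```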
Let me know if there is any other information that would be helpful, but I have
since worked around these issues and have my Ceph cluster back in a happy state.
Regards,
Martin Conway
IT and Digital Media Manager