[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-29 Thread Thomas Roth
… > ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374 > ceph orch host add lxbk0374 10.20.2.161 -> 'ceph orch host ls' shows the node as no longer Offline. -> Repeat with all the other hosts, and everything looks fine also from the orch view. My question: Did I miss this procedure …
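
A minimal sketch of the re-add procedure described in this message (hostname and address are the ones quoted; repeating it for every affected host is implied, not shown):

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374   # re-distribute the orchestrator's SSH key
    ceph orch host add lxbk0374 10.20.2.161              # re-register the host with its address
    ceph orch host ls                                    # the host should no longer show as Offline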

[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-27 Thread Thomas Roth
… if there is no longer an issue connecting to the host, it should mark the host online again. Thanks, - Adam King. On Thu, Jun 23, 2022 at 12:30 PM Thomas Roth wrote: Hi all, found this bug https://tracker.ceph.com/issues/51629 (Octopus 15.2.13), reproduced it in Pacific and now again in Quincy …
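
The reply is cut off before the actual command; one plausible way to trigger such a connectivity re-check (an assumption here, not quoted from the truncated message) is:

    ceph cephadm check-host lxbk0374   # re-test the mgr's SSH connection to the host
    ceph orch host ls                  # a host that is reachable again should drop the Offline flag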

[ceph-users] cephadm orch thinks hosts are offline

2022-06-23 Thread Thomas Roth
… Cheers, Thomas

[ceph-users] active+undersized+degraded due to OSD size differences?

2022-06-19 Thread Thomas Roth
… This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did not find the command to inject my failure domain into the config database... Regards, Thomas
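
Since there is no ceph.conf under cephadm, the failure domain would normally go either into the config database or into a dedicated CRUSH rule; a sketch of both routes, with rule and pool names as placeholders:

    ceph config set global osd_crush_chooseleaf_type 0             # config-database route; only affects rules created afterwards
    ceph osd crush rule create-replicated rep-by-osd default osd   # explicit rule with failure domain "osd"
    ceph osd pool set mypool crush_rule rep-by-osd                 # point an existing pool at the new rule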

[ceph-users] ceph.pub not persistent over reboots?

2022-06-15 Thread Thomas Roth
… believe that. Regards, Thomas

[ceph-users] set configuration options in the cephadm age

2022-06-14 Thread Thomas Roth
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/ talks about changing 'osd_crush_chooseleaf_type' before creating monitors or OSDs, for the special case of a 1-node cluster. However, the documentation fails to explain how/where to set this option, seeing that with …
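
One way to set the option before any daemons exist in a cephadm deployment is to pass an extra config file at bootstrap; a sketch, with file name and monitor IP as placeholders:

    # initial-ceph.conf contains:
    #   [global]
    #   osd_crush_chooseleaf_type = 0
    cephadm bootstrap --mon-ip 192.168.1.10 --config initial-ceph.conf
    # on a cluster that is already bootstrapped, the config database is the equivalent place:
    ceph config set global osd_crush_chooseleaf_type 0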

[ceph-users] Re: v17.2.0 Quincy released

2022-05-25 Thread Thomas Roth
Hello, just found that this "feature" is not restricted to upgrades - I just tried to bootstrap an entirely new cluster with Quincy, also with the fatal switch to a non-root user: adding the second mon results in > Unable to write lxmon1:/etc/ceph/ceph.conf: scp: /tmp/etc/ceph/ceph.conf.new: …
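
Not part of the truncated message, but the SSH-user knobs involved in this scenario look roughly like this (user name and monitor IP are placeholders):

    cephadm bootstrap --mon-ip 192.168.1.10 --ssh-user deploy   # bootstrap with a dedicated, sudo-capable user
    ceph cephadm set-user root                                  # or switch an existing cluster's orchestrator back to root
    ceph cephadm get-ssh-config                                 # inspect the SSH settings the mgr actually uses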

[ceph-users] Re: Multipath and cephadm

2022-01-30 Thread Thomas Roth
… more ideal), hence there is no obvious way to use separate db_devices, but this does look to work for me as far as it goes. Hope that helps, Peter Childs. On Tue, 25 Jan 2022, 17:53 Thomas Roth wrote: Would like to know that as well. I have the same setup - cephadm, Pacific, CentOS8, …
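
The reply appears to describe addressing the multipath devices by explicit device-mapper path in an OSD spec; a rough sketch of that idea (service id, host pattern and mpath names are assumptions):

    cat > osd-mpath.yaml <<'EOF'
    service_type: osd
    service_id: mpath-osds
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        paths:
          - /dev/mapper/mpatha
          - /dev/mapper/mpathb
    EOF
    ceph orch apply -i osd-mpath.yaml --dry-run   # preview which OSDs would be created before applying for real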

[ceph-users] Re: Multipath and cephadm

2022-01-25 Thread Thomas Roth

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Thomas Roth
… Thomas

[ceph-users] HDD <-> OSDs

2021-06-22 Thread Thomas Roth
… to try cephfs on ~10 servers with 70 HDDs each. That would mean each system has to deal with 70 OSDs, on 70 LVs? Really no aggregation of the disks? Regards, Thomas
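
For context on the one-OSD-per-disk model the question refers to, the usual cephadm workflow is roughly this (a sketch, not from the thread):

    ceph orch device ls --wide                    # list every drive the orchestrator can see on each host
    ceph orch apply osd --all-available-devices   # deploy one OSD (one LV) per eligible drive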