[ceph-users] Re: osdspec_affinity error in the Cephadm module

2023-08-31 Thread Adam Huffman
on code will fail. This can happen when a device has multiple LVs where some of are used by Ceph and at least one LV isn't used by Ceph." so maybe you can start there in terms of finding a potential workaround for now.

On Wed, Aug 16, 2023 at 12:05 PM Adam Huffman <
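The comment quoted above points at devices carrying a mix of Ceph and non-Ceph LVs. A hedged sketch of how one might check for that case on the affected host — the device name is a placeholder, and ceph-volume-created LVs are identified by their `ceph.*` LVM tags:

```shell
# Placeholder device; substitute the disk cephadm complains about.
DEV=/dev/sdX

# List every LV on that device with its LVM tags. LVs created by
# ceph-volume carry tags such as ceph.osd_id=..., so a device showing
# both tagged and untagged LVs matches the failure case described above.
sudo lvs -o lv_name,vg_name,lv_tags,devices | grep "$DEV"

# ceph-volume's own inventory of Ceph-managed LVs on this host:
sudo cephadm ceph-volume lvm list
```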

[ceph-users] osdspec_affinity error in the Cephadm module

2023-08-16 Thread Adam Huffman
I've been having fun today trying to invite a new disk that replaced a failing one into a cluster. One of my attempts to apply an OSD spec was clearly wrong, because I now have this error:

Module 'cephadm' has failed: 'osdspec_affinity'

and this was the traceback in the mgr logs:

Traceback
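When a bad OSD spec leaves the cephadm mgr module in a failed state, one possible recovery path is to remove the offending spec and restart the module via a mgr failover. A sketch, assuming the misapplied spec is still registered (the service name `osd.bad_spec` is a placeholder):

```shell
# Inspect the OSD specs the orchestrator currently holds; the one
# applied by mistake should show up here.
ceph orch ls osd --export

# Remove the bad spec (replace osd.bad_spec with the actual service name).
ceph orch rm osd.bad_spec

# The cephadm module typically stays failed until the mgr restarts;
# failing over to a standby mgr reloads it.
ceph mgr fail
```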

[ceph-users] Re: cephadm problem with MON deployment

2023-07-11 Thread Adam Huffman
the request, the new MONs were created.

On Tue, 11 Jul 2023 at 08:57, Adam Huffman wrote:

> Forgot to say we're on Pacific 16.2.13.
>
> On Tue, 11 Jul 2023 at 08:55, Adam Huffman wrote:
>
>> Hello
>>
>> I'm trying to add MONs in advance of a planned downtime.

[ceph-users] Re: cephadm problem with MON deployment

2023-07-11 Thread Adam Huffman
Forgot to say we're on Pacific 16.2.13.

On Tue, 11 Jul 2023 at 08:55, Adam Huffman wrote:

> Hello
>
> I'm trying to add MONs in advance of a planned downtime.
>
> This has actually ended up removing an existing MON, which isn't helpful.
>
> The error I'm seeing is:

[ceph-users] cephadm problem with MON deployment

2023-07-11 Thread Adam Huffman
Hello

I'm trying to add MONs in advance of a planned downtime.

This has actually ended up removing an existing MON, which isn't helpful.

The error I'm seeing is:

Invalid argument: /var/lib/ceph/mon/ceph-/store.db: does not exist (create_if_missing is false)
error opening mon data directory at
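Cephadm schedules MONs to match the placement spec, so changing a count can reshuffle (and remove) existing daemons. A hedged sketch of pinning MONs to an explicit host list before adding new ones — the hostnames are placeholders:

```shell
# Current MON placement spec and running MON daemons:
ceph orch ls mon --export
ceph orch ps --daemon-type mon

# Pin MONs to an explicit host list so adding MONs cannot implicitly
# reschedule or remove an existing one (hostnames are examples):
ceph orch apply mon --placement="mon1,mon2,mon3,mon4,mon5"
```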

[ceph-users] Re: Unclear on metadata config for new Pacific cluster

2022-02-23 Thread Adam Huffman
was just a few GBs in use immediately after creation.

> Zitat von Adam Huffman :
>
> > Hello
> >
> > We have a new Pacific cluster configured via Cephadm.
> >
> > For the OSDs, the spec is like this, with the intention for DB and WAL to
> > be on N
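To confirm whether the DB and WAL actually landed on the intended devices, and how much of the DB is in use, one can query the OSD itself. A sketch, using osd id 0 as an example (the second command must run on the host where that OSD lives):

```shell
# Per-OSD metadata shows which devices back the data, DB and WAL:
ceph osd metadata 0 | grep -e bluefs_db -e bluefs_wal -e devices

# BlueFS usage counters (db_used_bytes etc.) for a running OSD:
ceph daemon osd.0 perf dump bluefs
```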

[ceph-users] Unclear on metadata config for new Pacific cluster

2022-02-22 Thread Adam Huffman
Hello

We have a new Pacific cluster configured via Cephadm.

For the OSDs, the spec is like this, with the intention for DB and WAL to be on NVMe:

spec:
  data_devices:
    rotational: true
  db_devices:
    model: SSDPE2KE032T8L
  filter_logic: AND
  objectstore: bluestore
  wal_devices:
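The message is truncated at `wal_devices:`, so the full spec is unknown. A hypothetical completion for illustration — the `service_id`, `placement`, and the `wal_devices` model are assumptions — together with a dry run that previews disk assignment without creating OSDs:

```shell
# Hypothetical completion of the truncated spec above; service_id,
# placement and the wal_devices model are assumptions.
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: nvme_db_wal
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: true
  db_devices:
    model: SSDPE2KE032T8L
  wal_devices:
    model: SSDPE2KE032T8L
  filter_logic: AND
  objectstore: bluestore
EOF

# Preview which disks each host would use, without touching anything:
ceph orch apply -i osd_spec.yml --dry-run
```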