[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Anthony D'Atri
> On May 20, 2024, at 2:24 PM, Matthew Vernon wrote:
>
> Hi,
>
> Thanks for your help!
>
> On 20/05/2024 18:13, Anthony D'Atri wrote:
>
>> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type. Set
>> that back to the default value of `1`. This option is marked `dev` for a
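A minimal sketch of the rack-aware CRUSH rule approach being suggested, assuming a replicated pool under the default root; the rule name and pool placeholder below are illustrative, not from the thread:

    # drop the dev-only option from the mon config store so the default (1) applies again
    ceph config rm osd osd_crush_chooseleaf_type

    # create a replicated rule whose failure domain is the rack
    ceph osd crush rule create-replicated replicated-racks default rack

    # point an existing pool at the new rule
    ceph osd pool set <pool> crush_rule replicated-racks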

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Matthew Vernon
Hi,

Thanks for your help!

On 20/05/2024 18:13, Anthony D'Atri wrote:
> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type. Set
> that back to the default value of `1`. This option is marked `dev` for a
> reason ;)

OK [though not obviously at

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Anthony D'Atri
>>> This has left me with a single sad pg:
>>> [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
>>>    pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
>>
>> .mgr pool perhaps.
>
> I think so
>
>>> ceph osd tree shows that CRUSH picked up my racks OK,
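A hedged sketch of commands that would show why such a pg cannot be mapped (pg 1.0 is taken from the thread; everything else is generic):

    # which pool owns pg 1.0, and which CRUSH rule does that pool use?
    ceph osd pool ls detail

    # which OSDs does CRUSH currently map the pg to? (an empty set indicates a mapping problem)
    ceph pg map 1.0

    # inspect the rules themselves, e.g. to check the failure-domain type
    ceph osd crush rule dump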

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Matthew Vernon
Hi,

On 20/05/2024 17:29, Anthony D'Atri wrote:
>> On May 20, 2024, at 12:21 PM, Matthew Vernon wrote:
>>
>> This has left me with a single sad pg:
>> [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
>>    pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
>
> .mgr pool

[ceph-users] Re: cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Anthony D'Atri
> On May 20, 2024, at 12:21 PM, Matthew Vernon wrote:
>
> Hi,
>
> I'm probably Doing It Wrong here, but. My hosts are in racks, and I wanted
> ceph to use that information from the get-go, so I tried to achieve this
> during bootstrap.
>
> This has left me with a single sad pg:
> [WRN]

[ceph-users] cephadm bootstraps cluster with bad CRUSH map(?)

2024-05-20 Thread Matthew Vernon
Hi,

I'm probably Doing It Wrong here, but. My hosts are in racks, and I wanted ceph to use that information from the get-go, so I tried to achieve this during bootstrap.

This has left me with a single sad pg:
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
    pg 1.0 is stuck
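As a hedged illustration of getting rack placement in from the start with cephadm, host specs can carry an initial CRUSH location; the hostnames, addresses and rack names below are placeholders:

    # hosts.yaml
    service_type: host
    hostname: node01
    addr: 10.0.0.11
    location:
      rack: rack1
    ---
    service_type: host
    hostname: node02
    addr: 10.0.0.12
    location:
      rack: rack2

A spec like this can be handed to bootstrap with something like `cephadm bootstrap --mon-ip 10.0.0.11 --apply-spec hosts.yaml`; a CRUSH rule with a rack failure domain is still needed for pools to actually use the rack buckets.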

[ceph-users] Re: Cephfs over internet

2024-05-20 Thread Marc
> Hi all,
> Due to so many reasons (political, heating problems, lack of space
> and so on) we have to plan for our ceph cluster to be hosted externally.
> The planned version to set up is Reef.
> Reading up on documentation we found that it was possible to run in
> secure mode.
>
> Our ceph.conf file

[ceph-users] Cephfs over internet

2024-05-20 Thread Marcus
Hi all,

Due to so many reasons (political, heating problems, lack of space and so on) we have to plan for our ceph cluster to be hosted externally. The planned version to set up is Reef. Reading up on documentation we found that it was possible to run in secure mode.

Our ceph.conf file will state
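For context, a hedged sketch of the msgr2 secure-mode settings such a ceph.conf usually revolves around (not the poster's actual file; as of Reef the defaults allow both `crc` and `secure`, and these settings restrict connections to the encrypted mode):

    [global]
    ms_bind_msgr2 = true
    # accept only the encrypted msgr2 mode, not crc-only
    ms_cluster_mode = secure
    ms_service_mode = secure
    ms_client_mode = secure
    ms_mon_cluster_mode = secure
    ms_mon_service_mode = secure
    ms_mon_client_mode = secure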

[ceph-users] CEPH quincy 17.2.5 with Erasure Code

2024-05-20 Thread Andrea Martra
Hi everyone,

I'm managing a Ceph Quincy 17.2.5 cluster, waiting to upgrade it to version 17.2.7, composed and configured as follows:
- 16 identical nodes: 256 GB RAM, 32 CPU cores (64 threads), 12 x rotary HDD (block) + 4 x SATA SSD (RocksDB/WAL)
- Erasure Code 11+4 (Jerasure)
- 10 x S3 RGW
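A hedged sketch of how an 11+4 Jerasure profile of that shape is typically defined (profile and pool names are placeholders; the thread does not show the actual profile):

    # 11 data + 4 coding chunks, host as the failure domain
    ceph osd erasure-code-profile set ec-11-4 \
        k=11 m=4 plugin=jerasure technique=reed_sol_van \
        crush-failure-domain=host

    ceph osd erasure-code-profile get ec-11-4

    # an EC pool (e.g. an RGW data pool) using the profile
    ceph osd pool create <pool> erasure ec-11-4

With 16 nodes and a host failure domain, each PG needs 15 distinct hosts, so an 11+4 layout only just fits.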

[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-20 Thread Kotresh Hiremath Ravishankar
Please share the mds perf dump as requested. We need to understand what's happening before suggesting anything.

Thanks & Regards,
Kotresh H R

On Fri, May 17, 2024 at 5:35 PM Akash Warkhade wrote:
> @Kotresh Hiremath Ravishankar
>
> Can you please help on above
>
> On Fri, 17 May, 2024,
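A hedged example of how that perf dump is usually collected; the MDS name is a placeholder, and under cephadm the command is typically run from `cephadm shell` on the MDS host:

    # full perf dump from the active MDS via its admin socket
    ceph daemon mds.<name> perf dump

    # the journal/trimming counters live under the mds_log section
    ceph daemon mds.<name> perf dump mds_log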