[ceph-users] Re: Questions about PG auto-scaling and node addition

2023-09-15 Thread Christophe BAILLON
Thanks for your reply - Original message - > From: "Kai Stian Olstad" > To: "Christophe BAILLON" > Cc: "ceph-users" > Sent: Thursday 14 September 2023 21:44:57 > Subject: Re: [ceph-users] Questions about PG auto-scaling and node addition > On W

[ceph-users] Questions about PG auto-scaling and node addition

2023-09-13 Thread Christophe BAILLON
Hello, We have a cluster with 21 nodes, each having 12 x 18TB, and 2 NVMe for db/wal. We need to add more nodes. The last time we did this, the PGs remained at 1024, so the number of PGs per OSD decreased. Currently, we are at 43 PGs per OSD. Does auto-scaling work correctly in Ceph versio
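
For reference, one quick way to see what the autoscaler intends to do after adding nodes is to compare the current and suggested pg_num per pool; the pool name below is only an example, not taken from this thread:

ceph osd pool autoscale-status          # PG_NUM vs NEW PG_NUM per pool
# If a data pool is expected to stay large, marking it "bulk" (available in recent
# releases) lets the autoscaler give it a high PG count up front instead of waiting
# for data to arrive:
ceph osd pool set cephfs_data bulk true
# Or raise pg_num manually if you prefer not to rely on the autoscaler:
ceph osd pool set cephfs_data pg_num 2048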

[ceph-users] Re: db/wal pvmoved ok, but gui show old metadatas

2023-07-03 Thread Christophe BAILLON
}, { "device": "/dev/sdc", "device_id": "SEAGATE_ST18000NM004J_ZR52TT83C148JFSJ" } ] - Mail original - > De: "Christophe BAILLON" > À: "ceph-users" > Envoyé: Vendredi 30 Juin 2023 15:33:41 > Objet

[ceph-users] db/wal pvmoved ok, but gui show old metadatas

2023-06-30 Thread Christophe BAILLON
Hello, we have a Ceph 17.2.5 cluster with a total of 26 nodes; 15 of them have faulty NVMe drives on which the db/wal resides (one NVMe for the first 6 OSDs and another for the remaining 6). We replaced them with new drives and pvmoved the data to avoid losing the OSDs. So far, there are n
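
As an aside (not from the original message): the dashboard shows device information gathered from the orchestrator inventory and from what each OSD reported at startup, so after a pvmove it may help to refresh both; the OSD id below is an example:

ceph orch device ls --refresh                              # rescan devices on all hosts
ceph osd metadata 12 | grep -E '"devices"|"device_ids"'    # what OSD 12 itself reports
ceph orch daemon restart osd.12                            # metadata is re-reported when the OSD restarts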

[ceph-users] Advices for the best way to move db/wal lv from old nvme to new one

2023-03-22 Thread Christophe BAILLON
Hello, We have a cluster with 26 nodes, and 15 nodes have a bad batch of 2 NVMe drives, each carrying 6 LVs for db/wal. We have to replace them, because they fail one by one... The defective NVMe drives are M.2 Samsung enterprise models. When they fail, we get sense errors and the NVMe disappears; if we power o
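
A minimal LVM-level sketch of the swap, assuming the new NVMe can be added to the same volume group that holds the db/wal LVs; all device and VG names are examples:

pvcreate /dev/nvme2n1                 # new NVMe
vgextend ceph-db-vg /dev/nvme2n1      # add it to the existing db/wal VG
pvmove /dev/nvme0n1 /dev/nvme2n1      # move all extents off the failing NVMe
vgreduce ceph-db-vg /dev/nvme0n1      # then drop the old PV from the VG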

[ceph-users] How to monitor growing of db/wal partitions ?

2022-11-14 Thread Christophe BAILLON
Hello, What is a simple way to monitor the growth of the db/wal partitions? We have 2 NVMe drives shared across 12 OSDs per host (1 NVMe for 6 OSDs), and we want to monitor their growth. We use cephadm to manage our clusters. Thanks in advance -- Christophe BAILLON Mobile :: +336 16 400 522 Work :: https://eyona.com
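
One option is to read the BlueFS counters each OSD exposes; the OSD id below is an example, and on a cephadm cluster this can be run from "cephadm shell":

ceph tell osd.0 perf dump | grep -E '"(db|wal)_(total|used)_bytes"'

The same values should also be available as ceph_bluefs_* metrics through the mgr prometheus module, which makes them easy to graph and alert on.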

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Christophe BAILLON
Thanks, it's fine > From: "Wyll Ingersoll" > To: "Christophe BAILLON" > Cc: "Eugen Block" , "ceph-users" > Sent: Thursday 27 October 2022 22:49:18 > Subject: Re: [ceph-users] Re: SMB and ceph question > No - the recommendation is ju

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Christophe BAILLON
> there aren't any guides yet on docs.ceph.com. > > Regards, > Eugen > > [1] https://documentation.suse.com/ses/7.1/single-html/ses-admin/#cha-ses-cifs > > Quoting Christophe BAILLON: > >> Hello, >> >> For a side project, we need to exp

[ceph-users] SMB and ceph question

2022-10-27 Thread Christophe BAILLON
that, can you help me to find the best way to deploy Samba on top? Regards -- Christophe BAILLON Mobile :: +336 16 400 522 Work :: https://eyona.com Twitter :: https://twitter.com/ctof
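
A minimal sketch of a Samba share exported through the vfs_ceph module, assuming a cephx user "samba" already exists and the Samba host can reach CephFS; share name, user and paths are examples:

cat >> /etc/samba/smb.conf <<'EOF'
[cephfs]
   path = /
   vfs objects = ceph
   ceph:config_file = /etc/ceph/ceph.conf
   ceph:user_id = samba
   kernel share modes = no
   read only = no
EOF
systemctl restart smbd

An alternative is to mount CephFS with the kernel client and export the mountpoint as an ordinary Samba share, which avoids the vfs_ceph dependency.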

[ceph-users] Re: Status of Quincy 17.2.5 ?

2022-10-19 Thread Christophe BAILLON
-- Christophe BAILLON Mobile :: +336 16 400 522 Work :: https://eyona.com Twitter :: https://twitter.com/ctof

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-09-15 Thread Christophe BAILLON
nd 30GB of extra room for compaction? >> >> I don't use cephadm, but it's maybe related to this regression: >> https://tracker.ceph.com/issues/56031. At least the symptoms look very >> similar... >> >> Cheers, >> >> --

[ceph-users] Re: Advice to create a EC pool with 75% raw capacity usable

2022-09-08 Thread Christophe BAILLON
ld storage used only with cephfs and where we store only big files. We cannot have a fully replicated cluster, and we need maximum uptime... Regards - Original message - > From: "Anthony D'Atri" > To: "Bailey Allison" > Cc: "Danny Webb" , "Ch

[ceph-users] Advice to create a EC pool with 75% raw capacity usable

2022-09-07 Thread Christophe BAILLON
Hello, I need advice on creating an EC profile and the associated CRUSH rule for a cluster of 15 nodes, each with 12 x 18TB disks, with the objective of being able to lose 2 hosts or 4 disks. I would like to have the most space available; a 75% ratio would be ideal. If you can give me some
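
A hedged sketch (profile and pool names are examples): k=6, m=2 gives exactly 6/8 = 75% usable capacity and, with the failure domain set to host, survives the loss of two hosts without data loss (note that the default min_size of k+1 pauses I/O on the affected PGs until they recover):

ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=host
ceph osd pool create cephfs_data 1024 1024 erasure ec62
ceph osd pool set cephfs_data allow_ec_overwrites true   # required to use an EC pool as a CephFS data pool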

[ceph-users] Re: CephPGImbalance: deviates by more than 30%

2022-07-11 Thread Christophe BAILLON

[ceph-users] Re: Suggestion to build ceph storage

2022-06-20 Thread Christophe BAILLON
Quincy. > > Gr. Stefan > > [1]: > https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-08 Thread Christophe BAILLON
> email. The autoscaler will increase pg_num as soon as you push data > into it, no need to tear the cluster down. > > Quoting Christophe BAILLON: > >> Hello, >> >> thanks for your reply >> >> No, I did not stop the autoscaler >> >> root@stor

[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-07 Thread Christophe BAILLON
> very low (too low) PG numbers per OSD (between 0 and 6), did you stop > the autoscaler at an early stage? If you don't want to use the > autoscaler you should increase the pg_num, but you could set the > autoscaler to warn mode and see what it suggests. > > > Quoting Chris
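
The warn-mode suggestion above boils down to something like the following; the pool name is an example:

ceph osd pool set cephfs_data pg_autoscale_mode warn   # warn instead of resizing automatically
ceph osd pool autoscale-status                         # compare PG_NUM with NEW PG_NUM
ceph osd pool set cephfs_data pg_num 1024              # apply the suggestion manually if it looks right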

[ceph-users] Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-06 Thread Christophe BAILLON
Hi all, I got many errors about PGs deviating by more than 30% on a newly installed cluster. This cluster is managed by cephadm. All 15 boxes have: 12 x 18TB, 2 x NVMe, 2 x SSD for boot. Our main pool is EC 6+2, for exclusive use with CephFS, created with this method: ceph orch apply -i osd_spec.yaml wi
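
The osd_spec.yaml used in the thread is not shown; a hedged sketch of what a drive-group spec for hosts with rotational data disks and NVMe db devices could look like:

cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: hdd_with_nvme_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd_spec.yaml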

[ceph-users] Re: Problem with ceph-volume

2022-05-31 Thread Christophe BAILLON
Original message ----- > From: "Christophe BAILLON" > To: "ceph-users" > Sent: Tuesday 31 May 2022 18:15:15 > Subject: [ceph-users] Problem with ceph-volume > Hello > > On a new cluster, installed with cephadm, I have prepared new OSDs with > separate > wal and db

[ceph-users] Problem with ceph-volume

2022-05-31 Thread Christophe BAILLON
Hello, On a new cluster installed with cephadm, I have prepared new OSDs with separate wal and db. To do it I followed this doc: https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/ I run ceph version 17.2.0. When I run the ceph-volume creation I get this error: root@store-par
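
A hedged sketch of the kind of ceph-volume invocation involved, with separate db and wal LVs; device and VG/LV names are examples, and on a cephadm cluster this would normally go through "cephadm ceph-volume" or an OSD service spec instead:

ceph-volume lvm prepare --bluestore \
    --data /dev/sdb \
    --block.db ceph-db-vg/db-sdb \
    --block.wal ceph-wal-vg/wal-sdb
ceph-volume lvm activate --all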