[ceph-users] Re: User + Dev Meetup Tomorrow!

2024-05-24 Thread Frédéric Nass
packages? > * Do you run bare-metal or virtualized? > > Best, > Sebastian > > On 24.05.24 at 12:28, Frédéric Nass wrote: >> Hello everyone, >> >> Nice talk yesterday. :-) >> >> Regarding containers vs RPMs and orchestration, and the related discussio

[ceph-users] Re: User + Dev Meetup Tomorrow!

2024-05-24 Thread Frédéric Nass
Hello everyone, Nice talk yesterday. :-) Regarding containers vs RPMs and orchestration, and the related discussion from yesterday, I wanted to share a few things (which I wasn't able to share yesterday on the call due to a headset/bluetooth stack issue) to explain why we use cephadm and ceph

[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-14 Thread Frédéric Nass
/*.yaml files. You can also try adding multiple -v flags to the ansible-playbook command and see if you get something useful. Regards, Frédéric. From: vladimir franciz blando Sent: Tuesday, May 14, 2024 21:23 To: Frédéric Nass Cc: Eugen Block; ceph-users Subject: Re

[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-14 Thread Frédéric Nass
t work either. > Regards, > [ https://about.me/vblando | Vlad Blando ] > On Tue, May 14, 2024 at 4:10 PM Frédéric Nass < [ > mailto:frederic.n...@univ-lorraine.fr | frederic.n...@univ-lorraine.fr ] > > wrote: >> Hello Vlad, >> We've seen this before a while bac

[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-14 Thread Frédéric Nass
Hello Vlad, We've seen this before a while back. I'm not sure I recall how we got around it, but you might want to try setting 'ip_version: ipv4' in your all.yaml file since this seems to be a condition for the facts setting. - name: Set_fact _monitor_addresses - ipv4 ansible.builtin.set_fact:
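A minimal sketch of that check, assuming a standard ceph-ansible layout where the variables live in group_vars/all.yml (the path is an assumption):

    # Verify whether ip_version is already defined for the take-over playbook
    grep -n 'ip_version' group_vars/all.yml
    # If it is absent, set it explicitly so the monitor address facts get built
    echo 'ip_version: ipv4' >> group_vars/all.yml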

[ceph-users] Re: MDS crash

2024-04-26 Thread Frédéric Nass
Hello, 'almost all diagnostic ceph subcommands hang!' -> this rang a bell. We've had a similar issue with many ceph commands hanging due to a missing L3 ACL between the MGRs and a new MDS machine that we had added to the cluster. I second Eugen's analysis: network issue, whatever the OSI layer.

[ceph-users] Re: Impact of large PG splits

2024-04-25 Thread Frédéric Nass
too easy to forget to >> reduce them later, or think that it's okay to run all the time with >> reduced headroom. >> >> Until a host blows up and you don't have enough space to recover into. >> >>> On Apr 12, 2024, at 05:01, Frédéric Nass >>> wrote: >

[ceph-users] Re: Orchestrator not automating services / OSD issue

2024-04-24 Thread Frédéric Nass
Hello Michael, You can try this: 1/ check that the host shows up in ceph orch host ls with the right label 'osds' 2/ check that the host is OK with ceph cephadm check-host . It should look like: (None) ok podman (/usr/bin/podman) version 4.6.1 is present systemctl is present lvcreate is present
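A hedged sketch of those two checks (the hostname is a placeholder):

    ceph orch host ls                    # the host should be listed with the 'osds' label
    ceph cephadm check-host <hostname>   # should report podman, systemctl and lvcreate as present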

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Frédéric Nass
. Regards, Frédéric. - On 23 Apr 24, at 13:04, Janne Johansson icepic...@gmail.com wrote: > On Tue, 23 Apr 2024 at 11:32, Frédéric Nass > wrote: >> Ceph is strongly consistent. Either you read/write objects/blocks/files with >> an >> ensured strong consistency OR yo

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-23 Thread Frédéric Nass
Hello, My turn ;-) Ceph is strongly consistent. Either you read/write objects/blocks/files with an ensured strong consistency OR you don't. The worst thing you can expect from Ceph, as long as it's been properly designed, configured and operated, is a temporary loss of access to the data. There

[ceph-users] Re: cephadm custom jinja2 service templates

2024-04-17 Thread Frédéric Nass
Hello Felix, You can download haproxy.cfg.j2 and keepalived.conf.j2 from here [1], tweak them to your needs and set them via: ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i haproxy.cfg.j2 ceph config-key set mgr/cephadm/services/ingress/keepalived.conf -i keepalived.conf.j2
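A short sketch of the full workflow; the ingress service name used in the last step is an assumption, and the reconfig is my assumption of what is needed for the running daemons to pick up the new templates:

    ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i haproxy.cfg.j2
    ceph config-key set mgr/cephadm/services/ingress/keepalived.conf -i keepalived.conf.j2
    ceph orch reconfig ingress.rgw.default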

[ceph-users] Re: PG inconsistent

2024-04-12 Thread Frédéric Nass
- On 12 Apr 24, at 15:17, Albert Shih albert.s...@obspm.fr wrote: > On 12/04/2024 at 12:56:12+0200, Frédéric Nass wrote >> > Hi, > >> >> Have you checked the hardware status of the involved drives other than with >> smartctl? Like with the manufacturer

[ceph-users] Re: PG inconsistent

2024-04-12 Thread Frédéric Nass
Hello Albert, Have you checked the hardware status of the involved drives other than with smartctl? Like with the manufacturer's tools / WebUI (iDRAC / perccli for DELL hardware, for example). If these tools don't report any media errors (that is, bad blocks on disks) then you might just be facing
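A hedged example of such checks on DELL hardware (device path, controller id and the perccli install path are assumptions):

    smartctl -a /dev/sdX                                            # SMART view of the drive
    /opt/MegaRAID/perccli/perccli64 /c0 /eall /sall show all | grep -i 'media error'
    # If the hardware checks out, the scrub error can then be repaired:
    ceph pg repair <pg.id>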

[ceph-users] Re: Impact of large PG splits

2024-04-12 Thread Frédéric Nass
ell enough or not) after adding new OSDs. BTW, what ceph version is this? You should make sure you're running v16.2.11+ or v17.2.4+ before splitting PGs to avoid this nasty bug: https://tracker.ceph.com/issues/53729 Cheers, Frédéric. - On 12 Apr 24, at 10:41, Frédéric Nass frederic

[ceph-users] Re: Impact of large PG splits

2024-04-12 Thread Frédéric Nass
Hello Eugen, Is this cluster using WPQ or mClock scheduler? (cephadm shell ceph daemon osd.0 config show | grep osd_op_queue) If WPQ, you might want to tune osd_recovery_sleep* values as they do have a real impact on the recovery/backfilling speed. Just lower osd_max_backfills to 1 before
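A minimal sketch of the check and the throttles mentioned above (the sleep value is only an illustrative assumption; tune carefully):

    cephadm shell ceph daemon osd.0 config show | grep osd_op_queue   # wpq or mclock_scheduler
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_sleep_hdd 0.1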

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-23 Thread Frédéric Nass
seems relevant. CC'ing Sridhar to get his thoughts. Cheers, Frédéric. - On 22 Mar 24, at 19:37, Kai Stian Olstad ceph+l...@olstad.com wrote: > On Fri, Mar 22, 2024 at 06:51:44PM +0100, Frédéric Nass wrote: >> >>> The OSD run bench and update osd_mclock_max_capacity_io

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Frédéric Nass
--- From: Kai To: Frédéric Cc: Michel; Pierre; ceph-users Sent: Friday, March 22, 2024 18:32 CET Subject: Re: [ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month On Fri, Mar 22, 2024 at 04:29:21PM +0100, Frédéric Nass wrote: >A/ these incredibly low values were calcu

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Frédéric Nass
e using the same recent HW/HDD). > > Thanks for this information. I'll follow your suggestions to rerun > the benchmark and report if it improved the situation. > > Best regards, > > Michel > > On 22/03/2024 at 12:18, Frédéric Nass wrote: >> Hello Michel, >

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Frédéric Nass
also very low (all OSDs are using the same recent HW/HDD). Thanks for this information. I'll follow your suggestions to rerun the benchmark and report if it improved the situation. Best regards, Michel On 22/03/2024 at 12:18, Frédéric Nass wrote: > Hello Michel, > > Pi

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Frédéric Nass
Hello Michel, Pierre also suggested checking the performance of this OSD's device(s), which can be done by running a ceph tell osd.x bench. One thing I can think of is how the scrubbing speed of this very OSD could be influenced by mclock scheduling, would the max IOPS capacity calculated by
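A hedged sketch of how that capacity can be re-checked; osd.X is a placeholder and the bench arguments (total bytes, block size, object size, object count) are an assumption:

    ceph tell osd.X bench 12288000 4096 4194304 100            # re-run the OSD bench
    ceph config show osd.X osd_mclock_max_capacity_iops_hdd    # capacity mclock derived for this OSD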

[ceph-users] Leaked clone objects

2024-03-19 Thread Frédéric Nass
Hello, Over the last few weeks, we have observed an abnormal increase in a pool's data usage (by a factor of 2). It turns out that we are hit by this bug [1]. In short, if you happened to take pool snapshots and removed them by using the following command 'ceph osd pool rmsnap

[ceph-users] Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

2024-03-16 Thread Frédéric Nass
Hello Van Diep, I read this after you got out of trouble. According to your ceph osd tree, it looks like your problems started when the ceph orchestrator created osd.29 on node 'cephgw03', because it looks very unlikely that you created a 100MB OSD on a node that's named after

[ceph-users] Re: How does mclock work?

2024-01-16 Thread Frédéric Nass
Sridhar, Thanks a lot for this explanation. It's clearer now. So at the end of the day (at least with the balanced profile) it's a lower bound with no upper limit, and a balanced distribution between client and cluster IOPS. Regards, Frédéric. -Original Message- From:

[ceph-users] Re: Ceph Nautilous 14.2.22 slow OSD memory leak?

2024-01-12 Thread Frédéric Nass
are using the vanilla Linux 4.19 LTS version. Do you think we may be suffering from the same bug? best regards, Samuel huxia...@horebdata.cn From: Frédéric Nass Date: 2024-01-12 09:19 To: huxiaoyu CC: ceph-users Subject: Re: [ceph-users] Ceph Nautilous 14.2.22 slow OSD memory

[ceph-users] Re: 3 DC with 4+5 EC not quite working

2024-01-12 Thread Frédéric Nass
Hello Torkil, We're using the same EC scheme as yours, with k=5 and m=4 over 3 DCs, with the rule below: rule ec54 { id 3 type erasure min_size 3 max_size 9 step set_chooseleaf_tries 5 step set_choose_tries 100 step take
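For illustration, a complete rule along those lines might look like the sketch below; the take step and the 3-shards-per-datacenter split are assumptions for k=5, m=4 spread over 3 datacenters:

    rule ec54 {
        id 3
        type erasure
        min_size 3
        max_size 9
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 0 type datacenter
        step chooseleaf indep 3 type host
        step emit
    }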

[ceph-users] Re: Ceph Nautilous 14.2.22 slow OSD memory leak?

2024-01-12 Thread Frédéric Nass
Hello, We've had a similar situation recently where OSDs would use way more memory than osd_memory_target and get OOM killed by the kernel. It was due to a kernel bug related to cgroups [1]. If num_cgroups below keeps increasing then you may hit this bug. $ cat /proc/cgroups | grep
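A minimal check along those lines (grepping the memory controller is an assumption):

    cat /proc/cgroups | grep memory   # watch whether num_cgroups keeps growing over time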

[ceph-users] How does mclock work?

2024-01-09 Thread Frédéric Nass
for reads. With HDD-only setups (RocksDB+WAL+Data on HDD), if mclock only considers write performance, the OSD may not take advantage of higher read performance. Can someone please shed some light on this? Best regards, Frédéric Nass Sous-direction Infrastructures et Services

[ceph-users] Re: Critical Information: DELL/Toshiba SSDs dying after 70,000 hours of operation

2023-09-01 Thread Frédéric Nass
to import foreign config by pressing the 'F' key on the next start) Many thanks to the DELL French TAMs and DELL engineering for providing this firmware in a short time. Best regards, Frédéric. - On 19 Jun 23, at 10:46, Frédéric Nass wrote: > Hello, > This message does not concer

[ceph-users] Critical Information: DELL/Toshiba SSDs dying after 70,000 hours of operation

2023-06-19 Thread Frédéric Nass
Hello, This message does not concern Ceph itself but a hardware vulnerability which can lead to permanent loss of data on a Ceph cluster equipped with the same hardware in separate fault domains. The DELL / Toshiba PX02SMF020, PX02SMF040, PX02SMF080 and PX02SMB160 SSD drives of the 13G

[ceph-users] Re: Crushmap rule for multi-datacenter erasure coding

2023-04-05 Thread Frédéric Nass
Hello Michel, What you need is: step choose indep 0 type datacenter / step chooseleaf indep 2 type host / step emit. I think you're right about the need to tweak the crush rule by editing the crushmap directly. Regards, Frédéric. - On 3 Apr 23, at 18:34, Michel Jouvin
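A hedged sketch of the manual crushmap edit workflow implied above (file names are placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit the rule in crushmap.txt, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new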

[ceph-users] Re: iscsi target lun error

2023-01-12 Thread Frédéric Nass
Hi Xiubo, Randy, This is due to 'host.containers.internal' being added to the container's /etc/hosts since Podman 4.1+. The workaround consists of either downgrading the Podman package to v4.0 (on RHEL8: dnf downgrade podman-4.0.2-6.module+el8.6.0+14877+f643d2d6) or adding the --no-hosts option
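If downgrading Podman is not an option, a hedged sketch of passing --no-hosts through the service spec's extra_container_args field is shown below; whether your cephadm version supports that field for the iscsi service is an assumption, and all names are placeholders:

    service_type: iscsi
    service_id: gw
    placement:
      hosts:
        - ceph-gw1
    spec:
      pool: iscsi-pool
    extra_container_args:
      - "--no-hosts"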

[ceph-users] Re: Increase the recovery throughput

2022-12-26 Thread Frédéric Nass
Hi Monish, You might also want to check the values of osd_recovery_sleep_* in case they are not the defaults. Regards, Frédéric. - On 12 Dec 22, at 11:32, Monish Selvaraj mon...@xaasability.com wrote: > Hi Eugen, > > We tried that already. The osd_max_backfills is at 24 and the >
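A short example of how those values can be inspected (osd.0 is a placeholder):

    ceph config show osd.0 | grep osd_recovery_sleep   # effective values on a running OSD
    ceph config set osd osd_recovery_sleep_hdd 0       # 0 removes the throttle; tune with care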

[ceph-users] Do not use VMware Storage I/O Control with Ceph iSCSI GWs!

2022-01-26 Thread Frédéric Nass
e I/O Control **and** statistics collection" on each Datastore. Regards, Frédéric. -- Kind regards, Frédéric Nass Direction du Numérique Sous-direction Infrastructures et Services Tel: 03.72.74.11.35

[ceph-users] Re: Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-01-25 Thread Frédéric Nass
On 25/01/2022 at 18:28, Casey Bodley wrote: On Tue, Jan 25, 2022 at 11:59 AM Frédéric Nass wrote: On 25/01/2022 at 14:48, Casey Bodley wrote: On Tue, Jan 25, 2022 at 4:49 AM Frédéric Nass wrote: Hello, I've just heard about storage classes and imagined how we could use them

[ceph-users] Re: Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-01-25 Thread Frédéric Nass
On 25/01/2022 at 14:48, Casey Bodley wrote: On Tue, Jan 25, 2022 at 4:49 AM Frédéric Nass wrote: Hello, I've just heard about storage classes and imagined how we could use them to migrate all S3 objects within a placement pool from an EC pool to a replicated pool (or vice-versa) for data

[ceph-users] Re: CephFS keyrings for K8s

2022-01-25 Thread Frédéric Nass
On 25/01/2022 at 12:09, Frédéric Nass wrote: Hello Michal, With cephfs and a single filesystem shared across multiple k8s clusters, you should use subvolumegroups to limit data exposure. You'll find an example of how to use subvolumegroups in the ceph-csi-cephfs helm chart [1]. Essentially

[ceph-users] Re: CephFS keyrings for K8s

2022-01-25 Thread Frédéric Nass
see any clever/safer caps to use. Regards, Frédéric. [1] https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-cephfs/values.yaml#L20 [2] https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/values.yaml#L20 [3] https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md
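A hedged example of the per-cluster isolation described in this thread, using one subvolumegroup per k8s cluster and a key scoped to it (filesystem, group and client names are assumptions; the actual ceph-csi caps are broader, see [3]):

    ceph fs subvolumegroup create cephfs k8s-cluster1
    ceph fs authorize cephfs client.k8s-cluster1 /volumes/k8s-cluster1 rw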

[ceph-users] Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-01-25 Thread Frédéric Nass
, Frédéric Nass Direction du Numérique Sous-direction Infrastructures et Services Tel: 03.72.74.11.35

[ceph-users] Re: osd_memory_target=level0 ?

2021-09-30 Thread Frédéric Nass
created with 32k alloc size then it might explain the unexpected spillover with a lot of objects in the cluster. Hope that helps, Regards, Frédéric. -- Kind regards, Frédéric Nass Direction du Numérique Sous-direction Infrastructures et Services Tel: 03.72.74.11.35 On 30/09/2021 at 10:02

[ceph-users] Re: Cephfs metadata and MDS on same node

2021-03-26 Thread Frédéric Nass
is local to the MDS the client is talking to, which in real life is impossible to achieve as you cannot pin cephfs trees and their related metadata objects to specific PGs. Best regards, Frédéric. -- Kind regards, Frédéric Nass Direction du Numérique Sous-Direction Infrastructures et Services

[ceph-users] Re: CephFS max_file_size

2021-03-25 Thread Frédéric Nass
rding to the file size) really existed. The max_file_size setting prevents users from creating files that appear to be e.g. exabytes in size, causing load on the MDS as it tries to enumerate the objects during operations like stats or deletes." Thought it might help. -- Kind regards, Fré
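For reference, a sketch of how the limit can be queried and raised (the filesystem name and size are placeholders):

    ceph fs get cephfs | grep max_file_size
    ceph fs set cephfs max_file_size 17592186044416   # 16 TiB, expressed in bytes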

[ceph-users] Re: Ceph Outage (Nautilus) - 14.2.11

2020-12-16 Thread Frédéric Nass
Hi Suresh, 24 HDDs backed by only 2 NVMes looks like a high ratio. What rings a bell in your post is "upgraded from Luminous to Nautilus" and "Elasticsearch", which mainly reads to index data, and also "memory leak". You might want to take a look at the current value of

[ceph-users] Re: OSD reboot loop after running out of memory

2020-12-16 Thread Frédéric Nass
, Frédéric Nass wrote: Hi Stefan, This has me thinking that the issue your cluster may be facing is probably with bluefs_buffered_io set to true, as this has been reported to induce excessive swap usage (and OSDs flapping or OOMing as consequences) in some versions starting from Nautilus I

[ceph-users] Re: OSD reboot loop after running out of memory

2020-12-16 Thread Frédéric Nass
state. Thanks, Stefan On 12/14/20, 3:35 PM, "Frédéric Nass" wrote: Hi Stefan, Initial data removal could also have resulted from a snapshot removal leading to OSDs OOMing and then pg remappings leading to more removals after OOMed OSDs rejoined the clus

[ceph-users] Re: OSD reboot loop after running out of memory

2020-12-14 Thread Frédéric Nass
I forgot to mention: "If, with bluefs_buffered_io=false, the %util is over 75% most of the time ** during data removal (like snapshot removal) **, then you'd better change it to true." Regards, Frédéric. On 14/12/2020 at 21:35, Frédéric Nass wrote: Hi Stefan, Initial data rem
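A minimal sketch of those two observations (osd.0 and the iostat interval are placeholders):

    ceph daemon osd.0 config get bluefs_buffered_io   # current value on a running OSD
    iostat -x 5                                       # watch %util on the OSD data devices during removal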

[ceph-users] Re: OSD reboot loop after running out of memory

2020-12-14 Thread Frédéric Nass
Hi Stefan, Initial data removal could also have resulted from a snapshot removal leading to OSDs OOMing and then PG remappings leading to more removals after OOMed OSDs rejoined the cluster, and so on. As mentioned by Igor: "Additionally there are users' reports that recent default value's

[ceph-users] Re: NoSuchKey on key that is visible in s3 list/radosgw bk

2020-11-23 Thread Frédéric Nass
Hi Denis, You might want to look at rgw_gc_obj_min_wait from [1] and try increasing the default value of 7200s (2 hours) to whatever suits your needs (< 2^64). Just bear in mind that at some point you'll have to get these objects processed by the gc, or manually through the API [2]. One thing that
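A hedged sketch of the knob and tools mentioned; the value is only an example, and depending on the Ceph version the option may need to be set in ceph.conf rather than the config database:

    ceph config set client.rgw rgw_gc_obj_min_wait 86400   # delay GC of deleted objects (seconds)
    radosgw-admin gc list --include-all                    # inspect pending GC entries
    radosgw-admin gc process                               # trigger GC manually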

[ceph-users] Re: OverlayFS with Cephfs to mount a snapshot read/write

2020-11-13 Thread Frédéric Nass
s _do_ if that's an > option for you. > -- Jeff > > On Mon, 2020-11-09 at 19:21 +0100, Frédéric Nass wrote: >> I feel lucky to have you on this one. ;-) Do you mean applying a >> specific patch on 3.10 kernel? Or is this one too old to have it working >> anyways. >>

[ceph-users] Re: OverlayFS with Cephfs to mount a snapshot read/write

2020-11-11 Thread Frédéric Nass
kernels have that patch (so far). Newer RHEL8 kernels _do_ if that's an option for you. -- Jeff On Mon, 2020-11-09 at 19:21 +0100, Frédéric Nass wrote: I feel lucky to have you on this one. ;-) Do you mean applying a specific patch on 3.10 kernel? Or is this one too old to have it working anyways

[ceph-users] Re: OverlayFS with Cephfs to mount a snapshot read/write

2020-11-09 Thread Frédéric Nass
I feel lucky to have you on this one. ;-) Do you mean applying a specific patch on the 3.10 kernel? Or is this one too old to have it working anyway? Frédéric. On 09/11/2020 at 19:07, Luis Henriques wrote: Frédéric Nass writes: Hi Luis, Thanks for your help. Sorry I forgot about

[ceph-users] Re: OverlayFS with Cephfs to mount a snapshot read/write

2020-11-09 Thread Frédéric Nass
Luis, I gave RHEL 8 and kernel 4.18 a try and it's working perfectly! \o/ Same commands, same mount options. Does anyone know why, and whether there's any chance I can have this working with CentOS/RHEL 7 and the 3.10 kernel? Best regards, Frédéric. On 09/11/2020 at 15:04, Frédéric Nass wrote

[ceph-users] Re: OverlayFS with Cephfs to mount a snapshot read/write

2020-11-09 Thread Frédéric Nass
ile: upperdir user.name="upperdir" Are you able to modify the content of a snapshot directory using overlayfs on your side? Frédéric. On 09/11/2020 at 12:39, Luis Henriques wrote: Frédéric Nass writes: Hello, I would like to use a cephfs snapshot as a read/write volume without ha

[ceph-users] OverlayFS with Cephfs to mount a snapshot read/write

2020-11-09 Thread Frédéric Nass
Hello, I would like to use a cephfs snapshot as a read/write volume without having to clone it first, as the cloning operation is - if I'm not mistaken - still inefficient as of now. This is for a data restore use case with a Moodle application needing a writable data directory to start. The
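A hedged sketch of the overlay mount being discussed, with a read-only cephfs snapshot as the lower layer (all paths are placeholders; upperdir and workdir must live on the same writable filesystem):

    mkdir -p /upper /work /merged
    mount -t overlay overlay \
        -o lowerdir=/mnt/cephfs/data/.snap/snap1,upperdir=/upper,workdir=/work \
        /merged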