[ceph-users] Re: Snap_schedule does not always work.

2023-09-27 Thread Milind Changire
The filesystem path specified for scheduling snapshots is incorrect; see below ... On Thu, Sep 28, 2023 at 9:52 AM Kushagr Gupta wrote: > > Hi Milind, Team > > Thank you for the response, @Milind. > > >>Snap-schedule no longer accepts a --subvol argument, > Thank you for the information. > >
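
For anyone hitting the same thing, a minimal sketch of scheduling against the subvolume's real filesystem path instead of a --subvol flag (volume/subvolume names and the uuid are hypothetical):

  # resolve the subvolume's actual path, e.g. /volumes/_nogroup/sv1/<uuid>
  ceph fs subvolume getpath myfs sv1
  # schedule an hourly snapshot on that path, then verify
  ceph fs snap-schedule add /volumes/_nogroup/sv1/<uuid> 1h
  ceph fs snap-schedule list /volumes/_nogroup/sv1/<uuid>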

[ceph-users] Re: cephfs health warn

2023-09-27 Thread Venky Shankar
Hi Ben, On Tue, Sep 26, 2023 at 6:02 PM Ben wrote: > > Hi, > see below for details of the warnings. > the cluster is running 17.2.5. the warnings have been around for a while. > one concern of mine is num_segments growing over time. Were any config changes related to trimming done? A slow
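
A hedged sketch of how to check the trim-related settings being asked about here (option names as in recent releases; this only queries, it changes nothing):

  ceph config get mds mds_log_max_segments
  ceph config dump | grep -i mds_log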

[ceph-users] Re: Snap_schedule does not always work.

2023-09-27 Thread Milind Changire
Hello Kushagr, Snap-schedule no longer accepts a --subvol argument, so it's not easily possible to schedule snapshots for subvolumes. Could you tell us the commands you used to schedule snapshots for subvolumes? -- Milind On Wed, Sep 27, 2023 at 11:13 PM Kushagr Gupta wrote: > > Hi Teams, > >

[ceph-users] Re: After upgrading from 17.2.6 to 18.2.0, OSDs are very frequently restarting due to livenessprobe failures

2023-09-27 Thread sbengeri
Hi Igor, I have copied three OSD logs to https://drive.google.com/file/d/1aQxibFJR6Dzvr3RbuqnpPhaSMhPSL--F/view?usp=sharing Hopefully they include some meaningful information. Thank you. Sudhin

[ceph-users] Re: Not able to find a standardized restoration procedure for subvolume snapshots.

2023-09-27 Thread Gregory Farnum
Unfortunately, there’s not any such ability. We are starting long-term work on making this smoother, but CephFS snapshots are read-only and there’s no good way to do a constant-time or low-time “clone” operation, so you just have to copy the data somewhere and start work on it from that position
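
A minimal sketch of the copy-based restore Greg describes, assuming a CephFS mount at /mnt/cephfs and hypothetical subvolume/snapshot names (snapshots are exposed read-only under the .snap directory):

  cp -a /mnt/cephfs/volumes/_nogroup/sv1/<uuid>/.snap/mysnap/. \
        /mnt/cephfs/volumes/_nogroup/sv1/<uuid>/

For managed subvolumes, ceph fs subvolume snapshot clone does the equivalent copy server-side; per the above, it is still a full copy rather than a constant-time clone.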

[ceph-users] Snap_schedule does not always work.

2023-09-27 Thread Kushagr Gupta
Hi Teams, Ceph version: Quincy, Reef. OS: AlmaLinux 8. Issue: snap_schedule doesn't create the scheduled snapshots consistently. Description: Hi team, We are currently working with a 3-node Ceph cluster and are exploring the scheduled snapshot capability of the ceph-mgr module.
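
A quick, hedged way to inspect the schedule state when snapshots fail to appear (path is hypothetical):

  ceph mgr module ls | grep snap_schedule    # confirm the module is enabled
  ceph fs snap-schedule status /volumes/_nogroup/sv1/<uuid>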

[ceph-users] Re: Not able to find a standardized restoration procedure for subvolume snapshots.

2023-09-27 Thread Kushagr Gupta
Hi Team, Any update on this? Thanks and Regards, Kushagra Gupta On Thu, Sep 14, 2023 at 9:19 AM Kushagr Gupta wrote: > Hi Team, > > Any update on this? > > Thanks and Regards, > Kushagra Gupta > > On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta <kushagrguptasps@gmail.com> wrote: > >>

[ceph-users] Re: cephfs health warn

2023-09-27 Thread Ben
Some further investigation of the three MDS daemons with the trimming-behind problem: logs captured over two days show that some log segments are stuck in the trimming process. It looks like a bug in log segment trimming? Any thoughts? ==log capture 9/26: debug 2023-09-26T16:50:59.004+
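
A hedged way to watch segment counts directly on the MDS host (daemon name is hypothetical; the mds_log section of perf dump carries the segment and trim counters):

  ceph daemon mds.myfs.node1.abcdef perf dump mds_log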

[ceph-users] Re: Quincy NFS ingress failover

2023-09-27 Thread John Mulligan
On Tuesday, September 26, 2023 6:00:23 AM EDT Ackermann, Christoph wrote: > Dear list members, > > after upgrading to Reef (18.2.0) I spent some time with CephFS, NFS & > HA (Ingress). I can confirm that ingress (count either 1 or 2) works well > IF only ONE backend server is configured. But
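
For context, a hedged sketch of the kind of NFS + ingress spec being discussed (service ids, ports and VIP are hypothetical), applied with ceph orch apply -i <file>:

  service_type: nfs
  service_id: mynfs
  placement:
    count: 1          # per the report above, more than one backend breaks failover
  spec:
    port: 12049
  ---
  service_type: ingress
  service_id: nfs.mynfs
  placement:
    count: 2
  spec:
    backend_service: nfs.mynfs
    frontend_port: 2049
    monitor_port: 9049
    virtual_ip: 192.0.2.10/24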

[ceph-users] Re: CVE-2023-43040 - Improperly verified POST keys in Ceph RGW?

2023-09-27 Thread Gregory Farnum
We discussed this in the CLT today and Casey can talk more about the impact and technical state of affairs. This was disclosed on the security list and it’s rated as a bug that did not require hotfix releases due to the limited escalation scope. -Greg On Wed, Sep 27, 2023 at 1:37 AM Christian

[ceph-users] Ceph leadership team notes 9/27

2023-09-27 Thread Gregory Farnum
Hi everybody, The CLT met today as usual. We only had a few topics under discussion: * the User + Dev relaunch went off well! We’d like reliable recordings and have found Jitsi to be somewhat glitchy; Laura will communicate about workarounds for that while we work on a longer-term solution

[ceph-users] Dashboard daemon logging not working

2023-09-27 Thread Thomas Bennett
Hey, Has anyone else had issues with exploring Loki after deploying the Ceph monitoring services? I'm running 17.2.6. When clicking on the Ceph dashboard daemon logs (i.e. Cluster -> Logs -> Daemon Logs), it took me through to an embedded

[ceph-users] Specify priority for active MGR and MDS

2023-09-27 Thread Nicolas FONTAINE
Hi everyone, Is there a way to specify which MGR and which MDS should be the active one? Thanks, Nicolas.
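
As a hedged aside: there does not appear to be a direct priority setting, only workarounds (daemon and filesystem names hypothetical):

  ceph mgr fail                              # fail the active mgr so a standby takes over
  ceph config set mds.a mds_join_fs cephfs   # prefer this MDS for the given filesystem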

[ceph-users] Cephadm specs application order

2023-09-27 Thread Luis Domingues
Hi, We are playing a little bit with OSD specs on a test cluster, and we ended up having nodes that match more than one OSD spec (currently 4 or 5). There is something we have not figured out yet: is there any order in which cephadm will apply the specs? Are the specs sorted in any way inside cephadm? We
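
A hedged way to dump exactly which specs cephadm holds, to compare what each node matches (output is YAML, one document per spec):

  ceph orch ls --export
  ceph orch ls osd --export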

[ceph-users] CVE-2023-43040 - Improperly verified POST keys in Ceph RGW?

2023-09-27 Thread Christian Rohmann
Hey Ceph-users, I just noticed there is a post to oss-security (https://www.openwall.com/lists/oss-security/2023/09/26/10) about a security issue with Ceph RGW, signed by IBM/Red Hat and including a patch by DO. I also raised an issue on the tracker (https://tracker.ceph.com/issues/63004)

[ceph-users] Re: set proxy for ceph installation

2023-09-27 Thread Eugen Block
You'll need a containers.conf file: # cat /etc/containers/containers.conf [engine] env = ["http_proxy=<host>:<port>", "https_proxy=<host>:<port>", "no_proxy=localhost"] Restarting the container should apply the change. Make sure you also have the correct no_proxy settings, for example so the ceph servers don't
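
A slightly fuller sketch of the same file, with a hypothetical proxy host and port:

  # /etc/containers/containers.conf
  [engine]
  env = ["http_proxy=http://proxy.example.com:3128",
         "https_proxy=http://proxy.example.com:3128",
         "no_proxy=localhost,127.0.0.1,.mycluster.internal"]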

[ceph-users] Re: set proxy for ceph installation

2023-09-27 Thread Dario Graña
Hi Majid, You can try to manually execute the command /usr/bin/podman pull quay.io/ceph/ceph:v17 and start debugging the problem from there. Regards! On Tue, Sep 26, 2023 at 3:42 PM Majid Varzideh wrote: > hi friends > i have deployed my first node in cluster.
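
To test the proxy path in isolation, a hedged one-liner (proxy URL hypothetical):

  https_proxy=http://proxy.example.com:3128 /usr/bin/podman pull quay.io/ceph/ceph:v17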