[ceph-users] Re: ceph-mgr client.0 error registering admin socket command: (17) File exists

2024-02-26 Thread Eugen Block
Hi, I see these messages regularly but haven't looked too deeply into the cause. It appears to be related to short interruptions like log rotation or an mgr failover. I think they're harmless. Regards, Eugen Quoting Denis Polom: Hi, running Ceph Quincy 17.2.7 on Ubuntu Focal LTS,

[ceph-users] Re: Ceph & iSCSI

2024-02-26 Thread Xiubo Li
Hi Michael, Please see the previous threads about the same question: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/GDJJL7VSDUJITPM3JV7RCVXVOIQO2CAN/ https://www.spinics.net/lists/ceph-users/msg73969.html Thanks - Xiubo On 2/27/24 11:22, Michael Worsham wrote: I was

[ceph-users] Re: OSD with dm-crypt?

2024-02-26 Thread Michael Worsham
I was setting up the Ceph cluster via this URL (https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/) and didn't know if there was a way to do it via the "ceph orch daemon add osd ceph-osd-01:/dev/sdb" command or not. Is it possible to set the OSD to encryption

[ceph-users] Re: OSD with dm-crypt?

2024-02-26 Thread Alex Gorbachev
If you are using a service spec, just set encrypted: true. If using ceph-volume, pass this flag: --dmcrypt. You can verify similarly to https://smithfarm-thebrain.blogspot.com/2020/03/how-to-verify-that-encrypted-osd-is.html -- Alex Gorbachev ISS/Storcium On Mon, Feb 26, 2024 at 10:25 PM
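A minimal sketch of what such a service spec could look like, reusing the host and device from the question above (the exact layout should be checked against your release; it would be applied with something like 'ceph orch apply -i osd-spec.yaml'):

  service_type: osd
  service_id: encrypted-osds
  placement:
    hosts:
      - ceph-osd-01
  spec:
    data_devices:
      paths:
        - /dev/sdb
    encrypted: true

With plain ceph-volume the rough equivalent would be 'ceph-volume lvm create --dmcrypt --data /dev/sdb'.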

[ceph-users] OSD with dm-crypt?

2024-02-26 Thread Michael Worsham
Is there a how-to document or cheat sheet on how to enable OSD encryption using dm-crypt? -- Michael This message and its attachments are from Data Dimensions and are intended only for the use of the individual or entity to which it is addressed, and may contain information that is

[ceph-users] Ceph & iSCSI

2024-02-26 Thread Michael Worsham
I was reading on the Ceph site that iSCSI has not been under active development since November 2022. Why is that? https://docs.ceph.com/en/latest/rbd/iscsi-overview/ -- Michael This message and its attachments are from Data Dimensions and are intended only for the use of the individual or

[ceph-users] Sata SSD trim latency with (WAL+DB on NVME + Sata OSD)

2024-02-26 Thread Özkan Göksu
Hello. With SSD drives that lack tantalum capacitors, Ceph faces trim latency on every write. I wonder whether the behavior is the same if we place the WAL+DB on NVMe drives with tantalum capacitors? Do I need to use NVMe + SAS SSD to avoid this latency issue? Best regards.

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread Adam King
In regards to > > From the reading you gave me I have understood the following: > 1 - Set osd_memory_target_autotune to true then set > autotune_memory_target_ratio to 0.2 > 2 - Or do the math. For my setup I have 384 GB per node, each node has 4 > NVMe disks of 7.6 TB, 0.2 of memory is 19.5 GB. So
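A hedged sketch of how option 1 from the quoted message could be expressed through the central config on a cephadm-managed cluster (key names should be verified against your release):

  ceph config set osd osd_memory_target_autotune true
  ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

For option 2 (doing the math yourself) the per-OSD target could instead be set directly, e.g. 'ceph config set osd osd_memory_target 19G' for the roughly 19.5 GB per OSD mentioned in the quoted message.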

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-26 Thread Yuri Weinstein
Thank you all! We want to merge the PR with whitelisting added https://github.com/ceph/ceph/pull/55717 and will start the 16.2.15 build/release afterward. On Mon, Feb 26, 2024 at 8:25 AM Laura Flores wrote: > Thank you Junior for your thorough review of the RADOS suite. Aside from a > few

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-26 Thread Laura Flores
Thank you Junior for your thorough review of the RADOS suite. Aside from a few remaining warnings in the final run that could benefit from whitelisting, these are not blockers. Rados-approved. On Mon, Feb 26, 2024 at 9:29 AM Kamoltat Sirivadhna wrote: > details of RADOS run analysis: > >

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-26 Thread Kamoltat Sirivadhna
details of RADOS run analysis: yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi 1. https://tracker.ceph.com/issues/64455 task/test_orch_cli: Health check

[ceph-users] Re: Seperate metadata pool in 3x MDS node

2024-02-26 Thread Özkan Göksu
Hello Anthony, The hardware is a second-hand build and does not have U.2 slots. U.2 servers cost 3x-4x more. I mean PCI-E "MZ-PLK3T20". I have to buy SFP cards, and 25G is only $30 more than 10G, so why not. Yes, I'm thinking pinned as (clients > rack MDS). I don't have problems with building and I

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-26 Thread Kamoltat Sirivadhna
RADOS approved On Wed, Feb 21, 2024 at 11:27 AM Yuri Weinstein wrote: > Still seeking approvals: > > rados - Radek, Junior, Travis, Adam King > > All other product areas have been approved and are ready for the release > step. > > Pls also review the Release Notes:

[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Robert Sander
On 2/26/24 15:24, Michael Worsham wrote: So how would I be able to put in configurations like this into it? [global] fsid = 46620486-b8a6-11ee-bf23-6510c4d9efa7 mon_host = [v2:10.20.27.10:3300/0,v1:10.20.27.10:6789/0] [v2:10.20.27.11:3300/0,v1:10.20.27.11:6789/0] osd

[ceph-users] ceph-mgr client.0 error registering admin socket command: (17) File exists

2024-02-26 Thread Denis Polom
Hi, running Ceph Quincy 17.2.7 on Ubuntu Focal LTS, the ceph-mgr service reports the following errors: client.0 error registering admin socket command: (17) File exists I don't use any extra mgr configuration: mgr advanced mgr/balancer/active true mgr advanced

[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Michael Worsham
So how would I be able to put in configurations like this into it? [global] fsid = 46620486-b8a6-11ee-bf23-6510c4d9efa7 mon_host = [v2:10.20.27.10:3300/0,v1:10.20.27.10:6789/0] [v2:10.20.27.11:3300/0,v1:10.20.27.11:6789/0] osd pool default size = 3 osd pool

[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Robert Sander
On 2/26/24 14:24, Michael Worsham wrote: I deployed a Ceph Reef cluster using cephadm. When it comes to the ceph.conf file, which file should I be editing for making changes to the cluster - the one running under the docker container or the local one on the Ceph monitors? Neither of them. You
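On a cephadm-managed cluster such settings normally go into the central configuration database rather than into either ceph.conf. A hedged sketch, using an option quoted elsewhere in this thread (the conf path is only an example):

  # set a single option centrally
  ceph config set global osd_pool_default_size 3
  # or import an existing ceph.conf into the monitors' config database
  ceph config assimilate-conf -i /etc/ceph/ceph.conf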

[ceph-users] Cephadm and Ceph.conf

2024-02-26 Thread Michael Worsham
I deployed a Ceph Reef cluster using cephadm. When it comes to the ceph.conf file, which file should I be editing to make changes to the cluster - the one running under the docker container or the local one on the Ceph monitors? -- Michael This message and its attachments are from Data

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
Hi; So that was it: create the initial-ceph.conf and use the --config option. Now all images come from the local registry. Thank you all for your help. Regards. On Mon, Feb 26, 2024 at 14:09, wodel youchi wrote: > I've read that, but I couldn't find how to use it. > Should I use the --config

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
I've read that, but I couldn't find how to use it. Should I use the --config CONFIG_FILE option? On Mon, Feb 26, 2024 at 13:59, Robert Sander wrote: > Hi, > > On 2/26/24 13:22, wodel youchi wrote: > > > > No, it didn't work, the bootstrap is still downloading the images from quay. > > For the

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread John Mulligan
> > I have another problem, the local registry. I deployed a local registry > with the required images, then I used cephadm-ansible to prepare my hosts > and inject the local registry url into /etc/container/registry.conf file > > Then I tried to deploy using this command on the admin node: >

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread Robert Sander
Hi, On 2/26/24 13:22, wodel youchi wrote: No, it didn't work, the bootstrap is still downloading the images from quay. For the image locations of the monitoring stack you have to create an initial ceph.conf as mentioned in the chapter you referred to earlier:
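As a rough illustration only (the config keys are the ones documented for cephadm's monitoring stack; the image paths and tags below are placeholders pointing at the local registry mentioned in this thread), such an initial ceph.conf could look like:

  [mgr]
  mgr/cephadm/container_image_prometheus = 192.168.2.36:4000/prometheus/prometheus:<tag>
  mgr/cephadm/container_image_grafana = 192.168.2.36:4000/ceph/ceph-grafana:<tag>
  mgr/cephadm/container_image_alertmanager = 192.168.2.36:4000/prometheus/alertmanager:<tag>
  mgr/cephadm/container_image_node_exporter = 192.168.2.36:4000/prometheus/node-exporter:<tag>

and would then be passed to bootstrap via --config initial-ceph.conf.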

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
Hi, No, it didn't work, the bootstrap is still downloading the images from quay. PS: My local registry does not require any login/password authentication; I used fake ones since it's mandatory to provide them. cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --registry-url 192.168.2.36:4000

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread Robert Sander
Hi, On 26.02.24 11:08, wodel youchi wrote: Then I tried to deploy using this command on the admin node: cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --mon-ip 10.1.0.23 --cluster-network 10.2.0.0/16 After the bootstrap I found that it still downloads the images from the internet,

[ceph-users] Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

2024-02-26 Thread Matthew Leonard (BLOOMBERG/ 120 PARK)
Glad to hear it all worked out for you! From: nguyenvand...@baoviet.com.vn At: 02/26/24 05:32:32 UTC-5:00 To: ceph-users@ceph.io Subject: [ceph-users] Re: [Urgent] Ceph system Down, Ceph FS volume in recovering Dear Mr Eugen, Mr Matthew, Mr David, Mr Anthony My System is UP. Thank you so

[ceph-users] Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

2024-02-26 Thread nguyenvandiep
Dear Mr Eugen, Mr Matthew, Mr David, Mr Anthony, My system is UP. Thank you so much. We got so much support from all of you. Amazing, kind support from top professionals in Ceph. Hope we have a chance to cooperate in the future. And if you travel to Vietnam in the future, let me know. I'll be your

[ceph-users] Re: What exactly does the osd pool repair funtion do?

2024-02-26 Thread Eugen Block
Hi, I'm not a dev, but as I understand it, the command would issue a 'pg repair' on each (primary) PG of the provided pool. It might be useful if you have multiple (or even many) inconsistent PGs in a pool. But I've never used it and this is just a hypothesis. Regards, Eugen Quoting
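A hedged illustration of the two levels involved, assuming a release that ships the pool-wide command (placeholders in angle brackets):

  # repair a single inconsistent PG
  ceph pg repair <pgid>
  # request a repair across every (primary) PG of a pool
  ceph osd pool repair <poolname>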

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread wodel youchi
Thank you all for your help. @Adam From the reading you gave me I have understood the following: 1 - Set osd_memory_target_autotune to true then set autotune_memory_target_ratio to 0.2 2 - Or do the math. For my setup I have 384 GB per node, each node has 4 NVMe disks of 7.6 TB, 0.2 of memory is

[ceph-users] Re: Scrub stuck and 'pg has invalid (post-split) stat'

2024-02-26 Thread Eugen Block
Hi, thanks for the context. Was there any progress over the weekend? The hanging commands seem to be MGR-related, and there's only one in your cluster according to your output. Can you deploy a second one manually, then adopt it with cephadm? Can you add 'ceph versions' as well? Quoting

[ceph-users] Re: pg repair doesn't fix "got incorrect hash on read" / "candidate had an ec hash mismatch"

2024-02-26 Thread Eugen Block
Hi, I think your approach makes sense. But I'm wondering if moving only the problematic PGs to different OSDs could have an effect as well. I assume that moving the 2 PGs is much quicker than moving all BUT those 2 PGs. If that doesn't work you could still fall back to draining the
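If the cluster can use upmap, moving just the two problematic PGs could, as a rough sketch, look like this (the PG and OSD ids are placeholders, not taken from the thread):

  # remap <pgid> so the replica/shard on osd.<from> is placed on osd.<to> instead
  ceph osd pg-upmap-items <pgid> <from-osd-id> <to-osd-id>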

[ceph-users] Re: Is a direct Octopus to Reef Upgrade Possible?

2024-02-26 Thread Eugen Block
Hi, no, you can't go directly from O to R, you need to upgrade to Q first. Technically it might be possible, but it's not supported. Your approach of first adopting the cluster with cephadm is my preferred way as well. Regards, Eugen Quoting "Alex Hussein-Kershaw (HE/HIM)": Hi ceph-users,
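Once the cluster has been adopted by cephadm, the staged upgrade could, as a hedged sketch, look like this (the version numbers are only examples, not a recommendation from this thread):

  # Octopus -> Quincy
  ceph orch upgrade start --ceph-version 17.2.7
  ceph orch upgrade status
  # then, once healthy on Quincy: Quincy -> Reef
  ceph orch upgrade start --ceph-version 18.2.1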

[ceph-users] Re: ambigous mds behind on trimming and slowops (ceph 17.2.5 and rook operator 1.10.8)

2024-02-26 Thread Dhairya Parmar
Hi, May I know which version is being used in the cluster? It started about 2 hours after one of the active MDS daemons crashed. Do we know the reason for the crash? Please share more info; `ceph -s` and MDS logs should reveal more insights. -- Dhairya Parmar, Associate Software Engineer,