[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
I was able to get it to work - while the Samba vfs_ceph documentation showed the format of /non-mounted/cephfs/path, it is really much simpler - it already picks up the file system, so the path is the CephFS path, or in my case, since I want the root, it is "/". While I still need to configure t
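For anyone following along, here is a minimal sketch of a classic vfs_ceph share in smb.conf that illustrates the point about the path being a CephFS-internal path; the share name, keyring user, and config path are illustrative and not taken from the thread:
```
[cephfs-root]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
```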

[ceph-users] Re: MDS cache always increasing

2024-09-03 Thread Alexander Patrakov
Ok, thanks for the clarification. This does disprove my theory. On Wed, Sep 4, 2024 at 12:30 AM Sake Ceph wrote: > > But the client which is doing the rsync, doesn't hold any caps after the > rsync. Cephfs-top shows 0 caps. Even a system reboot of the client doesn't > make a change. > > Kind re

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
Thanks - I have the .smb pool, and the container is picking up the config. After fixing a few errors in my config.json (I had an underscore between vfs and objects), I am connecting to the SMB server but not able to get to the share; I am not sure if I have something misconfigured with permissi
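A rough sketch of the relevant share fragment of a sambacc-style config.json, mainly to show that the option key is "vfs objects" with a space rather than an underscore; the surrounding structure and values are from memory and should be checked against the sambacc documentation:
```
"shares": {
  "share1": {
    "options": {
      "path": "/",
      "vfs objects": "ceph"
    }
  }
}
```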

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Bailey Allison
I'm curious whether you've done any testing with the vfs_ceph_snapshots module with this as well? It would be nice to be able to leverage shadow copy on Windows clients using native CephFS snapshots. I know it is probably out of scope for where the project currently is, but I must admit my
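For reference, the classic non-orchestrated way to expose CephFS snapshots as Windows shadow copies is to stack Samba's ceph_snapshots VFS module after ceph; whether the new smb service can be configured this way is exactly the open question here. A minimal sketch with an illustrative share name:
```
[cephfs-root]
    path = /
    vfs objects = ceph ceph_snapshots
```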

[ceph-users] Re: lifecycle policy on non-replicated buckets

2024-09-03 Thread Christopher Durham
Soumya, thank you for responding. What release was this fixed in? I am using 18.2.2 and am about to go to 18.2.4. Yes, the sync policy shows on both zones when doing:
# aws --endpoint https://master.fqdn s3api get-bucket-lifecycle-configuration --bucket
# aws --endpoint https://slave.fqdn s3

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread John Mulligan
On Tuesday, September 3, 2024 5:00:20 PM EDT Robert W. Eckert wrote: > When I try to create the .smb pool, I get an error message: > > # ceph osd pool create .smb > pool names beginning with . are not allowed Ah, I was writing my reply from memory and forgot that to create a pool like that you

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
When I try to create the .smb pool, I get an error message:
# ceph osd pool create .smb
pool names beginning with . are not allowed
I assume I can just change to using a pool without the leading period. When I do the shares, how do I format the share path? Does the ceph file system get mounted

[ceph-users] Re: SMB Service in Squid

2024-09-03 Thread John Mulligan
On Tuesday, September 3, 2024 3:42:29 PM EDT Robert W. Eckert wrote: > I have upgraded my home cluster to 19.1.0 and wanted to try out the SMB > orchestration features to improve my hacked SMB shares using CTDB and SMB > services on each host. Hi there, thanks for trying out the new SMB stuff

[ceph-users] SMB Service in Squid

2024-09-03 Thread Robert W. Eckert
I have upgraded my home cluster to 19.1.0 and wanted to try out the SMB orchestration features to improve my hacked SMB shares using CTDB and SMB services on each host. My smb.yaml file looks like:
service_type: smb
service_id: home
placement:
  hosts:
    - HOST1
    - HOST2
    - HOST3
    -
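For comparison, a sketch of a fuller Squid-era smb service spec; cluster_id and config_uri are recalled from the cephadm SMB documentation and should be double-checked, and all values are illustrative:
```
service_type: smb
service_id: home
placement:
  hosts:
    - HOST1
    - HOST2
    - HOST3
cluster_id: home
config_uri: rados://.smb/home/config.json
```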

[ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-03 Thread Herbert Faleiros
On 03/09/2024 03:35, Robert Sander wrote: Hi, Hello, On 9/2/24 20:24, Herbert Faleiros wrote: /usr/bin/docker: stderr ceph-volume lvm batch: error: /dev/sdb1 is a partition, please pass LVs or raw block devices A Ceph OSD nowadays needs a logical volume because it stores crucial metadat
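The usual workaround when ceph-volume rejects a partition is to clean the device and hand the whole disk (or an LV) to the orchestrator; a hedged sketch with illustrative host and device names:
```
# zap the device via the orchestrator so old partitions/signatures are removed
ceph orch device zap host1 /dev/sdb --force
# then create the OSD on the whole raw device
ceph orch daemon add osd host1:/dev/sdb
```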

[ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-03 Thread Fox, Kevin M
I thought BlueStore stored that stuff in non-LVM mode? From: Robert Sander Sent: Monday, September 2, 2024 11:35 PM To: ceph-users@ceph.io Subject: [ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted Check twice before you clic

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-03 Thread J. Eric Ivancich
Still looking at the rgw failures. One caught in rgw testing looks to be in core, a valgrind mismatched delete[] in libceph, and I think this squid PR is addressing: https://github.com/ceph/ceph/pull/58991 Here's the valgrind error: https://qa-proxy.ceph.com/teuthology/yuriw-2024-08-29_20:04:

[ceph-users] Re: MDS cache always increasing

2024-09-03 Thread Sake Ceph
But the client which is doing the rsync doesn't hold any caps after the rsync. cephfs-top shows 0 caps. Even a system reboot of the client doesn't make a change. Kind regards, Sake > On 03-09-2024 04:01 CEST, Alexander Patrakov wrote: > > > MDS cannot release an inode if a client has ca
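A hedged way to double-check where caps are actually held if cephfs-top and the MDS seem to disagree; rank 0 is assumed here and the jq filter is only for readability:
```
# per-client session list; each entry includes the number of caps the session holds
ceph tell mds.0 session ls | jq '.[] | {client: .id, num_caps}'
# overall MDS cache usage versus the configured memory limit
ceph tell mds.0 cache status
```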

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Tim Holloway
You may find this interesting. I'm running Pacific from the Red Hat repo and Prometheus was given its own discrete container image, not the generic Ceph one. Rather than build custom Prometheus, Red Hat used the Prometheus project's own containers. In fact, it has 3: one for Prometheus, one for P

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Tim Holloway
Yeah. Although taming the Prometheus logs is on my list, I'm still fuzzy on its details. For your purposes, Docker and Podman can be considered as equivalent. I also run under Podman, incidentally. If the port isn't open inside the container, then blame Prometheus. I'd consider bumping its loggin

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Matthew Vernon
Hi, On 03/09/2024 14:27, Tim Holloway wrote: FWIW, I'm using podman not docker. The netstat command is not available in the stock Ceph containers, but the "ss" command is, so use that to see if there is in fact a process listening on that port. I have done this, and there's nothing listening
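For anyone wanting to reproduce that check from the host, a sketch using podman, assuming a single mgr container on the node; ss is present in the stock containers even though netstat is not:
```
# find the mgr container and list its TCP listeners; 8765 is the service-discovery port
podman exec $(podman ps -q --filter name=mgr) ss -tlnp | grep 8765
```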

[ceph-users] Re: The journey to CephFS metadata pool’s recovery

2024-09-03 Thread Frédéric Nass
Have you tried to hexdump the actual NVMe devices instead of the testdisk images? `hexdump -C -n 4096 /dev/nvmexxx` should show the LVM LABELONE header with the PV UUID, and `hexdump -C -n 4096 /dev/ceph-454751de-44ab-4aa6-b3ae-50abc22250b3/osd-block-b7745d63-0bf8-4ba4-9274-e034f1c15d7b` should show blue

[ceph-users] Re: [EXTERNAL] Re: Bucket Notifications v2 & Multisite Redundancy

2024-09-03 Thread Yuval Lifshitz
Responded inline. On Tue, Sep 3, 2024 at 3:05 PM Alex Hussein-Kershaw (HE/HIM) <alex...@microsoft.com> wrote: > Hi Yuval, > > Thanks for the response. I did manage to disable the feature, however I > hope you can understand my hesitancy to design our move away from pubsub > onto a deprecated rep

[ceph-users] Re: v19.1.1 Squid RC1 released

2024-09-03 Thread Eugen Block
Hi, since you pointed out the CephFS features, I wanted to raise some awareness of snapshot scheduling/creation before releasing 19.2.0: https://tracker.ceph.com/issues/67790 I tried 19.1.1 and am failing to create snapshots: ceph01:~ # ceph fs subvolume snapshot create cephfs subvol1

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Tim Holloway
While I generally don't recommend getting down and dirty with the containers in Ceph, if you're going to build your own, well, that's different. When I have a container and the expected port isn't listening, the first thing I do is see if it's really listening and internal-only or truly not listen

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-03 Thread Venky Shankar
Hi Yuri, (cc Rachana) On Fri, Aug 30, 2024 at 8:13 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/67779#note-1 > > Release Notes - TBD > Gibba upgrade -TBD > LRC upgrade - TBD > > It was decided and agreed upon that there would be li

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Matthew Vernon
On 03/09/2024 13:33, Eugen Block wrote: Oh that's interesting :-D I have no explanation for that, except maybe some flaw in your custom images? Or in the service specs? Not sure, to be honest... So obviously it _could_ be something in our images, but we're using Ceph's published .debs (18.2.2

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Eugen Block
Oh that's interesting :-D I have no explanation for that, except maybe some flaw in your custom images? Or in the service specs? Not sure, to be honest... Zitat von Matthew Vernon : Hi, On 03/09/2024 11:46, Eugen Block wrote: Do you see the port definition in the unit.meta file? Oddly:

[ceph-users] quincy radosgw-admin log list show entries with only the date

2024-09-03 Thread Boris
I am not sure how to describe this. We enabled the ops log and have these entries here together with the normal ops logs:
# radosgw-admin log list | grep ^.*--
"2024-09-03-12--",
I did a head on the file and got this: B��f�� anonymous list_bucketsHEAD / HTTP/1.0200 ;tx0

[ceph-users] Re: [EXTERNAL] Re: Bucket Notifications v2 & Multisite Redundancy

2024-09-03 Thread Alex Hussein-Kershaw (HE/HIM)
Hi Yuval, thanks for the response. I did manage to disable the feature; however, I hope you can understand my hesitancy to design our move away from pubsub onto a deprecated replacement (i.e. "notifications v1"). The difference between this and the other operations that require forwarding to t

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Matthew Vernon
Hi, On 03/09/2024 11:46, Eugen Block wrote: Do you see the port definition in the unit.meta file? Oddly: "ports": [ 9283, 8765, 8765, 8765, 8765 ], which doesn't look right... Regards, Mattew ___ ce

[ceph-users] Re: Bucket Notifications v2 & Multisite Redundancy

2024-09-03 Thread Yuval Lifshitz
Hi Alex, It should be possible to disable the v2 feature through an admin command: radosgw-admin zonegroup modify --disable-feature=notification_v2. Also note that when creating a new Squid cluster, v2 is enabled by default. But when upgrading an existing cluster, you need to call: radosgw-admin on
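For context, a sketch of the usual zonegroup-feature workflow on an upgraded multisite cluster; whether the period commit is strictly required for this particular feature should be verified against the Squid release notes:
```
# enable the v2 bucket-notification feature on the zonegroup, then commit the period
radosgw-admin zonegroup modify --enable-feature=notification_v2
radosgw-admin period update --commit
```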

[ceph-users] Re: SOLVED: How to Limit S3 Access to One Subuser

2024-09-03 Thread Marc
Thanks! Posting results like this should be done more often. I remember struggling to find S3 solutions. > > I found countless questions but no real solution on how to have > multiple subusers and buckets in one account while limiting access to > a bucket to just one specific subuser. > > Here's

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Eugen Block
Do you see the port definition in the unit.meta file?
jq -r '.ports' /var/lib/ceph/{FSID}/mgr.{MGR}/unit.meta
[ 8443, 9283, 8765 ]
Quoting Matthew Vernon: Hi, On 02/09/2024 21:24, Eugen Block wrote: Without having looked too closely, do you run Ceph with IPv6? There's a tracker is

[ceph-users] Re: The journey to CephFS metadata pool’s recovery

2024-09-03 Thread Marco Faggian
Hi Frédéric, Thanks a lot for the pointers! So, using testdisk I've created images of both LVs. I've looked at the hexdump and it's filled with 0x00 until 00a0. Then, out of curiosity, I compared them and they're identical until byte 12726273. Also, unfortunately, the issue is that ceph-

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-03 Thread Matthew Vernon
Hi, On 02/09/2024 21:24, Eugen Block wrote: Without having looked too closely, do you run ceph with IPv6? There’s a tracker issue: https://tracker.ceph.com/issues/66426 It will be backported to Reef. I do run IPv6, but the problem is that nothing is listening on port 8765 at all, not that

[ceph-users] Re: The journey to CephFS metadata pool’s recovery

2024-09-03 Thread Frédéric Nass
Hi Marco, Have you checked the output of:
dd if=/dev/ceph-xxx/osd-block-x of=/tmp/foo bs=4K count=2
hexdump -C /tmp/foo
and:
/usr/bin/ceph-bluestore-tool show-label --log-level=30 --dev /dev/nvmexxx -l /var/log/ceph/ceph-volume.log
to see if it's aligned with the OSD's metadata. You

[ceph-users] Bucket Notifications v2 & Multisite Redundancy

2024-09-03 Thread Alex Hussein-Kershaw (HE/HIM)
Hi folks, I see in the pending release notes for Squid a description of the "notification_v2" feature (ceph/PendingReleaseNotes at main · ceph/ceph on github.com). I have some concerns about the multisite nature of this feature; I am

[ceph-users] SOLVED: How to Limit S3 Access to One Subuser

2024-09-03 Thread Ansgar Jazdzewski
Hi folks, I found countless questions but no real solution on how to have multiple subusers and buckets in one account while limiting access to a bucket to just one specific subuser. Here’s how I managed to make it work: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "DenyA