[ceph-users] How to use STS Lite correctly?

2023-09-22 Thread Huy Nguyen
Hi, I have a Ceph cluster v16.2.10. To use STS Lite, my configuration is like the following: ceph.conf ... [client.rgw.ss-rgw-01] host = ss-rgw-01 rgw_frontends = beast port=8080 rgw_zone=backup-hapu admin_socket = /var/run/ceph/ceph-client.rgw.ss-rgw-01 rgw_sts_key = qekd3Rd5zXr0adQx
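With rgw_sts_key set (typically alongside rgw_s3_auth_use_sts = true), temporary credentials can be requested via GetSessionToken, one of the operations STS Lite implements. A minimal client-side sketch, assuming boto3 is the client and the host/port/keys are placeholders matching the config above:

```python
# Sketch: requesting temporary credentials from RGW's STS Lite endpoint.
# Endpoint and keys are placeholders; boto3 is an assumption.

def make_sts_endpoint(host, port):
    # RGW serves STS on the same HTTP endpoint as S3
    return f"http://{host}:{port}"

def get_session_token(endpoint, access_key, secret_key, duration=3600):
    import boto3  # deferred import: only needed when actually calling RGW
    sts = boto3.client(
        "sts",
        endpoint_url=endpoint,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="default",
    )
    # GetSessionToken is one of the operations STS Lite provides
    return sts.get_session_token(DurationSeconds=duration)
```

The returned credentials (access key, secret key, session token) can then be fed to an S3 client pointed at the same endpoint.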

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-22 Thread Casey Bodley
each radosgw does maintain its own cache for certain metadata like users and buckets. when one radosgw writes to a metadata object, it broadcasts a notification (using rados watch/notify) to other radosgws to update/invalidate their caches. the initiating radosgw waits for all watch/notify
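The broadcast/invalidate flow described above can be pictured with a toy in-process model. These classes are hypothetical illustration only; the real mechanism is rados watch/notify between separate radosgw processes:

```python
# Toy model of radosgw metadata-cache invalidation: a write broadcasts
# to all peers and waits for their acks before completing, so a
# subsequent read on any gateway sees the new value.

class Gateway:
    def __init__(self, name):
        self.name = name
        self.peers = []   # other gateways watching the same metadata pool
        self.cache = {}   # local metadata cache (users, buckets, ...)

    def read(self, key, backing):
        # serve from cache, falling back to the backing store
        if key not in self.cache:
            self.cache[key] = backing[key]
        return self.cache[key]

    def write(self, key, value, backing):
        backing[key] = value
        self.cache[key] = value
        # "broadcast a notification ... and wait": invalidate every peer
        # before the write is considered complete
        for peer in self.peers:
            peer.invalidate(key)

    def invalidate(self, key):
        self.cache.pop(key, None)
```

In the model, a value cached on one gateway is dropped the moment another gateway overwrites it, which is the consistency property being asked about.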

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-22 Thread Matthias Ferdinand
On Tue, Sep 12, 2023 at 07:13:13PM +0200, Matthias Ferdinand wrote: > On Mon, Sep 11, 2023 at 02:37:59PM -0400, Matt Benjamin wrote: > > Yes, it's also strongly consistent. It's also last writer wins, though, so > > two clients somehow permitted to contend for updating policy could > > overwrite

[ceph-users] Re: S3website range requests - possible issue

2023-09-22 Thread Casey Bodley
that first "read 0~4194304" is probably what i fixed in https://github.com/ceph/ceph/pull/53602, but it's hard to tell from osd log where these osd ops are coming from. why are there several [read 1~10] requests after that? the rgw log would be more useful for debugging, with --debug-rgw=20 and

[ceph-users] Re: S3website range requests - possible issue

2023-09-22 Thread Ondřej Kukla
Hello Casey, Thanks a lot for that. I forgot to mention in my previous message that I was able to trigger the prefetch by the header bytes=1-10. You can see the read 1~10 in the osd logs I’ve sent here - https://pastebin.com/nGQw4ugd Which is weird, as it seems that it is not the same
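For reference, a Range header like bytes=1-10 maps to an offset~length pair, which is how it surfaces as "read 1~10" in the OSD log. A simplified single-range parser sketching that mapping (an illustration, not rgw's actual code):

```python
import re

def parse_range(header, obj_size):
    """Map an HTTP 'bytes=first-last' Range header to (offset, length),
    the same shape as the 'read ofs~len' lines in the OSD log."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or not (m.group(1) or m.group(2)):
        return None
    first, last = m.group(1), m.group(2)
    if first == "":
        # suffix range: the last N bytes of the object
        length = min(int(last), obj_size)
        return obj_size - length, length
    ofs = int(first)
    end = obj_size - 1 if last == "" else min(int(last), obj_size - 1)
    if ofs > end:
        return None
    return ofs, end - ofs + 1
```

With this mapping, a request carrying bytes=1-10 against a 4 MiB object yields (1, 10), i.e. the read 1~10 seen in the logs.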

[ceph-users] Re: S3website range requests - possible issue

2023-09-22 Thread Casey Bodley
hey Ondrej, thanks for creating the tracker issue https://tracker.ceph.com/issues/62938. i added a comment there, and opened a fix in https://github.com/ceph/ceph/pull/53602 for the only issue i was able to identify On Wed, Sep 20, 2023 at 9:20 PM Ondřej Kukla wrote: > > I was checking the

[ceph-users] Re: Join us for the User + Dev Relaunch, happening this Thursday!

2023-09-22 Thread Matthias Ferdinand
On Thu, Sep 21, 2023 at 03:49:25PM -0500, Laura Flores wrote: > Hi Ceph users and developers, > > Big thanks to Cory Snyder and Jonas Sterr for sharing your insights with an > audience of 50+ users and developers! > > Cory shared some valuable troubleshooting tools and tricks that would be >

[ceph-users] Re: millions of hex 80 0_0000 omap keys in single index shard for single bucket

2023-09-22 Thread Christopher Durham
Casey, I did fix this. Here is what I did: 1. Stopped write access to the bucket 2. After I stopped the writes: # radosgw-admin bucket sync status --bucket showed just the one shard that was behind, matching the shard number that has all the extra 0_ index objects. 3.  then did: #

[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-22 Thread Joseph Fernandes
Hello Venky, Nice to hear from you :) Hope you are doing well. I tried as you suggested, root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# mkdir dir1 dir2 root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# echo "Hello Worldls!" > file2

[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-22 Thread Venky Shankar
Hi Joseph, On Fri, Sep 22, 2023 at 5:27 PM Joseph Fernandes wrote: > > Hello All, > > I found a weird issue with ceph_readdirplus_r() when used along > with ceph_ll_lookup_vino(). > On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy > (stable) > > Any help is really

[ceph-users] Re: After upgrading from 17.2.6 to 18.2.0, OSDs are very frequently restarting due to livenessprobe failures

2023-09-22 Thread Peter Goron
Hi, For the record, in the past we faced a similar issue with OSDs being killed one after another every day starting from midnight. The root cause was linked to device_health_check launched by the mgr on each OSD. While an OSD is doing a device_health_check, its admin socket is busy and can't answer to
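One mitigation for this class of problem is to let the probe tolerate a temporarily busy admin socket by retrying before declaring the daemon dead. A hedged sketch (the wrapper is hypothetical, not the actual Kubernetes/Rook liveness probe):

```python
import time

def probe(check, attempts=3, delay=1.0):
    """Retry an admin-socket check a few times before declaring failure,
    so a socket that is briefly busy (e.g. during a device health check)
    doesn't get the OSD restarted."""
    for i in range(attempts):
        if check():
            return True
        if i + 1 < attempts:
            time.sleep(delay)
    return False
```

The check callable would wrap whatever command the probe runs against the admin socket; here it is simulated.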

[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-22 Thread Joseph Fernandes
reattaching files On Fri, Sep 22, 2023 at 5:25 PM Joseph Fernandes wrote: > Hello All, > > I found a weird issue with ceph_readdirplus_r() when used along > with ceph_ll_lookup_vino(). > On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy > (stable) > > Any help is really

[ceph-users] Re: Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-22 Thread Joseph Fernandes
re-attaching the files On Fri, Sep 22, 2023 at 5:25 PM Joseph Fernandes wrote: > Hello All, > > I found a weird issue with ceph_readdirplus_r() when used along > with ceph_ll_lookup_vino(). > On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy > (stable) > > Any help is

[ceph-users] Libcephfs : ceph_readdirplus_r() with ceph_ll_lookup_vino() : ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

2023-09-22 Thread Joseph Fernandes
Hello All, I found a weird issue with ceph_readdirplus_r() when used along with ceph_ll_lookup_vino(), on ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable). Any help is really appreciated. Thanks in advance, -Joe Test scenario: A. Create a CephFS subvolume "4"
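For context, the libcephfs call sequence under discussion is roughly the following (pseudocode; error handling, mount setup, and permission handling omitted):

```
/* resolve a directory purely by inode number, then list it */
vinodeno_t vino = { .ino = inode_number, .snapid = CEPH_NOSNAP };
ceph_ll_lookup_vino(cmount, vino, &inode);            /* vino -> Inode* */
ceph_ll_opendir(cmount, inode, &dirp, perms);
while (ceph_readdirplus_r(cmount, dirp, &dirent, &stx,
                          want, flags, NULL) > 0) {
    /* dirent.d_name plus stx describe one directory entry */
}
ceph_ll_releasedir(cmount, dirp);
```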

[ceph-users] multiple rgw instances with same cephx key

2023-09-22 Thread Boris Behrens
Hi, is it possible to use one cephx key for multiple parallel running RGWs? Maybe I could just use the same 'name' and the same key for all of the RGW instances? I plan to start RGWs all over the place in containers and let BGP handle the traffic. But I don't know how to create on-demand keys, that

[ceph-users] Re: Querying the most recent snapshot

2023-09-22 Thread Ilya Dryomov
On Fri, Sep 22, 2023 at 8:40 AM Dominique Ramaekers wrote: > > Hi, > > A question to avoid using a too elaborate method of finding the most recent > snapshot of an RBD image. > > So, what would be the preferred way to find the latest snapshot of this image? > > root@hvs001:/# rbd snap ls

[ceph-users] Querying the most recent snapshot

2023-09-22 Thread Dominique Ramaekers
Hi, A question to avoid using a too elaborate method of finding the most recent snapshot of an RBD image. So, what would be the preferred way to find the latest snapshot of this image? root@hvs001:/# rbd snap ls libvirt-pool/CmsrvDOM2-MULTIMEDIA SNAPID NAME SIZE PROTECTED TIMESTAMP
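One low-effort approach is to ask rbd for machine-readable output and pick the snapshot with the highest id, since snapshot ids increase monotonically. A sketch parsing `rbd snap ls <image> --format json` (the sample output below is made up for illustration):

```python
import json

def latest_snapshot(snap_ls_json):
    """Return the name of the newest snapshot from
    `rbd snap ls --format json` output, or None if there are none."""
    snaps = json.loads(snap_ls_json)
    if not snaps:
        return None
    # snapshot ids are assigned in increasing order, so max(id) is newest
    return max(snaps, key=lambda s: s["id"])["name"]

# hypothetical sample of what `rbd snap ls ... --format json` returns
sample = """[
  {"id": 10, "name": "snap-a", "size": 1073741824, "timestamp": "Mon Sep 18 10:00:00 2023"},
  {"id": 42, "name": "snap-b", "size": 1073741824, "timestamp": "Fri Sep 22 08:30:00 2023"}
]"""
```

Sorting by the timestamp field would work too, but comparing the numeric id avoids parsing date strings.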