[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-09 Thread Redouane Kachach Elhichou
To see what cephadm is doing you can check both the logs on: */var/log/ceph/cephadm.log* (here you can see what the cephadm running on each host is doing) and you can also check what the cephadm (mgr module) is doing by checking the logs of the mgr container by: > podman logs -f `podman ps | grep

[ceph-users] Error adding lua packages to rgw

2022-06-09 Thread Koldo Aingeru
Hello, I’m having trouble adding new packages to rgw via radosgw-admin : # radosgw-admin script-package add --package=luajson ERROR: failed to add lua package: luajson .error: -10 # radosgw-admin script-package add --package=luasocket --allow-compilation ERROR: failed to add lua package: luaso
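For reference, a sketch of the radosgw-admin lua package workflow being attempted (package names taken from the post; newly added packages are only installed and usable after the RGWs restart):

    # stage a package for installation, optionally allowing native compilation
    radosgw-admin script-package add --package=luajson
    radosgw-admin script-package add --package=luasocket --allow-compilation

    # show the packages currently staged
    radosgw-admin script-package list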

[ceph-users] Re: radosgw multisite sync - how to fix data behind shards?

2022-06-09 Thread Wyll Ingersoll
I think you mean "radosgw-admin sync error list", in which case there are 32 shards, each with the same error. I don't see errors on the master zone logs, so I'm not sure how to correct the situation. "shard_id": 31, "entries": [ { "id": "1_1654722349.
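A sketch of commands commonly used alongside that error list when chasing "data is behind on N shards" (the trim only clears the recorded entries; it does not repair whatever caused them; the source zone name is a placeholder):

    # per-shard sync error entries
    radosgw-admin sync error list

    # overall metadata/data sync state for this zone
    radosgw-admin sync status

    # data sync state against a specific source zone
    radosgw-admin data sync status --source-zone=master-zone

    # clear the recorded error entries once the cause is resolved
    radosgw-admin sync error trim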

[ceph-users] Re: OpenStack Swift on top of CephFS

2022-06-09 Thread David Orman
I agree with this: just because you can doesn't mean you should. It will likely be significantly less painful to upgrade the infrastructure to support doing this the more correct way, vs. trying to layer Swift on top of CephFS. I say this having a lot of personal experience with Swift at extremely

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-09 Thread Eugen Block
Can you share more details about the cluster, like 'ceph -s' and 'ceph orch ls'? Have you tried a MGR failover just to see if that clears anything? Also, the active mgr log should contain at least some information. How did you deploy the current services when bootstrapping the cluster? Has a
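For convenience, the checks being asked for, plus the failover, as commands ('ceph orch ps' is an addition here, not something the original message asked for):

    ceph -s          # overall cluster and health status
    ceph orch ls     # services cephadm is supposed to deploy
    ceph orch ps     # daemons cephadm has actually placed
    ceph mgr fail    # fail over to a standby mgr, restarting the cephadm module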

[ceph-users] Re: Error adding lua packages to rgw

2022-06-09 Thread Yuval Lifshitz
Hi Koldo, this might be related to the containerized deployment. The error code (-10) is returned when we cannot find the "luarocks" binary. Assuming it is installed on the host (just check: "luarocks --version"), it might not be accessible from inside the RGW container. If this is the case, can yo
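A sketch of that check, assuming a podman-based containerized deployment (the grep pattern is a placeholder for however the RGW container is named on the host):

    # on the host
    luarocks --version

    # inside the RGW container
    podman exec -it "$(podman ps --format '{{.Names}}' | grep rgw)" luarocks --version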

[ceph-users] Re: Luminous to Pacific Upgrade with Filestore OSDs

2022-06-09 Thread Pardhiv Karri
Awesome, thank you, Eneko! Would you mind sharing the upgrade run book, if you have one? I want to avoid reinventing the wheel, as there will be some caveats while upgrading that aren't usually present in official Ceph upgrade docs. Thanks, Pardhiv On Thu, Jun 9, 2022 at 12:40 AM Eneko Lacunza

[ceph-users] Re: radosgw multisite sync - how to fix data behind shards?

2022-06-09 Thread Wyll Ingersoll
I ended up giving up after trying everything I could find in the forums and docs, deleted the problematic zone, and then re-added it back to the zonegroup and re-established the group sync policy for the bucket in question. The sync-status is OK now, though the error list still shows a bunch
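A rough sketch of that remove-and-re-add sequence, with placeholder zonegroup/zone names and heavy caveats (exact flags depend on the multisite setup, and the bucket sync policy has to be re-created separately as described above):

    # on the metadata master: detach and delete the problematic zone, then commit
    radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=secondary-zone
    radosgw-admin zone delete --rgw-zone=secondary-zone
    radosgw-admin period update --commit

    # re-create the zone in the zonegroup and commit again
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=secondary-zone \
        --endpoints=http://secondary-rgw:8080
    radosgw-admin period update --commit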

[ceph-users] Ceph User + Dev Monthly June Meetup

2022-06-09 Thread Neha Ojha
Hi everyone, This month's Ceph User + Dev Monthly meetup is on June 16, 14:00-15:00 UTC. Please add topics to the agenda: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. Hope to see you there! Thanks, Neha

[ceph-users] Re: radosgw multisite sync - how to fix data behind shards?

2022-06-09 Thread Wyll Ingersoll
Running "object rewrite" on a couple of the objects in the bucket seems to have triggered the sync and now things appear ok. From: Szabo, Istvan (Agoda) Sent: Thursday, June 9, 2022 3:24 PM To: Wyll Ingersoll Cc: ceph-users@ceph.io ; d...@ceph.io Subject: Re:

[ceph-users] Re: Ceph on RHEL 9

2022-06-09 Thread Robert W. Eckert
Does anyone have any pointers to install Ceph on RHEL 9? -Original Message- From: Robert W. Eckert Sent: Saturday, May 28, 2022 8:28 PM To: ceph-users@ceph.io Subject: [ceph-users] Ceph on RHEL 9 Hi- I started to update my 3 host cluster to RHEL 9, but came across a bit of a stumblin

[ceph-users] Re: OSDs getting OOM-killed right after startup

2022-06-09 Thread Janne Johansson
On Thu, 9 Jun 2022 at 22:31, Mara Sophie Grosch wrote: > good catch with the way too low memory target, I wanted to configure 1 > GiB not 1 MiB. I'm aware it's low, but removed anyway for testing - it > sadly didn't change anything. > > I customize the config mostly for dealing with problems I have, s
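A sketch of setting the intended 1 GiB (rather than 1 MiB) target; osd_memory_target takes bytes, and osd.0 is just a placeholder id:

    # cluster-wide default for all OSDs
    ceph config set osd osd_memory_target 1073741824

    # what a particular OSD resolved it to
    ceph config get osd.0 osd_memory_target

    # or, on that OSD's host, via the admin socket
    ceph daemon osd.0 config show | grep osd_memory_target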

[ceph-users] Re: Error adding lua packages to rgw

2022-06-09 Thread Koldo Aingeru
Hi Yuval, That was it, after installing it on the host I got no errors :) Thanks a lot!

[ceph-users] Generation of systemd units after nuking /etc/systemd/system

2022-06-09 Thread Flemming Frandsen
Hi, this is somewhat embarrassing, but one of my colleagues fat-fingered an Ansible rule and managed to wipe out /etc/systemd/system on all of our Ceph hosts. The cluster is running Nautilus on Ubuntu 18.04, deployed with ceph-ansible; one of our near-future tasks is to upgrade to the latest ceph
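A very rough sketch of what re-enabling might look like for a package-based Nautilus install, assuming the ceph packages (and thus the unit files under /lib/systemd/system) are intact and only the enablement symlinks in /etc/systemd/system were lost; the OSD id and hostname-based instance names are placeholders, so compare against a healthy host first:

    # re-read the unit files shipped by the ceph packages
    systemctl daemon-reload

    # re-enable the umbrella target and the per-daemon instances present on this host
    systemctl enable --now ceph.target
    systemctl enable --now ceph-osd@12
    systemctl enable --now ceph-mon@$(hostname -s)
    systemctl enable --now ceph-mgr@$(hostname -s)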

[ceph-users] Ceph pool set min_write_recency_for_promote not working

2022-06-09 Thread Pardhiv Karri
Hi, I created a new pool called "ssdimages," which is similar to another pool called "images" (a very old one). But when I try to set min_write_recency_for_promote to 1, it fails with permission denied. Do you know how I can fix it? ceph-lab # ceph osd dump | grep -E 'images|ssdimages' pool 3 'im
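For reference, the setting in question and a quick way to compare the two pools (pool names taken from the post; min_write_recency_for_promote is a cache-tiering option):

    ceph osd pool set ssdimages min_write_recency_for_promote 1

    # compare how the old and new pools are configured
    ceph osd pool get images min_write_recency_for_promote
    ceph osd pool get ssdimages min_write_recency_for_promote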