[ceph-users] activating+undersized+degraded+remapped

2024-03-16 Thread Deep Dish
Hello, I found myself in the following situation: [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive; pg 4.3d is stuck inactive for 8d, current state activating+undersized+degraded+remapped, last acting [4,NONE,46,NONE,10,13,NONE,74]; pg 4.6e is stuck inactive for 9d,
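A reasonable first step for PGs stuck in activating+undersized+degraded+remapped is to query one of the affected PGs and dump everything stuck inactive; the PG ID below is taken from the health output above, and the commands are standard Ceph CLI:
# ceph health detail
# ceph pg dump_stuck inactive
# ceph pg 4.3d query
The recovery_state / peering sections of the query output usually show which OSDs the PG is waiting on; the NONE entries in the acting set typically point at missing OSDs or a CRUSH rule that can no longer be satisfied.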

[ceph-users] Renaming an OSD node

2024-02-29 Thread Deep Dish
Hello. We have a requirement to change the hostname on some of our OSD nodes. All of our nodes are Ubuntu 22.04 based and were deployed using the 17.2.7 Orchestrator. 1. Is there a procedure to rename an existing node without rebuilding it, and have it detected by the Ceph Orchestrator? If not,
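There is no in-place rename in cephadm that I know of; the commonly suggested pattern is to remove the host from the orchestrator (keeping its OSD data intact), change the OS hostname, and re-add it under the new name. A rough sketch, with <oldname>, <newname> and <ip> as placeholders, to be verified against the cephadm docs for your release and tried on a single node first:
# ceph orch host rm <oldname> --force
(on the node itself: hostnamectl set-hostname <newname>)
# ceph orch host add <newname> <ip>
The --force removal skips draining, so the OSDs and their data stay on the node; in principle cephadm re-detects the existing daemons once the host is re-added under the new name.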

[ceph-users] Multi region RGW Config Questions - Quincy

2023-05-26 Thread Deep Dish
Hello, I have a Quincy (17.2.6) cluster, looking to create a multi-zone / multi-region RGW service, and have a few questions with respect to the published docs - https://docs.ceph.com/en/quincy/radosgw/multisite/. In general, I understand the process as: 1. Create a new REALM, ZONEGROUP,
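The per-command shape of that process, per the multisite docs linked above, looks roughly like this on the primary site; the realm, zonegroup and zone names and the endpoint URL are placeholders:
# radosgw-admin realm create --rgw-realm=myrealm --default
# radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1.example.com:8080 --master --default
# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints=http://rgw1.example.com:8080 --master --default
# radosgw-admin period update --commit
The secondary site then pulls the realm and period with radosgw-admin realm pull / period pull using the system user's access and secret keys, creates its own (non-master) zone, and commits another period.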

[ceph-users] Quincy Ceph-orchestrator and multipath SAS

2023-05-12 Thread Deep Dish
Hello, I have a few hosts about to be added into a cluster that have a multipath storage config for SAS devices. Is this supported on Quincy, and how would ceph-orchestrator and / or ceph-volume handle multipath storage? Here's a snip of lsblk output of a host in question: # lsblk NAME
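ceph-volume can generally consume device-mapper multipath devices when they are addressed explicitly by their /dev/mapper path rather than left to automatic discovery, so one cautious approach is an OSD spec that lists the mpath devices per host. The service id, hostname and device names below are placeholders, and whether discovery sees the mpath device or its underlying sdX paths is worth checking with ceph orch device ls first:
# ceph orch device ls <hostname>
# cat multipath-osd.yaml
service_type: osd
service_id: multipath_osds
placement:
  hosts:
    - <hostname>
data_devices:
  paths:
    - /dev/mapper/mpatha
    - /dev/mapper/mpathb
# ceph orch apply osd -i multipath-osd.yaml --dry-run
The --dry-run shows what the orchestrator would create without touching the disks.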

[ceph-users] Re: PG Sizing Question

2023-03-01 Thread Deep Dish
> A pool with 5% of the data needs fewer PGs than a pool with 50% of the cluster's data. Others may well have different perspectives; this is something where opinions vary. The pg_autoscaler in bulk mode can automate this, if one is prescient with feeding it parameters.
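For reference, the bulk behaviour mentioned above is a per-pool flag the autoscaler uses to start the pool with a full PG budget instead of growing it gradually; a minimal sketch, with <pool> as a placeholder:
# ceph osd pool set <pool> bulk true
# ceph osd pool autoscale-status
The autoscale-status output shows each pool's RATIO / TARGET RATIO and the PG_NUM / NEW PG_NUM the autoscaler would choose.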

[ceph-users] PG Sizing Question

2023-02-28 Thread Deep Dish
Hello. Looking to get some official guidance on PG and PGP sizing. Is the goal to maintain approximately 100 PGs per OSD per pool, or for the cluster in general? Assume the following scenario: a cluster with 80 OSDs across 8 nodes; 3 pools: - Pool1 = Replicated 3x - Pool2 =
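A worked example of the usual rule of thumb, applied to the scenario above (the per-pool data split is assumed for illustration): the ~100 PG/OSD target is for the cluster as a whole, counting each PG once per replica, not per pool. With 80 OSDs and 3x replication the total budget is roughly 80 x 100 / 3 ≈ 2667 PGs across all pools; if Pool1 were expected to hold ~50% of the data and the other two ~25% each, that suggests about 1333, 667 and 667 PGs respectively, rounded to a power of two: 1024 or 2048 for Pool1 and 512 for each of the others. Erasure-coded pools count k+m shards per PG instead of the replica count.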

[ceph-users] Re: Serious cluster issue - Incomplete PGs

2023-01-10 Thread Deep Dish
…recover from this. Regards, Eugen [2] https://docs.ceph.com/en/pacific/man/8/ceph-objectstore-tool/ Quoting Deep Dish: > Thanks for the insight, Eugen. Here's what basically happened: - Upgrade from Nautilus to Quincy via migration to a new cluster on temp hardware;
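The ceph-objectstore-tool approach referenced in [2] boils down to exporting the most complete surviving copy of a PG from a stopped OSD and importing it where the cluster expects it; a heavily simplified sketch, with the OSD IDs, PG ID and file paths as placeholders, the OSD stopped first, and run inside cephadm shell on a containerized deployment:
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --pgid <pgid> --op export --file /tmp/<pgid>.export
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<other-id> --pgid <pgid> --op import --file /tmp/<pgid>.export
This tool writes directly to the OSD's object store and can easily make things worse, so exporting every surviving copy before any import, and a close read of the man page, are strongly advised.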

[ceph-users] Re: Serious cluster issue - Incomplete PGs

2023-01-09 Thread Deep Dish
…hive.com/ceph-users@ceph.io/msg14757.html Quoting Deep Dish: > Hello. I really screwed up my Ceph cluster. Hoping to get data off it so I can rebuild it. In summary, too many changes too quickly caused the cluster to develop incomplete PGs. Some PGs were reporting t

[ceph-users] Serious cluster issue - Incomplete PGs

2023-01-08 Thread Deep Dish
Hello. I really screwed up my Ceph cluster. Hoping to get data off it so I can rebuild it. In summary, too many changes too quickly caused the cluster to develop incomplete PGs. Some PGs were reporting that OSDs were to be probed. I've created those OSD IDs (empty); however, this wouldn't
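For incomplete PGs that are waiting to probe OSDs, the PG's own view is usually the most informative place to start; <pgid> below is a placeholder:
# ceph pg <pgid> query
The query output lists the OSDs peering is blocked on (e.g. down_osds_we_would_probe / peering_blocked_by), which is what the created-but-empty OSD IDs mentioned above would have been intended to satisfy. As an absolute last resort for a PG whose data is accepted as lost, ceph osd force-create-pg <pgid> --yes-i-really-mean-it recreates it empty, permanently discarding whatever it held.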

[ceph-users] Serious cluster issue - data inaccessible

2023-01-08 Thread Deep Dish
Hello. I really screwed up my Ceph cluster. Hoping to get data off it so I can rebuild it. In summary, too many changes too quickly caused the cluster to develop incomplete PGs. Some PGs were reporting that OSDs were to be probed. I've created those OSD IDs (empty); however, this wouldn't

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-31 Thread Deep Dish
…default realm is set # radosgw-admin realm list-periods failed to read realm: (2) No such file or directory # rados ls -p .rgw.root zonegroup_info.45518452-8aa6-41b4-99f0-059b255c31cd zone_info.743ea532-f5bc-4cca-891b-c27a586d5129 zone_names.default zonegroups_names.default On Sat, Dec
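Given that .rgw.root still contains zone and zonegroup objects but no realm objects, it can help to dump what radosgw-admin can still see before recreating anything; the names below assume the default zone/zonegroup visible in the listing above:
# radosgw-admin zonegroup list
# radosgw-admin zone list
# radosgw-admin zonegroup get --rgw-zonegroup=default
# radosgw-admin zone get --rgw-zone=default
If those return intact JSON, the RGW pools and their data are still referenced and only the realm/period layer is missing, which narrows the repair to recreating the realm and re-committing a period rather than rebuilding the zone.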

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-31 Thread Deep Dish
…pg repair > Pavin. On 29-Dec-22 4:08 AM, Deep Dish wrote: > Hi Pavin, the following are additional developments. There's one PG that's stuck and unable to recover. I've attached the relevant ceph -s / health detail and pg stat outputs below

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-28 Thread Deep Dish
Hi Pavin, the following are additional developments. There's one PG that's stuck and unable to recover. I've attached the relevant ceph -s / health detail and pg stat outputs below. - There were some remaining lock files, as suggested, in /var/run/ceph/ pertaining to rgw. I removed the service,
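After removing and re-adding the rgw service it is worth confirming that the orchestrator actually redeployed the daemons and that they stayed up; these are standard orchestrator queries:
# ceph orch ls rgw
# ceph orch ps --daemon-type rgw
A daemon that shows as running here but never answers on its port usually means the process is crash-looping, which ceph crash ls or the container logs on the host will show.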

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-27 Thread Deep Dish
…cs.ceph.com/en/quincy/mgr/crash/ [1]: https://docs.podman.io/en/latest/markdown/podman-logs.1.html On 27-Dec-22 11:59 PM, Deep Dish wrote: > Hi Pavin, thanks for the reply. I'm a bit at a loss honestly, as this worked perfectly without any issue u
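The crash module and podman logs referenced above can be queried roughly like this (the daemon name is a placeholder; cephadm logs is run on the host that carries the daemon):
# ceph crash ls
# ceph crash info <crash-id>
# cephadm logs --name rgw.<service>.<host>.<random>
The last command is essentially a wrapper around journalctl for that daemon's unit, so podman logs against the corresponding container is an equivalent route.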

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-27 Thread Deep Dish
…s 2. Is the RGW HTTP server running on its port? 3. Re-check config, including authentication. ceph orch is too new and didn't pass muster in our own internal testing. You're braver than most for using it in production. Pavin. On 27-Dec-22 8:48 PM,
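Point 2 above can be checked directly on an RGW host with ordinary tooling; the port is whatever the rgw spec uses (80 by default for a cephadm-deployed rgw, so treat 8080 below as a placeholder):
# ss -tlnp | grep radosgw
# curl -s http://<rgw-host>:8080
A healthy anonymous request returns a small ListAllMyBuckets XML document; "connection refused" means nothing is listening, which points back at the daemon rather than at authentication or zone config.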

[ceph-users] Re: Urgent help! RGW Disappeared on Quincy

2022-12-27 Thread Deep Dish
…): # ceph dashboard get-rgw-api-access-key P?G (? commented out) It seems to me like my RGW config is non-existent / corrupted for some reason. When trying to curl an RGW directly I get a "connection refused". On Tue, Dec 27, 2022 at 9:41 AM Deep Dish wrote: > I bu

[ceph-users] Urgent help! RGW Disappeared on Quincy

2022-12-27 Thread Deep Dish
I built a net-new Quincy cluster (17.2.5) using ceph orch as follows: 2x mgr, 4x rgw, 5x mon, 4x rgw, 5x mds, 6x OSD hosts w/ 10 drives each --> will be growing to 7 OSD hosts in the coming days. I migrated all data from my legacy Nautilus cluster (via rbd-mirror, rclone for S3 buckets, etc.). All
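For the record, a placement like that is typically expressed to the orchestrator roughly as follows (counts mirroring the description above; the fs name and rgw service id are placeholders):
# ceph orch apply mgr 2
# ceph orch apply mon 5
# ceph orch apply mds <fs_name> --placement=5
# ceph orch apply rgw <realm>.<zone> --placement=4
# ceph orch ls
ceph orch ls afterwards confirms the running versus expected counts per service.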

[ceph-users] Cluster problem - Quincy

2022-12-20 Thread Deep Dish
Hello. I have a few issues with my Ceph cluster: - RGWs have disappeared from management (the console does not register any RGWs) despite showing 4 services deployed and processes running; - All object buckets are not accessible / manageable; - Console showing some of my pools are “updating” – its
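The “updating” state in the console usually reflects pg_num / pgp_num changes in flight, which can be watched from the CLI; these are standard commands:
# ceph osd pool ls detail
# ceph progress
# ceph -s
pool ls detail shows current versus target pg_num for each pool, and the progress module shows how far the PG scaling and any backfill have advanced.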

[ceph-users] Unbalanced new cluster - Quincy

2022-11-16 Thread Deep Dish
Hello. I'm migrating from Nautilus -> Quincy. Data is being replicated between clusters. As data is migrated (currently about 60T), the Quincy cluster repeatedly doesn't seem to do a good job of balancing PGs across all OSDs. Never had this issue with Nautilus or other versions. Running
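On Quincy the balancer and the pg_autoscaler both play into this: while a pool is still being filled the autoscaler may keep pg_num low, which by itself produces uneven OSD utilisation. Things worth checking, all standard commands:
# ceph balancer status
# ceph osd df tree
# ceph osd pool autoscale-status
If the balancer is on and in upmap mode but the spread is still poor, raising the pool's pg_num (or setting the bulk flag so the autoscaler does it) is usually a bigger lever than the balancer itself.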