[ceph-users] Re: Crush map & rule

2023-11-08 Thread Albert Shih
On 08/11/2023 at 19:29:19+0100, David C. wrote Hi David. > > What would be the number of replicas (in total and on each row) and their > distribution on the tree ? Well, “inside” a row that would be 3 in replica mode. Between rows... well, two ;-) Besides understanding how to write a rule, a
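For reference, a minimal sketch of how per-row placement along these lines can be expressed with the CLI; the bucket names row-a/row-b and the pool name mypool are hypothetical and not from the thread:

# Create one replicated rule per row so that a size-3 pool keeps all of its
# replicas on hosts inside that row (bucket and pool names are examples).
ceph osd crush rule create-replicated rule-row-a row-a host
ceph osd crush rule create-replicated rule-row-b row-b host
# Point a pool at one of the rules:
ceph osd pool set mypool crush_rule rule-row-a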

[ceph-users] Re: Ceph Dashboard - Community News Sticker [Feedback]

2023-11-08 Thread Dominique Ramaekers
Hi, In my opinion... Please don't. In the worst case, maybe only messages concerning critical updates (security, stability issues). For two reasons: 1) as low as the impact may be, server resources are precious... 2) my time is also precious. If I log in to the GUI, it's with the intention to do some

[ceph-users] Re: Ceph Dashboard - Community News Sticker [Feedback]

2023-11-08 Thread Chris Palmer
My vote would be "no": * This is an operational high-criticality system. Not the right place to have distracting other stuff or to bloat the dashboard. * Our ceph systems deliberately don't have direct internet connectivity. * There is plenty of useful operational information that could

[ceph-users] Re: Help needed with Grafana password

2023-11-08 Thread Eugen Block
Hi, you mean you forgot your password? You can remove the service with 'ceph orch rm grafana', then re-apply your grafana.yaml containing the initial password. Note that this would remove all of the grafana configs, custom dashboards etc., and you would have to reconfigure them. So before
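A minimal sketch of the workflow Eugen describes, assuming a cephadm-managed cluster; the spec file name and password below are placeholders:

# Hypothetical grafana.yaml; initial_admin_password is the spec field
# referred to in this thread.
cat > grafana.yaml <<'EOF'
service_type: grafana
placement:
  count: 1
spec:
  initial_admin_password: ChangeMe123
EOF

ceph orch rm grafana             # removes the service (and its custom dashboards/config)
ceph orch apply -i grafana.yaml  # redeploys Grafana with the initial admin password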

[ceph-users] Re: Ceph Dashboard - Community News Sticker [Feedback]

2023-11-08 Thread Dmitry Melekhov
On 09.11.2023 10:35, Nizamudeen A wrote: Hello, We wanted to get some feedback on one of the features that we are planning to bring in for upcoming releases. On the Ceph GUI, we thought it could be interesting to show information regarding the community events, ceph release information (Release

[ceph-users] Ceph Dashboard - Community News Sticker [Feedback]

2023-11-08 Thread Nizamudeen A
Hello, We wanted to get some feedback on one of the features that we are planning to bring in for upcoming releases. On the Ceph GUI, we thought it could be interesting to show information regarding the community events, ceph release information (Release notes and changelogs) and maybe even

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Venky Shankar
On Thu, Nov 9, 2023 at 3:53 AM Laura Flores wrote: > @Venky Shankar and @Patrick Donnelly > , I reviewed the smoke suite results and identified > a new bug: > > https://tracker.ceph.com/issues/63488 - smoke test fails from "NameError: > name 'DEBUGFS_META_DIR' is not defined" > > Can you take a

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Laura Flores
@Venky Shankar and @Patrick Donnelly , I reviewed the smoke suite results and identified a new bug: https://tracker.ceph.com/issues/63488 - smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined" Can you take a look? On Wed, Nov 8, 2023 at 12:32 PM Adam King wrote: > > > >

[ceph-users] Re: HDD cache

2023-11-08 Thread Peter
This server is a Dell R730 configured with an HBA 330 card; the HDDs are configured in write-through mode. From: David C. Sent: Wednesday, November 8, 2023 10:14 To: Peter Cc: ceph-users@ceph.io Subject: Re: [ceph-users] HDD cache Without a (raid/jbod) controller? On Wed, Nov 8

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Adam King
> > https://tracker.ceph.com/issues/63151 - Adam King do we need anything for > this? > Yes, but not an actual code change in the main ceph repo. I'm looking into a ceph-container change to alter the ganesha version in the container as a solution. On Wed, Nov 8, 2023 at 11:10 AM Yuri Weinstein

[ceph-users] Re: Crush map & rule

2023-11-08 Thread David C.
Hi Albert, What would be the number of replicas (in total and on each row) and their distribution on the tree? On Wed, Nov 8, 2023 at 18:45, Albert Shih wrote: > Hi everyone, > > I'm a total newbie with ceph, so sorry if I'm asking some stupid questions. > > I'm trying to understand how the

[ceph-users] Re: HDD cache

2023-11-08 Thread David C.
Without a (raid/jbod) controller? On Wed, Nov 8, 2023 at 18:36, Peter wrote: > Hi All, > > I note that HDD cluster commit delay improves after I turn off the HDD cache. > However, I also note that not all HDDs are able to turn off the cache. > In particular, I found that two HDDs with the same model number,

[ceph-users] Crush map & rule

2023-11-08 Thread Albert Shih
Hi everyone, I'm a total newbie with ceph, so sorry if I'm asking some stupid questions. I'm trying to understand how the crush map & rules work; my goal is to have two groups of 3 servers, so I'm using “row” buckets ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1

[ceph-users] HDD cache

2023-11-08 Thread Peter
Hi All, I note that HDD cluster commit delay improves after I turn off the HDD cache. However, I also note that not all HDDs are able to turn off the cache. In particular, I found that two HDDs with the same model number: one can turn off the cache, the other can't. I guess my system config or something is different
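One common way to inspect and disable the on-drive volatile write cache, sketched here for reference; the device path is an example and behaviour varies by drive, firmware and controller:

# SATA drives (hdparm)
hdparm -W /dev/sdX        # show current write-cache setting
hdparm -W 0 /dev/sdX      # disable the write cache

# SAS/SCSI drives (sdparm)
sdparm --get=WCE /dev/sdX           # show the Write Cache Enable bit
sdparm --clear=WCE --save /dev/sdX  # disable it and persist across power cycles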

[ceph-users] Re: Question about PG mgr/balancer/crush_compat_metrics

2023-11-08 Thread Bryan Song
Sorry for not making it clear; we are using upmap. I just saw this in the code and was wondering about its usage. For the OSDs, we do not have any OSD weight < 1.00 until one OSD reaches the 85% nearfull ratio. Before I reweight the OSD, our mgr/balancer/upmap_max_deviation is set to 5 and the PG
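For reference, the upmap balancer knob mentioned above can be tightened like this (a sketch; 1 is the smallest deviation the balancer will aim for):

ceph config set mgr mgr/balancer/upmap_max_deviation 1
ceph balancer status
ceph balancer eval      # score of the current PG distribution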

[ceph-users] Ceph Leadership Team Weekly Meeting Minutes 2023-11-08

2023-11-08 Thread Patrick Donnelly
Hello all, Here are the minutes from today's meeting. - New time for CDM APAC to increase participation - 9.30 - 11.30 pm PT seems like the most popular based on https://doodle.com/meeting/participate/id/aM9XGZ3a/vote - One more week for more feedback; please ask more APAC

[ceph-users] Re: ceph storage pool error

2023-11-08 Thread Robert Sander
Hi, On 11/7/23 12:35, necoe0...@gmail.com wrote: Ceph 3 clusters are running and the 3rd cluster gave an error, it is currently offline. I want to get all the remaining data in 2 clusters. Instead of fixing ceph, I just want to save the data. How can I access this data and connect to the

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Nizamudeen A
dashboard approved, the test failure is a known cypress issue which is not a blocker. Regards, Nizam On Wed, Nov 8, 2023, 21:41 Yuri Weinstein wrote: > We merged 3 PRs and rebuilt "reef-release" (Build 2) > > Seeking approvals/reviews for: > > smoke - Laura, Radek 2 jobs failed in

[ceph-users] one cephfs volume becomes very slow

2023-11-08 Thread Ben
Dear cephers, we have a cephfs volume that is mounted by many clients with concurrent read/write access. From time to time, maybe when concurrency goes as high as 100 clients, accessing it becomes too slow to be useful at all. The cluster has multiple active MDS daemons. All disks

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Yuri Weinstein
We merged 3 PRs and rebuilt "reef-release" (Build 2) Seeking approvals/reviews for: smoke - Laura, Radek 2 jobs failed in "objectstore/bluestore" tests (see Build 2) rados - Neha, Radek, Travis, Ernesto, Adam King rgw - Casey reapprove on Build 2 fs - Venky, approve on Build 2 orch - Adam King

[ceph-users] Re: Ceph OSD reported Slow operations

2023-11-08 Thread Zakhar Kirpichenko
Take hints from this: "544 pgs not deep-scrubbed in time". Your OSDs are unable to scrub their data in time, likely because they cannot cope with the client + scrubbing I/O. I.e. there's too much data on too few and too slow spindles. You can play with osd_deep_scrub_interval and increase the
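A sketch of the tuning Zakhar mentions; the values are illustrative only (the stock osd_deep_scrub_interval is one week):

ceph config set osd osd_deep_scrub_interval 1209600  # allow two weeks between deep scrubs
ceph config set osd osd_scrub_load_threshold 3.0     # let scrubs run under higher load
ceph osd unset nodeep-scrub                          # only once the flag is no longer needed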

[ceph-users] Re: list cephfs dirfrags

2023-11-08 Thread Ben
Hi, this directory is very busy: ceph tell mds.* dirfrag ls /volumes/csi/csi-vol-3a69d51a-f3cd-11ed-b738-964ec15fdba7/ while running it, all MDS output: [ { "value": 0, "bits": 0, "str": "0/0" } ] Thank you, Ben Patrick Donnelly wrote on Wed, Nov 8, 2023 at 21:58: > > On

[ceph-users] Help needed with Grafana password

2023-11-08 Thread Sake Ceph
I configured a password for Grafana because I want to use Loki. I used the spec parameter initial_admin_password and this works fine for a staging environment, where I never tried to use Grafana with a password for Loki. Using the username admin with the configured password gives a

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
Hello Casey, Thank you so much, the steps you provided worked. I'll follow up on the tracker to provide further information. Regards, Jayanth On Wed, Nov 8, 2023 at 8:41 PM Jayanth Reddy wrote: > Hello Casey, > > Thank you so much for the response. I'm applying these right now and let > you

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Siddhit Renake
Hello Casey, Our production buckets are impacted by this issue. We have downgraded the Ceph version from 17.2.7 to 17.2.6 but we are still getting the "bucket policy parsing" error while accessing the buckets. rgw_policy_reject_invalid_principals is not present in 17.2.6 as a configurable parameter.

[ceph-users] Radosgw object stat olh object attrs what does it mean.

2023-11-08 Thread Selcuk Gultekin
I'd like to understand the values under the 'attrs' field of an object in the following JSON data structure and evaluate the health of these objects. I have a sample JSON output; can you comment on the object state here? { "name": "$image.name", "size": 0,

[ceph-users] ceph storage pool error

2023-11-08 Thread necoe0147
3 Ceph clusters are running and the 3rd cluster gave an error; it is currently offline. I want to get all the remaining data in the 2 clusters. Instead of fixing ceph, I just want to save the data. How can I access this data and connect to the pool? Can you help me? Clusters 1 and 2 are working. I

[ceph-users] Memory footprint of increased PG number

2023-11-08 Thread Nicola Mori
Dear Ceph users, I'm wondering how much an increase in the PG count would impact the memory usage of the OSD daemons. In my cluster I currently have 512 PGs and I would like to increase this to 1024 to mitigate some disk occupancy issues, but having machines with a low amount of memory (down to 24
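A sketch of the two knobs involved, with placeholder values; osd_memory_target (4 GiB by default) is the main lever on low-memory hosts:

ceph osd pool set <pool> pg_num 1024                # raise the PG count of the pool
ceph config set osd osd_memory_target 2147483648    # cap each OSD at roughly 2 GiB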

[ceph-users] Question about PG mgr/balancer/crush_compat_metrics

2023-11-08 Thread bryansoong21
Hello, We are using a Ceph Pacific (16.2.10) cluster with the balancer module enabled, but the usage of some OSDs keeps growing and has reached mon_osd_nearfull_ratio, for which we use the default of 85%, and we think the balancer module should do some balancing work. So I checked our balancer

[ceph-users] Re: Ceph OSD reported Slow operations

2023-11-08 Thread prabhav
Hi Eugen, please find the details below: root@meghdootctr1:/var/log/ceph# ceph -s cluster: id: c59da971-57d1-43bd-b2b7-865d392412a5 health: HEALTH_WARN nodeep-scrub flag(s) set 544 pgs not deep-scrubbed in time services: mon: 3 daemons, quorum meghdootctr1,meghdootctr2,meghdootctr3 (age 5d) mgr:

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
Hello Casey, Thank you so much for the response. I'm applying these right now and will let you know the results. Regards, Jayanth On Wed, Nov 8, 2023 at 8:15 PM Casey Bodley wrote: > i've opened https://tracker.ceph.com/issues/63485 to allow > admin/system users to override policy parsing errors

[ceph-users] Re: Permanent KeyError: 'TYPE' ->17.2.7: return self.blkid_api['TYPE'] == 'part'

2023-11-08 Thread Sascha Lucas
Hi, On Tue, 7 Nov 2023, Harry G Coin wrote: These repeat for every host, only after upgrading from the previous Quincy release to 17.2.7. As a result, the cluster is always in a warning state and never indicates healthy. I'm hitting this error, too. "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py",

[ceph-users] Re: Seagate Exos power settings - any experiences at your sites?

2023-11-08 Thread Danny Webb
We've had some issues with Exos drives dropping out of our sas controllers (LSI SAS3008 PCI-Express Fusion-MPT SAS-3) intermittently which we believe is due to this. Upgrading the drive firmware largely solved it for us so we never ended up messing about with the power settings.

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Casey Bodley
i've opened https://tracker.ceph.com/issues/63485 to allow admin/system users to override policy parsing errors like this. i'm not sure yet where this parsing regression was introduced. in reef, https://github.com/ceph/ceph/pull/49395 added better error messages here, along with a

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Travis Nielsen
Yuri, we need to add this issue as a blocker for 18.2.1. We discovered this issue after the release of 17.2.7, and don't want to hit the same blocker in 18.2.1 where some types of OSDs are failing to be created in new clusters, or failing to start in upgraded clusters.

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
Hello Wesley, Thank you for the response. I tried the same but ended up with 403. Regards, Jayanth On Wed, Nov 8, 2023 at 7:34 PM Wesley Dillingham wrote: > Jaynath: > > Just to be clear with the "--admin" user's key's you have attempted to > delete the bucket policy using the following

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Wesley Dillingham
Jaynath: Just to be clear with the "--admin" user's key's you have attempted to delete the bucket policy using the following method: https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-bucket-policy.html This is what worked for me (on a 16.2.14 cluster). I didn't attempt to interact
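The method Wesley refers to, sketched with the AWS CLI; the endpoint, bucket name and credentials below are placeholders:

export AWS_ACCESS_KEY_ID=<admin-access-key>
export AWS_SECRET_ACCESS_KEY=<admin-secret-key>
aws --endpoint-url http://rgw.example.com:8080 \
    s3api delete-bucket-policy --bucket mybucket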

[ceph-users] Re: list cephfs dirfrags

2023-11-08 Thread Patrick Donnelly
On Mon, Nov 6, 2023 at 4:56 AM Ben wrote: > Hi, > I used this but all returns "directory inode not in cache" > ceph tell mds.* dirfrag ls path > > I would like to pin some subdirs to a rank after dynamic subtree > partitioning. Before that, I need to know where are they exactly > If the dirfrag
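For reference, a sketch of how a subdirectory can be pinned to an MDS rank once it is located; the paths and rank are examples, and the directory must be in the MDS cache for dirfrag ls to report it:

ceph tell 'mds.*' dirfrag ls /volumes/csi/some-subdir               # list its fragments
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/volumes/csi/some-subdir   # pin the subtree to rank 1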

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
Hello Casey, We're totally stuck at this point and none of the options seem to work. Please let us know if there is something in metadata or index to remove those applied bucket policies. We downgraded to v17.2.6 and encountering the same. Regards, Jayanth On Wed, Nov 8, 2023 at 7:14 AM Jayanth

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
So the next step is to place the pools on the right rule: ceph osd pool set db-pool crush_rule fc-r02-ssd On Wed, Nov 8, 2023 at 12:04, Denny Fuchs wrote: > hi, > > I forgot to write the command I used: > > = > ceph osd crush move fc-r02-ceph-osd-01 root=default > ceph osd
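To verify, a quick check that the pool picked up the rule and that data settles (pool and rule names as in the thread):

ceph osd pool get db-pool crush_rule    # should now report fc-r02-ssd
ceph -s                                 # watch backfill progress
ceph pg stat                            # expect all PGs active+clean when done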

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hi, I also overlooked this: == root@fc-r02-ceph-osd-01:[~]: ceph -s cluster: id: cfca8c93-f3be-4b86-b9cb-8da095ca2c26 health: HEALTH_OK services: mon: 5 daemons, quorum

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hi, I forgot to write the command I used: = ceph osd crush move fc-r02-ceph-osd-01 root=default ceph osd crush move fc-r02-ceph-osd-01 root=default ... = and I've also found this parameter: === root@fc-r02-ceph-osd-01:[~]: ceph osd crush tree --show-shadow ID CLASS

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
I probably answered too quickly. If the migration is complete and there are no incidents, are the PGs active+clean? Regards, *David CASIER* On Wed, Nov 8, 2023 at 11:50,

[ceph-users] Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread David C.
Hi, It seems to me that before removing buckets from the crushmap, it is necessary to do the migration first. I think you should restore the initial crushmap by adding the default root next to it and only then do the migration. There should be some backfill (probably a lot).

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Venky Shankar
Hi Yuri, On Wed, Nov 8, 2023 at 2:32 AM Yuri Weinstein wrote: > > 3 PRs above mentioned were merged and I am returning some tests: > https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd > > Still seeing approvals. > smoke - Laura, Radek, Prashant, Venky in progress > rados -

[ceph-users] 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

2023-11-08 Thread Denny Fuchs
Hello, we upgraded to Quincy and tried to remove an obsolete part: in the beginning of Ceph, there were no device classes, so we created rules to split them into hdd and ssd in one of our datacenters. https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
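For context, with device classes the legacy per-media roots can usually be replaced by one rule per class; a sketch under that assumption (the rule names follow the thread, the commands are illustrative):

ceph osd crush rule create-replicated fc-r02-ssd default host ssd
ceph osd crush rule create-replicated fc-r02-hdd default host hdd
ceph osd crush tree --show-shadow    # shows the per-class shadow hierarchy used by these rules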