[ceph-users] Introduce: Storage stability testing and DATA consistency verifying tools and system

2023-10-06 Thread 张友加
Dear All, I hope you are all well. I would like to introduce new tools I have developed, named "LBA tools", which include hd_write_verify & hd_write_verify_dump. github: https://github.com/zhangyoujia/hd_write_verify pdf: https://github.com/zhangyoujia/hd_write_verify/DISK stability

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-06 Thread Anthony D'Atri
> Currently, I have an OpenStack installation with a Ceph cluster consisting of 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second, independent Ceph cluster to provide faster disks for OpenStack VMs. Indeed, I know from experience that LFF spinners don't cut it

[ceph-users] Hardware recommendations for a Ceph cluster

2023-10-06 Thread Gustavo Fahnle
Hi, Currently, I have an OpenStack installation with a Ceph cluster consisting of 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second, independent Ceph cluster to provide faster disks for OpenStack VMs. The idea for this second cluster is to exclusively provide RBD

[ceph-users] Re: cannot repair a handful of damaged pg's

2023-10-06 Thread Simon Oosthoek
Hi Wesley, On 06/10/2023 17:48, Wesley Dillingham wrote: A repair is just a type of scrub and it is also limited by osd_max_scrubs which in pacific is 1. We've increased that to 4 (and temporarily to 8) since we have so many OSDs and are running behind on scrubbing. If another scrub is
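For reference, raising that limit is a runtime config change; a minimal sketch, assuming the cluster uses the central config store (the values are examples, not recommendations):

    # check the current limit (defaults to 1 in Pacific)
    ceph config get osd osd_max_scrubs
    # raise it for all OSDs
    ceph config set osd osd_max_scrubs 4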

[ceph-users] Re: cannot repair a handful of damaged pg's

2023-10-06 Thread Kai Stian Olstad
On 06.10.2023 17:48, Wesley Dillingham wrote: A repair is just a type of scrub and it is also limited by osd_max_scrubs, which in pacific is 1. If another scrub is occurring on any OSD in the PG it won't start. Do "ceph osd set noscrub" and "ceph osd set nodeep-scrub", wait for all scrubs to

[ceph-users] Re: cannot repair a handful of damaged pg's

2023-10-06 Thread Wesley Dillingham
A repair is just a type of scrub and it is also limited by osd_max_scrubs, which in pacific is 1. If another scrub is occurring on any OSD in the PG it won't start. Do "ceph osd set noscrub" and "ceph osd set nodeep-scrub", wait for all scrubs to stop (a few seconds probably), then issue the pg
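A sketch of that workflow as I read it (the PG ID is a placeholder; adjust to your cluster):

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # wait until "ceph -s" shows no scrubs running, then request the repair
    ceph pg repair <pgid>
    # watch progress, e.g. via "ceph -w" or "ceph pg <pgid> query"
    # once the repair has completed, re-enable scrubbing
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub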

[ceph-users] Re: cannot repair a handful of damaged pg's

2023-10-06 Thread Simon Oosthoek
On 06/10/2023 16:09, Simon Oosthoek wrote: Hi, we're still in HEALTH_ERR state with our cluster; this is the top of the output of `ceph health detail`: HEALTH_ERR 1/846829349 objects unfound (0.000%); 248 scrub errors; Possible data damage: 1 pg recovery_unfound, 2 pgs inconsistent; Degraded

[ceph-users] cannot repair a handful of damaged pg's

2023-10-06 Thread Simon Oosthoek
Hi, we're still in HEALTH_ERR state with our cluster; this is the top of the output of `ceph health detail`: HEALTH_ERR 1/846829349 objects unfound (0.000%); 248 scrub errors; Possible data damage: 1 pg recovery_unfound, 2 pgs inconsistent; Degraded data redundancy: 6/7118781559 objects
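For anyone triaging a similar state, the inconsistent PGs and unfound objects can usually be narrowed down with the standard tooling; a hedged sketch (pool name and PG ID are placeholders):

    # list PGs with scrub inconsistencies in a pool
    rados list-inconsistent-pg <pool>
    # show which objects/shards are inconsistent in a given PG
    rados list-inconsistent-obj <pgid> --format=json-pretty
    # list the unfound objects in the PG reported by "ceph health detail"
    ceph pg <pgid> list_unfound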

[ceph-users] Re: is the rbd mirror journal replayed on primary after a crash?

2023-10-06 Thread Scheurer François
Dear all, replying to my own question ;-) This document explains the rbd mirroring / journaling process in more detail: https://pad.ceph.com/p/I-rbd_mirroring Especially this part: on startup, replay journal from flush position. Store journal metadata in journal header, to be more general
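For inspecting the journal state described in that pad, the rbd CLI exposes the journal metadata and the registered clients' commit positions; a small sketch, with pool/image names as placeholders:

    # journal id and settings for an image
    rbd journal info --pool <pool> --image <image>
    # registered clients and their commit positions
    rbd journal status --pool <pool> --image <image>
    # overall mirroring state of the image
    rbd mirror image status <pool>/<image>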

[ceph-users] Re: Random issues with Reef

2023-10-06 Thread Eugen Block
Hi, either the cephadm version installed on the host should be updated as well so it matches the cluster version, or you can use the one that the orchestrator uses, which stores its different versions in this path (@Mykola, thanks again for pointing that out); the latest matches the

[ceph-users] Received signal: Hangup from killall

2023-10-06 Thread Rok Jaklič
Hi, yesterday we changed RGW from civetweb to beast, and at 04:02 RGW stopped working; we had to restart it in the morning. In one RGW log for the previous day we can see: 2023-10-06T04:02:01.105+0200 7fb71d45d700 -1 received signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd
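That Hangup is the SIGHUP the packaged ceph logrotate configuration sends so daemons reopen their log files; one way to confirm that is what fired around 04:02 (the paths below are the usual package defaults and may differ per distribution or for cephadm/container deployments):

    # the postrotate hook sends SIGHUP (-1) via killall/pkill to the ceph daemons
    grep -A5 postrotate /etc/logrotate.d/ceph*
    # see when logrotate last handled the rgw logs
    grep -i rgw /var/lib/logrotate/status 2>/dev/null || grep -i rgw /var/lib/logrotate/logrotate.status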

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Ondřej Kukla
If you want to do it using the CLI in one command, then try this: "radosgw-admin user create --uid=test --display-name='Test User' --max-buckets=-1". Ondrej > On 6. 10. 2023, at 9:07, Matthias Ferdinand wrote: > On Fri, Oct 06, 2023 at 08:55:42AM +0200, Ondřej Kukla wrote: >> Hello Matthias, >>

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Matthias Ferdinand
On Fri, Oct 06, 2023 at 08:55:42AM +0200, Ondřej Kukla wrote: > Hello Matthias, > In our setup we have a set of users that are only used to read from certain buckets (they have s3:GetObject set in the bucket policy). > When we create those read users using the Admin Ops API we add the >

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Ondřej Kukla
Hello Matthias, In our setup we have a set of users that are only used to read from certain buckets (they have s3:GetObject set in the bucket policy). When we create those read users using the Admin Ops API, we add the max-buckets=-1 parameter, which disables bucket creation.
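The same restriction can also be applied to an existing user from the CLI; a minimal sketch (the uid is a placeholder):

    # disable bucket creation for an existing user
    radosgw-admin user modify --uid=<uid> --max-buckets=-1
    # verify: the output should show "max_buckets": -1
    radosgw-admin user info --uid=<uid>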

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Matthias Ferdinand
On Thu, Oct 05, 2023 at 09:22:29AM +0200, Robert Hish wrote: > Unless I'm misunderstanding your situation, you could also tag your placement targets. You then tag users with the corresponding tag, enabling them to create new buckets at that placement target. If a user is not tagged with the
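For the placement-tag approach, my understanding is that the tag is set on the zonegroup placement target and must also appear in the user's placement_tags list; one way to edit the user side is via the metadata interface (the field name and tag value here are assumptions based on the user metadata JSON, so verify against your release):

    # dump the user's metadata, add the tag to "placement_tags", then write it back
    radosgw-admin metadata get user:<uid> > user.json
    #   e.g. edit user.json so it contains:  "placement_tags": ["allowed-placement"]
    radosgw-admin metadata put user:<uid> < user.json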

[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)

2023-10-06 Thread Chris Dunlop
On Fri, Oct 06, 2023 at 02:55:22PM +1100, Chris Dunlop wrote: Hi, tl;dr: why are my OSDs still spilling? I've recently upgraded to 16.2.14 from 16.2.9 and started receiving bluefs spillover warnings (due to the "fix spillover alert" per the 16.2.14 release notes). E.g. from 'ceph health
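For context, spillover can be inspected per OSD, and a manual compaction is sometimes enough to pull data back off the slow device before resorting to resizing or migrating the DB volume; a hedged sketch (the OSD id is a placeholder):

    # bluefs counters: a non-zero slow_used_bytes means data has spilled to the slow device
    ceph daemon osd.<id> perf dump bluefs
    # trigger a manual RocksDB compaction on that OSD
    ceph tell osd.<id> compact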