Dear All,
I hope you are all well. I would like to introduce new tools I have developed,
named "LBA tools" which including hd_write_verify & hd_write_verify_dump.
github: https://github.com/zhangyoujia/hd_write_verify
pdf: https://github.com/zhangyoujia/hd_write_verify/DISK stability
> Currently, I have an OpenStack installation with a Ceph cluster consisting of
> 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second,
> independent Ceph cluster to provide faster disks for OpenStack VMs.
Indeed, I know from experience that LFF spinners don't cut it
Hi,
Currently, I have an OpenStack installation with a Ceph cluster consisting of 4
servers for OSD, each with 16TB SATA HDDs. My intention is to add a second,
independent Ceph cluster to provide faster disks for OpenStack VMs.
The idea for this second cluster is to exclusively provide RBD
Hi Wesley,
On 06/10/2023 17:48, Wesley Dillingham wrote:
A repair is just a type of scrub and it is also limited by
osd_max_scrubs which in pacific is 1.
We've increased that to 4 (and temporarily to 8) since we have so many
OSDs and are running behind on scrubbing.
If another scrub is
On 06.10.2023 17:48, Wesley Dillingham wrote:
A repair is just a type of scrub and it is also limited by
osd_max_scrubs
which in pacific is 1.
If another scrub is occurring on any OSD in the PG, it won't start.
do "ceph osd set noscrub" and "ceph osd set nodeep-scrub" wait for all
scrubs to
A repair is just a type of scrub and it is also limited by osd_max_scrubs
which in pacific is 1.
If another scrub is occurring on any OSD in the PG, it won't start.
do "ceph osd set noscrub" and "ceph osd set nodeep-scrub" wait for all
scrubs to stop (a few seconds probably)
Then issue the pg repair.
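For reference, a sketch of that sequence (the PG id is a placeholder, and the
osd_max_scrubs value is only an example):

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # wait for the running scrubs to wind down, then repair the PG
  ceph pg repair <pgid>
  # optionally allow more parallel scrubs while catching up
  ceph config set osd osd_max_scrubs 4
  # re-enable scrubbing once the repair has completed
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub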
On 06/10/2023 16:09, Simon Oosthoek wrote:
Hi
we're still in HEALTH_ERR state with our cluster, this is the top of the
output of `ceph health detail`
HEALTH_ERR 1/846829349 objects unfound (0.000%); 248 scrub errors;
Possible data damage: 1 pg recovery_unfound, 2 pgs inconsistent;
Degraded
Hi
we're still in HEALTH_ERR state with our cluster, this is the top of the
output of `ceph health detail`
HEALTH_ERR 1/846829349 objects unfound (0.000%); 248 scrub errors;
Possible data damage: 1 pg recovery_unfound, 2 pgs inconsistent;
Degraded data redundancy: 6/7118781559 objects
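For reference, a sketch of the usual commands to narrow down which PGs and
objects are affected (pool name and PG id are placeholders):

  ceph health detail
  rados list-inconsistent-pg <pool>
  rados list-inconsistent-obj <pgid> --format=json-pretty
  ceph pg <pgid> list_unfound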
Dear all,
replying to my own question ;-)
this document explains the rbd mirroring / journaling process in more detail:
https://pad.ceph.com/p/I-rbd_mirroring
especially this part:
on startup, replay journal from flush position
Store journal metadata in journal header, to be more general
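A couple of rbd commands that expose that journal metadata on a live,
journaling-enabled image (pool and image names are placeholders):

  rbd journal info --pool <pool> --image <image>
  rbd journal status --pool <pool> --image <image>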
Hi,
either the cephadm version installed on the host should be updated so
that it matches the cluster version, or you can use the one that the
orchestrator uses, which stores its different versions in this path
(@Mykola, thanks again for pointing that out); the latest matches the
Hi,
yesterday we changed RGW from civetweb to beast and at 04:02 RGW stopped
working; we had to restart it in the morning.
In one RGW log from the previous day we can see:
2023-10-06T04:02:01.105+0200 7fb71d45d700 -1 received signal: Hangup from
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd
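For reference, that killall invocation matches the postrotate hook shipped in
the packaged logrotate config; assuming a default package install, where the
SIGHUP comes from can be checked with:

  grep -Rn killall /etc/logrotate.d/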
If you want to do it using the CLI in one command, then try: radosgw-admin user
create --uid=test --display-name="Test User" --max-buckets=-1
Ondrej
> On 6. 10. 2023, at 9:07, Matthias Ferdinand wrote:
>
> On Fri, Oct 06, 2023 at 08:55:42AM +0200, Ondřej Kukla wrote:
>> Hello Matthias,
>>
On Fri, Oct 06, 2023 at 08:55:42AM +0200, Ondřej Kukla wrote:
> Hello Matthias,
>
> In our setup we have a set of users that are only used to read from certain
> buckets (they have s3:GetObject set in the bucket policy).
>
> When we create those read users using the Admin Ops API we add the
>
Hello Matthias,
In our setup we have a set of users that are only used to read from certain
buckets (they have s3:GetObject set in the bucket policy).
When we create those read users using the Admin Ops API we add the
max-buckets=-1 parameter, which disables bucket creation.
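For reference, a sketch of the same call through the Admin Ops API with curl
(endpoint, /admin prefix and credentials are placeholders; assumes curl with
--aws-sigv4 support and an admin user with user-write caps):

  curl -s --aws-sigv4 "aws:amz:default:s3" \
       --user "ADMIN_ACCESS_KEY:ADMIN_SECRET_KEY" \
       -X PUT "https://rgw.example.com/admin/user?uid=test&display-name=Test%20User&max-buckets=-1"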
On Thu, Oct 05, 2023 at 09:22:29AM +0200, Robert Hish wrote:
> Unless I'm misunderstanding your situation, you could also tag your
> placement targets. You then tag users with the corresponding tag, enabling
> them to create new buckets at that placement target. If a user is not tagged
> with the
On Fri, Oct 06, 2023 at 02:55:22PM +1100, Chris Dunlop wrote:
Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving
bluefs spillover warnings (due to the "fix spillover alert" per the
16.2.14 release notes). E.g. from 'ceph health
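For reference, a sketch of how to see the per-OSD spillover numbers (run on the
OSD host; the osd id is a placeholder):

  ceph health detail | grep -i spillover
  ceph daemon osd.<id> perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'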