Also helpful is the output of:
ceph pg {poolnum}.{pg-id} query
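For example, querying a hypothetical PG 1f in pool 3:
ceph pg 3.1f query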
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 16.03.24 at 13:52, Eugen Block wrote:
Yeah, the whole story would help to
Hi,
another short note regarding the documentation: the paths are designed
for a package installation.
The paths for a container installation look a bit different, e.g.:
/var/lib/ceph/<fsid>/osd.y/
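For comparison, assuming a hypothetical osd.3, where <fsid> is the cluster's UUID:
/var/lib/ceph/osd/ceph-3/      (package installation)
/var/lib/ceph/<fsid>/osd.3/    (container installation, cephadm)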
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
Hi,
I know similar requirements and the motivation and need behind them.
We have chosen a clear approach to this, one that does not make the
whole setup too complicated to operate.
1.) Everything that doesn't require strong consistency we do with other
tools, especially when it comes to
@Eugen
We saw the same problems 8 years ago. I can only recommend never
using cache tiering in production.
This was part of my talk at Cephalocon, and as far as I remember, cache
tiering will also disappear from Ceph soon.
Cache tiering has been deprecated in the Reef release as it has
Hi,
we have often seen strange behavior and also interesting PG targets from
the pg_autoscaler over the last years.
That's why we disable it globally.
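A minimal sketch of disabling it, assuming a release that supports these options:
ceph config set global osd_pool_default_pg_autoscale_mode off
ceph osd pool set <pool> pg_autoscale_mode off
The first command covers newly created pools, the second an existing pool.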
The commands:
ceph osd reweight-by-utilization
ceph osd test-reweight-by-utilization
are from the time before the upmap balancer was introduced and
Another possibility is Ceph MON discovery via DNS:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns
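A minimal sketch following the linked docs, with hypothetical zone example.com and MON hosts mon1/mon2; the default SRV service name is ceph-mon:
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
Clients then find the MONs via mon_dns_srv_name (default: ceph-mon) instead of a static mon_host list.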
Regards, Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
Hi,
a short note: if you replace the disks with larger disks, the weight of the
OSD and the host will change, and this will force data migration.
Perhaps read a bit more about the upmap balancer if you want to
avoid data migration during the upgrade phase.
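A minimal sketch of enabling it, assuming all clients are Luminous or newer:
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on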
Regards, Joachim
You can also test directly with the OSD bench command whether the WAL is
on the flash device:
https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/
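A minimal sketch, assuming a hypothetical osd.0: small writes are deferred through the WAL, so the reported IOPS should reflect the flash device rather than the data disk:
ceph tell osd.0 bench 409600 4096
(409600 bytes total, written in 4096-byte blocks)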
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
Hello
we have been following Rook since 2018 and have had our experiences both
on bare metal and in the hyperscalers.
In the same way, we have been following cephadm from the beginning.
Meanwhile, we have been using both in production for years, and the
decision which orchestrator to use
Hi Rok,
try this:
rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
(int, advanced)
Default: 1000
Can update at runtime: true
Services: [rgw]
config set
WHO: client.<id> or client.rgw
KEY: rgw_delete_multi_obj_max_num
VALUE: 1
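For example (the value 5000 is purely illustrative):
ceph config set client.rgw rgw_delete_multi_obj_max_num 5000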
Jens Galsgaard:
https://www.youtube.com/playlist?list=PLrBUGiINAakPd9nuoorqeOuS9P9MTWos3
-----Original Message-----
From: Marc
Sent: Monday, May 15, 2023 4:42 PM
To: Joachim Kraftmayer - ceph ambassador; Frank Schilder; Tino Todino
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: CEPH Ver
I don't know if it helps, but we have also experienced something similar
with OSD images. We changed the image tag from the version to the sha
digest, and it did not happen again.
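A minimal sketch of pinning the image by digest, assuming a cephadm deployment; the digest is a placeholder:
ceph config set global container_image quay.io/ceph/ceph@sha256:<digest>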
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
Hi,
I know the problems that Frank has raised. However, it should also be
mentioned that many critical bugs have been fixed in the major versions.
We are working on the fixes ourselves.
We and others have written a lot of tools for ourselves in the last 10
years to improve migration/update
"bucket does not exist" or "permission denied".
We had received similar error messages with another client program: the
default region did not match the region of the cluster.
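For example with the AWS CLI, assuming a hypothetical endpoint and the default zonegroup name "default":
aws --endpoint-url https://rgw.example.com --region default s3 ls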
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
Hello Thomas,
I would strongly recommend that you read the messages on the mailing list
regarding Ceph versions 16.2.11, 16.2.12, and 16.2.13.
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member