[ceph-users] Re: PG damaged "failed_repair"

2024-03-11 Thread Eugen Block
Hi, your ceph version seems to be 17.2.4, not 17.2.6 (which is the locally installed ceph version on the system where you ran the command). Could you add the 'ceph versions' output as well? How is the load on the systems when the recovery starts? The OSDs crash after around 20 minutes, not
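A minimal sketch of the commands that would gather the requested information (the crash id is a placeholder):

    # report the version every running daemon actually uses, cluster-wide
    ceph versions
    # overall health and recovery activity while the problem reproduces
    ceph -s
    # list and inspect recent daemon crashes (OSDs dying ~20 minutes into recovery)
    ceph crash ls
    ceph crash info <crash-id>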

[ceph-users] Dashboard building issue "RuntimeError: memory access out of bounds"?

2024-03-11 Thread 张东川
Hi there, I was building ceph with the tag "v19.0.0" on a Milkv Pioneer board (RISCV arch, OS is fedora-riscv 6.1.55). I ran "./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_SPDK=ON", then went to the "build" folder and ran the "ninja" command. But it failed with the following dashboard err
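If only the dashboard frontend build is failing, one possible workaround (untested here; WITH_MGR_DASHBOARD_FRONTEND is the standard Ceph CMake switch, the other flags are unchanged from the original command) is to reconfigure without it:

    # remove the old build tree, reconfigure without the dashboard frontend, rebuild
    rm -rf build
    ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_SPDK=ON \
        -DWITH_MGR_DASHBOARD_FRONTEND=OFF
    cd build && ninja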

[ceph-users] Telemetry endpoint down?

2024-03-11 Thread Konstantin Shalygin
Hi, it seems the telemetry endpoint has been down for some days? We have connection errors from multiple places 1:ERROR Mar 10 00:46:10.653 [564383]: opensock: Could not establish a connection to telemetry.ceph.com:443 2:ERROR Mar 10 01:48:20.061 [564383]: opensock: Could not establish a connecti
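A quick sketch of checks that help tell a local network problem from an endpoint outage (telemetry.ceph.com is the default upstream endpoint):

    # is the endpoint reachable from this host at all?
    curl -vI https://telemetry.ceph.com/
    # what the mgr telemetry module reports, including the last upload attempt
    ceph telemetry status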

[ceph-users] Re: MANY_OBJECTS_PER_PG on 1 pool which is cephfs_metadata

2024-03-11 Thread Eugen Block
Hi, I assume you're still on a "low" pacific release? This was fixed by PR [1][2] and the warning is suppressed when the autoscaler is on; it was merged into Pacific 16.2.8 [3]. I can't answer why the autoscaler doesn't increase the pg_num, but yes, you can increase it yourself. The pool for ce
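A minimal sketch of raising pg_num by hand (the pool name comes from the subject, and 128 is only an example target; pick a power of two appropriate for the cluster):

    # see what the autoscaler thinks each pool should have
    ceph osd pool autoscale-status
    # raise the PG count manually; on recent releases pgp_num follows automatically
    ceph osd pool set cephfs_metadata pg_num 128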

[ceph-users] v18.2.2 Reef (hot-fix) released

2024-03-11 Thread Yuri Weinstein
We're happy to announce the 2nd hotfix release in the Reef series. We recommend that users update to this release. For detailed release notes with links & changelog please refer to the official blog entry at https://ceph.io/en/news/blog/2024/v18-2-2-reef-released/ Notable Changes --- * m
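For cephadm-managed clusters, the update is usually a single orchestrator command (the image path assumes the official Quay registry):

    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2
    # watch progress
    ceph orch upgrade status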

[ceph-users] Re: AMQPS support in Nautilus

2024-03-11 Thread Yuval Lifshitz
Hi Manuel, I looked into the Nautilus documentation [1] and could not find anything about amqps there. Yuval [1] https://docs.ceph.com/en/nautilus/radosgw/notifications/#create-a-topic On Mon, Mar 11, 2024 at 12:50 AM Manuel Negron wrote: > Hello, I've been trying to set up bucket notifications usi
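For reference, on releases that do support it, a topic with an AMQPS endpoint is created through RGW's SNS-compatible API; a rough sketch with the AWS CLI (host, port, credentials and exchange are placeholders, and Nautilus may simply not accept the amqps scheme):

    aws --endpoint-url http://<rgw-host>:8080 sns create-topic --name mytopic \
        --attributes '{"push-endpoint": "amqps://user:password@broker:5671", "amqp-exchange": "ex1", "amqp-ack-level": "broker"}'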

[ceph-users] Re: Telemetry endpoint down?

2024-03-11 Thread Gregory Farnum
We had a lab outage Thursday and it looks like this service wasn’t restarted after that occurred. It’s fixed now and we’ll look at how to prevent that in the future. -Greg On Mon, Mar 11, 2024 at 6:46 AM Konstantin Shalygin wrote: > Hi, seems the telemetry endpoint is down for some days? We have connection

[ceph-users] Re: Telemetry endpoint down?

2024-03-11 Thread Konstantin Shalygin
Hi Greg, seems it is up now; the last report uploaded successfully. Thanks, k Sent from my iPhone > On 11 Mar 2024, at 18:57, Gregory Farnum wrote: > > We had a lab outage Thursday and it looks like this service wasn’t > restarted after that occurred. Fixed now and we’ll look at how to prevent > that

[ceph-users] Re: General best practice for stripe unit and count if I want to change object size

2024-03-11 Thread Ilya Dryomov
On Sat, Mar 9, 2024 at 4:42 AM Nathan Morrison wrote: > > This was asked on Reddit and it was requested that I post it here: > > So in RBD, say I want to make an image that's got an object size of 1M > instead of the default 4M (if it will be a VM, say, and likely not have > too many big files in it, just OS
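A small sketch of creating an image with a non-default object size and an explicit striping layout (pool and image names, and the 1M/64K/16 values, are only examples):

    # 1 MiB objects, default striping (stripe unit == object size, stripe count == 1)
    rbd create mypool/vm01 --size 100G --object-size 1M
    # explicit "fancy" striping: 64 KiB stripe unit spread across 16 objects
    rbd create mypool/vm02 --size 100G --object-size 1M --stripe-unit 64K --stripe-count 16
    rbd info mypool/vm02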

[ceph-users] 18.2.2 dashboard really messed up.

2024-03-11 Thread Harry G Coin
Looking at ceph -s, all is well.  Looking at the dashboard, 85% of my capacity is 'warned', and 95% is 'in danger'.   There is no hint given as to the nature of the danger or reason for the warning.  Though apparently with merely 5% of my ceph world 'normal', the cluster reports 'ok'.  Which, y

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-11 Thread Joel Davidow
For OSDs that are added new, bfm_bytes_per_block is 4096. However, for OSDs that were added when the cluster was running Octopus, bfm_bytes_per_block remains 65535. Based on https://github.com/ceph/ceph/blob/1c349451176cc5b4ebfb24b22eaaa754e05cff6c/src/os/bluestore/BitmapFreelistManager.cc and the
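For reference, one way to read that value directly from a stopped OSD (the prefix and key names here are assumptions taken from BitmapFreelistManager.cc and should be verified; the path is a placeholder):

    # OSD must be stopped; reads the freelist metadata out of the bluestore KV store
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 get S bfm_bytes_per_block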

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-11 Thread Alexander E. Patrakov
Hello Joel, Please be aware that it is not recommended to keep a mix of OSDs created with different bluestore_min_alloc_size values within the same CRUSH device class. The consequence of such a mix is that the balancer will not work properly - instead of evening out the OSD space utilization, it w
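The usual way out is to redeploy the old OSDs one at a time so they are recreated with the current default; a sketch for cephadm-managed clusters (the OSD id is a placeholder, and each OSD should finish draining before the next is touched):

    # drain and remove the OSD, keeping its id reserved and zapping the device
    ceph orch osd rm 12 --replace --zap
    # cephadm recreates the OSD on the same device once the removal finishes
    ceph orch osd rm status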

[ceph-users] Elasticsearch sync module | Ceph Issue

2024-03-11 Thread Lokendra Rathour
Hi Team, We are working on Elasticsearch sync module integration with ceph. Ceph version: 18.2.5 (reef) Elasticsearch: 8.2.1 Problem statement: The syncing between the zones is not happening. Links followed to perform the integration: https://ceph.io/en/news/blog/2017/new-luminous-rgw-metadat
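For reference, the metadata-search setup in that blog post boils down to adding a zone with tier type elasticsearch; a rough sketch (zone name, endpoints and shard counts are placeholders, and compatibility with Elasticsearch 8.x is exactly what is in question here):

    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-zone \
        --endpoints=http://rgw-es-host:8080 --tier-type=elasticsearch
    radosgw-admin zone modify --rgw-zone=es-zone \
        --tier-config=endpoint=http://elastic-host:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit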