Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving bluefs
spillover warnings (due to the "fix spillover alert" per the 16.2.14
release notes). E.g. from 'ceph health detail', the warning on one of
these (there are a few):
osd.76
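One way to quantify the spillover is the OSD's bluefs perf counters
(osd.76 as the example OSD here; compaction is a possible mitigation,
not a guaranteed fix):
# ceph daemon osd.76 perf dump bluefs
# ceph tell osd.76 compact
A non-zero slow_used_bytes in the perf dump confirms data on the slow
device; the compact command triggers a RocksDB compaction that can
sometimes move metadata back onto the DB device.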
Hi
I have been using Ceph for many years now, and recently upgraded to Reef.
Seems I made the jump too quickly, as I have been hitting a few issues. I can't
find any mention of them in the bug reports. I thought I would share them here
in case it is something to do with my setup.
On V18.2.0
Thanks Tobias, I see that https://github.com/ceph/ceph/pull/53414 had
a ton of test failures that don't look related. I'm working with Yuri
to reschedule them.
On Thu, Oct 5, 2023 at 2:05 AM Tobias Urdin wrote:
>
> Hello Yuri,
>
> On the RGW side I would very much like to get this [1] patch in
I usually set it to warn, so I don't forget to check from time to time :)
On Thu, Oct 5, 2023 at 12:24 PM Eugen Block wrote:
> Hi,
>
> I strongly agree with Joachim, I usually disable the autoscaler in
> production environments. But the devs would probably appreciate bug
> reports to
Hi,
I strongly agree with Joachim, I usually disable the autoscaler in
production environments. But the devs would probably appreciate bug
reports to improve it.
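For reference, the autoscaler can be tuned per pool or cluster-wide;
<pool> is a placeholder:
# ceph osd pool set <pool> pg_autoscale_mode warn
# ceph config set global osd_pool_default_pg_autoscale_mode warn
"off" disables it entirely, while "warn" keeps the recommendations
visible without acting on them.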
Quoting Boris Behrens:
Hi,
I've just upgraded our object storages to the latest Pacific version
(16.2.14) and the
Maybe the mismatching OSD versions had an impact on the unclean tier
removal, but this is just a guess. I couldn't reproduce it in a
Pacific test cluster; the removal worked fine without leaving behind
empty PGs. But I had only a few RBD images in that pool, so it's not
really
Do your current CRUSH rules for your pools still apply to the new OSD
map with those 4 nodes? If you have e.g. EC 4+2 in an 8-node cluster
and are now down to 4 nodes, you are below your min_size; please check
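A quick way to check (pool and profile names are placeholders):
# ceph osd pool get <pool> size
# ceph osd pool get <pool> min_size
# ceph osd erasure-code-profile get <profile>
With EC 4+2 and the default host failure domain, each PG needs chunks
on k+m = 6 distinct hosts, which 4 nodes cannot provide.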
On Thu, Sep 28, 2023 at 9:24 PM, wrote:
>
> I have an 8-node cluster with old
Hello Eugen, Hello Joachim,
@Joachim: Interesting! And you got empty PGs, too? How did you solve the
problem?
@Eugen: This is one of our biggest clusters and we're in the process
of migrating from Nautilus to Octopus and from CentOS to Ubuntu.
The cache tier pool's OSDs were still
I know, I know... but since we are already using it (for years) I have
to check how to remove it safely, maybe as long as we're on Pacific. ;-)
Quoting Joachim Kraftmayer - ceph ambassador:
@Eugen
We have seen the same problems 8 years ago. I can only recommend
never to use cache
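For a writeback tier, the documented removal sequence is roughly the
following (pool names are placeholders; flushing can take a while on
a busy pool):
# ceph osd tier cache-mode <cachepool> proxy
# rados -p <cachepool> cache-flush-evict-all
# ceph osd tier remove-overlay <basepool>
# ceph osd tier remove <basepool> <cachepool>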
@Eugen
We have seen the same problems 8 years ago. I can only recommend never
to use cache tiering in production.
At Cephalocon this was part of my talk, and as far as I remember,
cache tiering will also disappear from Ceph soon.
Cache tiering has been deprecated in the Reef release as it has
I don't see sensible output for the commands:
# ls -ld <mountpoint>/volumes/subvolgrp/test
# ls -l <mountpoint>/volumes/subvolgrp/test/.snap
please remember to replace <mountpoint> with the path to the mount
point on your system
I'm presuming <mountpoint> is the path where you have mounted the
root dir of your cephfs filesystem
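As a concrete sketch with the kernel client (assuming the mon
addresses come from /etc/ceph/ceph.conf, and /mnt/cephfs is just an
example mount point):
# mount -t ceph :/ /mnt/cephfs -o name=admin
# ls -ld /mnt/cephfs/volumes/subvolgrp/test
# ls -l /mnt/cephfs/volumes/subvolgrp/test/.snap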
On Thu,
Which Ceph version is this? I'm trying to understand how removing a
pool leaves the PGs of that pool behind... Do you have any logs or
something from when you removed the pool?
We'll have to deal with a cache tier in the foreseeable future, so
this is quite relevant for us as well. Maybe I'll
Unless I'm misunderstanding your situation, you could also tag your
placement targets. You then tag users with the corresponding tag,
enabling them to create new buckets at that placement target. If a
user is not tagged with the corresponding tag, they cannot create new
buckets at that placement
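A sketch of that flow (zonegroup, placement id, user id, and tag are
placeholders; the user's placement_tags list is edited through the
metadata interface):
# radosgw-admin zonegroup placement modify --rgw-zonegroup <zonegroup> --placement-id <placement-id> --tags <tag>
# radosgw-admin period update --commit
# radosgw-admin metadata get user:<uid> > user.json
(add "<tag>" to the "placement_tags" list in user.json)
# radosgw-admin metadata put user:<uid> < user.json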
This is really odd.
Please run the following commands and send over their outputs:
# ceph status
# ceph fs status
# ceph report
# ls -ld <mountpoint>/volumes/subvolgrp/test
# ls -l <mountpoint>/volumes/subvolgrp/test/.snap
On Thu, Oct 5, 2023 at 11:17 AM Kushagr Gupta wrote:
>
> Hi Milind,Team
>
> Thank you for your
Hi,
were you able to recover your cluster, or is this still an issue?
What exactly do you mean by this?
> It generally fails to recover in the middle and starts from scratch.
Are OSDs "flapping", or are there other issues as well? Please provide
more details about what exactly happens.
There are a
Hi Dave,
The request was built correctly.
Actually... RGW responded to that request (the embedded S3 Select engine).
The engine's error message points to a syntax error.
It's quite an old version... we made a lot of changes and implemented
more related features since.
If I'm not mistaken, the query is missing a
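For comparison, a minimal S3 Select request against RGW via the AWS
CLI looks like this (endpoint, bucket, and key are placeholders):
# aws --endpoint-url http://<rgw-host>:8080 s3api select-object-content \
    --bucket <bucket> --key <object.csv> \
    --expression "select * from s3object" --expression-type SQL \
    --input-serialization '{"CSV": {}}' --output-serialization '{"CSV": {}}' /dev/stdout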