Dear Cepher,
I have a requirement to use CephFS as a tiered file system, i.e. the data will
first be stored on an all-flash pool (using SSD OSDs) and then automatically
moved to an EC-coded pool (using HDD OSDs) according to a threshold on file
creation time (or access time). The reason for
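One building block for this kind of setup is CephFS file layouts, which can steer new files under a directory to a chosen data pool. A minimal sketch (the pool name, file system name, and mount path are illustrative assumptions; note that layouts only affect where new files land, so moving existing files between pools still requires rewriting them):

```shell
# Add the EC pool as an additional data pool of the file system
# (illustrative names: file system "cephfs", pool "cephfs-hdd-ec")
ceph fs add_data_pool cephfs cephfs-hdd-ec

# Direct all new files created under this directory to the EC pool
setfattr -n ceph.dir.layout.pool -v cephfs-hdd-ec /mnt/cephfs/archive
```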
Hi !
We have upgraded our Ceph cluster to version 14.2.20 now.
But we cannot upgrade all clients at the moment, so we would like to
stick with insecure global_id reclaim for a while.
So we have set:
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
and
ceph config
IIRC 'ceph health mute' is new in Octopus (15.2.x). But disabling the
mon_warn_on_insecure_global_id_reclaim_allowed setting should be
sufficient to keep the cluster quiet...
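Putting the pieces from this thread together, a sketch of the relevant commands (option names as used in the thread; verify against your release before relying on them):

```shell
# Nautilus (14.2.x): keep allowing old clients that use insecure
# global_id reclaim, and silence the two related health warnings
ceph config set mon auth_allow_insecure_global_id_reclaim true
ceph config set mon mon_warn_on_insecure_global_id_reclaim false
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false

# Octopus (15.2.x) and later additionally offer a time-limited mute:
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w
```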
On Mon, Jul 19, 2021 at 10:53 AM Siegfried Höllrigl
wrote:
>
> Hi !
>
> We have upgraded our Ceph Cluster to version
I tried both auth_allow_insecure_global_id_reclaim= false and
auth_allow_insecure_global_id_reclaim=true, but get the same errors. I will
watch for an updated build.
-Original Message-
From: Ilya Dryomov
Sent: Monday, July 19, 2021 7:59 AM
To: Robert W. Eckert
Cc: ceph-users@ceph.io
Hello all,
We have a replicated cluster on Nautilus. Recently, we resharded a bucket
without stopping the gateways; as a consequence, the bucket on the
secondary zone now reports 0 KB usage, even though you can still see the
objects in the data pool.
I was able to reproduce the issue in a lab, so
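For anyone comparing the zones while debugging this, a sketch of commands for inspecting per-bucket accounting and sync state (BUCKET is a placeholder):

```shell
# On each zone, see what the bucket index accounts for
radosgw-admin bucket stats --bucket BUCKET

# Check the per-bucket sync state between the zones
radosgw-admin bucket sync status --bucket BUCKET
```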
Hi Dominic, All,
After going through the errors in detail and looking through "ceph
features", I have set *ceph osd set-require-min-compat-client luminous* and
cleared the warning. I have fixed the remaining warning too and the
cluster is healthy.
Thank you everyone for taking time to
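For anyone hitting the same warning, the steps described above boil down to roughly this (a sketch; check "ceph features" output carefully before raising the requirement, since older clients will be refused afterwards):

```shell
# Inspect which feature releases the connected clients report
ceph features

# Raise the minimum client release the cluster requires
ceph osd set-require-min-compat-client luminous
```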
I had recently set up a test cluster of Ceph Octopus on a particular set of
hybrid OSD nodes.
It ran at a particular rated IO level, judging by "fio".
Now in the last month or so, I got to deploy an evaluation cluster of Ceph
Pacific on the same hardware.
It is *drastically* slower, using the
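For context, a typical fio job for this kind of before/after comparison might look like the following (all parameters are assumptions, not the poster's actual job file):

```shell
# Illustrative random-read/write benchmark against a mounted file system
fio --name=randrw --directory=/mnt/cephfs/bench \
    --ioengine=libaio --direct=1 --rw=randrw --bs=4k \
    --iodepth=32 --numjobs=4 --size=1G \
    --runtime=60 --time_based --group_reporting
```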
Here's a recipe from when I had the same question:
"[ceph-users] Re: rgw index shard much larger than others - ceph-users -
lists.ceph.io"
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/MO7IHRGJ7TGPKT3GXCKMFLR674G3YGUX/
On Mon, 19 Jul 2021, 18:00 Boris Behrens wrote:
>
Hi Dan,
how do I find out whether a bucket has versioning enabled?
On Mon, 19 Jul 2021 at 17:00, Dan van der Ster wrote:
>
> Hi Boris,
>
> Does the bucket have object versioning enabled?
> We saw something like this once a while ago: `s3cmd ls` showed an
> entry for an object, but when we
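One way to answer the versioning question is through the S3 API itself; a sketch using the AWS CLI (the endpoint URL and BUCKET are placeholders):

```shell
# Returns "Status": "Enabled" for a versioned bucket, and an empty
# result if versioning has never been enabled
aws --endpoint-url http://rgw.example.com \
    s3api get-bucket-versioning --bucket BUCKET
```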
Hi Boris,
Does the bucket have object versioning enabled?
We saw something like this once a while ago: `s3cmd ls` showed an
entry for an object, but when we tried to get it we had 404.
We didn't find a good explanation in the end -- our user was able to
re-upload the object and it didn't recur so
On Tue, Jun 29, 2021 at 4:03 PM Lucian Petrut
wrote:
>
> Hi,
>
> It’s a compatibility issue, we’ll have to update the Windows Pacific build.
Hi Lucian,
Did you get a chance to update the build?
I assume that means the MSI installer at [1]? I see [2] but the MSI
bundle still seems to contain
On Thu, Jul 15, 2021 at 11:55 PM Robert W. Eckert wrote:
>
> I would like to directly mount cephfs from the windows client, and keep
> getting the error below.
>
>
> PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe -l x
> 2021-07-15T17:41:30.365Eastern Daylight Time 4 -1 monclient(hunting):
>
Does anyone have an idea how this could happen?
* The files are present in the output of "radosgw-admin bi list --bucket BUCKET"
* The files are missing in the output of "radosgw-admin bucket
radoslist --bucket BUCKET"
* I have strange shadow objects that don't seem to have a filename
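A quick way to check the first two points for a single object is to grep both listings (BUCKET and OBJECT_NAME are placeholders):

```shell
# Is the object known to the bucket index?
radosgw-admin bi list --bucket BUCKET | grep OBJECT_NAME

# Does it show up in the RADOS object listing RGW derives for the bucket?
radosgw-admin bucket radoslist --bucket BUCKET | grep OBJECT_NAME
```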
In this case the release notes for 14.2.22 already contained the hint
that bluefs_buffered_io was set to true again after it had been disabled
a few minor versions ago. So I guess the release notes are the best
place to find information about such changes.
You could also dump all config settings on a
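A sketch of how such a config dump can look (osd.0 is an example daemon id; the second command needs to run on the host with that daemon's admin socket):

```shell
# Show the effective configuration of one daemon
ceph config show osd.0

# Show only the values that differ from the built-in defaults
ceph daemon osd.0 config diff
```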
Regarding: "Probably there is no IO"
When I check "ceph -s", it says there is activity. Not much, but some
activity:
io:
client: 316 KiB/s rd, 10 KiB/s wr, 1 op/s rd, 1 op/s wr
And even with no activity, that wouldn't explain why cephfs-top complains
about not finding the cluster.
Re: using the options
Just digging: I have a ton of files in the radosgw-admin bucket radoslist
output that look like
ff7a8b0c-07e6-463a-861b-78f0adeba8ad.83821626.6927__shadow_.LRSp5qOg4cDn2ImWxeXtJlRvfLNZ-8R_1
ff7a8b0c-07e6-463a-861b-78f0adeba8ad.83821626.6927__shadow_.yscyiu0DpWRh_Agsnii3635ZNnrO16x_1
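To get a feel for how many such entries there are, the radoslist output can be saved and counted. A self-contained sketch (the sample file mirrors the two lines quoted above plus one hypothetical non-shadow entry for contrast; against a live cluster, the input would come from `radosgw-admin bucket radoslist --bucket BUCKET` instead of the here-doc):

```shell
# Build a sample radoslist dump and count the multipart shadow objects in it
cat > radoslist.txt <<'EOF'
ff7a8b0c-07e6-463a-861b-78f0adeba8ad.83821626.6927__shadow_.LRSp5qOg4cDn2ImWxeXtJlRvfLNZ-8R_1
ff7a8b0c-07e6-463a-861b-78f0adeba8ad.83821626.6927__shadow_.yscyiu0DpWRh_Agsnii3635ZNnrO16x_1
ff7a8b0c-07e6-463a-861b-78f0adeba8ad.83821626.6927_hypothetical-object
EOF
grep -c '__shadow_' radoslist.txt
```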
Hello Josh,
thank you very much for your answer. Well, we do not use encryption on
our OSDs, but the symptoms you found are quite similar to what I
observed after the upgrade to 14.2.22.
We also use the new default bluefs_buffered_io=true, which probably
causes the higher OSD latencies
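For reference, a sketch of checking and overriding the setting (overriding at the "osd" level affects all OSDs; measure before and after, since the best value is workload-dependent):

```shell
# Inspect the current value
ceph config get osd bluefs_buffered_io

# Revert to the pre-14.2.22 behaviour if it helps your workload
ceph config set osd bluefs_buffered_io false
```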
Is there any way I can unset pglog_hardlimit in the osdmap?
The release notes say this flag cannot be unset, but I don't understand
why: as far as I can tell, the only difference when the flag is on is
that PG logs are trimmed more aggressively, so I don't see what would be
harmed by unsetting it.
The
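For reference, a sketch of how to confirm whether the flag is currently set (this only inspects the osdmap; it does not answer whether unsetting is safe):

```shell
# pglog_hardlimit appears in the osdmap flags line when set
ceph osd dump | grep '^flags'
```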