Hey Alexandre,
Sorry for the late reply here. I believe Anthony can give you a response on why
we chose a matrix rating scale type instead of rank.
—
— Mike Perez (thingee)
On Wed, Nov 25, 2020 at 8:27 AM Alexandre Marangone
wrote:
> Hi Mike,
>
> For some of the multiple answer questions like
May I ask if enabling pool compression helps with future space amplification?
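(For reference, pool compression can be toggled at runtime; a minimal sketch, with the pool name and algorithm as placeholders. Note that only data written after enabling gets compressed; existing objects are not rewritten.)
$ ceph osd pool set <pool> compression_mode aggressive
$ ceph osd pool set <pool> compression_algorithm snappy
$ ceph osd pool get <pool> compression_mode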
> George Yil wrote (27 Jan 2021 18:57):
>
> Thank you. This helps a lot.
>
>> Josh Baergen wrote (27 Jan 2021 17:08):
>>
>> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>>> May I ask if it can be dynamically changed and any disadvantages should be expected?
On 1/27/21 3:51 PM, Konstantin Shalygin wrote:
Martin, also before restart - issue cache drop command to active mds
Don't do this if you have a large cache. It will make your MDS
unresponsive, and it will be replaced by a standby if available. There is a PR to
fix this: https://github.com/ceph/ceph/pull/
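(For reference, the cache drop being discussed is issued like this; a minimal sketch, assuming a Nautilus-or-later MDS and with the MDS name as a placeholder:)
$ ceph daemon mds.<name> cache status   # check how large the cache currently is
$ ceph tell mds.<name> cache drop       # ask the active MDS to trim its cache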
Thanks Dan and Anthony, your suggestions have pointed me in the right
direction. Looking back through the logs at when the first error was
detected I found this:
ceph-osd: 2021-01-24 01:04:55.905 7f0c17821700 -1 log_channel(cluster)
log [ERR] : 17.7ffs0 scrub : stat mismatch, got 112867/112868 obje
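(For a stat mismatch like this, the usual follow-up, assuming the PG id is 17.7ff as in the log line and that the inconsistency has been reviewed first, is:)
$ rados list-inconsistent-obj 17.7ff --format=json-pretty
$ ceph pg repair 17.7ff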
On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
> May I ask if it can be dynamically changed and any disadvantages should be
> expected?
Unless there's some magic I'm unaware of, there is no way to
dynamically change this. Each OSD must be recreated with the new
min_alloc_size setting. In pro
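(A minimal sketch of what recreating a single OSD could look like; the OSD id and device are placeholders, the option name assumes BlueStore HDDs, and your deployment tooling may differ:)
$ ceph config set osd bluestore_min_alloc_size_hdd 4096   # only applies to newly created OSDs
$ ceph osd out 12                                         # wait for data to migrate off
$ systemctl stop ceph-osd@12
$ ceph osd destroy 12 --yes-i-really-mean-it
$ ceph-volume lvm zap --destroy /dev/sdX
$ ceph-volume lvm create --osd-id 12 --data /dev/sdX      # comes back with the new min_alloc_size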
Anyone running octopus (v15)? Can you please share your experience of
radosgw-admin performance?
A simple 'radosgw-admin user list' took 11 minutes; with a v13.2.4
radosgw-admin it finishes in a few seconds.
This sounds like a performance regression to me. I've already filed a bug
rep
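(If anyone wants to compare releases, a quick way to reproduce the timing and capture where the time goes; the debug levels here are only a suggestion:)
$ time radosgw-admin user list
$ radosgw-admin user list --debug-rgw=20 --debug-ms=1 2> rgw-user-list.log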
Nope!
On 27/01/2021 at 17:40, Anthony D'Atri wrote:
Do you have any override reweights set to values less than 1.0?
The REWEIGHT column when you run `ceph osd df`
On Jan 27, 2021, at 8:15 AM, Francois Legrand wrote:
Hi all,
I have a cluster with 116 disks (24 new disks of 16TB added in December
and the rest of 8TB) running nautilus 14.2.16.
On Wed, Jan 27, 2021 at 5:34 PM Schoonjans, Tom (RFI,RAL,-) <
tom.schoonj...@rfi.ac.uk> wrote:
> Looks like there’s already a ticket open for AMQP SSL support:
> https://tracker.ceph.com/issues/42902 (you opened it ;-))
>
> I will give it a try myself if I have some time, but don’t hold your breath
>
Hi all,
I have a cluster with 116 disks (24 new disks of 16TB added in December
and the rest of 8TB) running nautilus 14.2.16.
I moved (8 months ago) from crush_compat to upmap balancing.
But the cluster does not seem well balanced, with the number of pgs on the 8TB
disks varying from 26 to 52! And a
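(For anyone following along, a minimal sketch of how to inspect the distribution and the balancer state; the PGS and REWEIGHT columns are the ones of interest:)
$ ceph osd df tree        # per-OSD utilisation, PG count and REWEIGHT
$ ceph balancer status    # confirms the mode (upmap) and whether it is active
$ ceph balancer eval      # score for the current distribution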
Thank you. This helps a lot.
> Josh Baergen wrote (27 Jan 2021 17:08):
>
> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>> May I ask if it can be dynamically changed and any disadvantages should be
>> expected?
>
> Unless there's some magic I'm unaware of, there is no way to
> dynamically change this. Each OSD must be recreated with the new
> min_alloc_size setting.
>
Usually the ceph.log prints the reason for the inconsistency when it
is first detected by scrubbing.
-- dan
On Wed, Jan 27, 2021 at 12:41 AM Richard Bade wrote:
>
> Hi Everyone,
> I also have seen this inconsistent with empty when you do
> list-inconsistent-obj
>
> $ sudo ceph health detail
> H
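(A minimal sketch of digging the original reason out of the cluster log and re-checking, with the PG id as a placeholder:)
$ zgrep <pgid> /var/log/ceph/ceph.log*   # find the scrub error as it was first reported
$ ceph pg deep-scrub <pgid>              # re-run the deep scrub before deciding on a repair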
Martin, also before restart - issue cache drop command to active mds
k
Sent from my iPhone
> On 27 Jan 2021, at 11:58, Dan van der Ster wrote:
>
> In our experience failovers are largely transparent if the mds has:
>
>mds session blacklist on timeout = false
>mds session blacklist on evict = false
Paste your `ceph versions` please
k
Sent from my iPhone
> On 27 Jan 2021, at 03:07, Richard Bade wrote:
>
> Ceph v14.2.13 by the way.
Hi Chris
Having also recently started exploring Ceph, I too happened upon this problem.
I found that terminating the command with ctrl-c seemed to stop the looping,
which btw also happens on all other mgr instances in the cluster.
Regards
Jens
-Original Message-
From: Chris Read
Se
On Wed, Jan 27, 2021 at 11:33 AM Schoonjans, Tom (RFI,RAL,-) <
tom.schoonj...@rfi.ac.uk> wrote:
> Hi Yuval,
>
>
> Switching to non-SSL connections to RabbitMQ allowed us to get things
> working, although currently it’s not very reliable.
>
can you please add more about that? what reliability issu
Doing some more testing.
I can demote the rbd image on the primary, promote on the secondary and the
image looks great. I can map it, mount it, and it looks just like it should.
However, the rbd snapshots are still unusable on the secondary even when
promoted. I went as far as taking a 2nd sn
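(For anyone trying to reproduce, a minimal sketch of the demote/promote sequence being described, with pool and image names as placeholders:)
$ rbd mirror image demote <pool>/<image>    # on the current primary site
$ rbd mirror image promote <pool>/<image>   # on the secondary site
$ rbd snap ls <pool>/<image>                # then exercise the snapshots on the promoted copy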
Hi,
In our experience failovers are largely transparent if the mds has:
mds session blacklist on timeout = false
mds session blacklist on evict = false
And clients have
client reconnect stale = true
Cheers, Dan
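(One way to apply these settings, assuming the centralized config database is used rather than ceph.conf; the option names take underscores there:)
$ ceph config set mds mds_session_blacklist_on_timeout false
$ ceph config set mds mds_session_blacklist_on_evict false
$ ceph config set client client_reconnect_stale true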
On Wed, Jan 27, 2021 at 9:09 AM Martin Hronek
wrote:
>
> Hello fellow
Hello fellow CEPH-users,
currently we are updating our Ceph (14.2.16) and making changes to some
config settings.
TLDR: is there a way to make a graceful MDS active node shutdown without
losing the caps, open files and client connections? Something like
handover active state, promote standby
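(There is no fully transparent handover, but building on Dan's settings above, the usual way to hand the active role to a standby is to fail the active rank; a sketch, assuming a single-rank filesystem named cephfs:)
$ ceph fs status cephfs    # confirm a standby daemon is available
$ ceph mds fail cephfs:0   # rank 0 is taken over by the standby; clients reconnect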