Hello!
I guess the defaults of 4 and 15 seconds (mds_beacon_interval and mds_beacon_grace) are too long for my configuration.
If I want to reduce mds_beacon_grace to 5 seconds, is 1 the right value for
mds_beacon_interval?
Thank you!
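Concretely, I am thinking of applying it with something like the following (untested; the 1 s / 5 s values are just my proposal above, not recommendations):
ceph config set mds mds_beacon_interval 1      # default: 4
ceph config set global mds_beacon_grace 5      # default: 15; also read by the mons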
Hi Stuart,
I would highly recommend including this fix [1], so that the mirroring
works as expected and uses the previous snapshot for syncing.
Having multiple mirror daemons also improves the speed.
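If you are deploying with cephadm, additional mirror daemons can be requested with something like the following (the count of 3 is just an example):
# run 3 cephfs-mirror daemons instead of the default single one
ceph orch apply cephfs-mirror --placement=3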
[1] https://github.com/ceph/ceph/pull/54405
- Jos Collin
On 13/11/23 21:31, Stuart Cornell
The dashboard changes are minimal and approved. And since the dashboard change
is related to the
monitoring stack (Prometheus...), which is something not covered in the
dashboard test suites, I don't think running them is necessary.
But maybe the cephadm suite has some monitoring-stack-related tests
> the speed of data transfer is varying a lot over time (200KB/s
> – 120MB/s). [...] The FS in question, has a lot of small files
> in it and I suspect this is the cause of the variability – ie,
> the transfer of many small files will be more impacted by
> greater site-site latency.
200KB/s on
Ack Travis.
Since it touches the dashboard, Nizam - please reply/approve.
I assume that rados/dashboard tests will be sufficient, but I am awaiting
your recommendations.
This addition will likely push back the final release.
On Mon, Nov 13, 2023 at 11:30 AM Travis Nielsen wrote:
>
> I'd
Thanks Travis!
@Yuri Weinstein In this case, I think we should rerun
the full orch suite on the new build, + a subset of mgr tests from the
rados suite. How does that sound?
- Laura
On Mon, Nov 13, 2023 at 1:30 PM Travis Nielsen wrote:
> I'd like to see these changes for much improved
I'd like to see these changes for much improved dashboard integration with
Rook. The changes are to the rook mgr orchestrator module, and supporting
test changes. Thus, this should be very low risk to the ceph release. I
don't know the details of the teuthology suites, but I would think suites
Redouane
What would be a sufficient level of testing (teuthology suite(s))
assuming this PR is approved to be added?
On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach wrote:
>
> Hi Yuri,
>
> I've just backported to reef several fixes that I introduced in the last
> months for the rook
Josh, Travis, Neha - I can't accept this change without your approval.
Please reply.
On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach wrote:
>
> Hi Yuri,
>
> I've just backported to reef several fixes that I introduced in the last
> months for the rook orchestrator. Most of them are fixes for
Hi Ceph users and developers,
You are invited to join us at the User + Dev meeting this week Thursday,
November 16th at 10:00 AM EST! See below for more meeting details.
The focus topic, "Operational Reliability and Flexibility in Ceph
Upgrades", will be presented by Christian Theune. His
Hello,
As far as I can tell there is no way to shrink a db/wal after creation.
I recently added a new server to my cluster with SSDs for the wal/db and
just used the Ceph dashboard for deployment. I did not specify a db size,
which was my mistake; it seems by default it uses "block.db has no
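(For reference, the current DB usage of an OSD can be checked with something like the following; osd.0 is just an example, run on the host where that OSD lives.)
# bluefs counters report the total/used bytes of the DB device
ceph daemon osd.0 perf dump bluefs | grep -E 'db_(total|used)_bytes'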
On 13/11/2023 16:28, Daniel Baumann wrote:
On 11/13/23 17:14, Luke Hall wrote:
How is it that Proxmox were able to release Debian12 packages for Quincy
quite some time ago?
because you can, as always, just (re-)build the package yourself.
I guess I was just trying to point out that there
On 11/13/23 17:14, Luke Hall wrote:
> How is it that Proxmox were able to release Debian12 packages for Quincy
> quite some time ago?
because you can, as always, just (re-)build the package yourself.
> My understanding is that they change almost nothing in their packages
> and just roll them to
Hello all,
today I got a new certificate for our internal domain based on RSA/4096
secp384r1. After inserting the CRT and key I got both "...updated" messages.
After checking the dashboard I got an empty page and this error:
health: HEALTH_ERR
Module 'dashboard' has failed: key type
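(For context, the commands normally used to load a dashboard certificate and key look like the following; the file names here are placeholders.)
ceph dashboard set-ssl-certificate -i internal.crt
ceph dashboard set-ssl-certificate-key -i internal.key
# restart the dashboard module so it picks up the new certificate
ceph mgr module disable dashboard
ceph mgr module enable dashboard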
How is it that Proxmox were able to release Debian12 packages for Quincy
quite some time ago?
https://download.proxmox.com/debian/ceph-quincy/dists/
My understanding is that they change almost nothing in their packages
and just roll them to fit with their naming schema etc.
On 01/11/2023
my understanding is that default placement is stored at the bucket
level, so changes to the user's default placement only take effect for
newly-created buckets
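One way to see the difference is to compare an existing bucket's recorded placement with the user's current default (bucket and user names below are placeholders):
# placement_rule was captured when the bucket was created
radosgw-admin bucket stats --bucket=mybucket | grep placement_rule
# default_placement only affects buckets created from now on
radosgw-admin user info --uid=myuser | grep default_placement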
On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote:
>
> Hi community,
> I'm using Ceph version 16.2.13. I tried to set
Hi,
On 13/11/2023 10:42, Chris Palmer wrote:
And another big +1 for debian12 reef from us. We're unable to upgrade to
either debian12 or reef.
I've been keeping an eye on the debian12 bug, and it looks as though it
might be fixed if you start from the latest repo release.
My expectation is
Hi all.
I have successfully configured an operational mirror between 2 sites for Ceph
FS. The mirroring is running but the speed of data transfer is varying a lot
over time (200KB/s – 120MB/s). The network infrastructure between the two Ceph
clusters is reliable and should not be the cause of
Is this the same cluster as the one you reported down OSDs for? Can
you share the logs from before the "probing" status? You may have to
increase the log level to something like debug_mon = 20. But be
cautious and monitor the used disk space, it can increase quite a lot.
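For example (the mon id is a placeholder; using the admin socket works even while the mon is out of quorum):
# on the host running the affected mon
ceph daemon mon.<id> config set debug_mon 20
# revert to the default afterwards
ceph daemon mon.<id> config set debug_mon 1/5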
Did you have any
Hi,
can you share the following output:
ceph -s
ceph health detail
ceph versions
ceph osd df tree
ceph osd dump
I see this line in the logs:
check_osdmap_features require_osd_release unknown -> octopus
which makes me wonder if you really run a Nautilus cluster.
Are your OSDs saturated?
Hi Motahare,
On 13/11/2023 14:44, Motahare S wrote:
Hello everyone,
Recently we have noticed that the STORED and USED space reported by "ceph
df" do not match: the amount of stored data * 1.5 (the EC factor) is still
about 5 TB away from the USED amount:
POOLID PGS
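Just to spell out the expected relationship (illustrative numbers, not taken from this cluster): with a 1.5x EC overhead, e.g. a 4+2 profile, USED should be roughly STORED * (4+2)/4, so 100 TiB stored would show about 150 TiB used before any allocation overhead.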
Hello everyone,
Recently we have noticed that the STORED and USED space reported by "ceph
df" do not match: the amount of stored data * 1.5 (the EC factor) is still
about 5 TB away from the USED amount:
POOL  ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
And another big +1 for debian12 reef from us. We're unable to upgrade to
either debian12 or reef.
I've been keeping an eye on the debian12 bug, and it looks as though it
might be fixed if you start from the latest repo release.
Thanks, Chris
On 13/11/2023 07:43, Berger Wolfgang wrote:
+1 for
Hello Mosharaf,
There is an automated service available that will criticize your cluster:
https://analyzer.clyso.com/#/analyzer
On Sun, Nov 12, 2023 at 12:03 PM Mosharaf Hossain <
mosharaf.hoss...@bol-online.com> wrote:
> Hello Community
>
> Currently, I operate a CEPH Cluster utilizing Ceph
Dear Concern
I am observing that a mon is out of quorum. The currently running version of
Ceph is Octopus.
Total nodes in the cluster: 13
Mon: 3/3
Network: 20G (10G x 2) bonded link
Each node capacity: 512 GB RAM + 72-core CPU
root@ceph6:/var/run/ceph# systemctl status ceph-mon@ceph6.service
●
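Commands that usually help when narrowing a quorum problem down (not part of the original report):
# quorum membership as seen by the cluster
ceph mon stat
ceph quorum_status --format json-pretty
# state as seen by the problematic mon itself, via its admin socket
ceph daemon mon.ceph6 mon_status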