Hi,
For me the only thing that solved my slowness was setting the NUMA nodes per
socket to the maximum, which on AMD is 4. After that my cluster started to work.
Also, on our HP hardware I need to use the HPC profile to squeeze out the maximum
performance; any other profile introduces latency.
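For anyone wanting to verify this on their own nodes: the NPS (NUMA-per-socket) setting itself lives in the BIOS, but the resulting topology and the active tuned profile can be checked from the OS. A sketch, assuming numactl and tuned are installed; profile names vary by distribution:

```shell
# Show how many NUMA nodes the OS actually sees after the BIOS change
# (on AMD EPYC, NPS4 should yield 4 nodes per socket):
numactl --hardware | grep available

# Show the currently active tuned profile, then switch to the
# latency-oriented HPC one:
tuned-adm active
tuned-adm profile hpc-compute
```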
Istvan
There's at least everything needed for authentication: a service Keystone account.
All the config options we use are documented here:
https://docs.ceph.com/en/latest/radosgw/keystone/
On Wednesday 11 September 2024 at 23:12:14 CEST, W wrote:
> more of an OpenStack question, but:
> may I ask why you define many rgw_keystone_
On Fri, Sep 13, 2024 at 12:16 PM Anthony D'Atri wrote:
> My sense is that with recent OS and kernel releases (e.g., not CentOS 8)
> irqbalance does a halfway decent job.
Strongly disagree! Canonical has actually disabled it by default in
Ubuntu 24.04 and IIRC Debian already does, too:
https://di
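Either way, it is easy to check what a given node is actually doing before deciding whether to pin IRQs by hand. A sketch, assuming a systemd-based distribution:

```shell
# See whether irqbalance is installed, enabled, and currently running:
systemctl status irqbalance --no-pager

# If you pin IRQs manually instead, stop it and keep it from
# coming back on reboot:
systemctl disable --now irqbalance
```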
Hi,
I’d suggest checking the servers the MDSs are supposed to be
running on for a reason why the services stopped. Check the daemon logs
and the service status for hints pointing to a possible root cause.
Try restarting the services and paste the startup logs from a failure here
if you nee
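The checks above can be sketched roughly like this, assuming a package-based (non-cephadm) deployment where the MDS unit is named ceph-mds@<name>; with cephadm the unit name includes the cluster fsid instead:

```shell
# On the host that should be running the MDS: unit state and recent logs
systemctl status ceph-mds@$(hostname -s) --no-pager
journalctl -u ceph-mds@$(hostname -s) --since "-1h"

# Cluster-side view: MDS map, standbys, and any health warnings
ceph fs status
ceph health detail
```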
Hi,
we started upgrading our Ceph cluster, consisting of 7 nodes, from Quincy
to Reef two days ago. This included an upgrade of the underlying OS and
several other small changes.
After hitting the osd_remove_queue bug, we could recover mostly, but are
still in a non-healthy state because of chan
more of an OpenStack question, but:
may I ask why you define so many rgw_keystone_ vars?
If I remember right, I only set 3 or 4 keys: keys for Cinder and Nova, and
one to control bucket operations.
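For illustration, a minimal set along those lines might look like the fragment below. This is a sketch based on the rgw_keystone_* options in the Ceph docs; the [client.rgw...] section name, endpoint URL, and credentials are placeholders:

```ini
[client.rgw.myhost]
# Keystone endpoint and API version (placeholder URL)
rgw_keystone_url = https://keystone.example.com:5000
rgw_keystone_api_version = 3
# Service account RGW uses to validate tokens (placeholder credentials)
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = SECRET
rgw_keystone_admin_domain = Default
rgw_keystone_admin_project = service
# Allow S3 requests to authenticate against Keystone
rgw_s3_auth_use_keystone = true
```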
On 11 September 2024 20:00:56 UTC, Gilles Mocellin
wrote:
>Oh !
>Thank you !
>
>I'll try tomorr
Hello,
I'm following this guide to upgrade our Ceph clusters:
https://ainoniwa.net/pelican/2021-08-11a.html (Proxmox VE 6.4 Ceph upgrade
Nautilus to Octopus)
It's a requirement for upgrading our Proxmox environment.
Now I've reached the point in that guide where I have to "Upgrade all
CephFS MDS daemons".
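For pre-Pacific upgrades like Nautilus to Octopus, that step boils down to reducing the file system to a single active MDS before touching the daemons. Sketched below per the documented procedure; <fs_name> and the host names are placeholders:

```shell
# 1. Reduce to a single active MDS rank and wait for ranks > 0 to stop
ceph fs set <fs_name> max_mds 1
ceph status

# 2. Stop all standby MDS daemons on their hosts
systemctl stop ceph-mds@<standby-host>

# 3. Upgrade the package on the last active MDS host, then restart it
systemctl restart ceph-mds@<active-host>

# 4. Bring the (upgraded) standbys back and restore the rank count
systemctl start ceph-mds@<standby-host>
ceph fs set <fs_name> max_mds <original_value>
```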
The gibba cluster upgrade is complete! All looks good.
On Thu, Sep 12, 2024 at 9:54 AM Laura Flores wrote:
> Will do, @Rachana Patel .
>
> On Thu, Sep 12, 2024 at 3:44 AM Rachana Patel wrote:
>
>> Thanks Venky !
>> We can now focus on next tasks -
>>
>> - Release Notes
>>https://
We are a small non-profit company that develops technologies for affordable
delivery of broadband Internet access to rural communities in developing
regions.
We are experimenting with a small Ceph cluster that is housed in a street-side
cabinet.
Our goals are to maintain availability, and avoid
Hi,
Increasing this value to 30 is the only thing I could do at the moment
k
Sent from my iPhone
> On 13 Sep 2024, at 16:49, Eugen Block wrote:
>
> I remember having a Prometheus issue quite some time ago; it couldn't handle
> 30 nodes or something, not a really big cluster. But we needed to