Hello, I have an issue with my multisite configuration.
pacific 16.2.9
My problem:
I get a permission denied error on the master zone when I use the command below.
$ radosgw-admin sync status
realm 8df19226-a200-48fa-bd43-1491d32c636c (myrealm)
zonegroup
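For reference, a permission denied during multisite sync is often down to the system user's access/secret keys not matching between the master and the secondary zone. A sketch of what I would check (the zone name "secondary" and uid "sync-user" are placeholders, not from your setup):

```shell
# Compare the system user's keys on the master zone...
radosgw-admin user info --uid=sync-user | grep -A1 access_key

# ...with the keys stored in the secondary zone's configuration
radosgw-admin zone get --rgw-zone=secondary | grep -A2 key

# If they differ, update the secondary zone and commit the period
radosgw-admin zone modify --rgw-zone=secondary \
    --access-key=<access> --secret=<secret>
radosgw-admin period update --commit
```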
Your advice regarding setting container images manually led me to check the
cephadm config to see what the other nodes were set to, and I did see "stop" and
17.2.5 set for certain nodes and OSDs. As soon as I pointed all of them the
right way, my logs started showing real data and I can deploy and
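For anyone hitting the same thing, a sketch of how those stray per-daemon image overrides can be found and cleared (the daemon name osd.3 is just an example):

```shell
# List every container_image override in the cluster config database
ceph config dump | grep container_image

# Remove a stray override for a specific daemon, e.g. an OSD
ceph config rm osd.3 container_image
```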
Hey David, yes it's me. Thank you for your help, by the way.
I was waiting on my acceptance to the Ceph tracker website. It seems it is in,
so I will submit a request soon, but I haven't been able to reproduce it, so I
am not sure if I can provide relevant info for that.
I already ran that orch upgrade stop
Hi,
I have a cluster with the Rook operator, version 1.6, and I upgraded first the
Rook operator and then the Ceph cluster definition. Everything was fine; every
component except the OSDs was upgraded. Below is the reason the OSDs are not
being upgraded:
not updating OSD 1 on node
> Current cluster status says healthy, but I cannot deploy new daemons; the
> mgr information isn't refreshing (5-day-old info) under hosts and services,
> but the main dashboard is accurate, as is ceph -s.
> ceph -s will show accurate information, but things like ceph orch ps
> --daemon-type mgr
That looks like it was expecting a JSON structure somewhere and got a blank
string. Is there anything in the logs (ceph log last 100 info cephadm)? If
not, it might be worth trying a couple of mgr failovers (I'm assuming only one
got upgraded, so the first failover would go back to the 15.2.17 one and then
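The "Expecting value: line 1 column 1 (char 0)" wording is exactly what Python's json module raises when handed an empty string, which fits the blank-string theory; a quick way to confirm the match:

```shell
# Reproduce the parse error cephadm is reporting: feeding an empty
# string to json.loads raises exactly this message.
python3 - <<'EOF'
import json
try:
    json.loads("")   # blank string where JSON output was expected
except json.JSONDecodeError as e:
    print(e)
EOF
```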
Hi, starting the upgrade from 15.2.17 I got this error:
Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
The cluster was in HEALTH_OK before starting.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to
This is the output
{
"target_image": null,
"in_progress": false,
"which": "",
"services_complete": [],
"progress": null,
"message": "",
"is_paused": false
}
grep image
global   basic   container_image
Hello, at this point I've tried to upgrade a few times, so I believe the command
is long gone. On another forum someone was alluding to the idea that I
accidentally set the image to "stop" instead of running a proper upgrade stop
command, but I couldn't find anything like that on the hosts I ran commands from.
Hi!
Thanks to all of you, I appreciate this very much! I will have to go through
all of your messages a few more times and do some research.
So our rule from the initial post does make sure that, when 1 room goes down, it
does NOT try to restore 3 replicas in the remaining room but will only
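For context, a rule along these lines behaves that way; this is only a sketch of the usual two-room pattern (rule and bucket names are placeholders, not necessarily the rule from the initial post):

```
rule replicated_two_rooms {
    type replicated
    step take default
    step choose firstn 2 type room        # pick at most 2 rooms
    step chooseleaf firstn 2 type host    # up to 2 hosts per room
    step emit
}
```

With size=3 this maps two replicas into one room and one into the other; when a whole room is down, CRUSH can only produce two valid mappings, so it does not recreate the third replica in the surviving room.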
Hi,
This post took a while to be approved by a moderator, and meanwhile I found a
service rule that had fetched all my available disks. I deleted it, and after
that all commands work as expected.
Thanks to all for reading.
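For anyone who lands here later, a sketch of how such a spec can be found and removed (assuming a cephadm/orchestrator setup; the service name osd.all-available-devices is the common culprit, but yours may differ):

```shell
# Show all OSD service specs the orchestrator is applying
ceph orch ls osd --export

# Remove the spec so it stops grabbing every new disk
ceph orch rm osd.all-available-devices
```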
___
Sorry, can anyone advise how to fix MountVolume.MountDevice failed for volume
"pvc-7b60b096-c9d3-4b6f-a8fc-5241541959d3" : rpc error: code = Internal desc =
rados: ret=-61, No data available: "error in getxattr"?
It is now the main problem.
Thank you
___
Hello,
I need to provide a credential to a set of users to allow them to set quotas
for CephFS pools. How do I accomplish this?
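In case it helps: CephFS quotas are set through virtual xattrs, and the client credential needs the `p` flag in its MDS caps to be allowed to set them. A sketch, assuming a filesystem named "cephfs" and a placeholder client name:

```shell
# Create a client that can read/write and set quotas ('p' flag) on the fs root
ceph fs authorize cephfs client.quotaadmin / rwp

# On a client mounted with that credential, set a quota on a directory
setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/somedir
```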
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi,
I ended up recovering the whole set of OSDs to get the original Ceph cluster
back. I managed to get the cluster running. However, its status is
as below:
bash-4.4$ ceph -s
cluster:
id: 3f271841-6188-47c1-b3fd-90fd4f978c76
health: HEALTH_WARN
7 daemons
Thank you Xiubo, I will try that option. It looks like it is done with the
intention of keeping it at the client level.
Anantha
-Original Message-
From: Xiubo Li
Sent: Tuesday, March 7, 2023 12:44 PM
To: Adiga, Anantha ; ceph-users@ceph.io
Subject: Re: [ceph-users] Creating a role for quota
Hi Joffrey,
That's good to know. Please note that you can switch back to the
"high_client_ops" profile once all the recoveries are completed. This is to
ensure client operations get higher priority when there aren't many recoveries,
or none, going on.
-Sridhar
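A sketch of that switch-back, done persistently through the config database rather than with injectargs (so it survives OSD restarts):

```shell
# Restore the default client-priority mclock profile on all OSDs
ceph config set osd osd_mclock_profile high_client_ops
```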
On Tue, Mar 7, 2023 at 2:51 PM
Hi, I changed the mclock priority last Friday with "ceph tell 'osd.*'
injectargs '--osd_mclock_profile=high_recovery_ops'", and now the health is
OK.
So you're right, I need to change the mclock profile to change recovery
priority.
Thank you
On Sat, Mar 4, 2023 at 08:12, Sridhar Seshasayee wrote: