> Hey Darrin,
>
> Can you provide the output of the following commands?
>
> ceph df detail
> ceph osd pool ls detail
> ceph balancer status
>
> Thanks so much,
>
>
> *From:* Darrin Hodges
Hi all,
Just looking for clarification around the relationship between PGs, OSDs,
and balancing on a Ceph (Octopus) cluster. We have PG autobalancing on
and the balancer is set to upmap. There are 2 pools: one is the default
metric pool with 1 PG, the other is the pool we are using for
everything, it h
It's all good, finally got it working: some of the OSD nodes had the
incorrect default gateway.
cheers
Darrin
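(For anyone landing here with the same PG/balancer question: assuming the upmap balancer on Octopus, the per-OSD PG spread and the balancer/autoscaler state can be inspected with the standard commands below; this is a sketch of the usual checks, not a prescription.)

```shell
# Show per-OSD utilization; the PGS column reveals how evenly PGs are spread
ceph osd df tree

# Confirm the balancer is on and in upmap mode
ceph balancer status
ceph balancer mode upmap
ceph balancer on

# Show the PG autoscaler's view of each pool (target vs. actual PG count)
ceph osd pool autoscale-status
```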
On 12/2/21 9:41 am, Darrin Hodges wrote:
Hi all,
Still getting an upgrade issue with cephadm: "Upgrade: failed to pull
target image". On each of the nodes in the cluster I can do:
docker pull docker.io/ceph/ceph:v15.2.8
and there is no error, but the upgrade command still fails. I can see an
entry in the logs for:
Feb 11 22:27
Hi all,
Upgrading the Ceph containers is failing; how do I debug it? 'cephadm pull'
seems to work on the node, but the upgrade still fails.
$ ceph orch upgrade start --ceph-version 15.2.8
"message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image
thanks
Darrin
--
CONFIDENTIALITY NOT
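(A sketch of the usual first steps for debugging a cephadm upgrade stuck on an image pull; these are standard orchestrator commands, run from a node with the admin keyring.)

```shell
# Show where the upgrade is stuck and any error it recorded
ceph orch upgrade status

# Stream cephadm module events live while the upgrade retries
ceph -W cephadm

# Dump recent entries from the cephadm log channel for pull error details
ceph log last cephadm
```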
Hi all,
Still can't seem to get this upgrade to work: cephadm is at 15.2.8 but
the containers are still at 15.2.4. Any ideas on how to find out what the
issue is?
many thanks
Darrin
On 1/2/21 11:48 am, Darrin Hodges wrote:
Hi all,
I'm attempting to upgrade our Octopus 15.2.4 containers to 15.2.8. If I
run 'ceph orch upgrade start --ceph-version 15.2.8' it eventually errors
with:
'"message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target
image"'. The documentation suggests that this is caused by specify
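(The suggestion the message above truncates is likely about specifying the container image explicitly rather than by version; a hedged sketch of that workaround follows.)

```shell
# Abort the stuck upgrade, then retry with the full image reference
# so cephadm does not have to resolve the version to an image itself
ceph orch upgrade stop
ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.8
ceph orch upgrade status
```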
Hi all,
Have an issue with my three monitors: they keep getting "e3
handle_auth_request failed to assign global_id" errors; subsequently,
commands like 'ceph status' just hang. Any ideas on what the error means?
many thanks
Darrin
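(When 'ceph status' hangs, the monitors usually cannot form quorum, so per-daemon admin-socket commands are the way in. A sketch, assuming shell access on a mon host; `<mon-id>` is a placeholder for the local monitor's name, and clock skew is only one common cause.)

```shell
# The admin socket answers per-daemon even without quorum
ceph daemon mon.<mon-id> mon_status
ceph daemon mon.<mon-id> quorum_status

# Clock skew between monitors is a frequent culprit; check time sync
chronyc tracking
```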
Hi all,
Had an issue where the Docker containers on all the Ceph nodes just seemed
to stop at some point, effectively shutting down the cluster. Restarting
Ceph on all of the nodes restored the cluster to normal working order.
I would like to find out why this occurred, any ideas on where to look?
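(A sketch of where to look on one of the affected nodes, assuming a cephadm/Docker deployment; the unit name at the end uses `<fsid>` and `<host>` placeholders.)

```shell
# Exit codes and times of stopped containers
docker ps -a

# Was the Docker daemon itself restarted or OOM-killed?
journalctl -u docker.service --since "-2 days"

# cephadm runs each daemon as a systemd unit named ceph-<fsid>@<daemon>
systemctl list-units 'ceph*'
journalctl -u 'ceph-<fsid>@mon.<host>.service'   # placeholders, adjust to your cluster
```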
Hi all,
I have created a new Ceph cluster using the cephadm command and it
appears to work very well. I tried to specify using an SSD for the
journals, but it doesn't appear to have worked. My YAML file is:
service_type: osd
service_id: default_drive_group
placement:
host_pattern: 'ceph-o
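(For comparison, a hypothetical drive-group spec that puts WAL/DB on SSDs: in Octopus drive groups, `data_devices` and `db_devices` filters select devices, and `rotational: 0` matches SSDs. The service_id, host pattern, and filters here are illustrative assumptions, not Darrin's actual spec.)

```shell
# Write an example spec: HDDs become data devices, SSDs hold the DB/WAL
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_ssd_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
EOF

# Preview what cephadm would create, then apply the spec
ceph orch apply osd -i osd_spec.yml --dry-run
ceph orch apply osd -i osd_spec.yml
```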