On Debian I have had the experience that one of the OSD-related systemd
unit files was not installed - sorry that I can't recall which one, but
the error you describe is the same one I had.
The required file was in the source package, but was apparently left off
the list of files to be copied.
On Sat, Jul 4, 2020 at 1:37 AM Rodrigo Severo - Fábrica wrote:
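Whether the missing unit file is the cause can be checked against the installed package. A minimal diagnostic sketch, assuming the usual Debian package and unit names (adjust the OSD id and names for your system):

```shell
# Hedged sketch: check whether the Debian package shipped the OSD
# systemd units, and whether systemd can see them.
dpkg -L ceph-osd | grep -E '\.service|\.target'   # files the package installed
systemctl list-unit-files 'ceph-osd*'             # units systemd actually knows

# If a unit is present in the source package but missing from the
# installed file list, reinstalling won't add it; copying it into place
# manually (path and unit name are illustrative) works around that:
# cp ceph-osd@.service /lib/systemd/system/
# systemctl daemon-reload
```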
>
> Hi,
>
>
> Just rebooted one of my OSD servers after upgrading Ceph from 14.2.9 to
> 14.2.10 and its OSDs won't come up.
>
> I find the following messages on my log:
>
>4991 Jul 3 17:24:03 osdserver1-df ceph-osd[1272]: 2020-07
So mount it, if it is empty.
-----Original Message-----
To: ceph-users
Subject: [ceph-users] Ceph OSD not mounting after reboot
Hi,
Just rebooted one of my OSD servers after upgrading Ceph from 14.2.9 to
14.2.10 and its OSDs won't come up.
I find the following messages on my log:
4991
Hi,
Just rebooted one of my OSD servers after upgrading Ceph from 14.2.9 to
14.2.10 and its OSDs won't come up.
I find the following messages on my log:
4991 Jul 3 17:24:03 osdserver1-df ceph-osd[1272]: 2020-07-03
17:24:03.036 7fcc497f1c00 -1 auth: unable to find a keyring on
/var/lib/ceph
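The "unable to find a keyring" error usually means the OSD's data directory never got populated at boot, which matches the "mount it, if it is empty" suggestion above. A hedged checklist, with OSD id 0 as a placeholder:

```shell
# Hedged sketch: checks for an OSD that can't find its keyring at start.
ls -l /var/lib/ceph/osd/ceph-0/keyring   # per-OSD keyring the daemon reads
ceph auth get osd.0                      # key the monitors have on record

# If the OSD directory is empty, the data volume likely never mounted.
# For ceph-volume/LVM OSDs this re-creates the tmpfs mounts and starts
# the units:
ceph-volume lvm activate --all
```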
Am 03.07.20 um 20:29 schrieb Dimitri Savineau:
> You can try to use ceph-ansible which supports baremetal and containerized
> deployment.
>
> https://github.com/ceph/ceph-ansible
Thanks for the pointer!
I know about ceph-ansible. The problem is that our full infrastructure is
Puppet-based, so
https://tracker.ceph.com/issues/45726
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
You can try to use ceph-ansible which supports baremetal and containerized
deployment.
https://github.com/ceph/ceph-ansible
Okay, I think I may have solved my own problem. I was using the Ceph Octopus
packages that are distributed by Ubuntu. When I switched to the ones
distributed by Ceph, the cephadm adopt was more verbose, and apparently set the
requisite permissions. This time, after a reboot, the OSD docker conta
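If the repository switch worked because adoption finally set ownership correctly, the fix it applied can presumably also be done by hand. A hedged sketch, reusing the fsid shown elsewhere in this thread; the OSD id and unit name are placeholders:

```shell
# Hedged sketch: after "cephadm adopt", the containerized daemon's data
# must be owned by the ceph user; fix ownership if adoption didn't.
chown -R ceph:ceph /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/osd.8

# cephadm names the per-daemon unit after the cluster fsid:
systemctl restart ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service
```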
Biohazard;
This looks like a fairly simple authentication issue. It looks like the
keyring(s) available to the command don't contain a key which meets the
command's needs.
Have you verified the presence and accuracy of your keys?
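The verification can be done by comparing the local keyring file against the monitors' records. A minimal sketch; the entity name and keyring path are the common defaults, not taken from the thread:

```shell
# Hedged sketch: compare the key a client presents with what the
# cluster expects. "client.admin" is a placeholder entity.
ceph auth ls                               # all entities and their caps
ceph auth get client.admin                 # key + caps for one entity
cat /etc/ceph/ceph.client.admin.keyring    # key the client actually uses

# The key in the local keyring file must match the monitors' copy
# exactly, and the entity's caps must permit the failing command.
```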
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Hendrik;
I'm assuming that s3.url.com round-robin DNSes to the new interface on each
host.
I don't see a problem with pointing the dashboard at one of the hosts directly.
Though there is no load balancing in that kind of setup. I don't believe the
dashboard represents a significant load.
If
Hello Dominic,
thank you for your quick help. I did change the settings, but maybe to the
wrong host:
The endpoint for the clients would be something like $bucketname.s3.url.com, so
I therefore set the api-host to s3.url.com (which worked before).
Now that I am writing this I realize that the dashboa
I have a situation where OSDs won't work as Docker containers with Octopus on an
Ubuntu 20.04 host.
The cephadm adopt --style legacy --name osd.8 command works as expected, and sets
up the /var/lib/ceph/ directory as expected:
root@balin:~# ll /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/o
Hendrik;
Since the hostname / FQDN for use by Ceph for your RGW server(s) changed, did
you adjust the rgw-api-host setting for the dashboard?
The command would be:
ceph dashboard set-rgw-api-host
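The command takes the RGW endpoint host as its argument. A hedged example using the round-robin name discussed later in this thread (a specific RGW host could be used instead), plus the companion port setting from the same dashboard command family:

```shell
# Hedged sketch: point the dashboard's RGW API client at the endpoint.
# "s3.url.com" and the port are illustrative values, not confirmed ones.
ceph dashboard set-rgw-api-host s3.url.com
ceph dashboard set-rgw-api-port 443
```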
Thank you,
Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International I
Has anyone seen this error on a new Ceph 15.2.4 cluster using cephadm to manage
it?
Module 'cephadm' has failed: auth get failed: failed to find
client.crash.ceph0-ote in keyring retval:
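One way out, if the crash agent's key really is missing, is to re-create it with the caps cephadm normally grants and then restart the manager so the module retries. A hedged sketch; the entity name is taken from the error above, everything else is assumption:

```shell
# Hedged sketch: recreate the missing crash-agent key. The "profile
# crash" caps mirror what cephadm normally creates for crash daemons.
ceph auth get-or-create client.crash.ceph0-ote \
    mon 'profile crash' mgr 'profile crash'

# Fail over the active mgr so the cephadm module restarts cleanly:
ceph mgr fail
```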
Hello.
I have tried to follow the documented writeback cache tier
removal procedure
(https://docs.ceph.com/docs/master/rados/operations/cache-tiering/#removing-a-writeback-cache)
on a test cluster, and failed.
I have successfully executed this command:
ceph osd tier cache-mode alex-test-
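For context, the documented removal sequence can be sketched as below. The pool names are placeholders (the actual pool name is truncated above), and this follows the cache-tiering docs linked earlier, not anything confirmed in this thread:

```shell
# Hedged sketch of the documented writeback cache-tier removal steps.
# "hotpool" (cache) and "coldpool" (backing) are placeholder names.

# 1. Switch the cache to proxy mode so new and changed objects are
#    forwarded to the backing tier instead of being cached.
ceph osd tier cache-mode hotpool proxy

# 2. Flush and evict everything still held in the cache pool.
rados -p hotpool cache-flush-evict-all

# 3. Remove the overlay so clients stop being redirected to the cache.
ceph osd tier remove-overlay coldpool

# 4. Detach the cache pool from the backing pool.
ceph osd tier remove coldpool hotpool
```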
Hi all,
we are currently experiencing a problem with the Object Gateway part of the
dashboard not working anymore:
We had a working setup where the RGW servers only had 1 network interface with
an IP address that was reachable by the monitor servers and the dashboard was
working as expected.
A
Am 03.07.20 um 10:00 schrieb Sebastian Wagner:
Am 02.07.20 um 19:57 schrieb Oliver Freyermuth:
Dear Cephalopodians,
as we all know, ceph-deploy has been on its way out for a while and is
essentially in "maintenance mode".
We've been eyeing the "ssh orchestrator" which was in Nautilus as the "succes
Am 02.07.20 um 19:57 schrieb Oliver Freyermuth:
> Dear Cephalopodians,
>
> as we all know, ceph-deploy has been on its way out for a while and is
> essentially in "maintenance mode".
>
> We've been eyeing the "ssh orchestrator" which was in Nautilus as the
> "successor in spirit" of ceph-deploy.
>