Hi,
I've just checked with the team and the situation is much more serious than
it seems: the lost disks contained the MON AND OSD databases (5 servers
down out of 8, replica 3).
It seems that the team fell victim to a bad batch of Samsung 980 Pros (I'm
not a big fan of this "Pro" range, but
This admittedly is the case throughout the docs.
> On Nov 2, 2023, at 07:27, Joachim Kraftmayer - ceph ambassador
> wrote:
>
> Hi,
>
> another short note regarding the documentation: the paths are designed for a
> package installation.
>
> The paths for a container installation look a bit different, e.g.:
> /var/lib/ceph/<fsid>/osd.y/
Hi Mohamed,
I understand there's one monitor still operational, is that right?
If so, you need to re-provision the other monitors with an empty data
directory so that they synchronize with the only remaining monitor.
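If cephadm manages these containers, a minimal sketch of that re-provisioning
might look like this once the surviving mon has quorum again (host names and
addresses are placeholders, not from this thread):

# drop the dead monitors from the cluster map
ceph mon remove mon2
ceph mon remove mon3
# deploy fresh monitors with empty stores; they sync from the surviving mon
ceph orch daemon add mon newhost1:10.0.0.2
ceph orch daemon add mon newhost2:10.0.0.3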
Regards,
*David CASIER*
Hi,
On 11/2/23 13:05, Mohamed LAMDAOUAR wrote:
> when I ran this command, I got this error (because the database of the
> osd was on the boot disk)
The RocksDB part of the OSD was on the failed SSD?
Then the OSD is lost and cannot be recovered.
The RocksDB contains the information about where each object is stored on
the block device.
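As an aside, a quick way to check where an OSD's RocksDB lived could be the
following, assuming the cluster still answers (osd.9 is just an example ID
taken from this thread):

# query the OSD's last reported metadata for separate db/wal devices
ceph osd metadata 9 | grep -E 'bluefs|devices'
# or, directly on the OSD host, list the BlueStore devices
ceph-volume lvm list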
Hey Mohamed,
just send us the output of
ceph -s
and
ceph mon dump
please.
Best,
Malte
On 02.11.23 13:05, Mohamed LAMDAOUAR wrote:
> Hi Robert,
> when I ran this command, I got this error (because the database of the osd
> was on the boot disk)
> ceph-objectstore-tool \
> --type bluestore \
Hi,
follow these instructions:
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-mons/#removing-monitors-from-an-unhealthy-cluster
As you are using containers, you might need to specify the --mon-data
directory (/var/lib/ceph/CLUSTER_UUID/mon.MONNAME) (actually I never did this
in a containerized setup).
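With cephadm, a hedged sketch of getting at the mon store could look like
this (MONNAME and the fsid are placeholders; verify the paths before relying
on them):

# stop the surviving mon before touching its store
systemctl stop ceph-CLUSTER_UUID@mon.MONNAME.service
# open a shell with the daemon's data directory mounted
cephadm shell --name mon.MONNAME
# inside that shell the mon data should appear at the usual
# /var/lib/ceph/mon/ceph-MONNAME path, so e.g.:
ceph-mon -i MONNAME --extract-monmap /tmp/monmap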
Hi Robert,
when I ran this command, I got this error (because the database of the osd
was on the boot disk)
ceph-objectstore-tool \
> --type bluestore \
> --data-path /var/lib/ceph/c80891ba-55f3-11ed-9389-919f4368965c/osd.9 \
> --op update-mon-db \
> --mon-store-path
On 11/2/23 12:48, Mohamed LAMDAOUAR wrote:
> I reinstalled the OS on a new SSD disk. How can I rebuild my cluster with
> only one mon?
If there is one MON still operating, you can try to extract its monmap
and remove all the other MONs from it with the monmaptool:
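A sketch of those steps, with placeholder mon IDs ("a" surviving, "b" and "c"
dead) and the package-install data path (containers differ, see the note on
paths elsewhere in this thread):

# stop the surviving mon first, then extract its current monmap
ceph-mon -i a --extract-monmap /tmp/monmap --mon-data /var/lib/ceph/mon/ceph-a
# drop the dead monitors from the map
monmaptool /tmp/monmap --rm b --rm c
# inject the edited map and start the mon again
ceph-mon -i a --inject-monmap /tmp/monmap --mon-data /var/lib/ceph/mon/ceph-a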
Thanks Joachim for the clarification ;)
Thanks Robert,
I tried this but I'm stuck. If you have some time, please help me with it; I
would be very grateful because I'm lost :(
Hello Boris,
I have one monitor server up, and two other servers of the cluster are also
up (these two servers are not monitors).
I have four other servers down (their boot disks failed) but the OSD data
disks are safe.
I reinstalled the OS on a new SSD disk. How can I rebuild my cluster with
only one mon?
Hi,
another short note regarding the documentation: the paths are designed
for a package installation.
The paths for a container installation look a bit different, e.g.:
/var/lib/ceph/<fsid>/osd.y/
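For example, with the fsid from this cluster the layout would presumably be
(daemon names are illustrative):

ls /var/lib/ceph/c80891ba-55f3-11ed-9389-919f4368965c/
# mon.MONNAME  osd.9  ...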
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso
Hi,
On 11/2/23 11:28, Mohamed LAMDAOUAR wrote:
> I have 7 machines in the Ceph cluster; the Ceph service runs in Docker
> containers.
> Each machine has 4 data HDDs (available) and 2 NVMe SSDs (bricked).
> During a reboot, the SSDs bricked on 4 machines; the data are available on
> the HDD disks but
Hi Mohamed,
are all mons down, or do you still have at least one that is running?
AFAIK the mons save their DB on the normal OS disks, and not within the
ceph cluster.
So if all mons are dead, meaning the disks which contained the mon data
are unrecoverably dead, you might need to bootstrap a new MON store from
the OSDs.
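The mon-store recovery that the troubleshooting docs describe boils down to a
loop like the following rough sketch (paths are placeholders matching the
container layout seen in this thread):

# rebuild the mon store by replaying every OSD's copy of the cluster maps
ms=/tmp/monstore
mkdir -p "$ms"
for osd in /var/lib/ceph/*/osd.*; do
  ceph-objectstore-tool --type bluestore --data-path "$osd" \
      --op update-mon-db --mon-store-path "$ms"
done

The docs then rebuild the keyring and run ceph-monstore-tool's rebuild step
before copying the regenerated store into a monitor's data directory.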