Please find the output below.
cn1.chn8be1c1.cdn ~# ceph osd metadata 0
{
"id": 0,
"arch": "x86_64",
"back_addr": "[v2:10.50.12.41:6883/12650,v1:10.50.12.41:6887/12650]",
"back_iface": "dss-private",
"bluefs": "1",
"bluefs_single_shared_device": "1",
"bluestore_bdev_acce
The same problem:
2019-11-10 05:26:33.215 7fbfafeef700 7 mon.cn1@0(leader).osd e1819 preprocess_boot from osd.0 v2:10.50.11.41:6814/2022032 clashes with existing osd: different fsid (ours: ccfdbd54-fcd2-467f-ab7b-c152b7e422fb ; theirs: a1ea2ea3-984d-4c91-86cf-29f452f5a952)
maybe the osd uuid is w…
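The two uuids in that clash line can be pulled apart mechanically and compared with what the cluster expects: `ceph osd dump` prints each osd's registered uuid at the end of its line, and BlueStore keeps its own copy in /var/lib/ceph/osd/ceph-<id>/fsid. A minimal sketch, using the log line quoted above as canned input rather than a live cluster:

```shell
# Sketch: extract the conflicting uuids from the mon's clash message.
# Sample input is the log line from this thread; on a live cluster the
# line would come from ceph-mon.log (path is an assumption about your setup).
line='preprocess_boot from osd.0 v2:10.50.11.41:6814/2022032 clashes with existing osd: different fsid (ours: ccfdbd54-fcd2-467f-ab7b-c152b7e422fb ; theirs: a1ea2ea3-984d-4c91-86cf-29f452f5a952)'

uuid='[0-9a-f]\{8\}-[0-9a-f-]\{27\}'   # loose 36-char uuid pattern
ours=$(echo "$line"   | sed -n "s/.*ours: \($uuid\).*/\1/p")
theirs=$(echo "$line" | sed -n "s/.*theirs: \($uuid\).*/\1/p")

echo "mon's osdmap has:   $ours"    # compare with: ceph osd dump | grep '^osd.0 '
echo "booting osd offers: $theirs"  # compare with: cat /var/lib/ceph/osd/ceph-0/fsid
```

If the uuid on disk matches "theirs" while the osdmap still carries "ours", the mon is holding the old, pre-replacement osd entry.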
Hi,
Yes, the cluster is still unrecovered. We are not able to bring even osd.0 up yet.
osd logs: https://pastebin.com/4WrpgrH5
Mon logs: https://drive.google.com/open?id=1_HqK2d52Cgaps203WnZ0mCfvxdcjcBoE
# ceph daemon /var/run/ceph/ceph-mon.cn1.asok config show|grep debug_mon
"debug_mon": "20/20",
"
Good. Please send me the mon and osd.0 logs.
Is the cluster still unrecovered?
nokia ceph wrote on Sun, Nov 10, 2019 at 1:24 PM:
Hi Huang,
Yes, the node 10.50.10.45 is the fifth node, which was replaced. Yes, I have
set debug_mon to 20 and it is still running with that value. If you want, I
will send you the mon logs once again after restarting osd.0.
On Sun, Nov 10, 2019 at 10:17 AM huang jun wrote:
The mon log shows that all the mismatched-fsid osds are from node 10.50.11.45;
maybe that is the fifth node?
BTW, I didn't find the osd.0 boot message in ceph-mon.log.
Did you set debug_mon=20 first and then restart the osd.0 process, and make
sure that osd.0 was actually restarted?
nokia ceph wrote on Sun, Nov 10, 2019 at 12:31 PM:
Hi,
Please find the ceph osd tree output in the pastebin
https://pastebin.com/Gn93rE6w
On Fri, Nov 8, 2019 at 7:58 PM huang jun wrote:
Did you reinstall the mons as well? If not, check whether you removed that
osd's auth entry (ceph auth ls).
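A stale cephx key is easy to spot: `ceph auth ls` still lists an osd.N entry after the osd has been removed, and `ceph auth del osd.0` drops a leftover key. The sketch below only scans captured sample output (a made-up dump, not taken from this cluster), since it can't query a live cluster:

```shell
# Sketch: look for a leftover osd.0 key in (sample) 'ceph auth ls' output.
# The auth dump below is a fabricated example for illustration only.
auth_ls='osd.0
	key: AQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
	caps: [mds] allow *'

# -x matches the whole line, so "osd.0" won't match e.g. "osd.01"
if printf '%s\n' "$auth_ls" | grep -qx 'osd\.0'; then
    echo 'stale entry: run "ceph auth del osd.0" before re-deploying osd.0'
fi
```

If the entry is stale, deleting it lets the recreated osd register a fresh key instead of clashing with the old one.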
On Fri, Nov 8, 2019, 19:27 nokia ceph wrote:
Can you post your 'ceph osd tree' output in a pastebin?
And do you mean the osds that report the fsid mismatch are from the old, removed nodes?
nokia ceph wrote on Fri, Nov 8, 2019 at 10:21 PM:
Hi,
The fifth node in the cluster was affected by a hardware failure, and hence
the node was replaced in the ceph cluster. But we were not able to replace it
properly, so we uninstalled ceph on all the nodes, deleted the pools, zapped
the osds, and recreated them as a new ceph cluster.
I saw many lines like this:
mon.cn1@0(leader).osd e1805 preprocess_boot from osd.112 v2:10.50.11.45:6822/158344 clashes with existing osd: different fsid (ours: 85908622-31bd-4728-9be3-f1f6ca44ed98 ; theirs: 127fdc44-c17e-42ee-bcd4-d577c0ef4479)
The osd boot will be ignored if the fsid mismatches.
wha…
Is osd.0 still in the down state after the restart? If so, maybe the problem
is in the mon.
Can you set the leader mon's debug_mon=20, restart one of the down-state
osds, and then attach the mon log file?
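For reference, the debug level can be raised on the leader mon without a restart, e.g. via its admin socket (`ceph daemon /var/run/ceph/ceph-mon.cn1.asok config set debug_mon 20/20`) or `ceph tell mon.cn1 injectargs '--debug_mon 20/20'`, followed by `systemctl restart ceph-osd@0`. Once the log is captured, the boot attempt is easy to confirm. A sketch over canned input (the sample line is from this thread; the mon name cn1 follows this thread's hosts):

```shell
# Sketch: confirm the osd.0 boot attempt actually reached the mon.
# Canned sample line from this thread; live input would be ceph-mon.log.
monlog='2019-11-10 05:26:33.215 7fbfafeef700 7 mon.cn1@0(leader).osd e1819 preprocess_boot from osd.0 v2:10.50.11.41:6814/2022032 clashes with existing osd: different fsid'

hits=$(printf '%s\n' "$monlog" | grep -c 'preprocess_boot from osd\.0 ')
if [ "$hits" -gt 0 ]; then
    echo "osd.0 boot reached the mon ($hits attempt(s))"
else
    echo 'no osd.0 boot message - osd.0 may not have been restarted'
fi
```

No `preprocess_boot from osd.0` line at debug_mon=20 would mean the osd never re-registered, pointing at the osd side rather than the mon.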
nokia ceph wrote on Fri, Nov 8, 2019 at 6:38 PM:
Hi,
Below is the status of the OSD after restart.
# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Drop-In: /etc/systemd/system/ceph-osd@.s…
Try to restart some of the down osds in 'ceph osd tree', and see what
happens.
nokia ceph wrote on Fri, Nov 8, 2019 at 6:24 PM:
>
> Adding my official mail id
>
> -- Forwarded message -
> From: nokia ceph
> Date: Fri, Nov 8, 2019 at 3:57 PM
> Subject: OSD's not coming up in Nautilus
> To: