huang jun wrote:
: do you have osd's crush location changed after reboot?
I am not sure which reboot you mean, but to sum up what I wrote
in previous messages in this thread, it probably went as follows:
- reboot of the OSD server
- the server comes back up with the wrong hostname "localhost"
- a new host bucket "localhost" appears in the CRUSH map and the OSDs move under it, so their CRUSH location did change after the reboot
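A minimal sketch of the cleanup this summary implies, assuming the host's
real name is "osd-host-1" (that name and the exact order of steps are
placeholders, not taken from the thread):

    # fix the hostname so the OSDs report the correct CRUSH location again
    hostnamectl set-hostname osd-host-1

    # restarting the OSDs re-registers them under the correct host bucket,
    # because "osd crush update on start" defaults to true
    systemctl restart ceph-osd.target

    # verify the layout, then drop the now-empty spurious bucket
    ceph osd tree
    ceph osd crush rm localhost

Note that the restart relies on the same "update on start" mechanism that
moved the OSDs under "localhost" in the first place, so the hostname has to
be fixed before the OSDs are restarted.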
kas wrote:
: Marc,
:
: Marc Roos wrote:
: : Are you sure your osd's are up and reachable? (run ceph osd tree on
: : another node)
:
: They are up, because all three mons see them as up.
: However, ceph osd tree provided the hint (thanks!): The OSD host went back
: with hostname "localhost". [...]
-Yenya
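To keep a wrong hostname from moving the OSDs again, the CRUSH location can
be pinned in that host's ceph.conf. A sketch of the two documented options
(the host name is a placeholder):

    [osd]
    # pin the location explicitly, independent of the hostname ...
    crush location = root=default host=osd-host-1
    # ... or stop OSDs from updating their CRUSH position on startup at all
    osd crush update on start = false

With either option, a later, deliberate move has to be done by hand, e.g.
with "ceph osd crush move".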
Are you sure your osd's are up and reachable? (run ceph osd tree on
another node)
-----Original Message-----
From: Jan Kasprzak [mailto:k...@fi.muni.cz]
Sent: Wednesday, 15 May 2019 14:46
To: ceph-us...@ceph.com
Subject: [ceph-users] Huge rebalance after rebooting OSD host (Mimic)
Hello, Ceph users,
I wanted to install the recent kernel update on my OSD hosts
with CentOS 7, Ceph 13.2.5 Mimic. So I set a noout flag and ran
"yum -y update" on the first OSD host. This host has 8 bluestore OSDs
with data on HDDs and database on LVs of two SSDs (each SSD has 4 LVs
for OSD DBs). [...]
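For reference, the noout update procedure described above, written out as a
sketch (standard Ceph and CentOS 7 commands, nothing specific to this
cluster):

    ceph osd set noout      # keep the mons from marking the host's OSDs out
    yum -y update           # on the OSD host being updated
    reboot
    # once the host is back and its OSDs have rejoined:
    ceph -s                 # wait until all PGs are active+clean again
    ceph osd unset noout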