To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 300 active+undersized+degraded+remapped
OK, I fixed the issue, but it was very weird. I will list the steps so it's easy for others to check when they hit a similar issue.
1) I had created a rack-aware OSD tree (example commands sketched below)
2) I have SATA OSDs and NVMe OSDs
3) I created rack-aware
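For reference, a rack-aware tree like this is built with CRUSH commands roughly along these lines. This is only a sketch, not the exact commands from my cluster; the rack names and the host OSD1 are placeholders (only host OSD2 appears in the tree below):

ceph osd crush add-bucket rack1 rack        # create the rack buckets
ceph osd crush add-bucket rack2 rack
ceph osd crush move rack1 root=default      # hang the racks under the default root
ceph osd crush move rack2 root=default
ceph osd crush move OSD1 rack=rack1         # move each host under its rack
ceph osd crush move OSD2 rack=rack2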
>  2    9.09380      osd.2      up  1.00000  1.00000
>  3    9.09380      osd.3      up  1.00000  1.00000
> -2  545.62775  host OSD2
>  0    9.09380      osd.0      up  1.00000  1.00000
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 300 active+undersized+degraded+remapped
OK, so it looks like this is Ceph CRUSH map behavior:
http://docs.ceph.com/docs/master/rados/operations/crush-map/
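The gotcha is that a CRUSH rule starts placement from whatever bucket its "step take" names, so OSDs sitting under a different root (for example a separate NVMe root) are never selected and those PGs stay undersized. You can inspect your own rules with ceph osd getcrushmap -o cm followed by crushtool -d cm -o cm.txt. A decompiled rule looks roughly like the example below; the rule and root names here are illustrative only, not taken from my map:

rule replicated_sata {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}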
--
Deepak
Subject: Re: [ceph-users] 300 active+undersized+degraded+remapped
ceph status
ceph osd tree
Is your meta pool on ssds instead of the same root and osds as the rest of the
cluster?
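A quick way to check is to see which rule each pool uses and which root that rule takes. A sketch only; the pool name cephfs_metadata is just an example, and on Luminous and later the pool property is crush_rule rather than crush_ruleset:

ceph osd pool ls detail                            # lists each pool with its crush rule/ruleset
ceph osd pool get cephfs_metadata crush_ruleset    # crush_rule on Luminous and later
ceph osd crush rule dump                           # shows the root each rule starts from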
On Fri, Jun 30, 2017, 9:29 PM Deepak Naidu <dna...@nvidia.com> wrote:
Hello,
I am getting the below error and I am unable to get them resolved even after starting and stopping the OSDs. All the OSDs seem to be up.
How do I repair the OSDs or fix them manually? I am using CephFS. But oddly, ceph df is showing 100% used (which is shown in KB). But the pool