OK, I removed this line and it got fixed --
crush location = "region=XX datacenter=XXXX room=NNNN row=N rack=N chassis=N"
But why would it matter?
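
In case it helps narrow this down, my guess (and it is only a guess) is that it has to do with where that crush location entry put the OSD in the CRUSH hierarchy versus the buckets the pool's CRUSH rule actually walks -- the "last acting []" lines below suggest CRUSH was not mapping any OSD to those PGs at all. These standard ceph CLI commands (using the same cluster.conf as below, nothing custom) should show both sides:

ceph -c /etc/ceph/cluster.conf osd tree              # which buckets the OSD ended up under
ceph -c /etc/ceph/cluster.conf osd crush rule dump   # which root / failure domain the rule's "take" and "chooseleaf" steps select from

If the OSD landed under a region/datacenter branch that the rule's "take" step never reaches, an empty acting set like the one below is what I would expect to see.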

On Thu, Sep 14, 2017 at 12:11 PM, dE . <de.tec...@gmail.com> wrote:

> Hi,
>     In my test cluster, where I have just 1 OSD which is up and in --
>
> 1 osds: 1 up, 1 in
>
> I created a pool with size 1, min_size 1, and a pg_num of 1, 2, 3, or any
> other number. However, I cannot write objects to the cluster. The PGs are
> stuck in an unknown state --
>
> ceph -c /etc/ceph/cluster.conf health detail
> HEALTH_WARN Reduced data availability: 2 pgs inactive; Degraded data
> redundancy: 2 pgs unclean; too few PGs per OSD (2 < min 30)
> PG_AVAILABILITY Reduced data availability: 2 pgs inactive
>     pg 1.0 is stuck inactive for 608.785938, current state unknown, last
> acting []
>     pg 1.1 is stuck inactive for 608.785938, current state unknown, last
> acting []
> PG_DEGRADED Degraded data redundancy: 2 pgs unclean
>     pg 1.0 is stuck unclean for 608.785938, current state unknown, last
> acting []
>     pg 1.1 is stuck unclean for 608.785938, current state unknown, last
> acting []
> TOO_FEW_PGS too few PGs per OSD (2 < min 30)
>
> From the documentation --
> Placement groups are in an unknown state, because the OSDs that host them
> have not reported to the monitor cluster in a while.
>
> But all OSDs are up and in.
>
> Thanks for any help!
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
