I think I had a similar issue recently when I added a new pool. All PGs that 
corresponded to the new pool were shown as degraded/unclean. After doing a bit 
of testing I realized that my issue came down to this: 

replicated size 2 
min_size 2 

replicated size and min_size were the same. In my case I've got 2 OSD servers 
with a total replica count of 2. min_size should be set to 1, so that the 
cluster can still serve I/O with only one replica of a PG available. 
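If you want to double-check the values on your side, something like this 
should show them per pool (using your rbd pool as an example): 

# ceph osd pool get rbd size 
# ceph osd pool get rbd min_size 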

After I changed min_size to 1 the cluster sorted itself out. Try doing 
this for your pools. 
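In your case that should just be something along the lines of (pool names 
taken from your lspools output): 

# ceph osd pool set data min_size 1 
# ceph osd pool set metadata min_size 1 
# ceph osd pool set rbd min_size 1 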

Andrei 
----- Original Message -----

> From: "Georgios Dimitrakakis" <gior...@acmac.uoc.gr>
> To: ceph-users@lists.ceph.com
> Sent: Saturday, 29 November, 2014 11:13:05 AM
> Subject: [ceph-users] Ceph Degraded

> Hi all!

> I am setting up a new cluster with 10 OSDs
> and the state is degraded!

> # ceph health
> HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean
> #

> There are only the default pools

> # ceph osd lspools
> 0 data,1 metadata,2 rbd,

> with each one having 512 pg_num and 512 pgp_num

> # ceph osd dump | grep replic
> pool 0 'data' replicated size 2 min_size 2 crush_ruleset 0
> object_hash
> rjenkins pg_num 512 pgp_num 512 last_change 286 flags hashpspool
> crash_replay_interval 45 stripe_width 0
> pool 1 'metadata' replicated size 2 min_size 2 crush_ruleset 0
> object_hash rjenkins pg_num 512 pgp_num 512 last_change 287 flags
> hashpspool stripe_width 0
> pool 2 'rbd' replicated size 2 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 512 pgp_num 512 last_change 288 flags hashpspool
> stripe_width 0

> No data yet, so is there something I can do to repair it as it is?

> Best regards,

> George