...@ceph.com
Subject: Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded
Hi,
due to two more hosts (now 7 storage nodes) I want to create a new ec-pool, and I get a strange effect:

ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized
pg 22.3e5 is stuck unclean since forever,
...@lists.ceph.com] On Behalf Of Don Doerner
Sent: 25 March, 2015 08:01
To: Udo Lembke; ceph-us...@ceph.com
Subject: Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

Assuming you've calculated the number of PGs reasonably, see here <https://urldefense.proofpoint.com
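
Don's linked page presumably describes raising the CRUSH retry count on the erasure rule, which matches the fix confirmed later in this thread. A minimal sketch of that workflow, assuming a hypothetical rule name and ruleset id, and the value 200 used below:

  # export and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, add a retry step near the top of the EC rule,
  # before "step take" -- rule name and ruleset id here are hypothetical:
  #
  #   rule ecpool {
  #           ruleset 2
  #           type erasure
  #           min_size 3
  #           max_size 7
  #           step set_chooseleaf_tries 5
  #           step set_choose_tries 200
  #           step take default
  #           step chooseleaf indep 0 type host
  #           step emit
  #   }

  # recompile and inject the new map (the step that "takes a long long time")
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new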
Hi Gregory,
thanks for the answer!
I have looked at which storage nodes are missing, and it's two different ones:

pg 22.240 is stuck undersized for 24437.862139, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
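
The 2147483647 in the acting set is CRUSH's ITEM_NONE (2^31-1), i.e. CRUSH gave up before finding an OSD for that slot, which is why the PG sits at 6 of 7 shards. One way to reproduce such bad mappings offline, assuming the hypothetical ruleset id 2 and k+m = 7:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 2 --num-rep 7 --show-bad-mappings

Each line of output is an input for which the rule could not fill all 7 slots; with enough retries configured it should print nothing.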
Hi Don,
thanks for the info!
It looks like choose_tries set to 200 did the trick.
But the setcrushmap takes a long, long time (alarming, but the clients still have IO)... hope it's finished soon ;-)
Udo

On 25.03.2015 16:00, Don Doerner wrote:
Assuming you've calculated the number of PGs
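
Once the new map is injected and backfill finishes, the stuck PGs should clear; a quick check (no output from the first command means nothing is stuck anymore):

  ceph pg dump_stuck unclean
  ceph health detail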