If it wouldn't be too much trouble, I'd actually like the binary osdmap as well 
(it contains the crushmap, but also a bunch of other stuff).  There is a 
command that lets you get old osdmaps from the mon by epoch as long as they 
haven't been trimmed.
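Off the top of my head it's something like the following (syntax from memory, so double-check against your release; <epoch> is a placeholder for the epoch you want):

    # current binary osdmap
    ceph osd getmap -o osdmap.current
    # an older osdmap by epoch, as long as the mon hasn't trimmed it yet
    ceph osd getmap <epoch> -o osdmap.<epoch>
    # the crushmap can be pulled back out of a binary osdmap afterwards
    osdmaptool osdmap.current --export-crush crushmap.current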
-Sam

----- Original Message -----
From: "Chad William Seys" <cws...@physics.wisc.edu>
To: "Samuel Just" <sj...@redhat.com>
Cc: "ceph-users" <ceph-us...@ceph.com>
Sent: Tuesday, July 28, 2015 7:40:31 AM
Subject: Re: [ceph-users] why are there "degraded" PGs when adding OSDs?

Hi Sam,

Trying again today with crush tunables set to firefly.  Degraded PGs peaked at 
around 46.8%.

I've attached the ceph pg dump and the crushmap (same as osdmap) from before 
and after the OSD additions. Three OSDs were added on host osd03.  This added 
5TB to the existing ~17TB for a total of around 22TB, i.e. 5TB/22TB = 22.7%.  
Is it expected for 46.8% of PGs to be degraded after adding ~22% of the storage?
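(For reference, the before/after dumps were grabbed with something like the 
following, run once before and once after adding the OSDs; file names are 
arbitrary.)

    ceph pg dump > pg_dump.before.txt
    ceph osd getcrushmap -o crushmap.before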

Another weird thing is that the kernel RBD clients froze up after the OSDs 
were added, but worked fine after a reboot.  (Debian kernel 3.16.7)

Thanks for checking!
C.