Hi List,

Another interesting and unexpected thing we observed during cluster
expansion is the following. After we added extra disks to the cluster,
while the "norebalance" flag was set, we marked the new OSDs "in". As
soon as we did that, a couple of hundred objects became degraded. During
that time no OSD crashed or restarted. Every "ceph osd crush add $osd
$weight host=$storage-node" caused extra degraded objects.
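
For clarity, this is roughly the sequence we ran (OSD ids, weights, and
host names are placeholders, not the actual values from our cluster):

```shell
# Prevent data movement while the new disks are being added.
ceph osd set norebalance

# Mark a freshly created OSD "in".
ceph osd in $osd

# Add the OSD to the CRUSH map under its host bucket --
# this is the step after which degraded objects appeared.
ceph osd crush add osd.$id $weight host=$storage_node

# Once all new OSDs are placed, allow rebalancing again.
ceph osd unset norebalance
```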

I don't expect objects to become degraded when extra OSDs are added.
Misplaced, yes. Degraded, no.

Does anyone have an explanation for this?

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com