We've seen this as well, as early as 0.94.3, and have a bug open for it,
http://tracker.ceph.com/issues/13990, which we're working through
currently.  Nothing is fixed yet; we're still trying to nail down exactly
why the OSD maps aren't being trimmed as they should be.
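In the meantime, a quick way to gauge how bad the accumulation is on a given OSD is to count the cached osdmap files and their total size under the meta directory.  A minimal sketch, assuming the stock OSD data path and osd id 0 (adjust META for your layout):

```shell
# Path to the OSD's meta dir; override with META=... if your layout differs.
META="${META:-/var/lib/ceph/osd/ceph-0/current/meta}"

# Number of cached osdmap epochs this OSD is holding on to.
find "$META" -type f -name 'osdmap.*' 2>/dev/null | wc -l

# Total space those epochs consume.
find "$META" -type f -name 'osdmap.*' -exec du -ch {} + 2>/dev/null | tail -n 1
```

Comparing that count against the oldest_map/newest_map epochs reported by the OSD's admin socket (`ceph daemon osd.0 status`) should show whether trimming is keeping up.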


On Thu, Feb 25, 2016 at 10:16 AM, Stillwell, Bryan <
bryan.stillw...@twcable.com> wrote:

> After evacuating all the PGs from a node running hammer 0.94.5, I noticed
> that each of the OSDs was still using ~8GB of storage.  After
> investigating, it appears that all the data is coming from around 13,000
> files in /var/lib/ceph/osd/ceph-*/current/meta/ with names like:
>
> DIR_4/DIR_0/DIR_0/osdmap.303231__0_C23E4004__none
> DIR_4/DIR_2/DIR_F/osdmap.314431__0_C24ADF24__none
> DIR_4/DIR_0/DIR_A/osdmap.312688__0_C2510A04__none
>
> They're all around 500KB in size.  I'm guessing these are all old OSD
> maps, but I'm wondering why there are so many of them?
>
> Thanks,
> Bryan
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
