Hi,

I've got a 3-node test cluster (3 mons, 3 osds) with about 24,000,000
very small objects across 2400 pools (written directly with librados;
this isn't a Ceph filesystem).
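
For reference, the write pattern looks roughly like the sketch below.
This is not my exact code -- it's written against the librados C API as
documented today, and the pool/object names, object size and loop counts
are just illustrative (24,000,000 objects / 2400 pools = 10,000 per
pool); error handling is trimmed for brevity.

    /* sketch of the workload: many tiny objects spread over many pools */
    #include <rados/librados.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        rados_t cluster;

        /* connect to the cluster using the default ceph.conf */
        if (rados_create(&cluster, NULL) < 0 ||
            rados_conf_read_file(cluster, "/etc/ceph/ceph.conf") < 0 ||
            rados_connect(cluster) < 0) {
            fprintf(stderr, "failed to connect to cluster\n");
            return 1;
        }

        char payload[64];                      /* "very small" objects */
        memset(payload, 'x', sizeof(payload));

        for (int p = 0; p < 2400; p++) {       /* one pool per iteration */
            char pool[32];
            snprintf(pool, sizeof(pool), "testpool-%d", p);
            rados_pool_create(cluster, pool);  /* return value ignored;
                                                  harmless if it exists */

            rados_ioctx_t io;
            if (rados_ioctx_create(cluster, pool, &io) < 0)
                continue;

            for (int i = 0; i < 10000; i++) {  /* serial writes, no reads */
                char oid[32];
                snprintf(oid, sizeof(oid), "obj-%d", i);
                rados_write_full(io, oid, payload, sizeof(payload));
            }
            rados_ioctx_destroy(io);
        }

        rados_shutdown(cluster);
        return 0;
    }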

The cosd processes have steadily grown in resident memory and have
finally exhausted RAM, so they are being killed by the OOM killer (the
nodes have 6 GB of RAM and no swap).

When I start them back up they very quickly grow in memory again and
get killed.

Is this expected? Do the osds require a certain amount of resident
memory relative to the data size (or perhaps the number of objects)?

Can you offer any guidance on planning for RAM usage?

I'm running Ceph 0.24 on 64-bit Ubuntu Lucid servers.  In case it's
useful, I've only written these objects serially: no reads, no
rewrites, no updates and no snapshots.

I've got some further questions/observations about disk usage in this
scenario, but I'll start a separate thread for that.

Thanks,

John. 

