When building a new cluster with 3 OSDs using mkcephfs I'm getting the
message "max osd in /etc/ceph/ceph.conf is 2, num osd is 3". The
ceph.conf initially didn't define a max osd line at all, and adding
"max osd = 4" or "osd max = 4" didn't seem to have any effect on the
message, regardless of the [...]
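
For context, a minimal sketch of the sort of ceph.conf involved. The
hostnames and paths are placeholders, and putting "max osd" in [global],
sized to at least the number of osd sections, is an assumption about
what the check wants, not taken from the original report:

  [global]
          ; assumed: must be >= the number of [osd.N] sections below
          max osd = 3
  [mon.0]
          host = node0
          mon addr = 192.168.0.10:6789
  [osd.0]
          host = node0
          osd data = /srv/osd.0
  [osd.1]
          host = node1
          osd data = /srv/osd.1
  [osd.2]
          host = node2
          osd data = /srv/osd.2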

On Tue, 4 Jan 2011, Matthew Roy wrote:
> When building a new cluster with 3 OSDs using mkcephfs I'm getting the
> message "max osd in /etc/ceph/ceph.conf is 2, num osd is 3". The
> ceph.conf initially didn't define a max osd line at all, and adding
> "max osd = 4" or "osd max = 4" didn't seem to have any [...]

Hi,

I have been following your project for a long time, and it looks like
Ceph is getting closer to release 1.0. Are you planning on calling
version 1.0 production-ready?

We have been holding off on testing Ceph in depth, but it looks like
we should start now that a stable, production-ready release [...]

On Tue, Jan 4, 2011 at 10:02 AM, Roland Rabben rol...@jotta.no wrote:
> Hi,
>
> I have been following your project for a long time, and it looks like
> Ceph is getting closer to release 1.0. Are you planning on calling
> version 1.0 production-ready?

Version 1.0 will definitely be a production-ready [...]

Hi,

I've got a 3-node test cluster (3 mons, 3 osds) with about 24,000,000
very small objects across 2400 pools (written directly with librados;
this isn't a Ceph filesystem).

The cosd processes have steadily grown in RAM size and have finally
exhausted RAM and are getting killed by the OOM killer [...]
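
As a rough way to reproduce the pattern described, a sketch using the
rados command-line tool, which drives the same librados write path the
original test used directly. The pool name, object count, and 32-byte
object size are illustrative assumptions, not the original workload:

  # one tiny payload, reused for every object
  dd if=/dev/zero of=/tmp/tiny bs=32 count=1

  # assumes a pool named "testpool" already exists
  for i in $(seq 1 10000); do
      rados -p testpool put obj.$i /tmp/tiny
  done

Watching the cosd resident set while this runs (e.g. in top) should
show whether memory grows with object count.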

On Tue, Jan 4, 2011 at 1:58 PM, John Leach j...@brightbox.co.uk wrote:
> Hi,
>
> I've got a 3-node test cluster (3 mons, 3 osds) with about 24,000,000
> very small objects across 2400 pools (written directly with librados;
> this isn't a Ceph filesystem).
>
> The cosd processes have steadily grown in RAM [...]

A week or two back, I had some cases where cosd got killed by the OOM
killer on my test box. Someone else was hogging memory with other
programs running on the same computer, so I thought that was the
cause. It also didn't happen again after the first two times, so I
turned my [...]
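
For anyone hitting the same thing, two quick checks can confirm
whether the OOM killer is responsible and how big cosd has actually
grown (log locations and output format vary by distribution; this is
a generic sketch):

  # the kernel log records each OOM kill with the victim's name and pid
  dmesg | grep -i "killed process"

  # resident and virtual size of every running cosd, in kB
  ps -C cosd -o pid,rss,vsz,cmd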

On Tue, 2011-01-04 at 14:28 -0800, Gregory Farnum wrote:
> On Tue, Jan 4, 2011 at 1:58 PM, John Leach j...@brightbox.co.uk wrote:
> > Hi,
> >
> > I've got a 3-node test cluster (3 mons, 3 osds) with about 24,000,000
> > very small objects across 2400 pools (written directly with librados;
> > this isn't a Ceph filesystem). [...]

Has anyone tried to operate (or simulate) a cluster over various link
speeds? There was a mailing list question months ago about Ceph over
WAN, and the consensus was that it would not perform well, but there's
a broad spectrum of link speeds and latencies in the real world; LAN
and WAN are [...]
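
For the "simulated" half of that question, Linux netem can impose
arbitrary delay and rate limits on a test node's interface. The
interface name and the numbers below are arbitrary examples, not a
recommended test matrix:

  # add 50ms +/- 10ms of latency to everything leaving eth0
  tc qdisc add dev eth0 root handle 1:0 netem delay 50ms 10ms

  # chain a token bucket filter under it to cap bandwidth at 2mbit
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 2mbit buffer 3200 limit 6000

  # remove the shaping when done
  tc qdisc del dev eth0 root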