Subject: Crushmap Design Question
From: Moore, Shawn M <smmo...@catawba.edu>
To: ceph-devel@vger.kernel.org
Sent: Tue, Jan 8, 2013, 12:20 PM

I have been testing ceph for a little over a month now. Our design goal is to
have 3 datacenters in different buildings, all tied together over 10GbE.
Currently there are 10 servers, each serving 1 osd, in 2 of the datacenters.
In the third is one large server with 16 SAS disks serving 8 osds.
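To make the layout concrete, a crushmap along these lines could look roughly
like the sketch below. The bucket names, ids, weights, and osd numbers are
illustrative only; dc1 and dc2 would each be built from their 10 one-osd
hosts the same way dc3 is shown here:

    # bucket types used below
    type 0 osd
    type 1 host
    type 2 datacenter
    type 3 root

    # the large server in the third building (osd.20..osd.27 assumed)
    host dc3-big {
            id -4
            alg straw
            hash 0  # rjenkins1
            item osd.20 weight 1.000
            # ... repeat for osd.21 through osd.27
    }

    datacenter dc3 {
            id -3
            alg straw
            hash 0
            item dc3-big weight 8.000
    }

    # dc1 (id -2) and dc2 (id -5) aggregate their hosts the same way

    root default {
            id -1
            alg straw
            hash 0
            item dc1 weight 10.000
            item dc2 weight 10.000
            item dc3 weight 8.000
    }

    # place each replica in a different datacenter
    rule rep_per_dc {
            ruleset 1
            type replicated
            min_size 2
            max_size 3
            step take default
            step chooseleaf firstn 0 type datacenter
            step emit
    }

With "chooseleaf firstn 0 type datacenter", CRUSH picks as many distinct
datacenters as the pool's rep size and one osd under each, so every building
ends up holding a full copy of the data.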
On 01/09/2013 01:53 AM, Chen, Xiaoxi wrote:

Hi,

Setting rep size to 3 only makes the data triple-replicated. That means that
even when you fail all OSDs in 2 out of 3 DCs, the data is still accessible.
But the monitors are another story: for the cluster to keep running, more
than half of the monitors have to stay up to form a quorum.
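Rep size, for reference, is set per pool. Assuming a pool named rbd and the
ruleset numbered 1 in the sketch above (both are placeholders for your own
names):

    # triple-replicate the pool and point it at the per-DC rule
    ceph osd pool set rbd size 3
    ceph osd pool set rbd crush_ruleset 1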
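The monitor problem is easy to see with one monitor per building. A
hypothetical ceph.conf layout (hostnames and addresses made up):

    [mon.a]
            host = dc1-mon
            mon addr = 10.0.1.1:6789

    [mon.b]
            host = dc2-mon
            mon addr = 10.0.2.1:6789

    [mon.c]
            host = dc3-mon
            mon addr = 10.0.3.1:6789

With 3 monitors a quorum needs 2, so losing two buildings leaves 1 of 3
monitors and the cluster stops serving I/O even though a complete replica of
the data survives in the third building. Adding more monitors per building
does not change this: however they are split across 3 buildings, the two
buildings with the most monitors always hold a majority between them.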