Thank you Juan for all the info!
So, if I understand correctly, I just create three nodes with one OSD per
hard drive (without any RAID) and that's all?
Will Ceph be able to choose by itself where to store the data?
Let's try!
Thank you very much,
bye!
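For anyone following along, setting up one OSD per raw drive with ceph-deploy (current at the time of this thread) looks roughly like the sketch below. The hostnames (node1..node3) and device names (sdb..sde) are examples, not taken from the thread:

```shell
# Sketch only: hostnames and device names are examples.
# Run from the admin/deploy host; one OSD per whole drive, no RAID underneath.
for host in node1 node2 node3; do
  for dev in sdb sdc sdd sde; do
    ceph-deploy osd create "${host}:${dev}"
  done
done
```

Once the OSDs are in, CRUSH handles placement of the replicas across hosts automatically.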
2013/12/23 JuanJose Galvez
On 12/22/2013 1:57 AM, shacky wrote:
>> Replication is set on a per pool basis. You can set some, or all,
>> pools to replica size of 2 instead of 3.
>
> Thank you very much. I saw this is to be set in the global configuration
> (osd pool default size).
> So it's up to me to configure Ceph to be redundant and fault tolerant?
> If I set "
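For reference, "osd pool default size" only affects pools created after it is set; existing pools are changed per pool. A sketch of both (the pool name "rbd" is just an example):

```shell
# In ceph.conf, [global] section -- applies to pools created afterwards:
#   osd pool default size = 2
#   osd pool default min size = 1

# For an existing pool (pool name "rbd" is an example):
ceph osd pool set rbd size 2
ceph osd pool get rbd size
```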
On Dec 21, 2013 12:32 PM, "shacky" wrote:
>> It all depends on the replication level you use, but let's assume 3.
>>
>> So you get the capacity of one machine.
>>
>> 4 * 1000 * 1000 / 1024 / 1024 = 3.81TB
>>
>> This would result in 15.24TB of raw space per machine.
>
> Does replication level 3 mean that the data are all replicated three
> times in the cluster?

Replication is set on a per pool basis. You can set some, or all, pools to
replica size of 2 instead of 3.
On 12/21/2013 07:53 PM, shacky wrote:
> Hi.
> I am trying to understand how much space available I will get on my Ceph
> cluster if I will use three servers with 4x4TB hard drives each.
> Thank you very much!
> Bye.

It all depends on the replication level you use, but let's assume 3.
So you get the capacity of one machine.

4 * 1000 * 1000 / 1024 / 1024 = 3.81TB

This would result in 15.24TB of raw space per machine.
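Putting the numbers above into a small Python sketch (same conversion and rounding as in the thread, with the "4 TB" drive label taken as 4 * 1000 * 1000 MB):

```python
# Capacity estimate for the cluster in this thread:
# 3 servers, 4 drives per server, 4 TB per drive, replica size 3.
SERVERS = 3
DRIVES_PER_SERVER = 4
DRIVE_TB = 4
REPLICAS = 3

# Per-drive size converted as in the thread
# (1000-based drive label, 1024-based reporting):
per_drive = round(DRIVE_TB * 1000 * 1000 / 1024 / 1024, 2)  # 3.81
raw_per_server = round(per_drive * DRIVES_PER_SERVER, 2)    # 15.24
raw_total = round(raw_per_server * SERVERS, 2)              # 45.72

# Every object is stored REPLICAS times, so usable space is raw / replicas
# -- roughly the capacity of one server:
usable = round(raw_total / REPLICAS, 2)                     # 15.24

print(f"raw: {raw_total} TB, usable at {REPLICAS}x: {usable} TB")
```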
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.