With UCS you can run a dual-server chassis and split the disks between the two nodes - 30 drives per node.
Better density and easier to manage.




On Feb 16, 2016, 3:39 AM, "Василий Ангапов" <anga...@gmail.com> wrote:
>Nick, Tyler, many thanks for very helpful feedback!
>I spent many hours meditating on the following two links:
>http://www.supermicro.com/solutions/storage_ceph.cfm
>http://s3s.eu/cephshop
>
>60- or even 72-disk nodes are very capacity-efficient, but will the 2
>CPUs (even the fastest ones) be enough to handle Erasure Coding?
>Also, as Nick stated, with 4-5 nodes I cannot use high-M "K+M"
>combinations.
>I've done some calculations and found that the most efficient and safe
>configuration is to use 10 nodes with 29*6TB SATA drives and 7*200GB
>S3700 SSDs for journals. Assuming a 6+3 EC profile, that will give me
>1.16 PB of effective space. Also, I prefer not to use precious NVMe
>drives - I don't see any reason to use them.
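The arithmetic behind that 1.16 PB figure checks out; a quick sketch (treating TB as decimal terabytes and counting only the 29 data drives per node, not the journal SSDs):

```python
# Back-of-envelope capacity check for the proposed layout.
# All figures are taken from the message above.
nodes = 10
data_drives_per_node = 29
drive_tb = 6.0
k, m = 6, 3  # proposed EC profile

raw_tb = nodes * data_drives_per_node * drive_tb
efficiency = k / (k + m)           # usable fraction under 6+3 EC
effective_pb = raw_tb * efficiency / 1000

print(f"raw: {raw_tb:.0f} TB, effective: {effective_pb:.2f} PB")
# -> raw: 1740 TB, effective: 1.16 PB
```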
>
>But what about RAM? Can I go with 64GB per node with the above config?
>I've seen OSDs consume no more than 1GB of RAM each for replicated
>pools (even with 6TB drives). But what is the typical memory usage of
>EC pools? Does anybody know?
>
>Also, am I right that for a 6+3 EC profile I need at least 10 nodes to
>feel comfortable (one extra node for redundancy)?
>
>And finally, can someone recommend which EC plugin to use in my case?
>I know it's a difficult question, but still?
>
>
>2016-02-16 16:12 GMT+08:00 Nick Fisk <n...@fisk.me.uk>:
>>
>>
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>> Tyler Bishop
>>> Sent: 16 February 2016 04:20
>>> To: Василий Ангапов <anga...@gmail.com>
>>> Cc: ceph-users <ceph-users@lists.ceph.com>
>>> Subject: Re: [ceph-users] Recommendations for building 1PB RadosGW with
>>> Erasure Code
>>>
>>> You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
>>>
>>> We run 4 systems at 56x6TB with dual E5-2660 v2 and 256GB RAM.
>>> Performance is excellent.
>>
>> Only thing I will say to the OP is that if you only need 1PB, then
>> likely 4-5 of these will give you enough capacity. Personally I would
>> prefer to spread the capacity across more nodes. If you are doing
>> anything serious with Ceph, it's normally a good idea to make each
>> node no more than 10% of total capacity. Also, with EC pools you will
>> be limited in the K+M combos you can achieve with a smaller number
>> of nodes.
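That K+M limitation is easy to sketch (an illustration, assuming the CRUSH failure domain is the host, so all k+m chunks of an object must land on different nodes):

```python
# Which k+m erasure-code profiles fit a given node count, assuming
# the CRUSH failure domain is the host (so k + m <= number of nodes).
def feasible_profiles(nodes, max_m=4):
    profiles = []
    for m in range(1, max_m + 1):
        for k in range(2, nodes):
            if k + m <= nodes:
                profiles.append((k, m, k / (k + m)))
    return profiles

for k, m, eff in feasible_profiles(5):
    print(f"k={k} m={m} efficiency={eff:.0%}")
```

With only 5 nodes, nothing beats 4+1 (80% efficiency, but only one chunk of redundancy), whereas 10 nodes admit the 6+3 profile discussed above.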
>>
>>>
>>> I would recommend a cache tier for sure if your data is busy for
>>> reads.
>>>
>>> Tyler Bishop
>>> Chief Technical Officer
>>> 513-299-7108 x10
>>>
>>>
>>>
>>> tyler.bis...@beyondhosting.net
>>>
>>>
>>>
>>> ----- Original Message -----
>>> From: "Василий Ангапов" <anga...@gmail.com>
>>> To: "ceph-users" <ceph-users@lists.ceph.com>
>>> Sent: Friday, February 12, 2016 7:44:07 AM
>>> Subject: [ceph-users] Recommendations for building 1PB RadosGW with
>>> Erasure Code
>>>
>>> Hello,
>>>
>>> We are planning to build a 1PB Ceph cluster for RadosGW with Erasure
>>> Code. It will be used for storing online videos.
>>> We do not expect outstanding write performance - something like
>>> 200-300MB/s of sequential writes will be quite enough - but data
>>> safety is very important.
>>> What are the most popular hardware and software recommendations?
>>> 1) What EC profile is best to use? What values of K/M do you
>>> recommend?
>>
>> The higher you go in total k+m, the more CPU you will require, and
>> sequential performance will degrade slightly as the IOs going to the
>> disks are smaller. However, larger numbers allow you to be more
>> creative with failure scenarios and "replication" efficiency.
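The shrinking-IO effect is straightforward to sketch: each object is striped into k data chunks, so each OSD sees roughly object_size / k per operation (a simplification that ignores stripe_unit alignment in real profiles):

```python
# Approximate per-OSD chunk size for a 4 MB RGW object under
# different EC profiles; more data chunks (k) means smaller disk IOs.
object_mb = 4.0
for k, m in [(3, 2), (6, 3), (10, 4)]:
    chunk_kb = object_mb * 1024 / k
    print(f"{k}+{m}: ~{chunk_kb:.0f} KB per OSD per operation")
```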
>>
>>> 2) Do I need to use Cache Tier for RadosGW, or is it only needed
>>> for RBD? Is it still an overall good practice to use Cache Tier for
>>> RadosGW?
>>
>> Only needed for RBD, but depending on workload it may still be of
>> benefit. If you are mostly doing large IOs, the gains will be a lot
>> smaller.
>>
>>> 3) What hardware is recommended for EC? I assume higher-clocked
>>> CPUs are needed? What about RAM?
>>
>> Total GHz is more important (i.e. GHz x cores). Go with the most
>> cost- and power-efficient CPUs you can get. Aim for somewhere around
>> 1GHz per disk.
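Applying that rule of thumb to the hardware mentioned in this thread (a rough sketch; the E5-2660 v2 figures of 10 cores at 2.2 GHz base clock are assumed from Intel's published specs, and cores x base clock glosses over IPC differences):

```python
# Total CPU "GHz" vs the ~1 GHz-per-disk rule of thumb.
# Assumed spec: Intel E5-2660 v2 = 10 cores @ 2.2 GHz base clock.
def total_ghz(sockets, cores, base_clock_ghz):
    return sockets * cores * base_clock_ghz

dual_2660v2 = total_ghz(2, 10, 2.2)  # ~44 GHz per node
for disks in (29, 56):
    print(f"{disks} disks: {dual_2660v2 / disks:.2f} GHz/disk")
```

By this yardstick a dual E5-2660 v2 box is comfortable at 29 data disks per node and noticeably tighter at 56-60, which bears on the earlier question about 60/72-bay chassis and EC.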
>>
>>> 4) What SSDs for Ceph journals are the best?
>>
>> Intel S3700, or P3700 if you can stretch to it.
>>
>> By all means explore other options, but you can't go wrong buying
>> these. Think of the "you can't get fired for buying Cisco" quote!
>>
>>>
>>> Thanks a lot!
>>>
>>> Regards, Vasily.
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
