Like Alex said, the network is not an issue now. For example, I have a 6-node
cluster with a mix of SAS and SSD disks running Cassandra clusters
with heavy load, and also MySQL clusters, and I'm getting less than 1 ms of IO
latency. On the network side I have InfiniBand with FDR configured, and also
clust
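The sub-millisecond figure above is easy to sanity-check yourself. A minimal sketch (my own, not the poster's benchmark) that times `write()`+`fsync()` pairs, roughly the access pattern a journal device sees:

```python
# Rough latency probe: time synchronous 4 KiB write+fsync pairs.
# Results depend on the device behind `path`; a journal-class SSD
# should land well under 1 ms per operation.
import os
import statistics
import tempfile
import time

def measure_sync_write_latency(path, iterations=200, block_size=4096):
    """Return per-operation write()+fsync() latencies in milliseconds."""
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    latencies = []
    try:
        for _ in range(iterations):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)
            latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
    return latencies

if __name__ == "__main__":
    path = tempfile.mktemp()
    lats = measure_sync_write_latency(path)
    os.unlink(path)
    p99 = sorted(lats)[int(len(lats) * 0.99)]
    print("median %.3f ms  p99 %.3f ms" % (statistics.median(lats), p99))
```

For serious numbers you would use fio against the raw device rather than a quick script like this, but the shape of the measurement is the same.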
On Fri, Mar 24, 2017 at 10:04 AM Alejandro Comisario wrote:
> Thanks for the recommendations so far.
> Anyone with more experience and thoughts?
>
> best
>
On the network side, 25, 40, 56, and maybe soon 100 Gbps are now fairly
affordable, and simplify the architecture for the high throughpu
Thanks for the recommendations so far.
Anyone with more experience and thoughts?
best
On Mar 23, 2017 16:36, "Maxime Guyot" wrote:
> Hi Alejandro,
>
> As I understand it, you are planning NVMe journals for the SATA HDDs and
> collocated journals for the SATA SSDs?
>
> Option 1:
> - 24x SATA SSDs per serv
Hi Alejandro,
As I understand it, you are planning NVMe journals for the SATA HDDs and
collocated journals for the SATA SSDs?
Option 1:
- 24x SATA SSDs per server will hit a bottleneck at the storage
bus/controller. Also, I would consider the network capacity: 24 SSDs will
deliver more performance tha
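The point about 24 SSDs out-running the bus and the network can be made concrete with a back-of-envelope calculation. The per-SSD throughput below is an assumed round number, not a measured or vendor figure:

```python
# Back-of-envelope: aggregate sequential throughput of 24 SATA SSDs
# vs. a SAS3 HBA uplink and some common NIC configurations.
ssd_mb_s = 500          # assumed per-SSD sequential throughput, MB/s
num_ssds = 24
aggregate_gbps = num_ssds * ssd_mb_s * 8 / 1000.0   # MB/s -> Gbps

links = {
    "SAS3 HBA (8 lanes x 12 Gbps)": 96,   # nominally equal to the SSDs
    "2x 10 GbE": 20,
    "2x 25 GbE": 50,
    "1x 100 GbE": 100,
}

print("24 SSDs ~ %.0f Gbps raw" % aggregate_gbps)
for name, capacity_gbps in links.items():
    verdict = "bottleneck" if capacity_gbps < aggregate_gbps else "ok"
    print("%-30s %3d Gbps -> %s" % (name, capacity_gbps, verdict))
```

Replication traffic and small-block random IO change the picture, but even this crude estimate shows why dual 10 GbE is nowhere near enough for an all-SSD 24-bay server.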
Hi,
Ceph speeds up with more nodes and more OSDs, so go for 6 nodes with
mixed SSD+SATA.
Udo
On 23.03.2017 18:55, Alejandro Comisario wrote:
> Hi everyone!
> I have to install a Ceph cluster (6 nodes) with two "flavors" of
> disks: 3 servers with SSD and 3 servers with SATA.
>
> I will purchase
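If the cluster mixes SSD and SATA nodes as discussed, pools usually need to be pinned to one media type via CRUSH. A hedged sketch, assuming a release with device classes (Luminous or later; this thread predates them, and older releases needed hand-built CRUSH roots instead). Pool names and PG counts are placeholders:

```shell
# Create one replicated CRUSH rule per device class.
ceph osd crush rule create-replicated fast default host ssd
ceph osd crush rule create-replicated slow default host hdd

# Create pools bound to each rule (PG counts are illustrative only).
ceph osd pool create fast-pool 128 128 replicated fast
ceph osd pool create slow-pool 128 128 replicated slow
```

With that in place, clients pick the media type simply by choosing the pool, and the SSD and SATA servers can live in a single cluster.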
Hi everyone!
I have to install a Ceph cluster (6 nodes) with two "flavors" of
disks: 3 servers with SSD and 3 servers with SATA.
I will purchase 24-disk servers (the SATA ones with an NVMe SSD for
the SATA journal).
Processors will be 2x E5-2620v4 with HT, and RAM will be 20 GB for the
OS, and 1.
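The RAM line above is cut off, so the numbers below are generic rules of thumb I am assuming, not the poster's actual plan:

```python
# Hedged RAM sizing sketch for a 24-OSD node. The per-OSD baseline is
# a common rule of thumb (roughly 1-2 GB per HDD OSD), not the figure
# from the truncated message.
osd_count = 24
ram_per_osd_gb = 2      # assumed per-OSD baseline
os_reserve_gb = 20      # OS reservation mentioned in the thread

total_gb = os_reserve_gb + osd_count * ram_per_osd_gb
print("Suggested RAM per node: %d GB" % total_gb)
```

Recovery and backfill push per-OSD memory well above the idle baseline, so erring high here is cheap insurance.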