[ceph-users] Re: Ceph server

2021-03-12 Thread Ignazio Cassano
Yes, I noted that more bandwidth is required with this kind of server. I must reconsider my network infrastructure. Many thanks, Ignazio. On Fri, 12 Mar 2021 at 09:26, Robert Sander < r.san...@heinlein-support.de> wrote: > Hi, > > On 10.03.21 at 17:43, Ignazio Cassano wrote: > >

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
On 10.03.21 at 20:44, Ignazio Cassano wrote: > 1 small SSD is for the operating system and 1 is for the mon. Make that a RAID1 set of SSDs and be happier. ;) Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax:

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
Hi, On 10.03.21 at 17:43, Ignazio Cassano wrote: > 5 x 8.0TB Intel® SSD DC P4510 Series U.2 PCIe 3.1 x4 NVMe Solid State Drive > Hard Drive > 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA2 (2x SFP+) Have you calculated the throughput of 8 NVMe drives against 2x 10G bonded
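
A quick back-of-the-envelope comparison of Robert's point (the ~3.2 GB/s per-drive figure is an assumption taken from the published P4510 sequential read spec; real OSD workloads will be lower):

    # Rough comparison: aggregate NVMe bandwidth vs. a bonded 2x 10 GbE link.
    # The per-drive number is an assumed P4510 sequential read spec, not a Ceph benchmark.

    GB = 1000**3                      # decimal gigabyte, as used in drive/NIC specs

    nvme_drives = 5                   # 5 x 8 TB P4510 in the quoted configuration
    nvme_read_bps = 3.2 * GB          # assumed sequential read per drive

    nic_ports = 2                     # 2x SFP+ on one X710-DA2, bonded
    nic_port_bps = 10e9 / 8           # 10 Gbit/s -> bytes per second

    nvme_total = nvme_drives * nvme_read_bps
    net_total = nic_ports * nic_port_bps

    print(f"NVMe aggregate: {nvme_total / GB:.1f} GB/s")    # ~16.0 GB/s
    print(f"2x 10G bond:    {net_total / GB:.1f} GB/s")     # ~2.5 GB/s
    print(f"Ratio:          {nvme_total / net_total:.1f}x") # ~6.4x
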

[ceph-users] Re: Ceph server

2021-03-11 Thread Ignazio Cassano
Many thanks, Ignazio. On Fri, 12 Mar 2021, 00:04 Reed Dier wrote: > I'm going to echo what Stefan said. > > I would ditch the 2x SATA drives to free up your slots. > Replace with an M.2 or SATADOM. > > I would also recommend moving from the 2x X710-DA2 cards to 1x X710-DA4 > card. > It can't

[ceph-users] Re: Ceph server

2021-03-11 Thread Reed Dier
I'm going to echo what Stefan said. I would ditch the 2x SATA drives to free up your slots. Replace with an M.2 or SATADOM. I would also recommend moving from the 2x X710-DA2 cards to 1x X710-DA4 card. It can't saturate the x8 slot, and it frees up a PCIe slot for possibly another NVMe card or
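
A rough sketch of the PCIe arithmetic behind Reed's suggestion (nominal link rates only; every figure here is an approximation and all overhead other than the PCIe line encoding is ignored):

    # Nominal bandwidth of a PCIe 3.0 x8 slot vs. a quad-port 10 GbE NIC (X710-DA4).
    # Only the 128b/130b PCIe line encoding is accounted for.

    lanes = 8
    gen3_gtps = 8.0                                        # GT/s per PCIe 3.0 lane
    pcie_bps = lanes * gen3_gtps * 1e9 * (128 / 130) / 8   # bytes per second

    ports = 4                                              # X710-DA4
    nic_bps = ports * 10e9 / 8                             # 4x 10 Gbit/s in bytes per second

    print(f"PCIe 3.0 x8: {pcie_bps / 1e9:.2f} GB/s")   # ~7.88 GB/s
    print(f"4x 10 GbE:   {nic_bps / 1e9:.2f} GB/s")    # 5.00 GB/s
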

[ceph-users] Re: Ceph server

2021-03-10 Thread Stefan Kooman
On 3/10/21 8:12 PM, Stefan Kooman wrote: On 3/10/21 5:43 PM, Ignazio Cassano wrote: Hello, what do you think about a ceph cluster made up of 6 nodes, each with the following configuration? I forgot to ask: Are you planning on only OSDs, or should this be OSDs and MONs and ? In case of

[ceph-users] Re: Ceph server

2021-03-10 Thread Stefan Kooman
On 3/10/21 5:43 PM, Ignazio Cassano wrote: Hello, what do you think about a ceph cluster made up of 6 nodes, each with the following configuration? A+ Server 1113S-WN10RT Barebone Supermicro A+ Server 1113S-WN10RT - 1U - 10x U.2 NVMe - 2x M.2 - Dual 10-Gigabit LAN - 750W Redundant

[ceph-users] Re: Ceph server

2021-03-10 Thread Ignazio Cassano
Sorry, I forgot to mention that I will not use CephFS. On Wed, 10 Mar 2021, 20:44 Ignazio Cassano wrote: > Hello, mon and osd. > 1 small SSD is for the operating system and 1 is for the mon. > I agree to increase the RAM. > As far as NVMe size, it is true that more OSDs on smaller disks is a better >

[ceph-users] Re: Ceph server

2021-03-10 Thread Ignazio Cassano
Hello, mon and osd. 1 small SSD is for the operating system and 1 is for the mon. I agree to increase the RAM. As far as NVMe size, it is true that more OSDs on smaller disks is a better choice for performance, but I would have to buy more servers. Ignazio On Wed, 10 Mar 2021, 20:24 Stefan Kooman wrote: >
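
An illustrative calculation of the size-vs-recovery trade-off Ignazio mentions (all inputs below are assumptions, not measurements):

    # Illustrative only: data that must be re-replicated after one OSD fails,
    # and a lower bound on recovery time for a given bandwidth budget.

    TB = 1000**4

    osd_size_tb = 8          # one 8 TB OSD, as in the quoted configuration
    fill_ratio = 0.6         # assumed 60% full
    recovery_gbps = 10       # assumed usable recovery bandwidth in Gbit/s

    data_bytes = osd_size_tb * TB * fill_ratio
    hours = data_bytes * 8 / (recovery_gbps * 1e9) / 3600

    print(f"Data to re-replicate: {data_bytes / TB:.1f} TB")    # 4.8 TB
    print(f"Recovery time (lower bound): {hours:.1f} h")        # ~1.1 h
    # Halving the OSD size halves both numbers, which is why more, smaller
    # OSDs tend to recover faster -- at the cost of more drive bays or servers.
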