Because it's almost impossible to purchase the equipment required to convert old drive bays to U.2, etc.

The M.2s we purchased are enterprise-class.
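
For anyone who wants to sanity-check that on their own drives, the model, firmware, and health data can be read with nvme-cli and smartmontools. A quick sketch only; the device names are examples and will differ on your hosts:

    nvme list                                         # enumerate NVMe drives, models and capacities
    nvme id-ctrl /dev/nvme0 | grep -E '^(mn|fr)'      # model number and firmware revision
    smartctl -a /dev/nvme0                            # health, media errors, percentage used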

Mike


On 14/1/2024 12:53 pm, Anthony D'Atri wrote:
Why use such a card and M.2 drives that I suspect aren’t enterprise-class? 
Instead of U.2, E1.S, or E3.S?

On Jan 13, 2024, at 5:10 AM, Mike O'Connor <m...@oeg.com.au> wrote:

On 13/1/2024 1:02 am, Drew Weaver wrote:
Hello,

So we were going to replace a Ceph cluster with some hardware we had lying 
around using SATA HBAs, but I was told that the only right way to build Ceph in 
2023 is with direct-attach NVMe.

Does anyone have a recommendation for a 1U barebones server (we just drop in RAM, 
disks, and CPUs) with 8-10 2.5" NVMe bays that are direct-attached to the 
motherboard without a bridge or HBA, for Ceph specifically?

Thanks,
-Drew

Hi

You need to use a PCIe card with a PCIe switch; cards with 4 x M.2 NVMe are cheap 
enough, around US$180 from AliExpress.

There are companies selling cards with many more M.2 ports, but the cost goes up 
greatly.
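
A rough way to confirm the switch card and all of its M.2 drives are enumerating properly (assumes nvme-cli is installed; slot and device names will vary):

    lspci | grep -iE 'non-volatile|pci bridge'   # switch ports show as PCI bridges, each drive as a non-volatile memory controller
    nvme list                                    # every M.2 drive on the card should be listed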

We just built a 3 x 1RU HP G9 cluster with 4 x 2TB M.2 NVMe, using dual 40G 
Ethernet ports and dual 10G Ethernet, and a second-hand Arista 16-port 40G switch.

It works really well.
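
If anyone wants to reproduce something similar, here is a minimal sketch of getting the NVMe devices picked up as OSDs on a cephadm-managed cluster (assumes ceph orch is in use; adjust for your own deployment):

    ceph orch device ls                            # confirm the drives show as available on each host
    ceph orch apply osd --all-available-devices    # create an OSD on every available device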

Cheers

Mike

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
