On 13/1/2024 1:02 am, Drew Weaver wrote:
Hello,
So we were going to replace a Ceph cluster with some hardware we had laying
around using SATA HBAs but I was told that the only right way to build Ceph in
2023 is with direct attach NVMe.
Does anyone have any recommendation for a 1U barebones server…
Why use such a card and M.2 drives that I suspect aren’t enterprise-class?
Instead of U.2, E1.s, or E3.s ?
> On Jan 13, 2024, at 5:10 AM, Mike O'Connor wrote:
>
> On 13/1/2024 1:02 am, Drew Weaver wrote:
>> Hello,
>>
>> So we were going to replace a Ceph cluster with some hardware we had laying
>> around using SATA HBAs but I was told that the only right way to build Ceph
>> in 2023 is with direct attach NVMe.
Because it's almost impossible to purchase the equipment required to
convert old drive bays to u.2 etc.
The M.2's we purchased are enterprise class.
Mike
On 14/1/2024 12:53 pm, Anthony D'Atri wrote:
Why use such a card and M.2 drives that I suspect aren’t enterprise-class?
Instead of U.2, E1.s, or E3.s?
The OP is asking about new servers I think.
> On Jan 13, 2024, at 9:36 PM, Mike O'Connor wrote:
>
> Because it's almost impossible to purchase the equipment required to convert
> old drive bays to u.2 etc.
>
> The M.2's we purchased are enterprise class.
>
> Mike
>
>
>> On 14/1/2024 12:53 pm, Anthony D'Atri wrote:
On 14/1/2024 1:57 pm, Anthony D'Atri wrote:
The OP is asking about new servers I think.
I was looking at his statement below about using hardware laying
around, just putting out there some options which worked for us.
So we were going to replace a Ceph cluster with some hardware we had
laying around using SATA HBAs but I was told that the only right way to
build Ceph in 2023 is with direct attach NVMe.
On Fri, Jan 12, 2024 at 02:32:12PM +, Drew Weaver wrote:
> Hello,
>
> So we were going to replace a Ceph cluster with some hardware we had
> laying around using SATA HBAs but I was told that the only right way
> to build Ceph in 2023 is with direct attach NVMe.
>
> Does anyone have any recommendation for a 1U barebones server…
Agreed, though today either limits one’s choices of manufacturer.
> There are models to fit that, but if you're also considering new drives,
> you can get further density in E1/E3
…because the $/GB on datacenter NVMe drives like Kioxia,
etc. is still pretty far away from what it is for HDDs (obviously).
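To put the $/GB gap in rough numbers, here is a small sketch. The USD 70/TB NVMe figure is the one Robin quotes in this thread; the 22 TB HDD unit price is a hypothetical placeholder, not a quoted price:

```python
# Rough $/TB comparison. The HDD price below is an illustrative assumption;
# USD 70/TB is the large-NVMe figure cited elsewhere in this thread.
def cost_per_tb(unit_price_usd: float, capacity_tb: float) -> float:
    """Dollars per raw terabyte for a single drive."""
    return unit_price_usd / capacity_tb

hdd = cost_per_tb(400.0, 22.0)   # hypothetical 22 TB nearline HDD price
nvme = 70.0                      # best public large-NVMe $/TB per the thread
print(f"HDD ~${hdd:.0f}/TB, NVMe ~${nvme:.0f}/TB, ratio ~{nvme / hdd:.1f}x")
```

Under those assumed prices the raw-capacity gap is a bit under 4x, before factoring in power, density, or failure-domain differences.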
Anyway thanks.
-Drew
-----Original Message-----
From: Robin H. Johnson
Sent: Sunday, January 14, 2024 5:00 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: recommendation for barebones server with…
* If they meet your needs
>
> Anyway thanks.
> -Drew
>
>
>
>
>
> -----Original Message-----
> From: Robin H. Johnson
> Sent: Sunday, January 14, 2024 5:00 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: recommendation for barebones server with…
On Mon, Jan 15, 2024 at 03:21:11PM +, Drew Weaver wrote:
> Oh, well what I was going to do was just use SATA HBAs on PowerEdge R740s
> because we don't really care about performance as this is just used as a copy
> point for backups/archival but the current Ceph cluster we have [Which is
> b
>> So we were going to replace a Ceph cluster with some hardware we had
>> laying around using SATA HBAs but I was told that the only right way
>> to build Ceph in 2023 is with direct attach NVMe.
My impressions are somewhat different:
* Nowadays it is rather more difficult to find 2.5in SAS or SATA…
>
> Now that you say it's just backups/archival, QLC might be excessive for
> you (or a great fit if the backups are churned often).
PLC isn’t out yet, though, and probably won’t have a conventional block
interface.
> USD70/TB is the best public large-NVME pricing I'm aware of presently; for
On 12/1/24 22:32, Drew Weaver wrote:
So we were going to replace a Ceph cluster with some hardware we had
laying around using SATA HBAs but I was told that the only right way to
build Ceph in 2023 is with direct attach NVMe.
These kinds of statements make me at least ask questions. Dozens of 14
by “RBD for cloud”, do you mean VM / container general-purposes volumes on
which a filesystem is usually built? Or large archive / backup volumes that
are read and written sequentially without much concern for latency or
throughput?
How many of those ultra-dense chassis in a cluster? Are all
By HBA I suspect you mean a non-RAID HBA?
Yes, something like the HBA355
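One way to sanity-check that a controller is actually passing drives through (rather than wrapping them in RAID virtual disks) is to look at the vendor/model strings the kernel exposes under sysfs. This is a heuristic sketch, not an authoritative test: the hint strings below are assumptions, and real controllers vary.

```python
# Heuristic sketch: flag block devices whose sysfs vendor/model strings look
# like RAID virtual disks rather than passthrough drives. The RAID_HINTS
# strings are assumptions covering common controllers; adjust for your gear.
import os

RAID_HINTS = ("PERC", "MEGARAID", "VIRTUAL", "LOGICAL")

def looks_passthrough(sysfs_root: str, dev: str) -> bool:
    """Return True if dev's vendor/model strings carry no RAID-volume hints."""
    info = ""
    for attr in ("vendor", "model"):
        path = os.path.join(sysfs_root, "block", dev, "device", attr)
        if os.path.exists(path):
            with open(path) as f:
                info += f.read().upper()
    return not any(hint in info for hint in RAID_HINTS)
```

With a true IT-mode HBA the drives should show their native vendor/model (and `smartctl` works without controller-specific `-d` options), which is what Ceph wants for OSDs.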
NVMe SSDs shouldn’t cost significantly more than SATA SSDs. Hint: certain
tier-one chassis manufacturers mark both the fsck up. You can get a better
warranty and pricing by buying drives from a VAR.
We stopped buying “Vendor FW” drives a long time ago.
>
> NVMe SSDs shouldn’t cost significantly more than SATA SSDs. Hint: certain
> tier-one chassis manufacturers mark both the fsck up. You can get a better
> warranty and pricing by buying drives from a VAR.
>
> We stopped buying “Vendor FW” drives a long time ago.
Groovy. Channel drives are IMHO a pain, though in the case of certain
manufacturers it can be the only way to get firmware updates. Channel drives
often only have a 3 year warranty, vs 5 for generic drives.
>Groovy. Channel drives are IMHO a pain, though in the case of certain
>manufacturers it can be the only way to get firmware updates. Channel drives
>often only have a 3 year warranty, vs 5 for generic drives.
Yes, we have run into this with Kioxia as far as being able to find new
firmware. W
On 16/1/24 11:39, Anthony D'Atri wrote:
by “RBD for cloud”, do you mean VM / container general-purposes volumes
on which a filesystem is usually built? Or large archive / backup
volumes that are read and written sequentially without much concern for
latency or throughput?
General purpose volumes…
> Also in our favour is that the users of the cluster we are currently
> intending for this have established a practice of storing large objects.
That definitely is in your favor.
> but it remains to be seen how 60x 22TB behaves in practice.
Be sure you don't get SMR drives.
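A quick way to catch host-aware/host-managed SMR drives from a running system is the kernel's zoned-device attribute. This is a minimal sketch with an important caveat: drive-managed SMR disks report "none" here and can only be identified from the vendor's model lists.

```python
# Sketch: read the kernel's zoned model for a block device from sysfs.
# "host-aware" or "host-managed" means the drive advertises SMR; "none"
# does NOT prove CMR, since drive-managed SMR hides behind "none".
import os

def zoned_model(dev: str, sysfs_root: str = "/sys") -> str:
    """Return 'none', 'host-aware', 'host-managed', or 'unknown' for dev."""
    path = os.path.join(sysfs_root, "block", dev, "queue", "zoned")
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return "unknown"  # attribute absent (very old kernel or bad dev name)
```

Recent `lsblk` exposes the same field as a ZONED column, which is handy for a one-liner across a whole chassis.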
> and it's har