>> Thanks. I am planning to replace all of my disks. But do you know of an
>> enterprise SSD that offers the best trade-off between cost and IOPS
>> performance?
In my prior response I meant to ask what your workload is like. RBD? RGW?
Write-heavy? Mostly reads? This influences what drives make sense.
—
Hi,
Just to add to the previous discussion, consumer SSDs like these can
unfortunately be significantly *slower* than plain old HDDs for Ceph. This is
because Ceph always uses SYNC writes to guarantee that data is on disk before
returning.
Unfortunately, NAND writes are intrinsically quite slow.
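To illustrate the point, here is a minimal, hypothetical sketch (not a Ceph benchmark): forcing every write to stable storage with fsync(), which is roughly what Ceph's sync-write behavior implies, exposes the device's true write latency, while buffered writes hide it behind the page cache.

```python
import os
import tempfile
import time

def write_iops(n_writes: int, block: bytes, sync: bool) -> float:
    """Time n_writes of `block`; fsync after each write when sync=True."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n_writes):
            os.write(fd, block)
            if sync:
                # Force the data to stable storage before continuing,
                # analogous to Ceph acknowledging only durable writes.
                os.fsync(fd)
        elapsed = time.perf_counter() - start
        return n_writes / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

block = b"\0" * 4096
buffered = write_iops(200, block, sync=False)
synced = write_iops(200, block, sync=True)
print(f"buffered: {buffered:,.0f} IOPS, fsync-per-write: {synced:,.0f} IOPS")
```

On a drive without power-loss-protected caches, the fsync path is typically dramatically slower, which is why drives that look fast in casual benchmarks can crawl under Ceph.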
Thanks. I am planning to replace all of my disks. But do you know of an
enterprise SSD that offers the best trade-off between cost and IOPS
performance? Which model and brand? Thanks in advance.
On Wednesday, December 28, 2022 at 08:44:34 AM GMT+3:30, Konstantin
Shalygin wrote:
Hi,
The cache was exhausted, and background optimization is in progress. This is
not an enterprise device; you should never use it with Ceph 🙂
k
> On 27 Dec 2022, at 16:41, hosseinz8...@yahoo.com wrote:
>
>  Thanks, Anthony. I have a cluster with QLC SSD disks (Samsung QVO 860). The
> cluster has been working for two years.
I do not have *direct* experience with that model, but I can share some
speculation:
That is a *consumer* model, known as “client” in the SSD industry. It’s also
QLC. It’s optimized for PB/$.
I suspect that at least one of several things is going on.
* Cliffing: Client SSDs are architected for bursty desktop workloads; many
dedicate part of the NAND as a fast SLC cache, and once that cache fills,
sustained write performance drops off a cliff.
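The cliff can be sketched with a toy model (all numbers here are invented for illustration; real cache sizes and speeds vary by drive and fill level):

```python
# Toy model of an SLC-cache "cliff" on a client QLC SSD.
# All constants are illustrative assumptions, not measured values.

CACHE_GIB = 40        # assumed size of the fast SLC write cache
CACHE_IOPS = 10_000   # write IOPS while the cache absorbs writes
NATIVE_IOPS = 100     # write IOPS once writes land on QLC directly

def observed_iops(gib_written_recently: float) -> int:
    """Sustained write IOPS as a function of recent write volume."""
    if gib_written_recently < CACHE_GIB:
        return CACHE_IOPS   # bursty desktop workload: cache never fills
    return NATIVE_IOPS      # steady Ceph workload: cache stays exhausted

for written in (1, 20, 39, 40, 200):
    print(f"{written:>4} GiB written -> ~{observed_iops(written)} IOPS")
```

A desktop user rarely writes enough at once to fall off the cliff; a Ceph OSD under continuous load lives on the far side of it, which matches the symptoms described in this thread.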
Thanks, Anthony. I have a cluster with QLC SSD disks (Samsung QVO 860). The
cluster has been working for two years. Now all OSDs return 12 IOPS when
running tell bench, which is very slow. But I bought new QVO disks yesterday,
and I added this new disk to the cluster. For the first hour, I got 100 IOPS
from this new disk.
My understanding is that when you ask an OSD to bench (via the admin socket),
only that OSD executes the benchmark; there is no replication. Replication is
a function of PGs.
Thus, this is a narrowly-focused tool with both unique advantages and
disadvantages.
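For reference, the single-OSD benchmark discussed in this thread is invoked roughly like this (requires a running cluster; osd.0 and the sizes are placeholders):

```shell
# Write ~1 GiB in 4 KiB blocks to osd.0 only; replicas are not involved.
ceph tell osd.0 bench 1073741824 4096
```

Because no peers participate, it isolates the raw device and OSD stack, which is its unique advantage; the corresponding disadvantage is that it says nothing about client-visible, replicated write performance.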
> On Dec 26, 2022, at 12:47 PM, hosseinz8...@