Hi,
Can you test it slightly differently (and more simply)? Like in this
Google sheet:
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit#gid=0
As we know it's a QLC drive, first let it fill the SLC cache:
fio -ioengine=libaio -direct=1 -name=test -bs=4M
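The command above is cut off; a fuller version of the kind of two-phase test being described might look like the job file below. This is a sketch, not the sheet's exact parameters: /dev/sdX, the 200g fill size, and the 60s runtime are all placeholders you'd adjust to your drive.

```ini
; Hypothetical fio job file: fill the SLC cache with sequential 4M writes,
; then measure steady-state write speed once the cache is exhausted.
; WARNING: writes directly to the device and destroys its contents.
[global]
ioengine=libaio
direct=1
bs=4M
iodepth=32
filename=/dev/sdX

[fill-slc-cache]
rw=write
size=200g          ; assumed to exceed any plausible SLC cache
stonewall

[steady-state-write]
rw=write
time_based=1
runtime=60
stonewall
```

The `stonewall` option makes the second job wait for the first, so the steady-state numbers aren't polluted by the cached burst.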
Hi Mourik Jan,
> So, ran the fio commands, and pasted output (as it's quite a lot)
here:
> I hope someone here can draw some conclusions from this output...
Now you know it performs more or less similarly to other enterprise
drives. And you know your Ceph solution will never perform beyond
>
> An observation: We are using proxmox (5.4), and it displays the WEAR level as
> "N/A", which is unfortunate... :-( Tried upgrading to 6: the same, still no
> wearout in the GUI.
>
> Here is smartctl output:
>
>> smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.18-26-pve] (local build)
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Performance of Micron 5210 SATA?
Last monday I performed a quick test with those two disks already,
probably not that relevant, but posting it anyway:
I created a two-disk ceph 'cluster' on just the one local node, and ran
the following:
Yes. The drive treats some portion of its cells as SLC, which, having only
two charge states, is a lot faster and is used as a cache. As with any
cache-enabled drive, if that cache fills up, either due to misaligned flush
cycles or simply data coming in faster than it can be flushed, you'll see a
sharp drop in write performance.
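As a back-of-envelope illustration of that fill-up (every number here is invented for the example, none are measured 5210 values): when writes arrive faster than the drive can flush to QLC, the cache fills at the rate of the difference.

```shell
#!/bin/sh
# Illustrative only: how long until a hypothetical SLC cache fills?
# All three numbers below are assumptions, not Micron 5210 specs.
CACHE_MB=24000     # assumed SLC cache size (24 GB)
INGEST_MBS=500     # assumed burst write speed into the SLC cache
FLUSH_MBS=90       # assumed sustained flush speed to QLC
SECS=$(( CACHE_MB / (INGEST_MBS - FLUSH_MBS) ))
echo "cache full after ~${SECS}s; writes then drop to ~${FLUSH_MBS}MB/s"
```

With these made-up numbers the cache lasts under a minute of sustained writes, after which the drive can only accept data as fast as it flushes.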
Hi,
Sure, I will try the fio test as requested by Marc Roos. I will be onsite
again Monday/Tuesday. If you have a good way to test/show what you
describe below, don't hesitate to post it here.
I'll try it when I have the time.
Are you saying that the write performance becomes bad (90MB/sec) for
But is random/sequential read performance still good, even during saturated
writes? If so, the trade-off could suit quite a few applications.
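One way to answer that empirically is to run a saturating sequential write job and a random read job concurrently, then compare the read numbers against an idle-drive baseline. A sketch, not a tested recipe; the device and runtimes are placeholders:

```ini
; Hypothetical fio job: random reads while sequential writes saturate the
; drive. /dev/sdX and the runtimes are placeholders.
; WARNING: destroys the device's contents.
[global]
ioengine=libaio
direct=1
time_based=1
runtime=120
filename=/dev/sdX

[saturating-write]
rw=write
bs=4M
iodepth=32

[random-read]
rw=randread
bs=4k
iodepth=16
```

Without `stonewall` both jobs run at the same time, so the random-read latency and bandwidth reported reflect a drive under full write load.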
Friday, 6 March 2020, 14:06 +0100 from vitalif:
>Hi,
>
>Current QLC drives are total shit in terms of
-rw=randrw
#write_bw_log=sdx-1024k-randrw-seq.results
#write_iops_log=sdx-1024k-randrw-seq.results
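The commented-out log options above are how you would capture the bandwidth curve over time, which makes the SLC-cache cliff visible as a step in the log. A hedged reconstruction of such an invocation (flags taken from fio's documented options; the device and runtime are placeholders):

```shell
# Sketch: log bandwidth and IOPS once per second so an SLC-cache cliff
# shows up as a step in the resulting *_bw.*.log file.
# /dev/sdX is a placeholder; this destroys the device's contents.
fio --ioengine=libaio --direct=1 --name=randrw-test --filename=/dev/sdX \
    --rw=randrw --bs=1024k --iodepth=32 --runtime=300 --time_based=1 \
    --log_avg_msec=1000 \
    --write_bw_log=sdx-1024k-randrw --write_iops_log=sdx-1024k-randrw
```

Plotting the second column of the bandwidth log against time shows when (and how hard) write speed falls off.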
-Original Message-
From: mj [mailto:li...@merit.unu.edu]
Sent: 06 March 2020 10:01
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Performance of Micron 5210 SATA?
root@ceph:~# rados bench -p scbench 10 write --no-cleanup
hints = 1
Maintaining 16
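To address the read-during-write question with the same tooling, rados bench can also replay reads against the objects that `--no-cleanup` left behind. A sketch using the pool name from the test above, not output from the actual cluster:

```shell
# Sketch: after 'rados bench ... write --no-cleanup' has left objects in
# the pool, benchmark sequential and random reads, then remove the objects.
rados bench -p scbench 10 seq
rados bench -p scbench 10 rand
rados -p scbench cleanup
```

Running the read benches from a second shell while a write bench is still in flight would give a rough answer to whether reads hold up under saturated writes.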
I have just ordered two of them to try. (the 3.47GB IONs)
If you want, next week I could perhaps run some commands on them..?
MJ
On 3/5/20 9:38 PM, Hermann Himmelbauer wrote:
Hi,
Does someone know if the following harddisk has a decent performance in
a ceph cluster:
Micron 5210 ION 1.92TB,
That depends on how you define “decent”, and on your use case.
Be careful that these are QLC drives. QLC is pretty new and longevity would
seem to vary quite a bit based on op mix. These might be fine for read-mostly
workloads, but high-turnover databases might burn them up fast, especially as