If your writes are small enough (64k or smaller), they're being placed on
the WAL device regardless of where your DB is.  If you change your testing
to use larger writes, you should see a difference from adding the DB.
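
For example (a rough sketch; "bench" is just a placeholder pool name and
osd.0 an example OSD id), you could rerun rados bench with a larger op size
via -b and compare it against a small-write run, and check the deferred-write
threshold on the OSD's admin socket:

    # small writes, expected to be deferred through the WAL
    rados bench -p bench 60 write -b 65536 -t 16 --no-cleanup
    # larger writes, expected to bypass the deferred/WAL path
    rados bench -p bench 60 write -b 4194304 -t 16 --no-cleanup
    # threshold below which writes are deferred on HDD-backed OSDs
    ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd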

Please note that the community has never recommended using less than a 120GB
DB for a 12TB OSD, and the docs now officially say you should use at least a
480GB DB for a 12TB OSD.  If you set up your OSDs with a 30GB DB, you're just
going to fill it up very quickly, spill over onto the HDD, and have wasted
your money on the SSDs.
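
If you want to confirm whether a 30GB DB has already spilled over, one rough
check (osd.0 is just an example id, and the counter names can differ slightly
between releases) is the bluefs section of the OSD perf counters:

    ceph daemon osd.0 perf dump bluefs | grep -E '"(db|slow)_(total|used)_bytes"'

A non-zero slow_used_bytes means RocksDB is already placing data on the HDD
instead of the SSD-backed DB partition.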

On Wed, Sep 12, 2018 at 11:07 AM Ján Senko <jan.se...@gmail.com> wrote:

> We are benchmarking a test machine which has:
> 8 cores, 64GB RAM
> 12 * 12 TB HDD (SATA)
> 2 * 480 GB SSD (SATA)
> 1 * 240 GB SSD (NVME)
> Ceph Mimic
>
> Baseline benchmark for HDD only (Erasure Code 4+2)
> Write 420 MB/s, 100 IOPS, 150ms latency
> Read 1040 MB/s, 260 IOPS, 60ms latency
>
> Now we moved WAL to the SSD (all 12 WALs on single SSD, default size
> (512MB)):
> Write 640 MB/s, 160 IOPS, 100ms latency
> Read identical as above.
>
> Nice boost we thought, so we moved WAL+DB to the SSD (Assigned 30GB for DB)
> All results are the same as above!
>
> Q: This is suspicious, right? Why is the DB on SSD not helping with our
> benchmark? We use *rados bench*
>
> We tried putting WAL on the NVME, and again, the results are the same as
> on SSD.
> Same for WAL+DB on NVME
>
> Again, the same speed. Any ideas why we don't gain speed by using faster
> HW here?
>
> Jan
>
> --
> Jan Senko, Skype janos-
> Phone in Switzerland: +41 774 144 602
> Phone in Czech Republic: +420 777 843 818
>