Hi Erik,

For now I have everything on the HDDs, and I have some pools on just 
SSDs for workloads that require more speed. That looked to me like the 
simplest way to start. I do not yet seem to need the IOPS that would 
justify changing this setup.

However, I am curious about the kind of performance increase you will 
get from moving the DB/WAL to SSD with spinners. So if you are able 
to, please publish some test results from the same environment before 
and after your change.
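
If it helps, for a quick before/after comparison I usually just run 
rados bench against a throwaway pool (the pool name here is only an 
example):

    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados bench -p testpool 60 rand
    rados -p testpool cleanup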

Thanks,
Marc

-----Original Message-----
From: Erik McCormick [mailto:emccorm...@cirrusseven.com] 
Sent: 29 March 2019 06:22
To: ceph-users
Subject: [ceph-users] Bluestore WAL/DB decisions

Hello all,

Having dug through the documentation and read mailing list threads 
until my eyes rolled back in my head, I am still left with a conundrum: 
do I separate the DB/WAL or not?

I had a bunch of nodes running filestore with 8 x 8 TB spinning OSDs 
and 2 x 240 GB SSDs. I had put the OS on the first SSD, and then split 
the journals across the remaining SSD space.

My initial, minimal understanding of Bluestore was that one should 
stick the DB and WAL on an SSD, and that if it filled up it would just 
spill back onto the OSD itself, where it otherwise would have been anyway.
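
(As a sanity check on that assumption, I gather you can see whether a 
DB has spilled onto the slow device from the bluefs perf counters; 
osd.0 below is just an example:

    ceph daemon osd.0 perf dump | grep -E 'db_used_bytes|slow_used_bytes'

where a non-zero slow_used_bytes means the DB has spilled over onto 
the HDD.)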

So now I start digging and see that the minimum recommended size is 4% 
of OSD size. For me that's ~2.6 TB of SSD. Clearly I do not have that 
available to me.
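
(Spelling out the arithmetic: 4% of an 8 TB OSD is 0.04 x 8 TB = 320 GB 
of DB per OSD, and 8 OSDs x 320 GB = 2.56 TB of SSD per node.)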

I've also read that it's not so much the data size that matters but the 
number of objects and their size. Just looking at my current usage and 
extrapolating that to my maximum capacity, I get to ~1.44 million 
objects / OSD.
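
(For anyone wanting to check their own numbers, per-pool object counts 
come from the standard tools, e.g.:

    ceph df detail   # per-pool object counts and usage
    rados df         # similar, plus read/write totals

then roughly: total objects, times the replica count, divided by the 
number of OSDs.)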

So the question is, do I:

1) Put everything on the OSD and forget the SSDs exist.

2) Put just the WAL on the SSDs

3) Put the DB (and therefore the WAL) on SSD, ignore the size 
recommendations, and just give each OSD as much space as I can. Maybe 
48 GB / OSD (see the sketch after this list).

4) Some scenario I haven't considered.
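
For reference, option 3 could look something like the following with 
LVM on one of the SSDs (the device names and VG/LV names are only 
examples):

    # one-time: turn the SSD into a volume group
    pvcreate /dev/sdi
    vgcreate ceph-db /dev/sdi

    # per OSD: a 48 GB logical volume for the DB (the WAL lives there
    # too when only block.db is specified)
    lvcreate -L 48G -n db-sdb ceph-db
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-sdb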

Is the penalty for a too-small DB on an SSD partition so severe that 
it's not worth doing?

Thanks,
Erik
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

