[ceph-users] Re: Placement of block/db and WAL on SSD?

2020-07-05 Thread Lindsay Mathieson

On 5/07/2020 8:16 pm, Lindsay Mathieson wrote:
> But from what you are saying, the 500GB disk would have been gaining
> no benefit? I would be better off allocating 30GB (or 30GB) for each
> disk?


Edit: 30GB or 62GB (it's a 127GB SSD)

--
Lindsay


[ceph-users] Re: Placement of block/db and WAL on SSD?

2020-07-05 Thread Lindsay Mathieson

On 5/07/2020 7:38 pm, Alexander E. Patrakov wrote:

> If the wal location is not explicitly specified, it goes together with
> the db. So it is on the SSD.
>
>> Conversely, what happens with the block.db if I place the wal with
>> --block.wal
>
> The db then stays with the data.

Ah, so my second reading was correct. I've recreated the OSDs on this node 
3 times now :)


1. HDD Only
2. HDD + WAL on SSD
3. HDD + DB/WAL on SSD

However, given the following, I see try 4 approaching...


> The partition needs to be 30
> or 300 GB in size (this requirement was relaxed only very recently, so
> let's not count on this), but not smaller than 1-4% of the data
> device.


Was not aware of that. I have two mismatched disks (500GB & 3TB) and was 
allocating space proportionally:


 * 25GB for the 500GB Disk
 * 100GB for the 3TB Disk

But from what you are saying, the 500GB disk would have been gaining no 
benefit? I would be better off allocating 30GB (or 30GB)  for each disk?


 * 1% for the 3TB (2.7TB effective) - RBD for VMs only (rough numbers below)
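
Rough numbers, going by the sizing guidance quoted above (my
back-of-envelope arithmetic, so please sanity-check):

    500GB disk:  1-4%  =   5-20GB   -> the 30GB step already covers it
    2.7TB disk:  1-4%  =  27-108GB  -> 30GB only covers the 1% end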



> this requirement was relaxed only very recently, so
> let's not count on this


Is that only in master, and can the size be proportional to the disk size now?


Thanks!


--
Lindsay



[ceph-users] Re: Placement of block/db and WAL on SSD?

2020-07-05 Thread Alexander E. Patrakov
On Sun, Jul 5, 2020 at 6:57 AM Lindsay Mathieson wrote:
>
> Nautilus install.
>
> Documentation seems a bit ambiguous to me - this is for a spinner + SSD,
> using ceph-volume
>
> If I put the block.db on the SSD with
>
>  "ceph-volume lvm create --bluestore --data /dev/sdd --block.db
> /dev/sdc1"
>
> does the wal exist on the SSD (/dev/sdc1) as well, or does it remain on
> the HDD (/dev/sdd)?

If the wal location is not explicitly specified, it goes together with
the db. So it is on the SSD.
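
If you want to double-check after creating the OSD, "ceph-volume lvm
list" prints the devices behind each OSD (exact output layout varies a
bit between releases):

    ceph-volume lvm list
    # each OSD is listed with its block/db/wal devices; if no separate
    # wal device appears, the wal is sharing the db (or data) device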

>
>
> Conversely, what happens with the block.db if I place the wal with
> --block.wal

The db then stays with the data.
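
So, as a sketch with the same placeholder devices as your example:

    ceph-volume lvm create --bluestore --data /dev/sdd --block.wal /dev/sdc1
    # wal on the SSD partition, db stays on /dev/sdd with the data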

> Or do I have to setup separate partitions for the block.db and wal?

You can, in theory, provide all three devices, but nobody does that in practice.

Common setups are (rough example commands below):

1) just --data, then the db and its wal are located on the same device;
2) --data on HDD and --block.db on a partition on the SSD (the wal
automatically goes together with the db). The partition needs to be 30
or 300 GB in size (this requirement was relaxed only very recently, so
let's not count on this), but not smaller than 1-4% of the data
device.
3) --data on something (then the db goes there as well) and
--block.wal on a small (i.e. not large enough to use as a db device)
but very fast NVDIMM.
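
For reference, the corresponding invocations look roughly like this
(device names are only placeholders):

    # 1) everything on one device
    ceph-volume lvm create --bluestore --data /dev/sdd

    # 2) data on the HDD, db (and hence wal) on an SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/sdc1

    # 3) db stays with the data, wal alone on a small very fast device
    ceph-volume lvm create --bluestore --data /dev/sdd --block.wal /dev/pmem0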

-- 
Alexander E. Patrakov
CV: http://pc.cd/PLz7
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io