On 28/08/2020 5:19 pm, Zhenshi Zhou wrote:
In my deployment I partition the disk for the WAL and DB separately, so I can
assign the sizes manually.
When using ceph-volume, you can specify the sizes on the command line.
--
Lindsay
___
ceph-users mailing list --
In my deployment I partition the disk for the WAL and DB separately, so I can
assign the sizes manually.
For example, for each OSD I create two partitions on the NVMe device, 30G for
the DB and 2G for the WAL.
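As a quick arithmetic check on that layout (a sketch only; the 1 TiB device size below is hypothetical):

```python
GIB = 1024 ** 3

def osds_per_nvme(nvme_bytes, db_bytes=30 * GIB, wal_bytes=2 * GIB):
    """How many (DB, WAL) partition pairs of the sizes above fit on one
    NVMe device, ignoring partition-table overhead."""
    return nvme_bytes // (db_bytes + wal_bytes)

print(osds_per_nvme(1024 * GIB))  # a hypothetical 1 TiB NVMe -> 32
```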
On Fri, Aug 28, 2020 at 1:53 AM, Tony Liu wrote:
How does the WAL utilize the disk when it shares the same device with the DB?
Say the device size is 50G, 100G, or 200G: there is no difference to the DB,
because the DB will take 30G anyway. Does it make any difference
to the WAL?
Thanks!
Tony
> -----Original Message-----
> From: Zhenshi Zhou
> Sent: Wednesday, August 26, 2020
The official documentation says that you should allocate 4% of the slow
device's space for block.db.
But the main problem is that BlueStore uses RocksDB, and RocksDB puts a file
on the fast
device only if it thinks that the whole level will fit there.
As for RocksDB, L1 is about 300M, L2 is about 3G, and L3 is about 30G.
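That spill behaviour can be sketched numerically (a rough model only: the ~300M/3G/30G/300G level targets are the approximate figures quoted in this thread, not exact Ceph/RocksDB defaults):

```python
GIB = 1024 ** 3
# Approximate RocksDB level targets quoted in this thread (L1..L4).
LEVEL_TARGETS = [300 * 1024 ** 2, 3 * GIB, 30 * GIB, 300 * GIB]

def usable_db_bytes(db_partition_bytes):
    """Bytes RocksDB would actually place on the fast device, assuming it
    only moves a level there when the whole level fits."""
    used = 0
    for level in LEVEL_TARGETS:
        if used + level <= db_partition_bytes:
            used += level
        else:
            break
    return used

# A 50G, 100G, or 200G partition all hold the same levels (through L3),
# which is why those sizes make no difference to the DB:
for size_gib in (50, 100, 200):
    print(size_gib, usable_db_bytes(size_gib * GIB) / GIB)
```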
> -----Original Message-----
> From: Anthony D'Atri
> Sent: Monday, August 24, 2020 7:30 PM
> To: Tony Liu
> Subject: Re: [ceph-users] Re: Add OSD with primary on HDD, WAL and DB on
> SSD
>
> Why such small HDDs? Kinda not worth the drive bays and power, instead
> of the complexity of putting
> > I don't need to create
> > WAL device, just primary on HDD and DB on SSD, and WAL will be using
> > DB device cause it's faster. Is that correct?
>
> Yes.
>
>
> But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
> Anything less than those sizes will leave a lot of unutilised space.
On 25/08/2020 6:07 am, Tony Liu wrote:
I don't need to create
WAL device, just primary on HDD and DB on SSD, and WAL will be
using DB device cause it's faster. Is that correct?
Yes.
But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
Anything less than those sizes will leave a lot of unutilised space.
Hi,
you could try to use ceph-volume lvm create --data DEV --block.db DEV and
inspect the output to learn what is being done.
I am not sure about the right syntax now, but you should find related
information via search ...
Hth
Mehmet
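Spelled out as a hedged sketch (the device paths below are hypothetical; --block.db and --block.wal are the flag names ceph-volume uses for the fast devices):

```shell
# Hypothetical device names for illustration only.
DATA_DEV=/dev/sdb          # slow HDD for the OSD data
DB_DEV=/dev/nvme0n1p1      # fast partition for block.db (the WAL lands here
                           # too unless a separate --block.wal device is given)

# Print the command rather than running it, since it modifies disks:
echo ceph-volume lvm create --data "$DATA_DEV" --block.db "$DB_DEV"
```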
On 23 August 2020 05:52:29 CEST, Tony Liu wrote:
Thanks Eugen for pointing it out.
I reread this link.
https://ceph.readthedocs.io/en/latest/rados/configuration/bluestore-config-ref/
It seems that, for a mix of HDD and SSD, I don't need to create a
WAL device, just primary on HDD and DB on SSD, and the WAL will
use the DB device because it's faster.
Hi,
if you shared your drivegroup config we might be able to help identify
your issue. ;-)
The last example in [1] shows the "wal_devices" filter for splitting
wal and db.
Regards,
Eugen
[1] https://docs.ceph.com/docs/master/cephadm/drivegroups/#dedicated-wal-db
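For reference, the dedicated-WAL/DB case in [1] takes roughly this shape (a sketch only: the model filters below are hypothetical, and the exact spec layout varies between Ceph releases):

```yaml
service_type: osd
service_id: osd_spec_dedicated_wal_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1          # HDDs carry the OSD data
db_devices:
  model: SSD-MODEL-FOO   # hypothetical filter: these SSDs get block.db
wal_devices:
  model: NVME-MODEL-BAR  # hypothetical filter: these NVMes get block.wal
```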
Quoting Tony Liu: