That's very helpful, thanks.  In your first case above, your
bluefs_db_partition_path and bluestore_bdev_partition_path are the same.
Since I have separate data and db drives, mine are different.  Might
this explain something?  My root concern is that there is more utilization
on the cluster than the total across the pools, with the excess roughly
equal to wal size * number of OSDs...
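A quick back-of-the-envelope check of that excess (both numbers below are placeholders; substitute your cluster's actual WAL partition size and OSD count):

```shell
# Rough check: expected extra raw utilization if every OSD reserves a
# separate WAL. Both values are placeholders -- use your own numbers.
wal_size_gib=1
num_osds=24
echo "expected excess: $((wal_size_gib * num_osds)) GiB"
```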

On Mon, Oct 22, 2018 at 3:35 PM David Turner <drakonst...@gmail.com> wrote:

> My DB doesn't have a specific partition anywhere, but there's still a
> symlink for it to the data partition.  On my home cluster with all DB, WAL,
> and Data on the same disk without any partitions specified there is a block
> symlink but no block.wal symlink.
>
> For the cluster with a specific WAL partition, but no DB partition, my OSD
> paths look like [1] this.  For my cluster with everything on the same
> disk, my OSD paths look like [2] this.  Unless you have a specific path for
> "bluefs_wal_partition_path", the wal is going to find itself on the same
> partition as the db.
>
> [1] $ ceph osd metadata 5 | grep path
>     "bluefs_db_partition_path": "/dev/dm-29",
>     "bluefs_wal_partition_path": "/dev/dm-41",
>     "bluestore_bdev_partition_path": "/dev/dm-29",
>
> [2] $ ceph osd metadata 5 | grep path
>     "bluefs_db_partition_path": "/dev/dm-5",
>     "bluestore_bdev_partition_path": "/dev/dm-5",
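If it helps, a small sketch to pull just those three path entries out of the metadata JSON; the function reads stdin, so on a live cluster you would pipe `ceph osd metadata <id>` into it (the osd id in the comment is only an example):

```shell
# Extract the BlueStore partition-path entries from `ceph osd metadata`
# output. Reads stdin so it can be fed straight from the ceph CLI.
paths_from_metadata() {
    grep -oE '"(bluefs_db|bluefs_wal|bluestore_bdev)_partition_path": "[^"]*"'
}

# On a live cluster (osd id 5 is just an example):
#   ceph osd metadata 5 | paths_from_metadata
```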
>
> On Mon, Oct 22, 2018 at 4:21 PM Robert Stanford <rstanford8...@gmail.com>
> wrote:
>
>>
>>  Let me add, I have no block.wal file (which the docs suggest should be
>> there).
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
>>
>> On Mon, Oct 22, 2018 at 3:13 PM Robert Stanford <rstanford8...@gmail.com>
>> wrote:
>>
>>>
>>>  We're out of sync, I think.  You have your DB on your data disk, so your
>>> block.db symlink points to that disk, right?  There is, however, no wal
>>> symlink?  So how would you verify that your WAL actually lives on your NVMe?
>>>
>>> On Mon, Oct 22, 2018 at 3:07 PM David Turner <drakonst...@gmail.com>
>>> wrote:
>>>
>>>> And by the data disk I mean that I didn't specify a location for the DB
>>>> partition.
>>>>
>>>> On Mon, Oct 22, 2018 at 4:06 PM David Turner <drakonst...@gmail.com>
>>>> wrote:
>>>>
>>>>> Did you track down where it says they point to?  Does it match what you
>>>>> expect?  It does for me.  I have my DB on my data disk and my WAL on a
>>>>> separate NVMe.
>>>>>
>>>>> On Mon, Oct 22, 2018 at 3:21 PM Robert Stanford <
>>>>> rstanford8...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>  David - is it ensured that the wal and db both live where the
>>>>>> block.db symlink points?  I assumed that was a symlink for the db, but
>>>>>> not necessarily for the wal, because the wal can live in a place
>>>>>> different than the db.
>>>>>>
>>>>>> On Mon, Oct 22, 2018 at 2:18 PM David Turner <drakonst...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> You can always just go to /var/lib/ceph/osd/ceph-{osd-num}/ and look
>>>>>>> at where the symlinks for block and block.wal point to.
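To make that concrete, a small sketch of walking those symlinks for one OSD data directory (the OSD id in the example call is an assumption; adjust it for your cluster):

```shell
# List where the block, block.db, and block.wal symlinks point for one
# OSD data directory; a missing link means that role is colocated.
show_osd_links() {
    osd_dir=$1
    for link in block block.db block.wal; do
        if [ -L "$osd_dir/$link" ]; then
            printf '%s -> %s\n' "$link" "$(readlink "$osd_dir/$link")"
        else
            printf '%s: no symlink (colocated)\n' "$link"
        fi
    done
}

# Example call; osd.0 is an assumption -- use your own OSD id.
show_osd_links /var/lib/ceph/osd/ceph-0
```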
>>>>>>>
>>>>>>> On Mon, Oct 22, 2018 at 12:29 PM Robert Stanford <
>>>>>>> rstanford8...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>  That's what they say; however, I did exactly this and my cluster
>>>>>>>> utilization is higher than the total pool utilization by about the
>>>>>>>> number of OSDs * wal size.  I want to verify that the wal is on the
>>>>>>>> SSDs too, but I've asked here and no one seems to know a way to
>>>>>>>> verify this.  Do you?
>>>>>>>>
>>>>>>>>  Thank you, R
>>>>>>>>
>>>>>>>> On Mon, Oct 22, 2018 at 5:22 AM Maged Mokhtar <mmokh...@petasan.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> If you specify a db on ssd and data on hdd and do not explicitly
>>>>>>>>> specify a device for the wal, the wal will be placed on the same
>>>>>>>>> ssd partition as the db.  Placing only the wal on ssd, or creating
>>>>>>>>> separate devices for the wal and db, are less common setups.
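A sketch of what that looks like at OSD-creation time with ceph-volume; the device paths are assumptions, and the command is only echoed here rather than run:

```shell
# With --block.db on the SSD and no --block.wal flag, the wal shares the
# db partition. Device paths below are assumptions -- use your own.
data_dev=/dev/sdb
db_dev=/dev/nvme0n1p1
cmd="ceph-volume lvm create --bluestore --data $data_dev --block.db $db_dev"
echo "would run: $cmd"
```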
>>>>>>>>>
>>>>>>>>> /Maged
>>>>>>>>>
>>>>>>>>> On 22/10/18 09:03, Fyodor Ustinov wrote:
>>>>>>>>> > Hi!
>>>>>>>>> >
>>>>>>>>> > For sharing an SSD between the WAL and DB, what should be placed
>>>>>>>>> > on the SSD?  WAL or DB?
>>>>>>>>> >
>>>>>>>>> > ----- Original Message -----
>>>>>>>>> > From: "Maged Mokhtar" <mmokh...@petasan.org>
>>>>>>>>> > To: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>>>> > Sent: Saturday, 20 October, 2018 20:05:44
>>>>>>>>> > Subject: Re: [ceph-users] Drive for Wal and Db
>>>>>>>>> >
>>>>>>>>> > On 20/10/18 18:57, Robert Stanford wrote:
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> > Our OSDs are BlueStore and are on regular hard drives.  Each OSD
>>>>>>>>> > has a partition on an SSD for its DB.  The wal is on the regular
>>>>>>>>> > hard drives.  Should I move the wal to share the SSD with the DB?
>>>>>>>>> >
>>>>>>>>> > Regards
>>>>>>>>> > R
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> > _______________________________________________
>>>>>>>>> > ceph-users mailing list
>>>>>>>>> > ceph-users@lists.ceph.com
>>>>>>>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>>>> >
>>>>>>>>> > You should put the wal on the faster device; the wal and db can
>>>>>>>>> > share the same ssd partition.
>>>>>>>>> >
>>>>>>>>> > Maged
>>>>>>>>> >
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>