David - thanks again.  Your input over the last week has been invaluable.

On Wed, Aug 22, 2018 at 2:41 PM David Turner <drakonst...@gmail.com> wrote:

> Yes, whatever you set your DB LV to at the time you create the Bluestore
> OSD, it will use all of that space for the db/wal.  If you increase the
> size after the initial creation, the space will not be used for the
> DB.  You cannot resize it.
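>
> For example, a minimal sketch of that workflow (untested; the VG/LV and
> device names here are placeholders, not taken from your setup):
>
>     # Size the DB LV to its final size *before* creating the OSD;
>     # it cannot be grown afterwards.
>     lvcreate -L 10G -n osd0-db ceph-db-vg
>
>     # Point ceph-volume at the data device and the pre-sized DB LV.
>     # With no --block.wal given, the WAL is colocated on the DB device.
>     ceph-volume lvm create --bluestore --data /dev/sdb \
>         --block.db ceph-db-vg/osd0-db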
>
> On Wed, Aug 22, 2018 at 3:39 PM Robert Stanford <rstanford8...@gmail.com>
> wrote:
>
>>
>>  In my case I am using the same values for lvcreate and in the ceph.conf
>> (bluestore* settings).  Since my LVs are the size I want the DB to be,
>> and since I'm told that the WAL will live in the same place
>> automatically, it sounds like setting my LV to xGB ensures Ceph will use
>> all of it for the DB/WAL automatically?
>>
>>  Thanks
>>  R
>>
>> On Wed, Aug 22, 2018 at 2:09 PM Alfredo Deza <ad...@redhat.com> wrote:
>>
>>> On Wed, Aug 22, 2018 at 2:48 PM, David Turner <drakonst...@gmail.com>
>>> wrote:
>>> > The config settings for DB and WAL size don't do anything.  For
>>> > journal sizes they would be used for creating your journal partition
>>> > with ceph-disk, but ceph-volume does not use them for creating
>>> > bluestore OSDs.  You need to create the partitions for the DB and WAL
>>> > yourself and supply those partitions to the ceph-volume command.  I
>>> > have heard that they're working on this for future releases, but
>>> > currently those settings don't do anything.
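>>> >
>>> > A rough sketch of what that looks like (untested; the VG/LV and
>>> > device names are placeholders):
>>> >
>>> >     # Create the DB and WAL LVs yourself, at the sizes you want.
>>> >     lvcreate -L 10G -n osd0-db  ceph-journals
>>> >     lvcreate -L 1G  -n osd0-wal ceph-journals
>>> >
>>> >     # Hand them to ceph-volume explicitly; the bluestore_block_db_size
>>> >     # and bluestore_block_wal_size settings in ceph.conf are ignored.
>>> >     ceph-volume lvm create --bluestore --data /dev/sdb \
>>> >         --block.db ceph-journals/osd0-db \
>>> >         --block.wal ceph-journals/osd0-wal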
>>>
>>> This is accurate: ceph-volume as of the latest release doesn't do
>>> anything with them, because it doesn't create these partitions for the
>>> user.
>>>
>>> We are getting close to rolling that functionality out, but it isn't
>>> ready unless you are using master (please don't use master :))
>>>
>>>
>>> >
>>> > On Wed, Aug 22, 2018 at 1:34 PM Robert Stanford <
>>> rstanford8...@gmail.com>
>>> > wrote:
>>> >>
>>> >>
>>> >>  I have created new OSDs for Ceph Luminous.  In my ceph.conf I have
>>> >> specified that the DB size be 10GB and the WAL size be 1GB.  However,
>>> >> when I type ceph daemon osd.0 perf dump I get:
>>> >> "bluestore_allocated": 5963776
>>> >>
>>> >>  I think this means that the bluestore DB is using the default, and
>>> >> not the value of bluestore block db size in the ceph.conf.  Why is
>>> >> this?
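>>> >>
>>> >>  Is the "bluefs" section of that same perf dump where the actual
>>> >> DB/WAL sizes show up?  E.g. (counter names from memory, so this may
>>> >> be off):
>>> >>
>>> >>     ceph daemon osd.0 perf dump | grep -E '"(db|wal)_total_bytes"'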
>>> >>
>>> >>  Thanks
>>> >>  R
>>> >>
>>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
