Got it, Gregory, sounds good enough for us.

Thank you all for the help provided.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


On Wed, Jun 13, 2018 at 2:20 PM Gregory Farnum <gfar...@redhat.com> wrote:

> Nah, I would use one filesystem unless you can’t. The backtrace does
> create another object, but IIRC it’s at most one extra IO per create/rename
> (on the file).
> On Wed, Jun 13, 2018 at 1:12 PM Webert de Souza Lima <
> webert.b...@gmail.com> wrote:
>
>> Thanks for clarifying that, Gregory.
>>
>> As mentioned before, we use file layouts (sketched below) to handle the
>> different workloads of those 2 directories in cephfs.
>> Would you recommend using 2 filesystems instead? By doing so, each fs
>> would have its own default data pool.
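>>
>> For reference, a minimal sketch of how we pin that directory to the SSD
>> pool (assuming a kernel-mounted cephfs at /mnt/cephfs, an SSD pool named
>> "cephfs_ssd" already added to the fs with "ceph fs add_data_pool", and a
>> hypothetical index path; Python 3 on Linux):
>>
>>     import os
>>
>>     INDEX_DIR = "/mnt/cephfs/dovecot/index"  # hypothetical path
>>
>>     # CephFS exposes file layouts as virtual xattrs; setting the pool on a
>>     # directory only affects files created after the attribute is set.
>>     os.setxattr(INDEX_DIR, "ceph.dir.layout.pool", b"cephfs_ssd")  # pool name is an example
>>
>>     # read it back to confirm
>>     print(os.getxattr(INDEX_DIR, "ceph.dir.layout.pool").decode())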
>>
>>
>> Regards,
>>
>> Webert Lima
>> DevOps Engineer at MAV Tecnologia
>> *Belo Horizonte - Brasil*
>> *IRC NICK - WebertRLZ*
>>
>>
>> On Wed, Jun 13, 2018 at 11:33 AM Gregory Farnum <gfar...@redhat.com>
>> wrote:
>>
>>> The backtrace object Zheng referred to is used only for resolving hard
>>> links or in disaster-recovery scenarios. If the default data pool isn’t
>>> available, you would stack up pending RADOS writes inside your mds, but
>>> the rest of the system would continue unless you manage to run the mds out
>>> of memory.
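>>>
>>> If you want to poke at the backtrace object itself, something along these
>>> lines works (an untested sketch: it assumes python-rados is installed, the
>>> default data pool is called "cephfs_data", and the file path is just an
>>> example):
>>>
>>>     import os
>>>     import rados
>>>
>>>     path = "/mnt/cephfs/dovecot/index/somefile"   # example path
>>>     # CephFS names the first object of a file <inode-hex>.00000000
>>>     objname = "%x.00000000" % os.stat(path).st_ino
>>>
>>>     cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
>>>     cluster.connect()
>>>     ioctx = cluster.open_ioctx("cephfs_data")     # default data pool (name assumed)
>>>     # the backtrace lives in the "parent" xattr of that object; it is a
>>>     # binary blob (decodable with ceph-dencoder as inode_backtrace_t)
>>>     print(len(ioctx.get_xattr(objname, "parent")), "bytes of backtrace")
>>>     ioctx.close()
>>>     cluster.shutdown()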
>>> -Greg
>>> On Wed, Jun 13, 2018 at 9:25 AM Webert de Souza Lima <
>>> webert.b...@gmail.com> wrote:
>>>
>>>> Thank you, Zheng.
>>>>
>>>> Does that mean that, when using this feature, our data integrity now
>>>> relies on the integrity/availability of both data pools?
>>>>
>>>> We currently use this feature in production for dovecot's index files, so
>>>> that we can store that directory on an SSD-only pool. The main data pool
>>>> is made of HDDs and stores the email files themselves.
>>>>
>>>> There aren't too many files created; it's just a few files per email
>>>> user, and basically one directory per user's mailbox.
>>>> Each mailbox has an index file that is updated on every email received,
>>>> moved, deleted, read, etc.
>>>>
>>>> I think in this scenario the overhead may be acceptable for us.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Webert Lima
>>>> DevOps Engineer at MAV Tecnologia
>>>> *Belo Horizonte - Brasil*
>>>> *IRC NICK - WebertRLZ*
>>>>
>>>>
>>>> On Wed, Jun 13, 2018 at 9:51 AM Yan, Zheng <uker...@gmail.com> wrote:
>>>>
>>>>> On Wed, Jun 13, 2018 at 3:34 AM Webert de Souza Lima
>>>>> <webert.b...@gmail.com> wrote:
>>>>> >
>>>>> > hello,
>>>>> >
>>>>> > is there any performance impact on cephfs when using file layouts to
>>>>> bind a specific directory in cephfs to a given pool? Of course, such a
>>>>> pool is not the default data pool for this cephfs.
>>>>> >
>>>>>
>>>>> For each file, no matter which pool the file data are stored in, the mds
>>>>> always creates an object in the default data pool. The object in the
>>>>> default data pool is used for storing the backtrace. So files stored in a
>>>>> non-default pool have extra overhead on file creation. For large files
>>>>> the overhead is negligible, but for lots of small files the overhead may
>>>>> affect performance.
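>>>>>
>>>>> A quick way to see the extra object is to stat the file's first object
>>>>> in both pools (an untested sketch: the pool names "cephfs_data" and
>>>>> "cephfs_ssd" and the file path are placeholders; needs python-rados):
>>>>>
>>>>>     import os
>>>>>     import rados
>>>>>
>>>>>     path = "/mnt/cephfs/dovecot/index/dovecot.index"  # placeholder path
>>>>>     objname = "%x.00000000" % os.stat(path).st_ino
>>>>>
>>>>>     cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
>>>>>     cluster.connect()
>>>>>     # assumed pool names: default data pool first, then the layout pool
>>>>>     for pool in ("cephfs_data", "cephfs_ssd"):
>>>>>         ioctx = cluster.open_ioctx(pool)
>>>>>         try:
>>>>>             size, _ = ioctx.stat(objname)
>>>>>             print("%s: %s exists, %d bytes" % (pool, objname, size))
>>>>>         except rados.ObjectNotFound:
>>>>>             print("%s: %s not found" % (pool, objname))
>>>>>         ioctx.close()
>>>>>     cluster.shutdown()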
>>>>>
>>>>>
>>>>> > Regards,
>>>>> >
>>>>> > Webert Lima
>>>>> > DevOps Engineer at MAV Tecnologia
>>>>> > Belo Horizonte - Brasil
>>>>> > IRC NICK - WebertRLZ
>>>>>
>>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
