On 6/11/21 3:52 AM, Szabo, Istvan (Agoda) wrote:
> Hi,
>
> Can you suggest what a good cephfs design looks like? I've never used it; we 
> only have rgw and rbd, but I want to give it a try. However, on the mailing 
> list I saw a huge number of issues with cephfs, so I would like to go with 
> some, let's say, bulletproof best practices.

You've read many practical answers to your question so far. My
contribution is: cephfs has to 'win' over the long term because moving
'known interesting' data over a network will always take less time than
having a client file system move whole storage blocks over the fiber
or wire and then sort out the bits the application actually wants.
The only way that doesn't happen is if the 'wires' are dramatically
faster than the hosts and lightly loaded -- not what's expected.

So, long term: cephfs has the logical ability to out-perform other
block-backed (rbd/iscsi) choices.  But not today.  The thing that makes
it 'seem slow' now is dealing with the multi-user file/record-level
contention that block devices don't have to face.  Over time I expect
directory trees might be shared with a 'one user' flag that would allow
the client to interact with the mons/osds directly and require very
little mds traffic.  That will win over rbd+fs designs because of the
'more of what the user wants per network packet' advantage.

So, eventually (a year? years? decades?) I think RadosGW and cephfs
will bear most of the ceph traffic.  But for today -- if a host is the
sole user of a directory tree -- rbd + xfs (ymmv).
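For the sole-user case, the rbd + xfs setup is only a few commands. A minimal sketch -- pool name 'mypool' and image name 'appdata' are hypothetical, and this assumes a reachable cluster with client credentials already on the host:

```shell
# Hypothetical names: pool 'mypool', image 'appdata'.
# Assumes a working Ceph cluster and a client keyring on this host.
rbd create mypool/appdata --size 100G
rbd map mypool/appdata          # prints the mapped device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0              # one host, one filesystem -- no shared access
mkdir -p /mnt/appdata
mount /dev/rbd0 /mnt/appdata
```

Note the whole point of the thread above: this filesystem must only ever be mounted by one host at a time, which is exactly the constraint cephfs removes.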

HC





_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
