Hi Jorge,

I think it depends on your workload.

On Tue, May 25, 2021 at 7:43 PM Jorge Garcia <jgar...@soe.ucsc.edu> wrote:
>
> This may be too broad of a topic, or opening a can of worms, but we are
> running a CEPH environment and I was wondering if there's any guidance
> about this question:
>
> Given that some group would like to store 50-100 TBs of data on CEPH and
> use it from a linux environment, are there any advantages or
> disadvantages in terms of performance/ease of use/learning curve to
> using cephfs vs using a block device thru rbd vs using object storage
> thru rgw? Here are my general thoughts:
>
> cephfs - Until recently, you were not allowed to have multiple
> filesystems. Not sure about performance.
>

I/O performance can be /very/ good.  Metadata performance can vary.
If you need shared POSIX access ("native" or NFS or SMB), you need
cephfs.
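
If it helps, here's a rough sketch of programmatic access using the
python3-cephfs bindings (the paths below are made up; assumes a
working /etc/ceph/ceph.conf and client keyring on the host):

  import cephfs

  fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
  fs.mount()                                 # attach to the default filesystem
  fs.mkdirs('/projects/bigdata', 0o755)      # hypothetical directory
  fd = fs.open('/projects/bigdata/hello.txt', 'w', 0o644)
  fs.write(fd, b'hello from cephfs\n', 0)    # write at offset 0
  fs.close(fd)
  fs.shutdown()

In practice most users would just mount it with the kernel client or
ceph-fuse and use it like any other filesystem.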

> rbd - Can only be mounted on one system at a time, but I guess that
> filesystem could then be served using NFS.

Yes, but it's single attach; only one client maps the image at a
time, and that client re-exports the filesystem over NFS.
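
For what it's worth, the block path from Python looks roughly like
this (a sketch with the python3-rados / python3-rbd bindings; the
pool and image names are placeholders):

  import rados
  import rbd

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')                 # placeholder pool name

  rbd.RBD().create(ioctx, 'nfs-backing', 100 << 30) # 100 GiB image
  with rbd.Image(ioctx, 'nfs-backing') as image:    # one writer at a time
      image.write(b'hello from rbd', 0)

  ioctx.close()
  cluster.shutdown()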

>
> rgw - A different usage model from regular linux file/directory
> structure. Are there advantages to forcing people to use this interface?

There are advantages.  S3 has become a preferred interface for some
applications, especially analytics (e.g., Hadoop, Spark, PrestoSQL).
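
Talking to RGW from an application is just stock S3, e.g. with boto3
(the endpoint, keys, and bucket below are placeholders):

  import boto3

  s3 = boto3.client(
      's3',
      endpoint_url='http://rgw.example.com:8080',   # placeholder RGW endpoint
      aws_access_key_id='ACCESS_KEY',
      aws_secret_access_key='SECRET_KEY',
  )
  s3.create_bucket(Bucket='bigdata')
  s3.put_object(Bucket='bigdata', Key='hello.txt', Body=b'hello from rgw')
  print(s3.get_object(Bucket='bigdata', Key='hello.txt')['Body'].read())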

>
> I'm tempted to set up 3 separate areas and try them and compare the
> results, but I'm wondering if somebody has done some similar experiment
> in the past.

Not sure, good question.

Matt

>
> Thanks for any help you can provide!
>
> Jorge
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
