Hi Jesper,

Can you state your suggestion more precisely? I have a similar setup and
I'm also interested.

If I understand you correctly, you suggest creating an RBD image for the
data and attaching it to a VM that runs a Samba server.

But what would be the "best" way to connect it: kernel module (krbd)
mapping or an iSCSI target?
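
For illustration, the kernel-module route inside the VM could look like
this (pool and image names are made up, the size is arbitrary, and it
assumes the VM has a Ceph client keyring configured):

    # create the image and map it via the krbd kernel module
    rbd create rbd/smbdata --size 10T
    rbd map rbd/smbdata          # shows up as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /srv/samba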

Another possibility would be to create an RBD image containing the data
and attach it to the VM directly through QEMU's built-in RBD support.
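
In that case the image could be attached via librbd, e.g. with a libvirt
disk definition roughly like the following (monitor host, pool/image name
and auth user are placeholders; the secret UUID would come from an
earlier virsh secret-define):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- librbd attachment; no kernel mapping needed inside the guest -->
      <source protocol='rbd' name='rbd/smbdata'>
        <host name='mon1.example.net' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='...'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>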

Regards

Marco Savoca

From: jes...@krogh.cc
Sent: Saturday, 14 March 2020 09:15
To: Amudhan P
Cc: ceph-users
Subject: [ceph-users] Re: New 3 node Ceph cluster

Hi.

Unless there are plans for going to petabyte scale with it, I really
don't see the benefit of getting CephFS involved over just an RBD image
with a VM running standard Samba on top.

It's more performant and less complexity to handle; zero gains from
CephFS, in my book.
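
For what it's worth, the Samba side of that setup is just a plain share
on the locally mounted, RBD-backed filesystem, e.g. (share name, path
and group are made up):

    [data]
        path = /srv/samba/data
        read only = no
        valid users = @smbusers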

Jesper

> Hi,
>
> I am planning to create a new 3-node Ceph storage cluster.
>
> I will be using CephFS with Samba for at most 10 clients, for upload
> and download.
>
> Storage node HW, per node: a single 8-core Intel Xeon E5 v2, 32 GB RAM,
> 2x 10Gb NICs, 24x 6 TB SATA HDDs, and a separate SSD for the OS.
>
> Earlier I tested orchestration using ceph-deploy in a test setup. Is
> there any alternative to ceph-deploy now?
>
> Can I restrict per-user folder access using CephFS + Samba's vfs_ceph
> module, or should I use a kernel CephFS mount plus plain Samba? (A
> vfs_ceph sketch follows below this quoted message.)
>
> Ubuntu or CentOS?
>
> Are there any block-size considerations for object size or metadata
> when using CephFS?
>
> Any ideas or suggestions from existing users are welcome. I am also
> going to start exploring all of the above.
>
> regards
> Amudhan
>
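
For completeness, the vfs_ceph variant asked about above would look
roughly like this in smb.conf (share name, CephFS path and the
client.samba user are assumptions):

    [projects]
        path = /projects
        # hand I/O to libcephfs instead of a local kernel mount
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no

Folder access can then additionally be restricted on the Ceph side with
path-limited caps, e.g.:

    ceph fs authorize cephfs client.samba /projects rw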

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
