Actually, that is exactly what I was looking for.

Thanks.

Ian

On Thu, Oct 27, 2022 at 3:31 PM Federico Lucifredi <feder...@redhat.com>
wrote:

> Not exactly what you asked, but just to make sure you are aware, there is
> a project delivering Windows native Ceph drivers. If performance is an
> issue, these are going to outperform anything you could ever do with SMB —
> at the tradeoff of maintaining one more driver on your client side.
>
> https://docs.ceph.com/en/latest/install/windows-install/
>
> Ironically, the Windows drivers are presently faster than the
> corresponding Linux drivers.
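> For illustration, per the page above, mapping an RBD image or mounting
> CephFS natively on Windows looks roughly like this (the drive letter and
> image name are placeholders, and exact flag syntax may vary by release):

```shell
# Map an RBD image as a Windows disk via the WNBD driver (image name is a placeholder)
rbd device map rbd/my-image

# Mount CephFS as drive X: via ceph-dokan (flag syntax may differ across releases)
ceph-dokan.exe -l x
```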
>
> Best -F
>
> -- "'Problem' is a bleak word for challenge" - Richard Fish
> _________________________________________
> Federico Lucifredi
> Product Management Director, Ceph Storage Platform
> Red Hat
> A273 4F57 58C0 7FE8 838D 4F87 AEEB EC18 4A73 88AC
> redhat.com
>   TRIED. TESTED. TRUSTED.
>
>
> On Thu, Oct 27, 2022 at 6:25 PM Bailey Allison <balli...@45drives.com>
> wrote:
>
>> Hi,
>>
>> That is most likely possible, but the performance difference between
>> CephFS + Samba and RBD + Ceph iSCSI + Windows SMB would probably be
>> extremely noticeable, in a not very good way.
>>
>> As Wyll mentioned, the recommended way is to just share out SMB on top of
>> an existing CephFS mount (this is also how NFS is done within Ceph, but
>> through FUSE within the dashboard). With CephFS + Samba you can also make
>> use of Windows ACLs, assuming you have an Active Directory to take
>> advantage of, and get true Windows permissions with CephFS. It can also be
>> clustered using CTDB.
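>> For example, a minimal smb.conf sketch along those lines (the share name,
>> path, and AD realm below are placeholders, not from this thread):

```ini
[global]
   security = ads                 ; assumes an AD-joined Samba host for Windows ACLs
   workgroup = EXAMPLE
   realm = EXAMPLE.COM
   ; clustering = yes             ; with CTDB, if running clustered Samba

[projects]
   path = /mnt/cephfs/projects    ; an existing kernel CephFS mount
   read only = no
   vfs objects = acl_xattr        ; store NT ACLs so Windows permissions behave natively
```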
>>
>> Regards,
>>
>> Bailey
>>
>> >-----Original Message-----
>> >From: Ian Kaufman <ikauf...@ucsd.edu>
>> >Sent: October 27, 2022 6:36 PM
>> >To: Wyll Ingersoll <wyllys.ingers...@keepertech.com>
>> >Cc: ceph-users <ceph-users@ceph.io>
>> >Subject: [ceph-users] Re: SMB and ceph question
>> >
>> >Would it be plausible to have Windows DFS servers mount the Ceph cluster
>> via iSCSI? And then share the data out in a more Windows native way?
>> >
>> >Thanks,
>> >
>> >Ian
>> >
>> >On Thu, Oct 27, 2022 at 1:50 PM Wyll Ingersoll <
>> wyllys.ingers...@keepertech.com> wrote:
>> >
>> >
>> > No - the recommendation is just to mount /cephfs using the kernel
>> > module and then share it via standard VFS module from Samba. Pretty
>> simple.
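>> > For instance, a kernel-mount sketch (the monitor address, client name,
>> > and secret file below are placeholders):

```shell
# Mount CephFS with the kernel client; Samba then shares a subdirectory of /mnt/cephfs
mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
    -o name=samba,secretfile=/etc/ceph/samba.secret
```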
>> > ________________________________
>> > From: Christophe BAILLON <c...@20h.com>
>> > Sent: Thursday, October 27, 2022 4:08 PM
>> > To: Wyll Ingersoll <wyllys.ingers...@keepertech.com>
>> > Cc: Eugen Block <ebl...@nde.ag>; ceph-users <ceph-users@ceph.io>
>> > Subject: Re: [ceph-users] Re: SMB and ceph question
>> >
>> > Re
>> >
>> > OK, I thought there was a module, like Ganesha for NFS, to install
>> > directly on the cluster...
>> >
>> > ----- Mail original -----
>> > > De: "Wyll Ingersoll" <wyllys.ingers...@keepertech.com>
>> > > À: "Eugen Block" <ebl...@nde.ag>, "ceph-users" <ceph-users@ceph.io>
>> > > Envoyé: Jeudi 27 Octobre 2022 15:25:36
>> > > Objet: [ceph-users] Re: SMB and ceph question
>> >
>> > > I don't think there is anything particularly special about exposing
>> > /cephfs (or
>> > > subdirs thereof) over SMB with SAMBA.  We've done it for years over
>> > various
>> > > releases of both Ceph and Samba.
>> > > Basically, you create a NAS server host that mounts /cephfs and run
>> > Samba on
>> > > that host.  You share whatever subdirectories you need to in the
>> > > usual
>> > way.
>> > > SMB clients mount from the Samba service and have no knowledge of
>> > > the underlying storage.
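>> > > From a Windows client, that then just looks like an ordinary SMB
>> > > share, e.g. (server and share names are placeholders):

```shell
# On the Windows client: map the Samba share as drive Z:
net use Z: \\nas-host\projects /persistent:yes
```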
>> > >
>> > >
>> > > ________________________________
>> > > From: Eugen Block <ebl...@nde.ag>
>> > > Sent: Thursday, October 27, 2022 5:40 AM
>> > > To: ceph-users@ceph.io <ceph-users@ceph.io>
>> > > Subject: [ceph-users] Re: SMB and ceph question
>> > >
>> > > Hi,
>> > >
>> > > the SUSE docs [1] are not that old, they apply for Ceph Pacific.
>> > > Have you tried it yet?
>> > > Maybe the upstream docs could adapt the SUSE docs, just an idea if
>> > > there aren't any guides yet on docs.ceph.com.
>> > >
>> > > Regards,
>> > > Eugen
>> > >
>> > > [1]
>> > https://documentation.suse.com/ses/7.1/single-html/ses-admin/#cha-ses-cifs
>> >
>> > >
>> > > Zitat von Christophe BAILLON <c...@20h.com>:
>> > >
>> > >> Hello,
>> > >>
>> > >> For a side project, we need to expose CephFS data to legacy users
>> > >> via SMB, and I can't find the official way to do that in the Ceph docs.
>> > >> In the old SUSE docs I found a reference to ceph-samba, but I can't
>> > >> find any information in the official Ceph docs.
>> > >> We have a small dedicated cephadm cluster for this; can you help
>> > >> me find the best way to deploy Samba on top?
>> > >>
>> > >> Regards
>> > >>
>> > >> --
>> > >> Christophe BAILLON
>> > >> Mobile :: +336 16 400 522
>> > >> Work :: https://eyona.com
>> >
>> > >> Twitter :: https://twitter.com/ctof
>> >
>> > >> _______________________________________________
>> > >> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send
>> > >> an email to ceph-users-le...@ceph.io
>> > >
>> > >
>> > >
>> >
>> > --
>> > Christophe BAILLON
>> > Mobile :: +336 16 400 522
>> > Work ::
>> > https://urldefense.com/v3/__https://eyona.com__;!!Mih3wA!F9Rm9GTQUhNjv
>> > wz3VY5d0dzb3NJh3cu1RloE77GMTacJcisHsm-4qWIJnOgZpMOJssRy0FXaQLawteMecnF
>> > bza-JcX_Le0w$
>> >
>> > Twitter ::
>> > https://urldefense.com/v3/__https://twitter.com/ctof__;!!Mih3wA!F9Rm9G
>> > TQUhNjvwz3VY5d0dzb3NJh3cu1RloE77GMTacJcisHsm-4qWIJnOgZpMOJssRy0FXaQLaw
>> > teMecnFbza-J6BlyF8c$
>> >
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
>> > email to ceph-users-le...@ceph.io
>> >
>>
>>
>> --
>> Ian Kaufman
>> Research Systems Administrator
>> UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu
>>
>> *UC San Diego is working thoughtfully and strategically to consider our
>> return to campus, with safety as the top priority.  Stay informed about UC
>> San Diego developments and updates in response to COVID-19 at
>> https://returntolearn.ucsd.edu <https://returntolearn.ucsd.edu/>*
>>
>

