Thanks Ric, thanks again Ronny.

I have a lot of good info now! I am going to try ocfs2.

Thanks


-- Jim
-----Original Message-----
From: Ric Wheeler [mailto:rwhee...@redhat.com] 
Sent: Thursday, September 14, 2017 4:35 AM
To: Ronny Aasen; James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] access ceph filesystem at storage level and not via 
ethernet

On 09/14/2017 11:17 AM, Ronny Aasen wrote:
> On 14. sep. 2017 00:34, James Okken wrote:
>> Thanks Ronny! That's exactly the info I need, and kind of what I thought
>> the answer would be as I was typing and thinking more clearly about what
>> I was asking. I was just hoping Ceph would work like this, since the
>> OpenStack Fuel tools deploy Ceph storage nodes so easily.
>> I agree I would not be using CEPH for its strengths.
>>
>> I am interested further in what you've said in this paragraph though:
>>
>> "if you want to have FC SAN attached storage on servers, shareable 
>> between servers in a usable fashion I would rather mount the same SAN 
>> lun on multiple servers and use a cluster filesystem like ocfs or gfs 
>> that is made for this kind of solution."
>>
>> Please allow me to ask you a few questions regarding that even though 
>> it isn't CEPH specific.
>>
>> Do you mean GFS/GFS2, the Global File System?
>>
>> Do ocfs and/or gfs require some sort of management/clustering server
>> to maintain and manage (akin to a Ceph OSD)? I'd love to find a
>> distributed/cluster filesystem where I can just partition and format,
>> and then be able to mount and use that same SAN datastore from
>> multiple servers without a management server.
>> If ocfs or gfs do need a server of this sort, does it need to be
>> involved in the I/O, or will I be able to mount the datastore like
>> any other disk, with the I/O going across the Fibre Channel?
>
> I only have experience with ocfs, but I think gfs works similarly.
> There are quite a few cluster filesystems to choose from:
> https://en.wikipedia.org/wiki/Clustered_file_system
>
> Servers that mount ocfs shared filesystems must have ocfs2-tools
> installed and have access to the common shared LUN via FC. They also
> need to be aware of the other ocfs servers sharing the same LUN, which
> you define in the /etc/ocfs2/cluster.conf config file, and the o2cb
> cluster daemon must be running.
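>
> For illustration, a minimal /etc/ocfs2/cluster.conf for two nodes
> sharing a LUN could look something like the sketch below (the node
> names, IP addresses, and cluster name are placeholders; the node names
> must match the servers' hostnames and the file must be identical on
> every node):
>
>     cluster:
>         node_count = 2
>         name = ocfs2
>
>     node:
>         ip_port = 7777
>         ip_address = 192.168.1.11
>         number = 0
>         name = server1
>         cluster = ocfs2
>
>     node:
>         ip_port = 7777
>         ip_address = 192.168.1.12
>         number = 1
>         name = server2
>         cluster = ocfs2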
>
> Then it is just a matter of making the ocfs filesystem (on one
> server), adding it to fstab (on all servers), and mounting it.
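>
> Roughly like this (the device path, label, and mount point are just
> example names):
>
>     # on ONE server only: create the filesystem on the shared LUN
>     mkfs.ocfs2 -L shared01 /dev/mapper/san-lun01
>
>     # on EVERY server: add it to /etc/fstab and mount it
>     mkdir -p /mnt/shared01
>     echo '/dev/mapper/san-lun01 /mnt/shared01 ocfs2 _netdev,defaults 0 0' >> /etc/fstab
>     mount /mnt/shared01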
>
>
>> One final question, if you don't mind: do you think I could use ext4
>> or xfs and "mount the same SAN lun on multiple servers" if I can
>> guarantee each server will only write to its own specific directory
>> and never anywhere the other servers will be writing? (I even have
>> the SAN mapped to each server using different LUNs.)
>
> Mounting the same (non-cluster) filesystem on multiple servers is
> guaranteed to destroy the filesystem: you will have multiple servers
> writing to the same metadata area and the same journal area, generally
> trampling all over each other.
> Luckily, I think most modern filesystems would detect that the FS is
> mounted somewhere else and refuse to mount it again without big fat
> warnings.
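>
> For example, ext4 has an optional multiple-mount protection (MMP)
> feature aimed at exactly this failure mode; something along these
> lines (the device path is just an example) turns it on while the
> filesystem is unmounted:
>
>     # enable multiple-mount protection on an unmounted ext4 filesystem
>     tune2fs -O mmp /dev/mapper/san-lun01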
>
> kind regards
> Ronny Aasen

In general, you can get shared file systems (i.e., the clients can all see the 
same files and directories) with lots of different approaches:

* use a shared disk file system like GFS2 or OCFS2 - all of the "clients" where
the applications run are part of the cluster and each server attaches to the
shared storage (through iSCSI, FC, whatever). They do require HA cluster
infrastructure for things like fencing.

* use a distributed file system like CephFS, GlusterFS, etc. - your clients
access it through a file-system-specific protocol; they don't see the raw
storage.

* take any file system (local or other) and re-export it as a client/server
type of file system by using an NFS server or Samba server (see the rough
sketch below).
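
As a rough sketch of that last option (the export path, network, and
hostname are just example names), on the server that already has the
filesystem mounted:

    # /etc/exports on the exporting server
    /mnt/shared01  192.168.1.0/24(rw,sync,no_subtree_check)

    # reload the export table on the server
    exportfs -ra

    # then on each client
    mount -t nfs server1:/mnt/shared01 /mnt/shared01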

Ric
