[ceph-users] Ceph Cluster with 3 Machines

2018-05-29 Thread Joshua Collins

Hi

I've had a go at setting up a Ceph cluster but I've run into some issues.

I have 3 physical machines to set up a Ceph cluster, and two of these 
machines will be part of a HA pair using corosync and Pacemaker.


I keep running into filesystem lock issues on unmount when I have a 
machine running an OSD and monitor while also mapping and mounting an 
RBD image on that same machine.


Moving the OSD and monitor to a VM so that I could mount the RBD on the 
host hasn't fixed the issue.


Is there a way to set this up to avoid the filesystem lock issues I'm 
encountering?


Thanks in advance

Josh
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Cluster with 3 Machines

2018-05-29 Thread David Turner
Using the kernel driver to map RBDs on a host that also runs OSDs is known
to cause system locks (the kernel client and the OSD can deadlock under
memory pressure).  The way to avoid this is to use rbd-nbd or rbd-fuse
instead of the kernel driver if you NEED to map the RBD on the same host as
any OSDs.
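
For reference, a sketch of the rbd-nbd approach described above. The pool
and image names are examples only, and this assumes the rbd-nbd package is
installed and the host has a working client keyring; adapt to your cluster:

```shell
# Map the image through the userspace NBD client instead of the
# kernel rbd module ("rbd" = pool, "myimage" = image; both are
# placeholder names).  Prints the device it attached, e.g. /dev/nbd0.
rbd-nbd map rbd/myimage

# Use the device like any block device.
# (mkfs only on first use -- it destroys existing data.)
mkfs.ext4 /dev/nbd0
mount /dev/nbd0 /mnt

# Unmount and unmap cleanly when done, before shutting the node down.
umount /mnt
rbd-nbd unmap /dev/nbd0
```

Because rbd-nbd runs the RBD client in userspace, the I/O path does not
re-enter the kernel's rbd module on the OSD host, which is what avoids
the lock-up.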

On Tue, May 29, 2018 at 7:34 AM Joshua Collins wrote:
