Hi,
We have servers with OCFS2 mounted on top of Ceph RBD. When we move a folder on one node (within the same disk), the other nodes simultaneously show input/output errors on the moved data; the move does not replicate correctly, while copying works without any problem. As a workaround, remounting the partition resolves the issue, but after some time the problem reoccurs. Please help with this issue.

Note: We have 5 nodes in total. Two nodes work fine; the other nodes show input/output errors on the moved data, like this:

ls -althr
ls: cannot access LITE_3_0_M4_1_TEST: Input/output error
ls: cannot access LITE_3_0_M4_1_OLD: Input/output error
total 0
d????????? ? ? ? ? ? LITE_3_0_M4_1_TEST
d????????? ? ? ? ? ? LITE_3_0_M4_1_OLD

Regards
Prabu

---- On Fri, 22 May 2015 17:33:04 +0530 Frédéric Nass <frederic.n...@univ-lorraine.fr> wrote ----

Hi,

While waiting for CephFS, you can use a clustered filesystem such as OCFS2 or GFS2 on top of RBD mappings, so that each host can access the same device through a clustered filesystem.

Regards,

Frédéric.

On 21/05/2015 16:10, gjprabu wrote:

Hi All,

We are using rbd and map the same rbd image to an rbd device on two different clients, but one client cannot see data written by the other until we umount and mount the partition again. Kindly share a solution for this issue.

Example:
- create an rbd image named foo
- map foo to /dev/rbd0 on server A, mount /dev/rbd0 to /mnt
- map foo to /dev/rbd0 on server B, mount /dev/rbd0 to /mnt

Regards
Prabu

--
Frédéric Nass
Sous direction des Infrastructures, Direction du Numérique, Université de Lorraine.
Tél : 03.83.68.53.83

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
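For readers following the thread, the OCFS2-on-RBD setup being discussed looks roughly like the sketch below. This is a hedged outline only: the pool/image name (rbd/foo), device path (/dev/rbd0), and mount point (/mnt) are assumptions for illustration, and it presupposes an already-working Ceph cluster plus an /etc/ocfs2/cluster.conf listing all participating nodes.

```shell
# One-time, on one node only: create the shared image and format it
# with one node slot per cluster member (5 nodes in this thread).
rbd create foo --size 102400          # image name/size assumed for illustration
rbd map rbd/foo                       # typically appears as /dev/rbd0
mkfs.ocfs2 -N 5 /dev/rbd0

# On every node: map the same image, bring the O2CB cluster stack
# online (requires a matching /etc/ocfs2/cluster.conf on all nodes),
# and mount the shared filesystem.
rbd map rbd/foo
service o2cb online
mount -t ocfs2 /dev/rbd0 /mnt
```

Mounting the same RBD device with a non-clustered filesystem (as in the quoted example at the bottom of the thread) gives no cache coherency between nodes, which is why data only appears after a remount; the O2CB cluster stack is what coordinates access here.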