You’re using a file system on two hosts that is not cluster aware. Each node 
caches metadata in memory and writes to the shared block device 
independently, so metadata written on hosta is never sent to hostb; you only 
see the other node's changes after a fresh mount, and concurrent writes from 
both nodes can even corrupt the file system. You may be interested in 
looking at CephFS for this use case.
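
As a rough sketch (assuming an MDS is deployed and the kernel CephFS client 
is available; the monitor address 192.168.0.1 and the admin secret below are 
placeholders for your own cluster), you would mount the same CephFS tree on 
both nodes instead of the RBD image:

#node1 and #node2
mount -t ceph 192.168.0.1:6789:/ /mnt -o name=admin,secret=<your-admin-key>

With CephFS the MDS coordinates metadata between clients, so a file touched 
on one node shows up on the other without remounting.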


Michael Kuriger
mk7...@yp.com
818-649-7235
MikeKuriger (IM)

From: Rafał Michalak <rafa...@gmail.com>
Date: Wednesday, January 14, 2015 at 5:20 AM
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: [ceph-users] two mount points, two different data

Hello, I am having trouble with this situation:

#node1
mount /dev/rbd/rbd/test /mnt
cd /mnt
touch test1
ls (I see test1, OK)

#node2
mount /dev/rbd/rbd/test /mnt
cd /mnt
ls (I see test1, OK)
touch test2
ls (I see test2, OK)

#node1
ls (I see test1, BAD)
touch test3
ls (I see test1, test3, BAD)

#node2
ls (I see test1, test2, BAD)

Why is the data not replicated between the mounted file systems?
I tried with the ext4 and xfs file systems.
The data is visible only after unmounting and mounting again.

I checked the health with "ceph status" and it is HEALTH_OK.

What am I doing wrong?
Thanks for any help.


My system
Ubuntu 14.04.1 LTS

#ceph --version
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)

#modinfo libceph
filename:       /lib/modules/3.13.0-44-generic/kernel/net/ceph/libceph.ko
license:        GPL
description:    Ceph filesystem for Linux
author:         Patience Warnick <patie...@newdream.net>
author:         Yehuda Sadeh <yeh...@hq.newdream.net>
author:         Sage Weil <s...@newdream.net>
srcversion:     B8E83D4DFC53B113603CF52
depends:        libcrc32c
intree:         Y
vermagic:       3.13.0-44-generic SMP mod_unload modversions
signer:         Magrathea: Glacier signing key
sig_key:        50:8C:3B:4B:F1:08:ED:36:B6:06:2F:81:27:82:F7:7C:37:B9:85:37
sig_hashalgo:   sha512
