Hi all,
I'm using nfs-ganesha v2.4 to build an NFS export over RGW with Ceph v11.0.2. I ran
some experiments and hit something I can't understand:
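For reference, a minimal RGW FSAL export in ganesha.conf looks roughly like this (the user id and keys below are placeholders, not my real configuration):

```
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;

    FSAL {
        Name = RGW;
        User_Id = "nfsuser";               # placeholder RGW user
        Access_Key_Id = "access_key";      # placeholder
        Secret_Access_Key = "secret_key";  # placeholder
    }
}

RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
}
```

With this export, each RGW bucket of that user appears as a top-level directory under the mount point.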

1. I created a new bucket directly in the Ceph cluster with s3cmd, and the
bucket was created successfully. However, the new bucket did not show up in a
directory listing of the mount point, even though I could still access it by
name through the mount point.

2. I set up two mounts (nfs-client A and nfs-client B). When I created a new
bucket on nfs-client A, it showed up on nfs-client B after a short delay, and
it was also created successfully in the backend (the Ceph cluster).

I checked the log file and the source code, and I found that when a new bucket
is created directly in the Ceph cluster, nfs-ganesha refreshes the FSAL but
not the MDCACHE, so the new bucket is not visible at the mount point. However,
if the bucket is created through an NFS client, nfs-ganesha updates both the
FSAL and the MDCACHE.

This doesn't seem like a bug, but can somebody explain why nfs-ganesha works
this way? Would it cost too much to sync both the FSAL and the MDCACHE
whenever the Ceph cluster is updated out of band?

Thanks for your attention!
-- 

Tao CHEN

Engineering student, Systems, Networks and Telecommunications

*Université de Technologie de Troyes (UTT)*

Email: sebastien.che...@gmail.com
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
