This can happen if all your servers were unreachable for a few seconds. The
situation must have rectified itself during the restart. We could confirm
this if you change the NFS log level to DEBUG and send us the log.
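Something like the following should do it. This is only a sketch: I am
assuming the default log location, a placeholder volume name VOLNAME, and
that the diagnostics.client-log-level option also covers the gluster NFS
server process (it runs as a client-side process), so adjust as needed:

    # Raise the log level, reproduce the problem, then send us
    # /var/log/glusterfs/nfs.log
    gluster volume set VOLNAME diagnostics.client-log-level DEBUG

    # Drop it back to the default once the log has been captured
    gluster volume set VOLNAME diagnostics.client-log-level INFO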
Thanks
-Shehjar
Jurgen Winkler wrote:
Hi,
I noticed a strange behavior with NFS and
Joshua,
You are right. Even though the GlusterFS native client provides redundancy
and high availability, the mount itself is performed against a single
server. The standard way to work around this is to have a ucarp-based VIP
just for the purpose of mounting. Other ways include the techniques
mentioned above.
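For illustration, a rough sketch of the ucarp plus mount-time VIP approach;
the interface, addresses, VHID, password and volume name below are
placeholders rather than anything from your setup, so substitute your own:

    # On each server, advertise a shared virtual IP (here 192.168.1.100).
    # -s is this server's own address, -v/-p are the shared VHID and
    # password, and the up/down scripts add or remove the VIP on the NIC.
    ucarp -i eth0 -s 192.168.1.10 -v 10 -p secret -a 192.168.1.100 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -B

    # Clients mount through the VIP. After the mount, the native client
    # talks to all bricks directly, so the VIP only matters at mount time.
    mount -t glusterfs 192.168.1.100:/VOLNAME /mnt/glusterfs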
This is interesting. write-behind guarantees the ordering of reads and
writes such that you always read back the data that was written. Do you
happen to know whether Mercurial reads and writes through different file
descriptors on the same file? Can you give us logs and configuration
details of your setup?
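If it is easy to reproduce, it would also help to know whether the
behaviour changes with write-behind turned off. A quick test, assuming a
volume named VOLNAME:

    # Disable the write-behind translator and re-run the Mercurial operation
    gluster volume set VOLNAME performance.write-behind off

    # Re-enable it afterwards
    gluster volume set VOLNAME performance.write-behind on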
Avati
On Mon, Jun 6,
Hi,
I have the same problem as Juergen.
My volume is a simple replicated volume with 2 host and GlusterFS 3.2.0
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options
Hey,
I've tried looking at the documentation and googled some.
I couldn't find what I was looking for; I hope I'm not asking anything
obvious.
So I have two servers and four clients.
They seem to work fine, except that one of the clients lists some
directories as empty, which do show