On 10/19/2014 06:56 PM, Niels de Vos wrote:
On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
Hi,
[root@node0 ~]# tail -n 20 /var/log/glusterfs/nfs.log
[2014-10-18 07:41:06.136035] E [graph.c:307:glusterfs_graph_init] 0-nfs-server:
initializing translator failed
[2014-10-18
Hi,
This is the full nfs.log after deleting it and rebooting.
It refers to a portmap registration problem.
[root@node0 glusterfs]# cat nfs.log
[2014-10-20 06:48:43.221136] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs:
Started running /usr/sbin/glusterfs version 3.5.2 (/usr/sbin/glusterfs -s
localhost
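For a portmap registration failure like the one above, a first check is whether the Gluster NFS programs ever made it into rpcbind's table. A minimal sketch: on a live node you would run `rpcinfo -p localhost`; the sample table below stands in for that output (the program numbers are the standard RPC assignments, the ports are illustrative) so the filter can be run anywhere.

```shell
# On a live node:  rpcinfo -p localhost
# Sample stand-in for that output; illustrative ports.
sample='100000 2 tcp 111 portmapper
100003 3 tcp 2049 nfs
100005 3 tcp 38465 mountd'

# Keep only the NFS-related registrations
# (100003 nfs, 100005 mountd, 100021 nlockmgr):
printf '%s\n' "$sample" | awk '$1 ~ /^(100003|100005|100021)$/'
```

If those rows are missing on the real node, the NFS translator failed before registering, which matches the graph-init error in the log.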
Ok, no problem. The issue is very rare, even with our setup - we have seen it
only once on one site even though we have been in production for several months
now. For now, we can live with that IMO.
And, thanks again.
Anirban
On 18/10/14 12:46, James Payne wrote:
Not in my particular use case, which is where a new folder or file is created in Windows through Explorer. The new folder is created by Windows with the name 'New Folder', which the user will then almost certainly rename. The same goes with newly created
It's also odd, because meanwhile the portmapper is listening on localhost.
[root@node0 log]# netstat -tunlp | grep 111
tcp    0    0 0.0.0.0:111    0.0.0.0:*    LISTEN    4709/rpcbind
tcp6   0    0 :::111         :::*         LISTEN
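The last netstat column is `PID/program`, which is what tells you rpcbind (not some other daemon) holds port 111. A small sketch of pulling the program name out of such a line, using the line captured above as sample input:

```shell
# Sample netstat line from the transcript above.
line='tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 4709/rpcbind'

# Split the final PID/program field on "/" and print the name.
printf '%s\n' "$line" | awk '{split($NF, a, "/"); print a[2]}'
```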
On 18/10/14 20:31, Justin Clift wrote:
- Original Message -
snip
Right now, distributed geo-replication has a bunch of known issues with deletes
and renames. Part of the issue was solved by a patch sent upstream
recently, but it still doesn't solve the complete issue.
snip
Do we have an idea
Hi,
I've spotted what may be a small bug (or an unavoidable feature?) with
the way a gluster volume reports free space while a replicated brick is
re-syncing, or it may be that there's a setting I need to change.
Using gluster 3.5.2 on CentOS 7, I created a volume with 3 servers using 3
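The dip in reported free space during a re-sync has a simple model: on a replicated volume, every write goes to all bricks in the set, so the usable free space is bounded by the brick with the least free space, and a brick that is mid-resync can briefly report less. A toy sketch of that bound (the KiB values are made up):

```shell
# Free KiB reported by three replica bricks; the second one is
# mid-resync and temporarily reports less.  Values are made up.
brick_free='1048576
524288
917504'

# The volume can only promise the minimum across the replica set:
printf '%s\n' "$brick_free" | sort -n | head -n 1
```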
Hi,
Thank you for your reply.
I followed your recommendations, but nothing changed.
There is nothing new in the nfs.log.
[root@node0 glusterfs]# reboot
Connection to 172.16.0.10 closed by remote host.
Connection to 172.16.0.10 closed.
[tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
- Original Message -
The solution involves changelog crash consistency among other things.
Since this feature itself is targeted for glusterfs-3.7, I would say the
complete solution would be available with glusterfs-3.7
One of the major challenges in solving it involves
On 10/16/2014 2:48 PM, Łukasz Zygmański wrote:
Hello,
I am new to this list and new to GlusterFS, so I would be grateful if you
could help me.
I am trying to do this setup:
client1 (10.75.2.45)
    |
    | MTU 1500
    v
(10.75.2.41)
gluster1 ------- gluster2
(10.75.2.43) ---
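Since the diagram calls out MTU 1500 on the client path, one quick sanity check is whether full-size frames traverse it unfragmented. The largest ICMP payload that fits is the MTU minus the 20-byte IP header and the 8-byte ICMP header; the `ping` flags shown in the comment are Linux iputils' "don't fragment" options.

```shell
# Largest ICMP payload that fits in one unfragmented frame:
mtu=1500
payload=$((mtu - 20 - 8))
echo "$payload"

# On a live host (Linux iputils):
#   ping -M do -s "$payload" 10.75.2.41
```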