Dragon,
If the mount point is /mnt/master-vol2, then the log file would be
mnt-master-vol2.log.
Is this log file empty as well? Could you attach the logs for the mount point
that were rotated out by logrotate?
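(As an aside, the way the client log name is derived from the mount point can be sketched in shell. This is an illustration based on the naming pattern above, not official tooling; the default log directory /var/log/glusterfs is an assumption.)

```shell
# Sketch (assumption from this thread): the FUSE client log name is the
# mount point with the leading slash dropped and remaining slashes
# replaced by dashes, plus a ".log" suffix.
mountpoint="/mnt/master-vol2"
logname="$(printf '%s' "${mountpoint#/}" | tr '/' '-').log"
echo "/var/log/glusterfs/$logname"   # prints /var/log/glusterfs/mnt-master-vol2.log
```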
thanks,
krish
- Original Message -
> Hello Krish,
>
> I mounted the volume vol2 like this: "mount -
Hello Krish,
I mounted the volume vol2 like this: "mount -t glusterfs node2:/vol2 /mnt/master-vol2/". The log rotated yesterday contains no lines about the problem.
--
Dragon,
From which mount point are you trying to access the files in the volume?
If you are attempting to access
Hi Krish,
I can only give you the last lines of the log. The client log "mnt-master.log" is empty.
NFS.Log ->
+--+
[2013-09-18 09:21:09.240892] I [rpc-clnt.c:1676:rpc_clnt_reconfig] 0-vol2-client-3: changing port to 49155 (from
Dragon,
Could you attach the brick and client log files? This information
is not sufficient. The error messages about the actor error in the
etc-glusterfs.. log make me believe that the client volfile is pointing to
glusterd (the management daemon) as the brick process. So, it would
help if you provid
Hello Krish,
here are the results:
#gluster volume info vol2
Volume Name: vol2
Type: Stripe
Volume ID: b6adce45-ee99-4f2b-bd48-9d7e0fbcb827
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/glusterfs/vol2
Brick2: node2:/glusterfs/vol2
Brick3: node3:/glusterfs/vol
Dragon,
Could you attach the brick log files, the client log file(s), and the output
of the following commands:
gluster volume info VOLNAME
gluster volume status VOLNAME
Could you attach the "etc-glusterfs.." log as well?
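(For reference, the requested information could be gathered with something like the following. This is only a sketch: VOLNAME and the default log locations under /var/log/glusterfs are assumptions that may differ on your nodes.)

```shell
# Run on one of the gluster nodes; adjust VOLNAME and paths as needed.
VOLNAME=vol2
gluster volume info "$VOLNAME"   > "volinfo-$VOLNAME.txt"
gluster volume status "$VOLNAME" > "volstatus-$VOLNAME.txt"
# Bundle the command output with the brick logs and the glusterd log
# (the "etc-glusterfs.." log mentioned above).
tar czf gluster-debug.tar.gz \
    "volinfo-$VOLNAME.txt" "volstatus-$VOLNAME.txt" \
    /var/log/glusterfs/bricks/ \
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```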
thanks,
krish
- Original Message -
> Hello,
> I didn't find any hint of an erro
Hello,
I didn't find any hint of an error. I then restarted all the servers and watched the "etc-glusterfs.." log. The only thing I found is: "rpc actor failed to complete successfully". All the peers look good, and so does the volume. I can see the files in the data folder of each brick, but after the FUSE mount on
Hello,
I have a test system with 1 x 4 = 4 bricks in a stripe volume. After one brick lost its network connection and was restarted, I can't see any files or folders on a client mounted via the native client. peer status shows all the bricks, and volume info shows the volume correctly. I restarted glusterd on all the bricks and remounte