Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-16 Thread ML mail
Thanks for the command; this is the result of that ls:

lrwxrwxrwx 1 root root 60 Jul 26 16:12 
/data/cloud-pro/brick/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2 -> 
../../aa/63/aa63d3f7-656e-4e92-a016-a878714e89f8/Booking
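The symlink above already answers the question. As I understand the .glusterfs backend layout (so treat the helper below as an assumption, not official Gluster tooling), a directory's GFID entry is a symlink into its parent's GFID path, while a regular file's entry is a hard link to the file itself:

```python
import os

def gfid_kind(brick: str, gfid: str) -> str:
    """Classify a GFID by its entry under <brick>/.glusterfs.

    Directories are stored there as symlinks and regular files as hard
    links, so islink() distinguishes the two. Hypothetical helper; it
    must be run on the brick host itself.
    """
    p = os.path.join(brick, ".glusterfs", gfid[:2], gfid[2:4], gfid)
    return "directory" if os.path.islink(p) else "file"

# With the values from this thread (run on the brick host):
#   gfid_kind("/data/cloud-pro/brick",
#             "f7eb9d21-d39a-4dd6-941c-46d430e18aa2")
```

By that convention, the GFID in the failing rename refers to the 'Booking' directory, not a file.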




On Friday, September 16, 2016 11:58 AM, Aravinda wrote:
We can check in the brick backend:

ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2

regards
Aravinda



Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-16 Thread Aravinda

We can check in the brick backend:

ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2

regards
Aravinda

On Thursday 15 September 2016 09:12 PM, ML mail wrote:

So I ran on my master "find /mybrick -name 'File 2016.xlsx'" and got the
following two entries:

355235   19 drwxr-xr-x   3 www-data www-data3 Sep 15 12:48 
./data/username/files_encryption/keys/files/Dir/File\ 2016.xlsx
355234   56 -rw-r--r--   2 www-data www-data44308 Sep 15 12:48 
./data/username/files/Dir/Ein\ und\ Ausgaben\ File\ 2016.xlsx


As you can see, there is one file and one directory named 'File 2016.xlsx'.
This is how the web application Nextcloud behaves when encryption is enabled:
the file itself is encrypted, and the directory named after the file contains
the encryption keys used to encrypt/decrypt that specific file.

Now the next thing would be to find out whether geo-rep failed on the file or
on the directory named 'File 2016.xlsx'. Any ideas how I can check that, and
what else would you need to debug this issue?

Regards
ML






Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-15 Thread ML mail
So I ran on my master "find /mybrick -name 'File 2016.xlsx'" and got the
following two entries:

355235   19 drwxr-xr-x   3 www-data www-data3 Sep 15 12:48 
./data/username/files_encryption/keys/files/Dir/File\ 2016.xlsx
355234   56 -rw-r--r--   2 www-data www-data44308 Sep 15 12:48 
./data/username/files/Dir/Ein\ und\ Ausgaben\ File\ 2016.xlsx


As you can see, there is one file and one directory named 'File 2016.xlsx'.
This is how the web application Nextcloud behaves when encryption is enabled:
the file itself is encrypted, and the directory named after the file contains
the encryption keys used to encrypt/decrypt that specific file.
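An aside on the errno seen later in the thread (OSError: [Errno 39] Directory not empty): rename(2) returns ENOTEMPTY when the rename target exists as a non-empty directory. A minimal sketch of that failure mode follows; it is an illustration only, and whether this is exactly what happens on the geo-rep slave is not confirmed in this thread:

```python
import errno
import os
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, "src-dir")
dst = os.path.join(root, "File 2016.xlsx")  # name borrowed from the thread

os.mkdir(src)
os.makedirs(os.path.join(dst, "keys"))      # target exists and is non-empty

err = None
try:
    os.rename(src, dst)                     # a rename like the ones gsyncd replays
except OSError as exc:
    err = exc

# On Linux this is errno 39 (ENOTEMPTY), matching the geo-rep traceback.
print(errno.errorcode[err.errno], os.strerror(err.errno))
```

If the destination on the slave were a file rather than a non-empty directory, the rename would simply overwrite it, which is why the file-vs-directory question matters.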

Now the next thing would be to find out whether geo-rep failed on the file or
on the directory named 'File 2016.xlsx'. Any ideas how I can check that, and
what else would you need to debug this issue?

Regards
ML





On Thursday, September 15, 2016 9:12 AM, Aravinda wrote:
Thanks for the logs from the Master. The error below says the directory is
not empty, but the names look like files. Please confirm from the Master
volume whether that path is a directory or a file.

[2016-09-13 19:30:57.475649] W [fuse-bridge.c:1787:fuse_rename_cbk] 
0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 
2016.xlsx.ocTransferId1333449197.part -> 
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1 
(Directory not empty)


regards
Aravinda



Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-15 Thread Aravinda
Thanks for the logs from the Master. The error below says the directory is
not empty, but the names look like files. Please confirm from the Master
volume whether that path is a directory or a file.


[2016-09-13 19:30:57.475649] W [fuse-bridge.c:1787:fuse_rename_cbk] 
0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 
2016.xlsx.ocTransferId1333449197.part -> 
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1 
(Directory not empty)



regards
Aravinda


Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-14 Thread ML mail
As requested, you will find below the last few hundred lines of the geo-rep
log file from the master node.


FILE: 
/var/log/glusterfs/geo-replication/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.log


[2016-09-13 19:39:48.686060] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
: exiting.
[2016-09-13 19:39:48.688112] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:39:48.688461] I [syncdutils(agent):220:finalize] : exiting.
[2016-09-13 19:39:49.518510] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:39:59.675405] I [monitor(monitor):266:monitor] Monitor: 

[2016-09-13 19:39:59.675740] I [monitor(monitor):267:monitor] Monitor: starting 
gsyncd worker
[2016-09-13 19:39:59.768181] I [gsyncd(/data/cloud-pro/brick):710:main_i] 
: syncing: gluster://localhost:cloud-pro -> 
ssh://r...@gfs1geo.domain.tld:gluster://localhost:cloud-pro-geo
[2016-09-13 19:39:59.768640] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-09-13 19:40:02.554076] I 
[master(/data/cloud-pro/brick):83:gmaster_builder] : setting up xsync 
change detection mode
[2016-09-13 19:40:02.554500] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:02.555332] I 
[master(/data/cloud-pro/brick):83:gmaster_builder] : setting up changelog 
change detection mode
[2016-09-13 19:40:02.555600] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:02.556711] I 
[master(/data/cloud-pro/brick):83:gmaster_builder] : setting up 
changeloghistory change detection mode
[2016-09-13 19:40:02.556983] I [master(/data/cloud-pro/brick):367:__init__] 
_GMaster: using 'rsync' as the sync engine
[2016-09-13 19:40:04.692655] I [master(/data/cloud-pro/brick):1249:register] 
_GMaster: xsync temp directory: 
/var/lib/misc/glusterfsd/cloud-pro/ssh%3A%2F%2Froot%40192.168.20.107%3Agluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo/6b844f56a11ecd24d5e36242f045e58c/xsync
[2016-09-13 19:40:04.692945] I 
[resource(/data/cloud-pro/brick):1491:service_loop] GLUSTER: Register time: 
1473795604
[2016-09-13 19:40:04.724827] I [master(/data/cloud-pro/brick):510:crawlwrap] 
_GMaster: primary master with volume id d99af2fa-439b-4a21-bf3a-38f3849f87ec ...
[2016-09-13 19:40:04.734505] I [master(/data/cloud-pro/brick):519:crawlwrap] 
_GMaster: crawl interval: 1 seconds
[2016-09-13 19:40:04.754487] I [master(/data/cloud-pro/brick):1163:crawl] 
_GMaster: starting history crawl... turns: 1, stime: (1473688054, 0), etime: 
1473795604
[2016-09-13 19:40:05.757395] I [master(/data/cloud-pro/brick):1192:crawl] 
_GMaster: slave's time: (1473688054, 0)
[2016-09-13 19:40:05.832189] E [repce(/data/cloud-pro/brick):207:__call__] 
RepceClient: call 28167:140561674450688:1473795605.78 (entry_ops) failed on 
peer with OSError
[2016-09-13 19:40:05.832458] E 
[syncdutils(/data/cloud-pro/brick):276:log_raise_exception] : FAIL: 
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
201, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 
720, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 
1497, in service_loop
g3.crawlwrap(oneshot=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
571, in crawlwrap
self.crawl()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1201, in crawl
self.changelogs_batch_process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
1107, in changelogs_batch_process
self.process(batch)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
992, in process
self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 
934, in process_change
failures = self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
226, in __call__
return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 
208, in __call__
raise res
OSError: [Errno 39] Directory not empty
[2016-09-13 19:40:05.834218] I [syncdutils(/data/cloud-pro/brick):220:finalize] 
: exiting.
[2016-09-13 19:40:05.836263] I [repce(agent):92:service_loop] RepceServer: 
terminating on reaching EOF.
[2016-09-13 19:40:05.836611] I [syncdutils(agent):220:finalize] : exiting.
[2016-09-13 19:40:06.558499] I [monitor(monitor):343:monitor] Monitor: 
worker(/data/cloud-pro/brick) died in startup phase
[2016-09-13 19:40:16.715217] I [monitor(monitor):266:monitor] Monitor: 

[2016-09-13 19:40:16.71551
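A side note on the crawl lines above: stime, etime, and Register time are plain Unix timestamps (1473795604 matches the 19:40:04 log line, so they appear to be UTC). A small convenience snippet for reading them, nothing gsyncd-specific:

```python
from datetime import datetime, timezone

def ts(epoch: int) -> str:
    """Render a gsyncd epoch value as a UTC timestamp string."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S UTC")

print(ts(1473795604))  # etime / Register time -> 2016-09-13 19:40:04 UTC
print(ts(1473688054))  # stime: where the last successful sync stopped
```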

Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-13 Thread Aravinda

Thanks, but I couldn't find the output of one log file. To get the log file
path, run:

gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config log_file


regards
Aravinda

On Wednesday 14 September 2016 12:02 PM, ML mail wrote:

Dear Aravinda,

As requested I have attached within this mail a file containing the last 
100-300 lines of the three files located in the requested directory on the 
master node.

Let me know if you did not receive the file; I am not sure it is possible to
attach files on this mailing list.

Regards,
ML




On Wednesday, September 14, 2016 6:14 AM, Aravinda wrote:
Please share the logs from the Master node which is
Faulty (/var/log/glusterfs/geo-replication/__/*.log)

regards
Aravinda


On Wednesday 14 September 2016 01:10 AM, ML mail wrote:

Hi,

I just discovered that one of my replicated glusterfs volumes is not being 
geo-replicated to my slave node (STATUS Faulty). The log file on the geo-rep 
slave node indicates an error with a directory which seems not to be empty. 
Below you will find the full log entry for this problem which gets repeated 
every 5 seconds.

I am using GlusterFS 3.7.12 on Debian 8 with FUSE mount on the clients.


How can I fix this issue?

Thanks in advance for your help

Regards
ML


Log File: 
/var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log

[2016-09-13 19:30:52.098881] I [MSGID: 100030] [glusterfsd.c:2338:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.12 
(args: /usr/sbin/glusterfs --aux-gfid-mount --acl 
--log-file=/var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log
 --volfile-server=localhost --volfile-id=cloud-pro-geo --client-pid=-1 
/tmp/gsyncd-aux-mount-X9XX0v)
[2016-09-13 19:30:52.109030] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2016-09-13 19:30:52.111565] I [graph.c:269:gf_add_cmdline_options] 
0-cloud-pro-geo-md-cache: adding option 'cache-posix-acl' for volume 
'cloud-pro-geo-md-cache' with value 'true'
[2016-09-13 19:30:52.113344] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 2
[2016-09-13 19:30:52.113710] I [MSGID: 114020] [client.c:2106:notify] 
0-cloud-pro-geo-client-0: parent translators are ready, attempting connect on 
transport
Final graph:
+--+
1: volume cloud-pro-geo-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host gfs1geo.domain.tld
5: option remote-subvolume /data/cloud-pro-geo/brick
6: option transport-type socket
7: option username 92671588-f829-4c03-a80f-6299b059452e
8: option password e1d425d4-dfe7-477e-8dc1-3704c9c9df83
9: option send-gids true
10: end-volume
11:
12: volume cloud-pro-geo-dht
13: type cluster/distribute
14: subvolumes cloud-pro-geo-client-0
15: end-volume
16:
17: volume cloud-pro-geo-write-behind
18: type performance/write-behind
19: subvolumes cloud-pro-geo-dht
20: end-volume
21:
22: volume cloud-pro-geo-read-ahead
23: type performance/read-ahead
24: subvolumes cloud-pro-geo-write-behind
25: end-volume
26:
27: volume cloud-pro-geo-readdir-ahead
28: type performance/readdir-ahead
29: subvolumes cloud-pro-geo-read-ahead
30: end-volume
31:
32: volume cloud-pro-geo-io-cache
33: type performance/io-cache
34: subvolumes cloud-pro-geo-readdir-ahead
35: end-volume
36:
37: volume cloud-pro-geo-quick-read
38: type performance/quick-read
39: subvolumes cloud-pro-geo-io-cache
40: end-volume
41:
42: volume cloud-pro-geo-open-behind
43: type performance/open-behind
44: subvolumes cloud-pro-geo-quick-read
45: end-volume
46:
47: volume cloud-pro-geo-md-cache
48: type performance/md-cache
49: option cache-posix-acl true
50: subvolumes cloud-pro-geo-open-behind
51: end-volume
52:
53: volume cloud-pro-geo
54: type debug/io-stats
55: option log-level INFO
56: option latency-measurement off
57: option count-fop-hits off
58: subvolumes cloud-pro-geo-md-cache
59: end-volume
60:
61: volume posix-acl-autoload
62: type system/posix-acl
63: subvolumes cloud-pro-geo
64: end-volume
65:
66: volume gfid-access-autoload
67: type features/gfid-access
68: subvolumes posix-acl-autoload
69: end-volume
70:
71: volume meta-autoload
72: type meta
73: subvolumes gfid-access-autoload
74: end-volume
75:
+--+
[2016-09-13 19:30:52.115096] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 
0-cloud-pro-geo-client-0: changing port to 49154 (from 0)
[2016-09-13 19:30:52.121610] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 
0-cloud-pro-geo-client-0: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2016-09-13 19:30:52.12

Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-13 Thread ML mail
Dear Aravinda,

As requested I have attached within this mail a file containing the last 
100-300 lines of the three files located in the requested directory on the 
master node.

Let me know if you did not receive the file; I am not sure it is possible to
attach files on this mailing list.

Regards,
ML




On Wednesday, September 14, 2016 6:14 AM, Aravinda wrote:
Please share the logs from the Master node which is
Faulty (/var/log/glusterfs/geo-replication/__/*.log)

regards
Aravinda


> 59: end-volume
> 60:
> 61: volume posix-acl-autoload
> 62: type system/posix-acl
> 63: subvolumes cloud-pro-geo
> 64: end-volume
> 65:
> 66: volume gfid-access-autoload
> 67: type features/gfid-access
> 68: subvolumes posix-acl-autoload
> 69: end-volume
> 70:
> 71: volume meta-autoload
> 72: type meta
> 73: subvolumes gfid-access-autoload
> 74: end-volume
> 75:
> +--+
> [2016-09-13 19:30:52.115096] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 
> 0-cloud-pro-geo-client-0: changing port to 49154 (from 0)
> [2016-09-13 19:30:52.121610] I [MSGID: 114057] 
> [client-handshake.c:1437:select_server_supported_programs] 
> 0-cloud-pro-geo-client-0: Using Program GlusterFS 3.3, Num (1298437), Vers

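The "Directory not empty" warning in this thread's subject is ordinary POSIX rename(2) behaviour: renaming over a destination directory that still contains entries fails with ENOTEMPTY (POSIX also permits EEXIST). A minimal local reproduction, independent of Gluster:

```python
import errno
import os
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, "src")
dst = os.path.join(base, "dst")
os.mkdir(src)
os.mkdir(dst)
# The destination directory still holds an entry, so rename() over it must fail.
open(os.path.join(dst, "keyfile"), "w").close()

try:
    os.rename(src, dst)
except OSError as e:
    # ENOTEMPTY on Linux; POSIX also allows EEXIST here.
    print("rename failed:", errno.errorcode[e.errno])
```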
Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-13 Thread Aravinda
Please share the logs from the Master node which is 
Faulty (/var/log/glusterfs/geo-replication/__/*.log).


regards
Aravinda
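Aravinda's follow-up in this thread is to inspect the GFID from the rename warning directly on the brick backend: each GFID has an entry under the brick's .glusterfs tree, bucketed by its first two pairs of hex characters. A small sketch of that mapping (the brick path is taken from the thread; the helper name is mine):

```python
def gfid_backend_path(brick_root, gfid):
    """Map a GFID to its backend entry:
    <brick>/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full gfid>."""
    return "%s/.glusterfs/%s/%s/%s" % (brick_root, gfid[:2], gfid[2:4], gfid)

print(gfid_backend_path("/data/cloud-pro/brick",
                        "f7eb9d21-d39a-4dd6-941c-46d430e18aa2"))
# → /data/cloud-pro/brick/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2
```

An `ls -ld` on that path shows what the GFID resolves to; for a directory the entry is a symlink back into the parent GFID's tree, which also recovers the directory's name.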
