Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Felix Kölzow

Dear Mauro,

I also faced this issue several times, even on a host with a single brick
and a single volume.


The solution for me was to figure out the leftover file and directory names
directly in the brick directory.

Let's suppose the leftover names are hiddenFile and hiddenDirectory.
Afterwards, go to the corresponding path on the mount directory that is
related to your issue and try to remove the file and the directory, i.e.:

cd RECOVERY20190416/GlobNative/20190505/

rm hiddenFile

rm -r hiddenDirectory
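
A rough sketch of that comparison step, with a placeholder brick path (the
brick path below is hypothetical; the mount path is the one from this thread):

# on a brick server: what the brick really contains for that directory
ls -la /path/to/brick/RECOVERY20190416/GlobNative/20190505/

# on a client: the same directory seen through the mount point
ls -la /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505/

# entries that appear only on the brick side are the leftovers; remove them by
# name through the mount, as in the rm commands above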


I would be interested to know whether this procedure also works for you.

Regards,

Felix


On 25/03/2020 21:55, Mauro Tridici wrote:

Hi Strahil,

unfortunately, no process is holding the file or the directory.
Do you know if some other community user could help me?

Thank you,
Mauro




On 25 Mar 2020, at 21:08, Strahil Nikolov <hunter86...@yahoo.com> wrote:

You can also check if there is a process holding a file that was
deleted there:
lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505

If it's not that one, I'm out of ideas :)

It's not recommended to delete it from the bricks, so avoid that if
possible.

Best Regards,
Strahil Nikolov






On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici
<mauro.trid...@cmcc.it> wrote:





Hi Strahil,

thank you for your answer.
The directory is empty and no immutable bit has been assigned to it.

[athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]>
ls -la
total 8
drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..

Any other idea related to this issue?
Many thanks,
Mauro



On 25 Mar 2020, at 18:32, Strahil Nikolov <hunter86...@yahoo.com> wrote:

On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici
<mauro.trid...@cmcc.it> wrote:

Dear All,

some users who regularly use our gluster file system are experiencing a
strange error when attempting to remove an empty directory.
All bricks are up and running and no particular error has been detected,
but they are not able to remove it successfully.

This is the error they are receiving:

[athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
empty

I tried to delete this directory as the root user, without success.
Do you have some suggestions to solve this issue?

Thank you in advance.
Kind Regards,
Mauro


What do you have in 'RECOVERY20190416/GlobNative/20190505'?

Maybe you have an immutable bit (chattr +i) set on some file/folder?

Best Regards,
Strahil Nikolov












Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users






Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
Take a look at Stefan Solbrig's e-mail 


Best Regards,
Strahil Nikolov


On Wednesday, 25 March 2020 at 22:55:23 GMT+2, Mauro Tridici wrote:





Hi Strahil,

unfortunately, no process is holding the file or the directory.
Do you know if some other community user could help me?

Thank you,
Mauro



> On 25 Mar 2020, at 21:08, Strahil Nikolov  wrote:
> 
> You can also check if there is a process holding a file that was deleted 
> there:
> lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
> 
> If it's not that one, I'm out of ideas :)
>
> It's not recommended to delete it from the bricks, so avoid that if possible.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:
> 
> 
> 
> 
> 
> Hi Strahil,
>
> thank you for your answer.
> The directory is empty and no immutable bit has been assigned to it.
> 
> [athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]>
>  ls -la
> total 8
> drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
> drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
> 
> Any other idea related to this issue?
> Many thanks,
> Mauro
> 
> 
>> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
>> 
>> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici 
>>  wrote:
>>> Dear All,
>>> 
>>> some users who regularly use our gluster file system are experiencing a
>>> strange error when attempting to remove an empty directory.
>>> All bricks are up and running and no particular error has been detected,
>>> but they are not able to remove it successfully.
>>> 
>>> This is the error they are receiving:
>>> 
>>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>>> empty
>>> 
>>> I tried to delete this directory as the root user, without success.
>>> Do you have some suggestions to solve this issue?
>>> 
>>> Thank you in advance.
>>> Kind Regards,
>>> Mauro
>> 
>> What do you have in 'RECOVERY20190416/GlobNative/20190505'?
>>
>> Maybe you have an immutable bit (chattr +i) set on some file/folder?
>>
>> Best Regards,
>> Strahil Nikolov
> 
> 
> 









Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Mauro Tridici
Hi Strahil,

unfortunately, no process is holding the file or the directory.
Do you know if some other community user could help me?

Thank you,
Mauro



> On 25 Mar 2020, at 21:08, Strahil Nikolov  wrote:
> 
> You can also check if there is a process holding a file that was deleted 
> there:
> lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
> 
> If it's not that one, I'm out of ideas :)
>
> It's not recommended to delete it from the bricks, so avoid that if possible.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:
> 
> 
> 
> 
> 
> Hi Strahil,
>
> thank you for your answer.
> The directory is empty and no immutable bit has been assigned to it.
> 
> [athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]>
>  ls -la
> total 8
> drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
> drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
> 
> Any other idea related to this issue?
> Many thanks,
> Mauro
> 
> 
>> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
>> 
>> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici 
>>  wrote:
>>> Dear All,
>>> 
>>> some users who regularly use our gluster file system are experiencing a
>>> strange error when attempting to remove an empty directory.
>>> All bricks are up and running and no particular error has been detected,
>>> but they are not able to remove it successfully.
>>> 
>>> This is the error they are receiving:
>>> 
>>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>>> empty
>>> 
>>> I tried to delete this directory as the root user, without success.
>>> Do you have some suggestions to solve this issue?
>>> 
>>> Thank you in advance.
>>> Kind Regards,
>>> Mauro
>> 
>> What do you have in 'RECOVERY20190416/GlobNative/20190505'?
>>
>> Maybe you have an immutable bit (chattr +i) set on some file/folder?
>>
>> Best Regards,
>> Strahil Nikolov
> 
> 
> 








Re: [Gluster-users] [EXT] cannot remove empty directory on gluster file system

2020-03-25 Thread Stefan Solbrig
Hi,

This happened to me many times.  In my case, the reason was this:

On the bricks, there were gluster link files present (the zero-byte files with
permissions = 1000 that are created when the real file lives on a different
brick than the one that corresponds to the hash), but the link target had gone.

My solution was: go to the respective directories directly on the bricks, check
whether there are link files present but no other files, and then delete the
link files manually. This worked for me. (I have a distribute-only system. No
guarantee.)
However, this might leave stale hard links in /path/to/brick/.glusterfs, but
you can find these with

find /path/to/brick/.glusterfs -type f -links 1

Be careful when deleting these as well; double-check that they really don't
link anywhere.
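
For reference, a sketch of how such link files can be spotted on a brick; the
zero-byte/mode-1000 pattern and the trusted.glusterfs.dht.linkto xattr are how
DHT marks them, but the paths below are placeholders and this is a starting
point rather than an exact recipe:

# link files are zero-byte entries whose permission bits are exactly 1000 (---------T)
find /path/to/brick/RECOVERY20190416/GlobNative/20190505 -type f -size 0 -perm 1000

# a genuine link file carries an xattr naming the subvolume it points to
getfattr -n trusted.glusterfs.dht.linkto -e text /path/to/brick/path/to/suspect-file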

I haven't discovered the original reason why there are stale glusterfs link
files. I suspect they are left behind when an unlink operation fails silently
while one of the bricks is temporarily down, but I'm not sure.

best wishes,
Stefan

> On 25.03.2020 at 21:08, Strahil Nikolov wrote:
> 
> You can also check if there is a process holding a file that was deleted 
> there:
> lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
> 
> If it's not that one, I'm out of ideas :)
>
> It's not recommended to delete it from the bricks, so avoid that if possible.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:
> 
> 
> 
> 
> 
> Hi Strahil,
>
> thank you for your answer.
> The directory is empty and no immutable bit has been assigned to it.
> 
> [athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]>
>  ls -la
> total 8
> drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
> drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
> 
> Any other idea related to this issue?
> Many thanks,
> Mauro
> 
> 
>> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
>> 
>> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici 
>>  wrote:
>>> Dear All,
>>> 
>>> some users who regularly use our gluster file system are experiencing a
>>> strange error when attempting to remove an empty directory.
>>> All bricks are up and running and no particular error has been detected,
>>> but they are not able to remove it successfully.
>>> 
>>> This is the error they are receiving:
>>> 
>>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>>> empty
>>> 
>>> I tried to delete this directory as the root user, without success.
>>> Do you have some suggestions to solve this issue?
>>> 
>>> Thank you in advance.
>>> Kind Regards,
>>> Mauro
>> 
>> What do you have in 'RECOVERY20190416/GlobNative/20190505'?
>>
>> Maybe you have an immutable bit (chattr +i) set on some file/folder?
>>
>> Best Regards,
>> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> 







Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
You can also check if there is a process holding a file that was deleted there:
lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
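
A slightly expanded sketch of that check, for reference (assuming GNU/Linux
lsof: +D recurses into the directory, and unlinked-but-open files show
"(deleted)" in the NAME column):

# list every open file under the directory
lsof +D /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
# narrow it down to files that were unlinked but are still held open
lsof +D /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505 | grep -i deleted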

If it's not that one, I'm out of ideas :)

It's not recommended to delete it from the bricks, so avoid that if possible.

Best Regards,
Strahil Nikolov






On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:





Hi Strahil,

thank you for your answer.
The directory is empty and no immutable bit has been assigned to it.

[athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]> 
ls -la
total 8
drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..

Any other idea related to this issue?
Many thanks,
Mauro


> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
> 
> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici  
> wrote:
>> Dear All,
>> 
>> some users who regularly use our gluster file system are experiencing a
>> strange error when attempting to remove an empty directory.
>> All bricks are up and running and no particular error has been detected,
>> but they are not able to remove it successfully.
>> 
>> This is the error they are receiving:
>> 
>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>> empty
>> 
>> I tried to delete this directory as the root user, without success.
>> Do you have some suggestions to solve this issue?
>> 
>> Thank you in advance.
>> Kind Regards,
>> Mauro
> 
> What do you have in 'RECOVERY20190416/GlobNative/20190505'?
>
> Maybe you have an immutable bit (chattr +i) set on some file/folder?
>
> Best Regards,
> Strahil Nikolov









Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Mauro Tridici
Hi Strahil,

thank you for your answer.
The directory is empty and no immutable bit has been assigned to it.

[athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]> 
ls -la
total 8
drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..

Any other idea related to this issue?
Many thanks,
Mauro


> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
> 
> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici  
> wrote:
>> Dear All,
>> 
>> some users who regularly use our gluster file system are experiencing a
>> strange error when attempting to remove an empty directory.
>> All bricks are up and running and no particular error has been detected,
>> but they are not able to remove it successfully.
>> 
>> This is the error they are receiving:
>> 
>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>> empty
>> 
>> I tried to delete this directory as the root user, without success.
>> Do you have some suggestions to solve this issue?
>> 
>> Thank you in advance.
>> Kind Regards,
>> Mauro
> 
> What do you have in 'RECOVERY20190416/GlobNative/20190505'?
>
> Maybe you have an immutable bit (chattr +i) set on some file/folder?
>
> Best Regards,
> Strahil Nikolov








Re: [Gluster-users] Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX

2020-03-25 Thread Senén Vidal Blanco
Hi,
I have verified that the system is read-only; it does not let me delete or
create files inside the slave volume.
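
For reference, a minimal way to confirm that state from the gluster side,
assuming the slave volume name archivossamil from the session shown elsewhere
in this thread:

# print the current value of the read-only option on the slave volume
gluster volume get archivossamil features.read-only
# the volume info quoted later in this thread shows this option "on", which
# would match the behaviour described above
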
I am sending you the logs of what I have from before stopping the geo-replication.

Archivos.log
--

[2020-03-18 20:47:57.950339] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/
usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 7.3 (args: /
usr/sbin/glusterfs --acl --process-name fuse --volfile-server=samil --volfile-
id=/archivossamil /archivos) 
[2020-03-18 20:47:57.952274] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid 
of current running process is 5779
[2020-03-18 20:47:57.959404] I [MSGID: 101190] [event-epoll.c:
682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 
[2020-03-18 20:47:57.959535] I [MSGID: 101190] [event-epoll.c:
682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 
[2020-03-18 20:47:57.978600] I [MSGID: 114020] [client.c:2436:notify] 0-
archivossamil-client-0: parent translators are ready, attempting connect on 
transport 
Final graph:
+--
+
  1: volume archivossamil-client-0
  2: type protocol/client
  3: option ping-timeout 42
  4: option remote-host samil
  5: option remote-subvolume /brickarchivos/archivos
  6: option transport-type socket
  7: option transport.address-family inet
  8: option username 892b695d-8a06-42d8-9502-87146d2eab50
  9: option password 37130c51-989b-420d-a8d2-46bb3255435a
 10: option transport.socket.ssl-enabled off
 11: option transport.tcp-user-timeout 0
 12: option transport.socket.keepalive-time 20
 13: option transport.socket.keepalive-interval 2
 14: option transport.socket.keepalive-count 9
 15: option send-gids true
 16: end-volume
 17:  
 18: volume archivossamil-dht
 19: type cluster/distribute
 20: option lock-migration off
 21: option force-migration off
 22: subvolumes archivossamil-client-0
 23: end-volume
 24:  
 25: volume archivossamil-utime
 26: type features/utime
 27: option noatime on
 28: subvolumes archivossamil-dht
 29: end-volume
 30:  
 31: volume archivossamil-write-behind
 32: type performance/write-behind
 33: subvolumes archivossamil-utime
 34: end-volume
 35:  
 36: volume archivossamil-read-ahead
 37: type performance/read-ahead
 38: subvolumes archivossamil-write-behind
 39: end-volume
 40:  
 41: volume archivossamil-readdir-ahead
 42: type performance/readdir-ahead
 43: option parallel-readdir off
 44: option rda-request-size 131072
 45: option rda-cache-limit 10MB
 46: subvolumes archivossamil-read-ahead
 47: end-volume
 48:  
 49: volume archivossamil-io-cache
 50: type performance/io-cache
 51: subvolumes archivossamil-readdir-ahead
 52: end-volume
 53:  
 54: volume archivossamil-open-behind
 55: type performance/open-behind
 56: subvolumes archivossamil-io-cache
 57: end-volume
 58:  
 59: volume archivossamil-quick-read
 60: type performance/quick-read
 61: subvolumes archivossamil-open-behind
 62: end-volume
 63:  
 64: volume archivossamil-md-cache
 65: type performance/md-cache
 66: option cache-posix-acl true
 67: subvolumes archivossamil-quick-read
 68: end-volume
 69:  
 70: volume archivossamil-io-threads
 71: type performance/io-threads
 72: subvolumes archivossamil-md-cache
 73: end-volume
 74:  
 75: volume archivossamil
 76: type debug/io-stats
 77: option log-level INFO
 78: option threads 16
 79: option latency-measurement off
 80: option count-fop-hits off
 81: option global-threading off
 82: subvolumes archivossamil-io-threads
 83: end-volume
 84:  
 85: volume posix-acl-autoload
 86: type system/posix-acl
 87: subvolumes archivossamil
 88: end-volume
 89:  
 90: volume meta-autoload
 91: type meta
 92: subvolumes posix-acl-autoload
 93: end-volume
 94:  
+--
+
[2020-03-18 20:47:57.979407] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-
archivossamil-client-0: changing port to 49153 (from 0)
[2020-03-18 20:47:57.979662] I [socket.c:865:__socket_shutdown] 0-
archivossamil-client-0: intentional socket shutdown(12)
[2020-03-18 20:47:57.980195] I [MSGID: 114057] [client-handshake.c:
1375:select_server_supported_programs] 0-archivossamil-client-0: Using Program 
GlusterFS 4.x v1, Num (1298437), Version (400) 
[2020-03-18 20:47:57.983891] I [MSGID: 114046] [client-handshake.c:
1105:client_setvolume_cbk] 0-archivossamil-client-0: Connected to 
archivossamil-client-0, attached to remote volume '/brickarchivos/archivos'. 
[2020-03-18 20:47:57.985173] I [fuse-bridge.c:5166:fuse_init] 0-glusterfs-
fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.27
[2020-03-18 20:47:57.985205] I [fuse-bridge.c:5777:fuse_graph_sync] 0-fuse: 
switched to graph 0
[2020-03-18 21:04:09.804023] I 

Re: [Gluster-users] Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX

2020-03-25 Thread Sunny Kumar
Hi Senén,

Did you by any chance perform any operation on the slave volume, like
deleting data directly from it?

Also, if possible, please share the geo-rep slave logs.
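
For reference, the geo-rep config quoted below already names the slave-side log
directory; on the slave node the logs could be collected with something like
this (paths taken from the slave_log_file / slave_gluster_log_file entries; the
exact file names depend on ${master_node} and ${master_brick_id}):

# list the slave-side geo-replication logs for this session
ls -l /var/log/glusterfs/geo-replication-slaves/archivosvao_samil_archivossamil/
# bundle them up for sharing
tar czf georep-slave-logs.tar.gz /var/log/glusterfs/geo-replication-slaves/archivosvao_samil_archivossamil/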

/sunny

On Wed, Mar 25, 2020 at 9:15 AM Senén Vidal Blanco
 wrote:
>
> Hi,
> I have a problem with the Geo-Replication system.
> The first synchronization was successful a few days ago, but after running
> for a while I ran into an error message preventing the sync from continuing.
> Here is a summary of the configuration:
>
> Debian 10
> Glusterfs 7.3
> Master volume: archivosvao
> Slave volume: archivossamil
>
> volume geo-replication archivosvao samil::archivossamil config
> access_mount:false
> allow_network:
> change_detector:changelog
> change_interval:5
> changelog_archive_format:%Y%m
> changelog_batch_size:727040
> changelog_log_file:/var/log/glusterfs/geo-replication/
> archivosvao_samil_archivossamil/changes-${local_id}.log
> changelog_log_level:INFO
> checkpoint:0
> cli_log_file:/var/log/glusterfs/geo-replication/cli.log
> cli_log_level:INFO
> connection_timeout:60
> georep_session_working_dir:/var/lib/glusterd/geo-replication/
> archivosvao_samil_archivossamil/
> gfid_conflict_resolution:true
> gluster_cli_options:
> gluster_command:gluster
> gluster_command_dir:/usr/sbin
> gluster_log_file:/var/log/glusterfs/geo-replication/
> archivosvao_samil_archivossamil/mnt-${local_id}.log
> gluster_log_level:INFO
> gluster_logdir:/var/log/glusterfs
> gluster_params:aux-gfid-mount acl
> gluster_rundir:/var/run/gluster
> glusterd_workdir:/var/lib/glusterd
> gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
> ignore_deletes:false
> isolated_slaves:
> log_file:/var/log/glusterfs/geo-replication/archivosvao_samil_archivossamil/
> gsyncd.log
> log_level:INFO
> log_rsync_performance:false
> master_disperse_count:1
> master_distribution_count:1
> master_replica_count:1
> max_rsync_retries:10
> meta_volume_mnt:/var/run/gluster/shared_storage
> pid_file:/var/run/gluster/gsyncd-archivosvao-samil-archivossamil.pid
> remote_gsyncd:
> replica_failover_interval:1
> rsync_command:rsync
> rsync_opt_existing:true
> rsync_opt_ignore_missing_args:true
> rsync_options:
> rsync_ssh_options:
> slave_access_mount:false
> slave_gluster_command_dir:/usr/sbin
> slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/
> archivosvao_samil_archivossamil/mnt-${master_node}-${master_brick_id}.log
> slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/
> archivosvao_samil_archivossamil/mnt-mbr-${master_node}-${master_brick_id}.log
> slave_gluster_log_level:INFO
> slave_gluster_params:aux-gfid-mount acl
> slave_log_file:/var/log/glusterfs/geo-replication-slaves/
> archivosvao_samil_archivossamil/gsyncd.log
> slave_log_level:INFO
> slave_timeout:120
> special_sync_mode:
> ssh_command:ssh
> ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/
> lib/glusterd/geo-replication/secret.pem
> ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /
> var/lib/glusterd/geo-replication/tar_ssh.pem
> ssh_port:22
> state_file:/var/lib/glusterd/geo-replication/archivosvao_samil_archivossamil/
> monitor.status
> state_socket_unencoded:
> stime_xattr_prefix:trusted.glusterfs.c7fa7778-
> f2e4-48f9-8817-5811c09964d5.8d4c7ef7-35fc-497a-9425-66f4aced159b
> sync_acls:true
> sync_jobs:3
> sync_method:rsync
> sync_xattrs:true
> tar_command:tar
> use_meta_volume:false
> use_rsync_xattrs:false
> working_dir:/var/lib/misc/gluster/gsyncd/archivosvao_samil_archivossamil/
>
>
> gluster> volume info
>
> Volume Name: archivossamil
> Type: Distribute
> Volume ID: 8d4c7ef7-35fc-497a-9425-66f4aced159b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: samil:/brickarchivos/archivos
> Options Reconfigured:
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> features.read-only: on
>
> Volume Name: archivosvao
> Type: Distribute
> Volume ID: c7fa7778-f2e4-48f9-8817-5811c09964d5
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: vao:/brickarchivos/archivos
> Options Reconfigured:
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> geo-replication.indexing: on
> geo-replication.ignore-pid-check: on
> changelog.changelog: on
>
> Volume Name: home
> Type: Replicate
> Volume ID: 74522542-5d7a-4fdd-9cea-76bf1ff27e7d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: samil:/brickhome/home
> Brick2: vao:/brickhome/home
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
>
>
> These errors appear in the master logs:
>
>
>
> .
>
> [2020-03-25 09:00:12.554226] I [master(worker /brickarchivos/archivos):
> 1991:syncjob] Syncer: Sync Time Taken  job=1  num_files=2  return_code=0
> duration=0.0483
> [2020-03-25 

Re: [Gluster-users] Georeplication questions

2020-03-25 Thread wodel youchi
Hi again,

I have more questions about geo-replication on oVirt-RHHI.

I need to understand some things about DR on oVirt-HI.

1 - What does "Scheduling regular backups using geo-replication" mean
(point 3.3.4 of the RHHI 1.7 document "Maintaining RHHI")?
 - Does this mean creating a checkpoint?
 - If yes, does this mean that the geo-replication process will sync
data up to that checkpoint and then stop the synchronization, then repeat
the same cycle the day after? Does this mean that the minimum RPO is one
day?

2 - Why use this checkpoint? Why not let the geo-replication work
all the time?
3 - Geo-replication can be started/stopped at any time; could this be
a problem when executing a disaster recovery?
4 - I am running a test environment, and it's not up all the time, but I
noticed this: if the geo-replication is running and the time of the
schedule (the checkpoint hour) is reached, the replication is stopped.
What does this mean?
5 - I created a snapshot of a VM on the source Manager, I synced the volume,
then I executed a DR. The VM was started on the Target Manager, but the VM
didn't have its snapshot. Any idea?

Regards and be safe.


On Fri, 4 Oct 2019 at 06:32, Satheesaran Sundaramoorthi wrote:

> Hello Wodel,
>
> Glad to hear that you are considering gluster geo-replication for your
> oVirt-gluster HC infra.
> More answers inline.
>
> On Wed, Oct 2, 2019 at 3:54 PM wodel youchi 
> wrote:
>
>> Hi,
>>
>> oVirt Hyperconverged disaster recovery uses georeplication to replicate
>> the volume containing the VMs.
>>
>> What I know about georeplication is that it is an asynchronous
>> replication.
>>
>> My questions are :
>> - How the replication of the VMs is done, are only the changes
>> synchronized?
>>
> Gluster geo-replication should be configured from the gluster volume that
> backs the storage domain at your site to the one at the remote site.
> Gluster geo-rep uses 'rsync' under the hood, so after the first sync,
> only the changes are synchronized.
>
>> - What is the interval of this replication? can this interval be
>> configured taking into consideration the bandwidth of the replication link.
>>
> You have to make use of oVirt Manager to schedule your remote sync.
> Click on gluster volume backed storage domain -> click on 'Remote data
> sync setup' -> create a schedule
>
>> - How can the RPO be measured in the case of a georeplication?
>>
> Its completely based on the schedule that you set.
> Geo-replication process by itself, sets the checkpoint at the master
> volume and it guarantees
> that contents will be synced till that checkpoint to the slave, when
> completed.
>
> Thanks,
> Satheesaran S ( sas )
>
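
For context, a hedged sketch of the gluster CLI calls that correspond to such a
checkpoint-based sync (the master/slave names below are placeholders for an
existing geo-replication session):

# set a checkpoint at the current time on the master side of the session
gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now
# watch the session until the checkpoint is reported as completed
gluster volume geo-replication mastervol slavehost::slavevol status detail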






Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici  
wrote:
>Dear All,
>
>some users who regularly use our gluster file system are experiencing a
>strange error when attempting to remove an empty directory.
>All bricks are up and running and no particular error has been detected,
>but they are not able to remove it successfully.
>
>This is the error they are receiving:
>
>[athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>empty
>
>I tried to delete this directory as the root user, without success.
>Do you have some suggestions to solve this issue?
>
>Thank you in advance.
>Kind Regards,
>Mauro

What do you have in 'RECOVERY20190416/GlobNative/20190505'?

Maybe you have an immutable bit (chattr +i) set on some file/folder?
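
A minimal way to check for that, using the path from the report above (-d lists
the directory's own attributes instead of its contents, -R recurses, -a includes
hidden entries; if the FUSE mount refuses the ioctl, run the same checks on the
corresponding brick paths instead):

# look for the immutable (i) or append-only (a) flags on the directory itself
lsattr -d RECOVERY20190416/GlobNative/20190505
# and on anything underneath it
lsattr -Ra RECOVERY20190416/GlobNative/20190505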

Best Regards,
Strahil Nikolov






[Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Mauro Tridici
Dear All,

some users who regularly use our gluster file system are experiencing a strange
error when attempting to remove an empty directory.
All bricks are up and running and no particular error has been detected, but
they are not able to remove it successfully.

This is the error they are receiving:

[athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not empty

I tried to delete this directory as the root user, without success.
Do you have some suggestions to solve this issue?

Thank you in advance.
Kind Regards,
Mauro





[Gluster-users] Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX

2020-03-25 Thread Senén Vidal Blanco
Hi,
I have a problem with the Geo-Replication system.
The first synchronization was successful a few days ago, but after running for
a while I ran into an error message preventing the sync from continuing.
Here is a summary of the configuration:

Debian 10
Glusterfs 7.3
Master volume: archivosvao
Slave volume: archivossamil

volume geo-replication archivosvao samil::archivossamil config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/
archivosvao_samil_archivossamil/changes-${local_id}.log
changelog_log_level:INFO
checkpoint:0
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/
archivosvao_samil_archivossamil/
gfid_conflict_resolution:true
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin
gluster_log_file:/var/log/glusterfs/geo-replication/
archivosvao_samil_archivossamil/mnt-${local_id}.log
gluster_log_level:INFO
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/archivosvao_samil_archivossamil/
gsyncd.log
log_level:INFO
log_rsync_performance:false
master_disperse_count:1
master_distribution_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-archivosvao-samil-archivossamil.pid
remote_gsyncd:
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:true
rsync_opt_ignore_missing_args:true
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/
archivosvao_samil_archivossamil/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/
archivosvao_samil_archivossamil/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/
archivosvao_samil_archivossamil/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/
lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /
var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/archivosvao_samil_archivossamil/
monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.c7fa7778-
f2e4-48f9-8817-5811c09964d5.8d4c7ef7-35fc-497a-9425-66f4aced159b
sync_acls:true
sync_jobs:3
sync_method:rsync
sync_xattrs:true
tar_command:tar
use_meta_volume:false
use_rsync_xattrs:false
working_dir:/var/lib/misc/gluster/gsyncd/archivosvao_samil_archivossamil/


gluster> volume info
 
Volume Name: archivossamil
Type: Distribute
Volume ID: 8d4c7ef7-35fc-497a-9425-66f4aced159b
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: samil:/brickarchivos/archivos
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
features.read-only: on
 
Volume Name: archivosvao
Type: Distribute
Volume ID: c7fa7778-f2e4-48f9-8817-5811c09964d5
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: vao:/brickarchivos/archivos
Options Reconfigured:
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
 
Volume Name: home
Type: Replicate
Volume ID: 74522542-5d7a-4fdd-9cea-76bf1ff27e7d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: samil:/brickhome/home
Brick2: vao:/brickhome/home
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet


These errors appear in the master logs:



.

[2020-03-25 09:00:12.554226] I [master(worker /brickarchivos/archivos):
1991:syncjob] Syncer: Sync Time Taken  job=1  num_files=2  return_code=0
duration=0.0483
[2020-03-25 09:00:12.772688] I [master(worker /brickarchivos/archivos):
1991:syncjob] Syncer: Sync Time Taken  job=2  num_files=3  return_code=0
duration=0.0539
[2020-03-25 09:00:13.112986] I [master(worker /brickarchivos/archivos):
1991:syncjob] Syncer: Sync Time Taken  job=1  num_files=2  return_code=0
duration=0.0575
[2020-03-25 09:00:13.311976] I [master(worker /brickarchivos/archivos):
1991:syncjob] Syncer: Sync Time Taken  job=2  num_files=1  return_code=0
duration=0.0379
[2020-03-25 09:00:13.382845]