Re: [Gluster-users] VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding

2018-12-13 Thread Marco Lorenzo Crociani

Hi,
is there a way to recover files from "Stale file handle" errors?

Here are some of the tests we have done (a command-level sketch follows the list):

- compared the extended attributes of all three replicas of the involved
shard; the attributes are identical.


- compared the SHA512 message digests of all three replicas of the involved
shard; the digests are identical.


- tried to delete the shard, along with its hard link, from one replica at a
time. The shard is always rebuilt correctly, but the error on the client
persists.
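
For reference, a minimal sketch of the xattr and digest comparisons above; the
brick path, GFID, and shard index are placeholders, since the real values are
not shown in this thread:

# Placeholders (hypothetical): set BRICK to the local brick path and
# SHARD to the affected shard, named <gfid-of-base-file>.<shard-index>.
BRICK=/gluster/DATA/brick
SHARD=f7a8b9c0-1234-5678-9abc-def012345678.42

# Dump all extended attributes of the shard on this brick.
getfattr -d -m . -e hex "$BRICK/.shard/$SHARD"

# Compute the SHA512 digest of the shard on this brick.
sha512sum "$BRICK/.shard/$SHARD"

Running both commands on each of the three replica bricks and diffing the
output is what produced the "identical" results above.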


Regards,

--
Marco Crociani

Il 22/11/18 13:19, Marco Lorenzo Crociani ha scritto:

Hi,
I opened a bug against Gluster because I am getting read errors on files on a
Gluster volume:

https://bugzilla.redhat.com/show_bug.cgi?id=1652548

The affected files are many of the VM images in the oVirt DATA storage domain.
oVirt pauses the VMs because of unknown storage errors.
It is impossible to copy/clone these VMs or to manage some of their snapshots.
The low-level errors are "stale file handle".

The volume is distribute 2 replica 3 with sharding.
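
For anyone trying to reproduce this, a sketch of how a VM image can be mapped
to its shards; the mount point and brick paths below are made-up examples,
only the .shard naming scheme and the virtual gfid xattr come from Gluster
itself:

# Hypothetical paths; adjust to the real storage-domain mount point and
# brick layout.
IMAGE=/rhev/data-center/mnt/glusterSD/server:_DATA/path/to/vm-image
BRICK=/gluster/DATA/brick

# GFID of the base file, read through the FUSE client
# (glusterfs.gfid.string is a virtual xattr exposed on the mount).
GFID=$(getfattr -n glusterfs.gfid.string --only-values "$IMAGE")

# On each brick, the shards of that file are stored as .shard/<gfid>.<index>.
ls -l "$BRICK"/.shard/"$GFID".*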

Should I open a bug also on oVirt?

Gluster 3.12.15-1.el7
oVirt 4.2.6.4-1.el7

Regards,




___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding

2018-11-22 Thread Marco Lorenzo Crociani

Hi,
I opened a bug against Gluster because I am getting read errors on files on a
Gluster volume:

https://bugzilla.redhat.com/show_bug.cgi?id=1652548

The affected files are many of the VM images in the oVirt DATA storage domain.
oVirt pauses the VMs because of unknown storage errors.
It is impossible to copy/clone these VMs or to manage some of their snapshots.
The low-level errors are "stale file handle".

The volume is distribute 2 replica 3 with sharding.

Should I open a bug also on oVirt?

Gluster 3.12.15-1.el7
oVirt 4.2.6.4-1.el7

Regards,

--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-12 Thread Marco Lorenzo Crociani

On 09/04/2018 21:36, Shyam Ranganathan wrote:

On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:

On 06/04/2018 19:33, Shyam Ranganathan wrote:

Hi,

We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.

Is there a specific fix that you are looking for in the release?



Hi,
yes, it's this: https://review.gluster.org/19730
https://bugzilla.redhat.com/show_bug.cgi?id=1442983


We will roll out 3.10.12 including this fix in a few days. We have a 3.12
build and release tomorrow, hence we are looking to get 3.10 done by this
weekend.

Thanks for your patience!



Hi,
ok thanks, stand by for the release!

--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-09 Thread Marco Lorenzo Crociani

On 06/04/2018 19:33, Shyam Ranganathan wrote:

Hi,

We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.

Is there a specific fix that you are looking for in the release?



Hi,
yes, it's this: https://review.gluster.org/19730
https://bugzilla.redhat.com/show_bug.cgi?id=1442983

Regards,

--
Marco Crociani


Thanks,
Shyam

On 04/06/2018 11:47 AM, Marco Lorenzo Crociani wrote:

Hi,
are there any news for 3.10.12 release?

Regards,



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-06 Thread Marco Lorenzo Crociani

Hi,
are there any news for 3.10.12 release?

Regards,

--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error

2018-03-16 Thread Marco Lorenzo Crociani

On 16/03/2018 13:24, Atin Mukherjee wrote:
I have sent a backport request (https://review.gluster.org/19730) to the
release-3.10 branch. Hopefully this fix will be picked up in the next update.


Ok thanks!

--
Marco Crociani
Prisma Telecom Testing S.r.l.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error

2018-03-16 Thread Marco Lorenzo Crociani

Hi,
I'm hitting bug https://bugzilla.redhat.com/show_bug.cgi?id=1442983
on glusterfs 3.10.11 and oVirt 4.1.9 (and before on glusterfs 3.8.14)

The bug report says fixed in glusterfs-3.12.2-1

Is there a plan to backport the fix to the 3.10.x releases, or is the only
way to fix it to upgrade to 3.12?
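
For what it's worth, the workaround usually suggested for this class of lock
errors, until a fixed build is installed, is to find the node whose glusterd
still holds the stale volume lock and restart glusterd there. A rough sketch;
the log path and message wording may differ between releases:

# Look for the UUID reported as holding the lock (on older releases the
# log may be /var/log/glusterfs/etc-glusterfs-glusterd.vol.log).
grep -i "lock" /var/log/glusterfs/glusterd.log | tail -n 20

# Map that UUID to a peer; the local node's UUID is in
# /var/lib/glusterd/glusterd.info.
gluster pool list

# On the node holding the stale lock, restart the management daemon
# (glusterd is management-only, so bricks and client I/O keep running).
systemctl restart glusterd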


Regards,

--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2018-03-15 Thread Marco Lorenzo Crociani

Hi,
have you found a solution?

Regards,

--
Marco Crociani
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Really long df-h and high swap usage after rsync 3.7.11

2016-07-21 Thread Marco Lorenzo Crociani

Hello,
I ran swapoff and then df -h was fast again.

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958        5720         280          16        1956        1825
Swap:             0           0           0


time df -h
real    0m0.054s
user    0m0.000s
sys     0m0.003s

Should I reduce swappiness? It is currently 60.
Is all that RAM really needed to mount twelve GlusterFS volumes (~3764 GB in
total)?
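
In case it helps, lowering swappiness is a single sysctl change; a minimal
sketch (the value 10 is only an example, not a recommendation from this
thread):

# Check the current value (60 is the usual default).
sysctl vm.swappiness

# Lower it at runtime.
sysctl -w vm.swappiness=10

# Persist the change across reboots.
echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf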


Regards,

--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
web:  http://www.prismatelecomtesting.com




On 20/07/2016 11:46, Marco Lorenzo Crociani wrote:

Hi,
I have a CentOS 7 machine with rsnapshot that mounts GlusterFS volumes and
runs backups every day.


# yum list installed | grep glusterfs
glusterfs.x86_64                   3.7.11-1.el7
glusterfs-api.x86_64               3.7.11-1.el7
glusterfs-client-xlators.x86_64    3.7.11-1.el7
glusterfs-fuse.x86_64              3.7.11-1.el7
glusterfs-libs.x86_64              3.7.11-1.el7

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958        3401         213           4        4343        4146
Swap:          8063        2467        5596


After running the backups df -h is really slow:

# time df -h
Filesystem            Size  Used Avail Use% Mounted on
[...]
s25gfs.ovirt:VOL_***  100G   40G 61G  40% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 100G   50G 51G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_***200G  138G 63G  69% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 500G  412G 89G  83% /mnt/VOL_***
s25gfs.ovirt:VOL_***  500G  246G255G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 200G   98G103G  49% /mnt/VOL_***
s25gfs.ovirt:VOL_***200G   90G111G  45% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   43G 58G  43% /mnt/VOL_***
s25gfs.ovirt:VOL_***500G  385G116G  77% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 100G   52G 49G  52% /mnt/VOL_***
s25gfs.ovirt:VOL_***100G   15G 86G  15% /mnt/VOL_***
s25gfs.ovirt:VOL_***   400G  348G 53G  87% /mnt/VOL_***

real    0m24.068s
user    0m0.003s
sys     0m0.002s

while on another machine it took:
real    0m0.057s
user    0m0.000s
sys     0m0.006s

after unmounting all Gluster volumes and mounting them back:

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958         652        3014           4        4291        6915
Swap:          8063          55        8008

time df -h
real    0m0.037s
user    0m0.001s
sys     0m0.002s

I mount the volumes in fstab with:
s25gfs.ovirt:VOL_***  /mnt/VOL_***  glusterfs  defaults,acl  0 0


Is there any memory leak or something nasty?
Regards,



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Really long df-h and high swap usage after rsync 3.7.11

2016-07-20 Thread Marco Lorenzo Crociani

Hi,
I have a CentOS 7 machine with rsnapshot that mounts GlusterFS volumes and
runs backups every day.


# yum list installed | grep glusterfs
glusterfs.x86_64                   3.7.11-1.el7
glusterfs-api.x86_64               3.7.11-1.el7
glusterfs-client-xlators.x86_64    3.7.11-1.el7
glusterfs-fuse.x86_64              3.7.11-1.el7
glusterfs-libs.x86_64              3.7.11-1.el7

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958        3401         213           4        4343        4146
Swap:          8063        2467        5596


After running the backups df -h is really slow:

# time df -h
Filesystem            Size  Used Avail Use% Mounted on
[...]
s25gfs.ovirt:VOL_***  100G   40G 61G  40% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 100G   50G 51G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_***200G  138G 63G  69% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 500G  412G 89G  83% /mnt/VOL_***
s25gfs.ovirt:VOL_***  500G  246G255G  50% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 200G   98G103G  49% /mnt/VOL_***
s25gfs.ovirt:VOL_***200G   90G111G  45% /mnt/VOL_***
s25gfs.ovirt:VOL_***  100G   43G 58G  43% /mnt/VOL_***
s25gfs.ovirt:VOL_***500G  385G116G  77% /mnt/VOL_***
s25gfs.ovirt:VOL_*** 100G   52G 49G  52% /mnt/VOL_***
s25gfs.ovirt:VOL_***100G   15G 86G  15% /mnt/VOL_***
s25gfs.ovirt:VOL_***   400G  348G 53G  87% /mnt/VOL_***

real    0m24.068s
user    0m0.003s
sys     0m0.002s

while on another machine it took:
real    0m0.057s
user    0m0.000s
sys     0m0.006s

after unmounting all Gluster volumes and mounting them back:

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7958         652        3014           4        4291        6915
Swap:          8063          55        8008

time df -h
real    0m0.037s
user    0m0.001s
sys     0m0.002s

I mount the volumes in fstab with:
s25gfs.ovirt:VOL_***  /mnt/VOL_***  glusterfs  defaults,acl  0 0


Is there any memory leak or something nasty?
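
One way to check whether the FUSE client processes themselves are growing (as
opposed to ordinary page cache) is to watch their resident set size per mount
before and after a backup run; a small sketch, assuming the standard
glusterfs client process name:

# One glusterfs client process per FUSE mount; RSS is in kilobytes.
ps -C glusterfs -o pid,rss,vsz,etime,args --sort=-rss

# Re-run after a backup and compare RSS per volume to spot steady growth.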
Regards,

--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
web:  http://www.prismatelecomtesting.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error

2015-12-09 Thread Marco Lorenzo Crociani

The directory is present:

# ls -la /var/lib/glusterd/
total 60
drwxr-xr-x. 13 root root 4096  3 dic 15:34 .
drwxr-xr-x. 25 root root 4096  9 dic 12:40 ..
drwxr-xr-x.  3 root root 4096 24 ott 10:06 bitd
-rw---.  1 root root   66  3 dic 15:34 glusterd.info
drwxr-xr-x.  3 root root 4096  2 dic 17:24 glustershd
drwxr-xr-x.  2 root root 4096 24 ott 16:44 groups
drwxr-xr-x.  3 root root 4096  7 ott 18:27 hooks
drwxr-xr-x.  3 root root 4096  2 dic 17:24 nfs
-rw---.  1 root root   24  9 lug 18:19 options
drwxr-xr-x.  2 root root 4096  3 dic 15:34 peers
drwxr-xr-x.  3 root root 4096  2 dic 17:24 quotad
drwxr-xr-x.  3 root root 4096 24 ott 10:06 scrub
drwxr-xr-x.  2 root root 4096  3 dic 15:34 snaps
drwxr-xr-x.  2 root root 4096 24 ott 16:44 ss_brick
drwxr-xr-x.  8 root root 4096 11 nov 14:03 vols
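
Until the patched hook script is released, a possible stop-gap, assuming
glusterfind is not otherwise in use on this node, is simply to recreate the
directory the hook expects and re-run it:

# Create the directory the delete-post hook tries to list.
mkdir -p /var/lib/glusterd/glusterfind

# Re-run the hook manually and check its return code.
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
echo $?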


On 09/12/2015 12:44, Aravinda wrote:
Thanks. I will fix the issue. Was the /var/lib/glusterd directory deleted
after installation (during any cleanup process)?

The cleanup script expects the /var/lib/glusterd/glusterfind directory to be
present. I will change the script to ignore the case where that directory
does not exist.


I have opened a bug for this and sent a patch to fix the issue. (Once the
review is complete, we will make it available in the 3.7.7 release.)

Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1289935
Patch: http://review.gluster.org/#/c/12923/

Thanks for reporting the issue.

regards
Aravinda

On 12/09/2015 03:35 PM, Marco Lorenzo Crociani wrote:

Hi,

# /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA

Traceback (most recent call last):
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 60, in <module>
    main()
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 43, in main
    for session in os.listdir(glusterfind_dir):
OSError: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind'



# which glusterfind
/usr/bin/glusterfind


Regards,

Marco Crociani

On 07/12/2015 14:45, Aravinda wrote:
It looks like the cleanup script failed to execute as part of the volume
delete.

Please run the following command on the failed node and let us know the
output and return code.


/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py 
--volname=VOL_ZIMBRA

echo $?

This error can be ignored if not using Glusterfind.
regards
Aravinda
On 11/11/2015 06:55 PM, Marco Lorenzo Crociani wrote:

Hi,
I removed one volume from the oVirt console.
oVirt 3.5.4
Gluster 3.7.5
CentOS release 6.7

In the logs there were these errors:


[2015-11-11 13:03:29.783491] I [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc) 
[0x7fc7d60920bc] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh 
--volname=VOL_ZIMBRA --last=no
[2015-11-11 13:03:29.789594] E [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) 
[0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh 
--volname=VOL_ZIMBRA --last=no
[2015-11-11 13:03:29.790807] I [MSGID: 106132] 
[glusterd-utils.c:1371:glusterd_service_stop] 0-management: brick 
already stopped
[2015-11-11 13:03:31.108959] I [MSGID: 106540] 
[glusterd-utils.c:4105:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV3 successfully
[2015-11-11 13:03:31.109881] I [MSGID: 106540] 
[glusterd-utils.c:4114:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV1 successfully
[2015-11-11 13:03:31.110725] I [MSGID: 106540] 
[glusterd-utils.c:4123:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NFSV3 successfully
[2015-11-11 13:03:31.111562] I [MSGID: 106540] 
[glusterd-utils.c:4132:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v4 successfully
[2015-11-11 13:03:31.112396] I [MSGID: 106540] 
[glusterd-utils.c:4141:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v1 successfully
[2015-11-11 13:03:31.113225] I [MSGID: 106540] 
[glusterd-utils.c:4150:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered ACL v3 successfully
[2015-11-11 13:03:32.212071] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd 
already stopped
[2015-11-11 13:03:32.212862] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub 
already stopped
[2015-11-11 13:03:32.213099] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick 
/gluster/VOL_ZIMBRA/brick on port 49191
[2015-11-11 13:03:32.282685] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_re

Re: [Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error

2015-12-09 Thread Marco Lorenzo Crociani

Hi,

# /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA

Traceback (most recent call last):
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 60, in <module>
    main()
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 43, in main
    for session in os.listdir(glusterfind_dir):
OSError: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind'



# which glusterfind
/usr/bin/glusterfind


Regards,

Marco Crociani

On 07/12/2015 14:45, Aravinda wrote:

It looks like the cleanup script failed to execute as part of the volume
delete.

Please run the following command on the failed node and let us know the
output and return code.


/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py 
--volname=VOL_ZIMBRA

echo $?

This error can be ignored if not using Glusterfind.
regards
Aravinda
On 11/11/2015 06:55 PM, Marco Lorenzo Crociani wrote:

Hi,
I removed one volume from the oVirt console.
oVirt 3.5.4
Gluster 3.7.5
CentOS release 6.7

In the logs there were these errors:


[2015-11-11 13:03:29.783491] I [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc) 
[0x7fc7d60920bc] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh 
--volname=VOL_ZIMBRA --last=no
[2015-11-11 13:03:29.789594] E [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) 
[0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh 
--volname=VOL_ZIMBRA --last=no
[2015-11-11 13:03:29.790807] I [MSGID: 106132] 
[glusterd-utils.c:1371:glusterd_service_stop] 0-management: brick 
already stopped
[2015-11-11 13:03:31.108959] I [MSGID: 106540] 
[glusterd-utils.c:4105:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV3 successfully
[2015-11-11 13:03:31.109881] I [MSGID: 106540] 
[glusterd-utils.c:4114:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV1 successfully
[2015-11-11 13:03:31.110725] I [MSGID: 106540] 
[glusterd-utils.c:4123:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NFSV3 successfully
[2015-11-11 13:03:31.111562] I [MSGID: 106540] 
[glusterd-utils.c:4132:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v4 successfully
[2015-11-11 13:03:31.112396] I [MSGID: 106540] 
[glusterd-utils.c:4141:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v1 successfully
[2015-11-11 13:03:31.113225] I [MSGID: 106540] 
[glusterd-utils.c:4150:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered ACL v3 successfully
[2015-11-11 13:03:32.212071] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd 
already stopped
[2015-11-11 13:03:32.212862] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub 
already stopped
[2015-11-11 13:03:32.213099] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick 
/gluster/VOL_ZIMBRA/brick on port 49191
[2015-11-11 13:03:32.282685] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick 
/gluster/VOL_ZIMBRA/brick3 on port 49168
[2015-11-11 13:03:32.364079] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
[2015-11-11 13:03:32.364111] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
[2015-11-11 13:03:32.374604] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
[2015-11-11 13:03:32.374640] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
[2015-11-11 13:03:41.906892] I [MSGID: 106495] 
[glusterd-handler.c:3049:__glusterd_handle_getwd] 0-glusterd: 
Received getwd req
[2015-11-11 13:03:41.910931] E [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0xef3d2) 
[0x7fc7d60923d2] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) 
[0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py 
--volname=VOL_ZIMBRA




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users





--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
we

[Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error

2015-11-12 Thread Marco Lorenzo Crociani

Hi,
I removed one volume from the oVirt console.
oVirt 3.5.4
Gluster 3.7.5
CentOS release 6.7

In the logs there were these errors:


[2015-11-11 13:03:29.783491] I [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc) 
[0x7fc7d60920bc] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh 
--volname=VOL_ZIMBRA --last=no
[2015-11-11 13:03:29.789594] E [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) 
[0x7fc7d6002c75] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) 
[0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh --volname=VOL_ZIMBRA 
--last=no
[2015-11-11 13:03:29.790807] I [MSGID: 106132] 
[glusterd-utils.c:1371:glusterd_service_stop] 0-management: brick 
already stopped
[2015-11-11 13:03:31.108959] I [MSGID: 106540] 
[glusterd-utils.c:4105:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV3 successfully
[2015-11-11 13:03:31.109881] I [MSGID: 106540] 
[glusterd-utils.c:4114:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered MOUNTV1 successfully
[2015-11-11 13:03:31.110725] I [MSGID: 106540] 
[glusterd-utils.c:4123:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NFSV3 successfully
[2015-11-11 13:03:31.111562] I [MSGID: 106540] 
[glusterd-utils.c:4132:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v4 successfully
[2015-11-11 13:03:31.112396] I [MSGID: 106540] 
[glusterd-utils.c:4141:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered NLM v1 successfully
[2015-11-11 13:03:31.113225] I [MSGID: 106540] 
[glusterd-utils.c:4150:glusterd_nfs_pmap_deregister] 0-glusterd: 
De-registered ACL v3 successfully
[2015-11-11 13:03:32.212071] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already 
stopped
[2015-11-11 13:03:32.212862] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already 
stopped
[2015-11-11 13:03:32.213099] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick 
/gluster/VOL_ZIMBRA/brick on port 49191
[2015-11-11 13:03:32.282685] I [MSGID: 106144] 
[glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick 
/gluster/VOL_ZIMBRA/brick3 on port 49168
[2015-11-11 13:03:32.364079] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
[2015-11-11 13:03:32.364111] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
[2015-11-11 13:03:32.374604] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
[2015-11-11 13:03:32.374640] I [MSGID: 101053] 
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
[2015-11-11 13:03:41.906892] I [MSGID: 106495] 
[glusterd-handler.c:3049:__glusterd_handle_getwd] 0-glusterd: Received 
getwd req
[2015-11-11 13:03:41.910931] E [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0xef3d2) 
[0x7fc7d60923d2] 
-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) 
[0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) 
[0x7fc7e162868e] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py 
--volname=VOL_ZIMBRA


--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
web:  http://www.prismatelecomtesting.com


Re: [Gluster-users] Missing files after add new bricks and remove old ones - how to restore files

2015-11-05 Thread Marco Lorenzo Crociani

Any news?
glusterfs version is 3.7.5

On 30/10/2015 17:51, Marco Lorenzo Crociani wrote:

Hi Susant,
here the stats:

[root@s20 brick1]# stat .* *
  File: `.'
  Size: 78Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 2481712637  Links: 7
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-30 15:26:57.565475699 +0100
Modify: 2015-08-04 12:30:56.604846056 +0200
Change: 2015-10-27 14:21:12.981420157 +0100
  File: `..'
  Size: 50Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 495824  Links: 6
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-27 11:27:36.956230769 +0100
Modify: 2015-08-04 11:25:27.893410342 +0200
Change: 2015-08-04 11:25:27.893410342 +0200
  File: `.glusterfs'
  Size: 8192  Blocks: 24 IO Block: 4096 directory
Device: 811h/2065dInode: 2481712643  Links: 261
Access: (0600/drw---)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-30 15:31:14.988775452 +0100
Modify: 2015-08-04 12:31:40.803715075 +0200
Change: 2015-08-04 12:31:40.803715075 +0200
  File: `.trashcan'
  Size: 24Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 495865  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:32:17.369070847 +0100
Modify: 2015-08-04 11:36:11.357529000 +0200
Change: 2015-10-26 18:32:17.368070850 +0100
  File: `lost+found'
  Size: 6 Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 2481712624  Links: 2
Access: (0700/drwx--)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:55:08.274323554 +0100
Modify: 2014-01-18 21:48:37.0 +0100
Change: 2015-10-26 18:55:08.259323594 +0100
  File: `rh'
  Size: 6 Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 495961  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 14:02:02.294771698 +0100
Modify: 2015-03-26 13:22:19.0 +0100
Change: 2015-10-26 18:32:17.384070805 +0100
  File: `zimbra'
  Size: 4096  Blocks: 8  IO Block: 4096 directory
Device: 811h/2065dInode: 495969  Links: 50
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-30 15:27:44.899346957 +0100
Modify: 2015-10-26 18:32:17.733069841 +0100
Change: 2015-10-26 18:32:17.733069841 +0100



[root@s21 brick2]#  stat .* *
  File: `.'
  Size: 78Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 501309  Links: 7
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-30 17:45:23.983929018 +0100
Modify: 2015-08-04 12:30:56.602392330 +0200
Change: 2015-10-26 18:32:17.327779305 +0100
  File: `..'
  Size: 50Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 2484780736  Links: 6
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-30 17:45:20.800922878 +0100
Modify: 2015-08-04 11:25:27.942732803 +0200
Change: 2015-08-04 11:25:27.942732803 +0200
  File: `.glusterfs'
  Size: 8192  Blocks: 24 IO Block: 4096 directory
Device: 811h/2065dInode: 501323  Links: 261
Access: (0600/drw---)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-08-04 11:36:13.776967886 +0200
Modify: 2015-08-04 12:31:40.801477366 +0200
Change: 2015-08-04 12:31:40.801477366 +0200
  File: `.trashcan'
  Size: 24Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 2484780773  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:32:17.324779299 +0100
Modify: 2015-08-04 11:36:11.357529000 +0200
Change: 2015-10-26 18:32:17.368779386 +0100
  File: `lost+found'
  Size: 6 Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 501268  Links: 2
Access: (0700/drwx--)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:32:17.371779392 +0100
Modify: 2014-01-18 21:48:37.0 +0100
Change: 2015-10-26 18:55:08.260516194 +0100
  File: `rh'
  Size: 6 Blocks: 0  IO Block: 4096 directory
Device: 811h/2065dInode: 2484780842  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:32:17.386779422 +0100
Modify: 2015-03-26 13:22:19.0 +0100
Change: 2015-10-26 18:32:17.384779418 +0100
  File: `zimbra'
  Size: 4096  Blocks: 8  IO Block: 4096 directory
Device: 811h/2065dInode: 2484780856  Links: 50
Access: (0755/drwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 18:34:10.34939 +0100
Modify: 2015-10-26 18:32:17.733780116 +0100
Change: 2015-10-26 18:32:17.733780116 +0100






[root@s20 brick1]# stat zimbra/jdk-1.7.0_45/db/bin/*
  File: `zimbra/jdk-1.7.0_45/db/bin/dblook'
  Size: 5740

Re: [Gluster-users] Missing files after add new bricks and remove old ones - how to restore files

2015-10-30 Thread Marco Lorenzo Crociani
)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.507105063 +0100
Modify: 2012-11-02 12:23:32.0 +0100
Change: 2015-10-26 15:31:30.508105065 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/setEmbeddedCP.bat'
  Size: 1278  Blocks: 8  IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798191  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.513105075 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.513105075 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/setNetworkClientCP.bat'
  Size: 1284  Blocks: 8  IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798194  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.514105077 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.515105079 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/setNetworkServerCP'
  Size: 1075  Blocks: 8  IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798195  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.515105079 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.516105081 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/startNetworkServer'
  Size: 5807  Blocks: 16 IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798198  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.517105083 +0100
Modify: 2012-11-02 12:23:32.0 +0100
Change: 2015-10-26 15:31:30.518105085 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/startNetworkServer.bat'
  Size: 1397  Blocks: 8  IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798200  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.519105087 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.519105087 +0100
  File: `zimbra/jdk-1.7.0_45/db/bin/sysinfo.bat'
  Size: 1389  Blocks: 8  IO Block: 4096   regular file
Device: 811h/2065dInode: 2484798206  Links: 2
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/ root)
Access: 2015-10-26 15:31:30.520105089 +0100
Modify: 2012-11-01 10:49:22.0 +0100
Change: 2015-10-26 15:31:30.521105091 +0100


Thanks,

Marco



On 30/10/2015 08:22, Susant Palai wrote:

Hi Marco,
 Can you send the stat of the files from the removed-brick?

Susant

- Original Message -
From: "Marco Lorenzo Crociani" <mar...@prismatelecomtesting.com>
To: gluster-users@gluster.org
Sent: Tuesday, 27 October, 2015 6:58:51 PM
Subject: [Gluster-users] Missing files after add new bricks and remove old ones 
- how to restore files

Hi,
We had a 2-node GlusterFS cluster with a 2x2 Distributed-Replicate volume on
it.

It was:
Brick1: s20gfs.ovirt:/gluster/VOL/brick1
Brick2: s21gfs.ovirt:/gluster/VOL/brick2
Brick3: s20gfs.ovirt:/gluster/VOL/brick3
Brick4: s21gfs.ovirt:/gluster/VOL/brick4

We added more nodes to the cluster, so I wanted to redistribute the bricks
across the nodes.
I added 2 new bricks to the volume.

gluster volume add-brick VOL s22gfs.ovirt:/gluster/VOL/brick2
s23gfs.ovirt:/gluster/VOL/brick3

then I removed 2 old bricks
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1
s21gfs.ovirt:/gluster/VOL/brick2 start

checked the status
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1
s21gfs.ovirt:/gluster/VOL/brick2 status

when it was completed and I saw data on the new bricks I run:
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1
s21gfs.ovirt:/gluster/VOL/brick2 commit

Result:
some files are missing from the volume. Those files are still on the removed
bricks.

First question: did I follow a wrong procedure?

Second question: how can I restore those files? I can't re-add those bricks
to the volume because it tells me:
volume add-brick: failed: /gluster/VOL/brick1 is already part of a volume
If I use the force option, do I get the old files back or do they get erased?
Should I rsync from the "unmounted" brick to the mounted volume, or to an
unmounted brick that is part of the volume?

Regards,




--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
web:  http://www.prismatelecomtesting.com


[Gluster-users] Missing files after add new bricks and remove old ones - how to restore files

2015-10-27 Thread Marco Lorenzo Crociani

Hi,
We had a 2-node GlusterFS cluster with a 2x2 Distributed-Replicate volume on
it.

It was:
Brick1: s20gfs.ovirt:/gluster/VOL/brick1
Brick2: s21gfs.ovirt:/gluster/VOL/brick2
Brick3: s20gfs.ovirt:/gluster/VOL/brick3
Brick4: s21gfs.ovirt:/gluster/VOL/brick4

We added more nodes to the cluster, so I wanted to redistribute the bricks
across the nodes.

I added 2 new bricks to the volume.

gluster volume add-brick VOL s22gfs.ovirt:/gluster/VOL/brick2 
s23gfs.ovirt:/gluster/VOL/brick3


then I removed 2 old bricks
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1 
s21gfs.ovirt:/gluster/VOL/brick2 start


checked the status
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1 
s21gfs.ovirt:/gluster/VOL/brick2 status


when it was completed and I saw data on the new bricks I run:
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1 
s21gfs.ovirt:/gluster/VOL/brick2 commit
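
For reference, the step that usually catches missing files is checking the
status output before the commit; a sketch, using the same bricks (my
understanding is that the commit should only follow a "completed" status with
zero failures and zero skipped files on every node):

# Check progress; do not commit until the status column shows "completed"
# and the failures/skipped counts are 0 for every node.
gluster volume remove-brick VOL s20gfs.ovirt:/gluster/VOL/brick1 \
    s21gfs.ovirt:/gluster/VOL/brick2 status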


Result:
some files are missing from the volume. Those files are still on the removed
bricks.


First question: did I follow a wrong procedure?

Second question: how can I restore those files? I can't re-add those bricks
to the volume because it tells me:

volume add-brick: failed: /gluster/VOL/brick1 is already part of a volume
If I use the force option, do I get the old files back or do they get erased?
Should I rsync from the "unmounted" brick to the mounted volume, or to an
unmounted brick that is part of the volume?


Regards,

--
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4  20127 MILANO  ITALY
Phone:  +39 02 26113507
Fax:  +39 02 26113597
e-mail:  mar...@prismatelecomtesting.com
web:  http://www.prismatelecomtesting.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users