[Gluster-users] Strange problems with my Gluster Cluster

2017-01-14 Thread Fedele Stabile
Hello,

I have a 32-node cluster; each node has 2 bricks of 1 TB each, and I
configured a single distributed volume using all 64 bricks.

I expect to have 64 TB of disk online, but I can see only 36 TB!

All bricks are online, but the output of a volume status command says
that a rebalance operation failed.
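
I checked with commands along these lines (the volume is called
scratch; the mount point and brick paths below are only examples):

  gluster volume info scratch
  gluster volume status scratch detail
  gluster volume rebalance scratch status

  df -h /mnt/scratch                     # capacity seen by the client
  df -h /bricks/brick1 /bricks/brick2    # per-brick capacity on a server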

In glusterd.vol.log I can see:

The message "glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed
to get index" repeated 32 times between [2017-01-14 17:04:18.727159]
and [2017-01-14 17:04:23.796719]

W [socket.c:588:__socket_rwv] 0-management: readv on
/var/run/gluster/gluster-rebalance-fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e.sock
failed (No data available)

I [MSGID: 106007] [glusterd-rebalance.c:162:__glusterd_defrag_notify]
0-management: Rebalance process for volume scratch has disconnected.

[2017-01-14 17:05:43.502366] I [MSGID: 101053]
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=0 total=0

[2017-01-14 17:05:43.502378] I [MSGID: 101053]
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=0 total=0

[2017-01-14 17:11:21.827125] I [MSGID: 106499]
[glusterd-handler.c:4329:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume scratch

[2017-01-14 17:11:21.859989] W [MSGID: 106217]
[glusterd-op-sm.c:4592:glusterd_op_modify_op_ctx] 0-management: Failed uuid
to hostname conversion

[2017-01-14 17:11:21.860009] W [MSGID: 106387]
[glusterd-op-sm.c:4696:glusterd_op_modify_op_ctx] 0-management: op_ctx
modification failed

[2017-01-14 17:11:33.694377] I [MSGID: 106488]
[glusterd-handler.c:1533:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req
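
If the rebalance process simply died, as the disconnect message above
suggests, is it safe to just start it again and watch it, along these
lines?

  gluster volume rebalance scratch start
  gluster volume rebalance scratch status

  # each node writes its own rebalance log
  tail -f /var/log/glusterfs/scratch-rebalance.log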

 

Can anyone help me with this problem?

My GlusterFS version is 3.7.8.

 

Thank you in advance

Fedele Stabile


[Gluster-users] .deleted folder in glusterfs

2017-01-14 Thread Michele Soragni [fabbricadigitale]
Hi! 

I set up a Gluster storage with 3 replicated nodes, 1 brick each.

After 1 month, a few days ago, all the data in my Gluster storage
disappeared.

I found all my data in each brick under a .deleted folder.

What is the .deleted folder used for in Gluster?

Is it split-brain file management?
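
For reference, this is roughly how I found the data (the brick path is
only an example):

  ls -la /data/brick1/              # shows .glusterfs and .deleted
  ls -la /data/brick1/.deleted/

  # extended attributes of one of the recovered files
  getfattr -d -m . -e hex /data/brick1/.deleted/somefile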

 

Thanks

Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-01-14 Thread Kaushal M
Thanks for doing this, Daryl. The help is appreciated.

On Sat, Jan 14, 2017 at 12:37 AM, Daryl lee  wrote:
> Thanks everyone for the information. I'm happy to help provide test repo
> feedback on an ongoing basis to help things along, which I'll do right now.
>
> Niels,
> If you require more or less information please let me know, happy to help.  
> Thanks for doing the builds!
>
> I deployed GlusterFS v3.8.8 successfully to 5 servers running CentOS Linux
> release 7.3.1611 (Core); here are the results.
>
> 2 GlusterFS clients deployed the following packages:
> ---
> glusterfs                  x86_64  3.8.8-1.el7  centos-gluster38-test  509 k
> glusterfs-api              x86_64  3.8.8-1.el7  centos-gluster38-test   89 k
> glusterfs-client-xlators   x86_64  3.8.8-1.el7  centos-gluster38-test  781 k
> glusterfs-fuse             x86_64  3.8.8-1.el7  centos-gluster38-test  133 k
> glusterfs-libs             x86_64  3.8.8-1.el7  centos-gluster38-test  378 k
>
> Tests:
> ---
> Package DOWNLOAD/UPDATE/CLEANUP from repo  - SUCCESS
> Basic FUSE mount RW test to remote GlusterFS volume - SUCCESS (sketch below)
> Boot and basic functionality test of libvirt gfapi-based KVM virtual machine - SUCCESS
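>
> The RW test was a basic write/read/delete cycle, roughly like this
> (server, volume, and mount point are placeholders, not the real names):
>
>   mount -t glusterfs gfs1:/testvol /mnt/testvol
>   dd if=/dev/zero of=/mnt/testvol/smoke.bin bs=1M count=100
>   md5sum /mnt/testvol/smoke.bin
>   rm /mnt/testvol/smoke.bin
>   umount /mnt/testvol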
>
>
> 3 GlusterFS Brick/Volume servers running REPLICA 3 ARBITER 1 updated the 
> following packages:
> ---
> glusterfs                  x86_64  3.8.8-1.el7   centos-gluster38-test  509 k
> glusterfs-api              x86_64  3.8.8-1.el7   centos-gluster38-test   89 k
> glusterfs-cli              x86_64  3.8.8-1.el7   centos-gluster38-test  182 k
> glusterfs-client-xlators   x86_64  3.8.8-1.el7   centos-gluster38-test  781 k
> glusterfs-fuse             x86_64  3.8.8-1.el7   centos-gluster38-test  133 k
> glusterfs-libs             x86_64  3.8.8-1.el7   centos-gluster38-test  378 k
> glusterfs-server           x86_64  3.8.8-1.el7   centos-gluster38-test  1.4 M
> userspace-rcu              x86_64  0.7.16-3.el7  centos-gluster38-test   72 k
>
> Tests:
> ---
> Package DOWNLOAD/UPDATE/CLEANUP from repo - SUCCESS w/ warnings
> *  warning while updating glusterfs-server-3.8.8-1.el7.x86_64: the existing
>    gluster .vol files were backed up as .rpmsave. This is expected.
> Bricks on all 3 servers started - SUCCESS
> Self Healing Daemon on all 3 servers started - SUCCESS
> Bitrot Daemon on all 3 servers started - SUCCESS
> Scrubber Daemon on all 3 servers started - SUCCESS
> First replica self healing - SUCCESS
> Second replica self healing - SUCCESS
> Arbiter replica self healing - SUCCESS (see note below)
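>
> (Note: heal state can be checked per volume with the CLI; the volume
> name here is a placeholder:)
>
>   gluster volume heal testvol info
>   gluster volume heal testvol info split-brain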
>
>
> -Daryl
>
>
> -Original Message-
> From: Kaushal M [mailto:kshlms...@gmail.com]
> Sent: Friday, January 13, 2017 5:03 AM
> To: Daryl lee
> Cc: Pavel Szalbot; gluster-users; Niels de Vos
> Subject: Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 
> versions (3.8.5). Is it still being used and can I help?
>
> Packages for 3.7, 3.8 and 3.9 are being built for the Storage SIG.
> Niels is very punctual about building them. The packages first land in the 
> respective testing repositories. If someone verifies that the packages are 
> okay, and gives Niels a heads-up, he pushes the packages to be signed and 
> added to the release repositories.
>
> The only issue is that Niels doesn't get enough (or any) verifications. And 
> the packages linger in testing.
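>
> For anyone willing to verify, pulling the candidate build on CentOS 7
> looks roughly like this (assuming the SIG release package for the 3.8
> series):
>
>   yum install centos-release-gluster38
>   yum --enablerepo=centos-gluster38-test update 'glusterfs*'
>   rpm -qa 'glusterfs*'   # confirm the versions, then report back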
>
> On Fri, Jan 13, 2017 at 3:31 PM, Pavel Szalbot  
> wrote:
>> Hi, you can install 3.8.7 from centos-gluster38-test using:
>>
>> yum