Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

I will be waiting for your response.

On Mon, Nov 21, 2016 at 10:00 AM, ABHISHEK PALIWAL 
wrote:

> Hi Atin,
>
> The system is an embedded system and these dates are from before the system
> gets into timer sync.
>
> Yes, I have also seen these two files in the peers directory on the 002500
> board, and I want to know why gluster creates the second file when an old
> file already exists. Note that the contents of these two files are the same.
>
> If we fall into this situation, is it possible for gluster to take care of
> this itself, instead of us manually doing the steps you mentioned above?
>
> I have some questions:
>
> 1. Based on the logs, can we find out the reason for having two peer files
> with the same contents?
> 2. Is there any way to handle this from the gluster code?
>
> Regards,
> Abhishek
>
> Regards,
> Abhishek
>
> On Mon, Nov 21, 2016 at 9:52 AM, Atin Mukherjee 
> wrote:
>
>> atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_uuid/
>> duplicate_uuid/glusterd_2500/peers$ ls -lrt
>> total 8
>> -rw-------. 1 atin wheel 71 *Jan  1  1970* 5be8603b-18d0-4333-8590-38f918a22857
>> -rw-------. 1 atin wheel 71 Nov 18 03:31   26ae19a6-b58f-446a-b079-411d4ee57450
>>
>> On board 2500, look at the date of the file
>> 5be8603b-18d0-4333-8590-38f918a22857
>> (marked in bold). I am not sure how you ended up with this file having such a
>> timestamp. I am guessing this could be because the setup was not cleaned up
>> properly at the time of re-installation.
>>
>> Here are the steps I'd recommend for now:
>>
>> 1. Rename 26ae19a6-b58f-446a-b079-411d4ee57450 to
>> 5be8603b-18d0-4333-8590-38f918a22857; you should have only one entry in
>> the peers folder on board 2500.
>> 2. Bring down both glusterd instances.
>> 3. Bring them back one by one.
>>
>> Then restart glusterd to see if the issue persists.
>>
>>
>>
>> On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hope you will see in the logs..
>>>
>>> On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi Atin,

 It is not getting wipe off we have changed the configuration path from
 /var/lib/glusterd to /system/glusterd.

 So, they will remain as same as previous.

 On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee 
 wrote:

> Abhishek,
>
> rebooting the board does wipe of /var/lib/glusterd contents in your
> set up right (as per my earlier conversation with you) ? In that case, how
> are you ensuring that the same node gets back the older UUID? If you don't
> then this is bound to happen.
>
> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Hi Team,
>>
>> Please look into this problem, as it is a very widely seen problem in
>> our system.
>>
>> We have a replicate volume setup with two bricks, but after restarting
>> the second board I am getting a duplicate entry in the "gluster peer
>> status" output, like below:
>>
>> # gluster peer status
>> Number of Peers: 2
>>
>> Hostname: 10.32.0.48
>> Uuid: 5be8603b-18d0-4333-8590-38f918a22857
>> State: Peer in Cluster (Connected)
>>
>> Hostname: 10.32.0.48
>> Uuid: 5be8603b-18d0-4333-8590-38f918a22857
>> State: Peer in Cluster (Connected)
>> #
>>
>> I am attaching all logs from both boards and the command outputs
>> as well.
>>
>> So could you please check what the reason is for getting into this
>> situation, as it is very frequent in multiple cases.
>>
>> Also, we are not replacing any board in the setup, just rebooting.
>>
>> --
>>
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



 --




 Regards
 Abhishek Paliwal

>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan



On 21/11/16 11:13, Alexandr Porunov wrote:

Version of glusterfs is 3.8.5

Here is what I have installed:
rpm  -ivh 
http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm

yum install centos-release-gluster
yum install glusterfs-server


It should be part of glusterfs-server. So can you check the files provided by 
this package, by running rpm -qil on it?
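
(For example, a quick way to check whether the hook script is shipped by the
package — the owning package can vary between builds, so treat this purely as
an illustration:)

# rpm -qil glusterfs-server | grep S32gluster_enable_shared_storage.sh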



yum install glusterfs-geo-replication

Unfortunately it doesn't work if I just add the script 
"/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh" 
and restart "glusterd".




I didn't get that. When you rerun "gluster v set all 
cluster.enable-shared-storage enable" it should work (I guess even a glusterd 
restart is not required).
Or do you have any volume named "gluster_shared_storage"? If yes, please 
remove it and rerun the cli.
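
(Illustrative sequence for that case — only do this if nothing you care about is
on the stale gluster_shared_storage volume:)

# gluster volume stop gluster_shared_storage
# gluster volume delete gluster_shared_storage
# gluster volume set all cluster.enable-shared-storage enable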


--
Jiffin


It seems that I have to install something else..

Sincerely,
Alexandr



On Mon, Nov 21, 2016 at 6:58 AM, Jiffin Tony Thottan 
mailto:jthot...@redhat.com>> wrote:



On 21/11/16 01:07, Alexandr Porunov wrote:

I have installed it from rpm. No that file isn't there. The
folder "/var/lib/glusterd/hooks/1/set/post/" is empty..



which gluster version and what all gluster rpms have u installed?
For time being just download this file[1] and copy to above
location and rerun the same cli.

[1]

https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh



--
Jiffin



Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan
mailto:jthot...@redhat.com>> wrote:

Did u install rpm or directly from sources. Can u check
whether following script is present?

/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh

--

Jiffin


On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage
automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume
"gluster_shared_storage"? Do I need to manually create a
folder "/var/run/gluster/shared_storage"? Do I need to
manually mount it? Or something I don't need to do?

If I use 6 cluster nodes and I need to have a shared storage
on all of them then how to create a shared storage?
It says that it have to be with replication 2 or replication
3. But if we use shared storage on all of 6 nodes then we
have only 2 ways to create a volume:
1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan
mailto:jthot...@redhat.com>> wrote:



On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I
don't know why.

Here is the content of the
'run-gluster-shared_storage.log':



Make sure shared storage is up and running using
"gluster volume status gluster_shared_storage"

May be the issue is related to firewalld or iptables.
Try it after disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker]
0-epoll: Started thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt:
failed to fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f95cda34b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:01.642638] I
[fuse-brid

Re: [Gluster-users] How to force remove geo session?

2016-11-20 Thread Alexandr Porunov
Thank you for the explanation!

I will keep it in mind.

Sincerely,
Alexandr


On Mon, Nov 21, 2016 at 7:39 AM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi,
>
> Glad you could get it rectified. But having the same slave volume for two
> different geo-rep sessions is never recommended: the two sessions end up
> writing to the same slave node. It is always one master volume to many
> different slave volumes, if such a configuration is required. If the ssh keys
> are deleted on the slave for some reason, running the geo-rep create command
> with the force option would redistribute the keys:
>
> 'gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> create
> push-pem force'
>
> And yes, root user and non-root user are considered two different
> sessions.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "Alexandr Porunov" 
> > To: "gluster-users@gluster.org List" 
> > Sent: Saturday, November 19, 2016 9:41:05 PM
> > Subject: Re: [Gluster-users] How to force remove geo session?
> >
> > OK, I have figured out what it was.
> >
> > I had a session not with 'root' user but with 'geoaccount' user.
> > It seems that we can't have 2 sessions to the one node (even if users are
> > different). After deleting a session with 'geoaccount' user I was able to
> > create a session with 'root' user.
> >
> > On Sat, Nov 19, 2016 at 3:51 PM, Alexandr Porunov <
> > alexandr.poru...@gmail.com > wrote:
> >
> >
> >
> > Hello,
> > I had a geo replication between master nodes and slave nodes. I have
> removed
> > ssh keys for authorization from slave nodes. Now I can neither create
> > session for slave nodes nor remove the old useless session. Is it
> possible
> > to manually remove a sessions from all the nodes?
> >
> > Here is the problem:
> >
> > # gluster volume geo-replication gv0 root@192.168.0.124::gv0 delete
> > reset-sync-time
> > Geo-replication session between gv0 and 192.168.0.124::gv0 does not
> exist.
> > geo-replication command failed
> >
> > # gluster volume geo-replication gv0 root@192.168.0.124::gv0 create
> ssh-port
> > 22 push-pem
> > Session between gv0 and 192.168.0.124::gv0 is already created.
> > geo-replication command failed
> >
> > Sincerely,
> > Alexandr
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Alexandr Porunov
Version of glusterfs is 3.8.5

Here is what I have installed:
rpm  -ivh
http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-gluster
yum install glusterfs-server
yum install glusterfs-geo-replication

Unfortunately it doesn't work if I just add the script
"/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh"
and restart "glusterd".

It seems that I have to install something else..

Sincerely,
Alexandr



On Mon, Nov 21, 2016 at 6:58 AM, Jiffin Tony Thottan 
wrote:

>
> On 21/11/16 01:07, Alexandr Porunov wrote:
>
> I have installed it from rpm. No that file isn't there. The folder
> "/var/lib/glusterd/hooks/1/set/post/" is empty..
>
>
> which gluster version and what all gluster rpms have u installed?
> For time being just download this file[1] and copy to  above location and
> rerun the same cli.
>
> [1] https://github.com/gluster/glusterfs/blob/master/extras/
> hook-scripts/set/post/S32gluster_enable_shared_storage.sh
>
> --
> Jiffin
>
>
> Sincerely,
> Alexandr
>
> On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan 
> wrote:
>
>> Did u install rpm or directly from sources. Can u check whether following
>> script is present?
>> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
>>
>> --
>>
>> Jiffin
>>
>>
>> On 20/11/16 13:33, Alexandr Porunov wrote:
>>
>> To enable shared storage I used next command:
>> # gluster volume set all cluster.enable-shared-storage enable
>>
>> But it seems that it doesn't create gluster_shared_storage automatically.
>>
>> # gluster volume status gluster_shared_storage
>> Volume gluster_shared_storage does not exist
>>
>> Do I need to manually create a volume "gluster_shared_storage"? Do I need
>> to manually create a folder "/var/run/gluster/shared_storage"? Do I need
>> to manually mount it? Or something I don't need to do?
>>
>> If I use 6 cluster nodes and I need to have a shared storage on all of
>> them then how to create a shared storage?
>> It says that it have to be with replication 2 or replication 3. But if we
>> use shared storage on all of 6 nodes then we have only 2 ways to create a
>> volume:
>> 1. Use replication 6
>> 2. Use replication 3 with distribution.
>>
>> Which way I need to use?
>>
>> Sincerely,
>> Alexandr
>>
>> On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan > > wrote:
>>
>>>
>>>
>>> On 19/11/16 21:47, Alexandr Porunov wrote:
>>>
>>> Unfortunately I haven't this log file but I have
>>> 'run-gluster-shared_storage.log' and it has errors I don't know why.
>>>
>>> Here is the content of the 'run-gluster-shared_storage.log':
>>>
>>>
>>> Make sure shared storage is up and running using "gluster volume status
>>> gluster_shared_storage"
>>>
>>> May be the issue is related to firewalld or iptables. Try it after
>>> disabling them.
>>>
>>> --
>>>
>>> Jiffin
>>>
>>> [2016-11-19 10:37:01.581737] I [MSGID: 100030] [glusterfsd.c:2454:main]
>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
>>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
>>> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>> [2016-11-19 10:37:01.641836] I [MSGID: 101190]
>>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>>> with index 1
>>> [2016-11-19 10:37:01.642311] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
>>> 0-glusterfs: failed to get the 'volume file' from server
>>> [2016-11-19 10:37:01.642340] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
>>> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>> [2016-11-19 10:37:01.642592] W [glusterfsd.c:1327:cleanup_and_exit]
>>> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f95cd309770]
>>> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f95cda3afc6]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f95cda34b4b] ) 0-:
>>> received signum (0), shutting down
>>> [2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse:
>>> Unmounting '/run/gluster/shared_storage'.
>>> [2016-11-19 10:37:18.798787] I [MSGID: 100030] [glusterfsd.c:2454:main]
>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
>>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
>>> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>>> [2016-11-19 10:37:18.813011] I [MSGID: 101190]
>>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>>> with index 1
>>> [2016-11-19 10:37:18.813363] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
>>> 0-glusterfs: failed to get the 'volume file' from server
>>> [2016-11-19 10:37:18.813386] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
>>> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>> [2016-11-19 10:37:18.813592] W [glusterfsd.c:1327:cleanup_and_exit]
>>> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f96ba4c7770]
>>> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f96babf8fc6]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f96babf2b4b] ) 0-:
>>> r

Re: [Gluster-users] How to force remove geo session?

2016-11-20 Thread Kotresh Hiremath Ravishankar
Hi,

Glad you could get it rectified. But having the same slave volume for two
different geo-rep sessions is never recommended: the two sessions end up
writing to the same slave node. It is always one master volume to many
different slave volumes, if such a configuration is required. If the ssh keys
are deleted on the slave for some reason, running the geo-rep create command
with the force option would redistribute the keys:

'gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> create
push-pem force'
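
(For example, with the hypothetical names used elsewhere in this thread:)

# gluster volume geo-replication gv0 root@192.168.0.124::gv0 create push-pem force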

And yes, root user and non-root user are considered two different sessions.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Alexandr Porunov" 
> To: "gluster-users@gluster.org List" 
> Sent: Saturday, November 19, 2016 9:41:05 PM
> Subject: Re: [Gluster-users] How to force remove geo session?
> 
> OK, I have figured out what it was.
> 
> I had a session not with 'root' user but with 'geoaccount' user.
> It seems that we can't have 2 sessions to the one node (even if users are
> different). After deleting a session with 'geoaccount' user I was able to
> create a session with 'root' user.
> 
> On Sat, Nov 19, 2016 at 3:51 PM, Alexandr Porunov <
> alexandr.poru...@gmail.com > wrote:
> 
> 
> 
> Hello,
> I had a geo replication between master nodes and slave nodes. I have removed
> ssh keys for authorization from slave nodes. Now I can neither create
> session for slave nodes nor remove the old useless session. Is it possible
> to manually remove a sessions from all the nodes?
> 
> Here is the problem:
> 
> # gluster volume geo-replication gv0 root@192.168.0.124::gv0 delete
> reset-sync-time
> Geo-replication session between gv0 and 192.168.0.124::gv0 does not exist.
> geo-replication command failed
> 
> # gluster volume geo-replication gv0 root@192.168.0.124::gv0 create ssh-port
> 22 push-pem
> Session between gv0 and 192.168.0.124::gv0 is already created.
> geo-replication command failed
> 
> Sincerely,
> Alexandr
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan


On 21/11/16 01:07, Alexandr Porunov wrote:
I have installed it from rpm. No that file isn't there. The folder 
"/var/lib/glusterd/hooks/1/set/post/" is empty..




Which gluster version and which gluster rpms have you installed?
For the time being, just download this file [1], copy it to the above location,
and rerun the same cli.


[1] 
https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh
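
(A rough sketch of "download and copy to the above location" — note that curl
needs the raw file rather than the GitHub blob URL shown above, and the hook
script must be executable; adjust the path if your hooks directory differs:)

# cd /var/lib/glusterd/hooks/1/set/post/
# curl -O https://raw.githubusercontent.com/gluster/glusterfs/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh
# chmod +x S32gluster_enable_shared_storage.sh
# gluster volume set all cluster.enable-shared-storage enable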


--
Jiffin


Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan 
mailto:jthot...@redhat.com>> wrote:


Did u install rpm or directly from sources. Can u check whether
following script is present?

/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh

--

Jiffin


On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage
automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume "gluster_shared_storage"?
Do I need to manually create a folder
"/var/run/gluster/shared_storage"? Do I need to manually mount
it? Or something I don't need to do?

If I use 6 cluster nodes and I need to have a shared storage on
all of them then how to create a shared storage?
It says that it have to be with replication 2 or replication 3.
But if we use shared storage on all of 6 nodes then we have only
2 ways to create a volume:
1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan
mailto:jthot...@redhat.com>> wrote:



On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I don't
know why.

Here is the content of the 'run-gluster-shared_storage.log':



Make sure shared storage is up and running using "gluster
volume status gluster_shared_storage"

May be the issue is related to firewalld or iptables. Try it
after disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage /run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to
fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f95cda34b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini]
0-fuse: Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:37:18.798787] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage /run/gluster/shared_storage)
[2016-11-19 10:37:18.813011] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1
[2016-11-19 10:37:18.813363] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:18.813386] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to
fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:18.813592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f96ba4c7770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f96babf8fc6]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f96babf2b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini]
0-fuse: Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:40:33.115685] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/us

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

The system is an embedded system and these dates are from before the system
gets into timer sync.

Yes, I have also seen these two files in the peers directory on the 002500
board, and I want to know why gluster creates the second file when an old
file already exists. Note that the contents of these two files are the same.

If we fall into this situation, is it possible for gluster to take care of
this itself, instead of us manually doing the steps you mentioned above?

I have some questions:

1. Based on the logs, can we find out the reason for having two peer files
with the same contents?
2. Is there any way to handle this from the gluster code?

Regards,
Abhishek

Regards,
Abhishek

On Mon, Nov 21, 2016 at 9:52 AM, Atin Mukherjee  wrote:

> atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_
> uuid/duplicate_uuid/glusterd_2500/peers$ ls -lrt
> total 8
> -rw---. 1 atin wheel 71 *Jan  1  1970* 5be8603b-18d0-4333-8590-
> 38f918a22857
> -rw---. 1 atin wheel 71 Nov 18 03:31 26ae19a6-b58f-446a-b079-
> 411d4ee57450
>
> In board 2500 look at the date of the file 
> 5be8603b-18d0-4333-8590-38f918a22857
> (marked in bold). Not sure how did you end up having this file in such time
> stamp. I am guessing this could be because of the set up been not cleaned
> properly at the time of re-installation.
>
> Here is the steps what I'd recommend for now:
>
> 1. rename 26ae19a6-b58f-446a-b079-411d4ee57450 to 
> 5be8603b-18d0-4333-8590-38f918a22857,
> you should have only one entry in the peers folder in board 2500.
> 2. Bring down both glusterd instances
> 3. Bring back one by one
>
> And then restart glusterd to see if the issue persists.
>
>
>
> On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL  > wrote:
>
>> Hope you will see in the logs..
>>
>> On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Atin,
>>>
>>> It is not getting wipe off we have changed the configuration path from
>>> /var/lib/glusterd to /system/glusterd.
>>>
>>> So, they will remain as same as previous.
>>>
>>> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee 
>>> wrote:
>>>
 Abhishek,

 rebooting the board does wipe of /var/lib/glusterd contents in your set
 up right (as per my earlier conversation with you) ? In that case, how are
 you ensuring that the same node gets back the older UUID? If you don't then
 this is bound to happen.

 On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> Hi Team,
>
> Please lookinto this problem as this is very widely seen problem in
> our system.
>
> We are having the setup of replicate volume setup with two brick but
> after restarting the second board I am getting the duplicate entry in
> "gluster peer status" command like below:
>
>
>
>
>
>
>
>
>
>
>
> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
> Peer in Cluster (Connected) # *
>
> I am attaching all logs from both the boards and the command outputs
> as well.
>
> So could you please check what is the reason to get in this situation
> as it is very frequent in multiple case.
>
> Also, we are not replacing any board from setup just rebooting.
>
> --
>
> Regards
> Abhishek Paliwal
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



 --

 ~ Atin (atinm)

>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread Atin Mukherjee
atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_uuid/duplicate_uuid/glusterd_2500/peers$
ls -lrt
total 8
-rw-------. 1 atin wheel 71 *Jan  1  1970* 5be8603b-18d0-4333-8590-38f918a22857
-rw-------. 1 atin wheel 71 Nov 18 03:31   26ae19a6-b58f-446a-b079-411d4ee57450

On board 2500, look at the date of the file
5be8603b-18d0-4333-8590-38f918a22857 (marked in bold). I am not sure how you
ended up with this file having such a timestamp. I am guessing this could be
because the setup was not cleaned up properly at the time of
re-installation.

Here are the steps I'd recommend for now:

1. Rename 26ae19a6-b58f-446a-b079-411d4ee57450 to
5be8603b-18d0-4333-8590-38f918a22857; you should have only one entry in the
peers folder on board 2500.
2. Bring down both glusterd instances.
3. Bring them back one by one.

Then restart glusterd to see if the issue persists.
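
(A rough sketch of those steps — paths assume the default /var/lib/glusterd
working directory, which in this particular setup has been relocated to
/system/glusterd, and the service commands depend on the init system in use:)

# on board 2500:
cd /var/lib/glusterd/peers
mv 26ae19a6-b58f-446a-b079-411d4ee57450 5be8603b-18d0-4333-8590-38f918a22857
ls -l    # only one peer file should remain

# on both boards, stop glusterd, then start them back one at a time:
systemctl stop glusterd
systemctl start glusterd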



On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL 
wrote:

> Hope you will see in the logs..
>
> On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL  > wrote:
>
>> Hi Atin,
>>
>> It is not getting wipe off we have changed the configuration path from
>> /var/lib/glusterd to /system/glusterd.
>>
>> So, they will remain as same as previous.
>>
>> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee 
>> wrote:
>>
>>> Abhishek,
>>>
>>> rebooting the board does wipe of /var/lib/glusterd contents in your set
>>> up right (as per my earlier conversation with you) ? In that case, how are
>>> you ensuring that the same node gets back the older UUID? If you don't then
>>> this is bound to happen.
>>>
>>> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi Team,

 Please lookinto this problem as this is very widely seen problem in our
 system.

 We are having the setup of replicate volume setup with two brick but
 after restarting the second board I am getting the duplicate entry in
 "gluster peer status" command like below:











 *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
 Peer in Cluster (Connected) # *

 I am attaching all logs from both the boards and the command outputs as
 well.

 So could you please check what is the reason to get in this situation
 as it is very frequent in multiple case.

 Also, we are not replacing any board from setup just rebooting.

 --

 Regards
 Abhishek Paliwal

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 

~ Atin (atinm)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hope you will see it in the logs..

On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL 
wrote:

> Hi Atin,
>
> It is not getting wipe off we have changed the configuration path from
> /var/lib/glusterd to /system/glusterd.
>
> So, they will remain as same as previous.
>
> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee 
> wrote:
>
>> Abhishek,
>>
>> rebooting the board does wipe of /var/lib/glusterd contents in your set
>> up right (as per my earlier conversation with you) ? In that case, how are
>> you ensuring that the same node gets back the older UUID? If you don't then
>> this is bound to happen.
>>
>> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Please lookinto this problem as this is very widely seen problem in our
>>> system.
>>>
>>> We are having the setup of replicate volume setup with two brick but
>>> after restarting the second board I am getting the duplicate entry in
>>> "gluster peer status" command like below:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>>> Peer in Cluster (Connected) # *
>>>
>>> I am attaching all logs from both the boards and the command outputs as
>>> well.
>>>
>>> So could you please check what is the reason to get in this situation as
>>> it is very frequent in multiple case.
>>>
>>> Also, we are not replacing any board from setup just rebooting.
>>>
>>> --
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

It is not getting wiped off; we have changed the configuration path from
/var/lib/glusterd to /system/glusterd.

So, the contents remain the same as before.
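
(For anyone following along: relocating the configuration path like this is
typically done through the working-directory option in the glusterd volfile,
e.g. /etc/glusterfs/glusterd.vol. The excerpt below is an assumption about how
this setup was changed, not a copy of the actual config:)

volume management
    type mgmt/glusterd
    option working-directory /system/glusterd
    ...
end-volume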

On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee  wrote:

> Abhishek,
>
> rebooting the board does wipe of /var/lib/glusterd contents in your set up
> right (as per my earlier conversation with you) ? In that case, how are you
> ensuring that the same node gets back the older UUID? If you don't then
> this is bound to happen.
>
> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL  > wrote:
>
>> Hi Team,
>>
>> Please lookinto this problem as this is very widely seen problem in our
>> system.
>>
>> We are having the setup of replicate volume setup with two brick but
>> after restarting the second board I am getting the duplicate entry in
>> "gluster peer status" command like below:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>> Peer in Cluster (Connected) # *
>>
>> I am attaching all logs from both the boards and the command outputs as
>> well.
>>
>> So could you please check what is the reason to get in this situation as
>> it is very frequent in multiple case.
>>
>> Also, we are not replacing any board from setup just rebooting.
>>
>> --
>>
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread Atin Mukherjee
Abhishek,

rebooting the board does wipe off the /var/lib/glusterd contents in your setup,
right (as per my earlier conversation with you)? In that case, how are you
ensuring that the same node gets back the older UUID? If you don't, then
this is bound to happen.
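
(For reference, and purely as an illustration: a node's own UUID lives in the
glusterd.info file under the glusterd working directory, so one way to confirm
a node keeps its identity across reboots is to compare this value before and
after — here assuming the default path, which this setup has moved to
/system/glusterd:)

# cat /var/lib/glusterd/glusterd.info
UUID=<this node's uuid>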

On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> Please lookinto this problem as this is very widely seen problem in our
> system.
>
> We are having the setup of replicate volume setup with two brick but after
> restarting the second board I am getting the duplicate entry in "gluster
> peer status" command like below:
>
>
>
>
>
>
>
>
>
>
>
> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
> Peer in Cluster (Connected) # *
>
> I am attaching all logs from both the boards and the command outputs as
> well.
>
> So could you please check what is the reason to get in this situation as
> it is very frequent in multiple case.
>
> Also, we are not replacing any board from setup just rebooting.
>
> --
>
> Regards
> Abhishek Paliwal
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 

~ Atin (atinm)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] question about info and info.tmp

2016-11-20 Thread songxin
Hi Atin,
OK. Thank you for your reply.


Thanks,
Xin






On 2016-11-21 10:00:36, "Atin Mukherjee"  wrote:

Hi Xin,

I've not got a chance to look into it yet. delete stale volume function is in 
place to take care of wiping off volume configuration data which has been 
deleted from the cluster. However we need to revisit this code to see if this 
function is anymore needed given we recently added a validation to fail delete 
request if one of the glusterd is down. I'll get back to you on this.


On Mon, 21 Nov 2016 at 07:24, songxin  wrote:

Hi Atin,
Thank you for your support.


And any conclusions about this issue?


Thanks,
Xin






On 2016-11-16 20:59:05, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 1:53 PM, songxin  wrote:

ok, thank you.





On 2016-11-15 16:12:34, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 12:47 PM, songxin  wrote:



Hi Atin,


I think the root cause is in the function glusterd_import_friend_volume as 
below. 

int32_t
glusterd_import_friend_volume (dict_t *peer_data, size_t count)
{
        ...
        ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
        if (0 == ret) {
                (void) gd_check_and_update_rebalance_info (old_volinfo,
                                                           new_volinfo);
                (void) glusterd_delete_stale_volume (old_volinfo, new_volinfo);
        }
        ...
        ret = glusterd_store_volinfo (new_volinfo,
                                      GLUSTERD_VOLINFO_VER_AC_NONE);
        if (ret) {
                gf_msg (this->name, GF_LOG_ERROR, 0,
                        GD_MSG_VOLINFO_STORE_FAIL, "Failed to store "
                        "volinfo for volume %s", new_volinfo->volname);
                goto out;
        }
        ...
}

glusterd_delete_stale_volume will remove the info file and bricks/*, and
glusterd_store_volinfo will create the new ones.
But if glusterd is killed before the rename, the info file is left empty.


And glusterd will fail to start the next time, because the info file is empty.


Any idea, Atin?


Give me some time, I will check it out, but going by this analysis it looks very
well possible: a volume is changed while glusterd is down on node A, and when
the same node comes up we update the volinfo during the peer handshake, and during
that time glusterd goes down once again. I'll confirm it by tomorrow.



I checked the code and it does look like you have got the right RCA for the
issue which you simulated through those two scripts. However, this can happen
even when you try to create a fresh volume: if glusterd tries to write
the content into the store and goes down before renaming the info.tmp file, you
get into the same situation.


I'd really need to think through if this can be fixed. Suggestions are always 
appreciated.

 



BTW, excellent work Xin!




Thanks,
Xin



On 2016-11-15 12:07:05, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 8:58 AM, songxin  wrote:

Hi Atin,
I have some clues about this issue.
I could reproduce this issue using the script mentioned in 
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .


I really appreciate your help in trying to nail down this issue. While I am at 
your email and going through the code to figure out the possible cause for it, 
unfortunately I don't see any script in the attachment of the bug.  Could you 
please cross check?
 



After I added some debug prints, like the ones below, in glusterd-store.c, I
found that /var/lib/glusterd/vols/xxx/info and
/var/lib/glusterd/vols/xxx/bricks/* are removed.
But other files in /var/lib/glusterd/vols/xxx/ are not removed.


int32_t
glusterd_store_volinfo (glusterd_volinfo_t *volinfo, glusterd_volinfo_ver_ac_t ac)
{
        int32_t ret = -1;

        GF_ASSERT (volinfo)

        ret = access("/var/lib/glusterd/vols/gv0/info", F_OK);
        if (ret < 0)
        {
                gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info is not exit(%d)",
                        errno);
        }
        else
        {
                ret = stat("/var/lib/glusterd/vols/gv0/info", &buf);
                if (ret < 0)
                {
                        gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "stat info error");
                }
                else
                {
                        gf_msg (THIS->name, GF_LOG_ERROR, 0, 0,
                                "info size is %lu, inode num is %lu",
                                buf.st_size, buf.st_ino);
                }
        }

        glusterd_perform_volinfo_version_action (volinfo, ac);
        ret = glusterd_store_create_volume_dir (volinfo);
        if (ret)
                goto out;

        ...
}


So it is easy to understand why the info or 10.32.1.144.-opt-lvmdir-c2-brick
file is sometimes empty.
It is because the info file does not exist, and it will be created by "fd = open
(path, O_RDWR | O_CREAT | O_APPEND, 0600);" in the function gf_store_handle_new.
And the info file is empty before the rename.
So the info file is empty if glusterd shuts down before the rename.
 



My question is following.
1.I did not find the point th

Re: [Gluster-users] question about info and info.tmp

2016-11-20 Thread Atin Mukherjee
Hi Xin,

I've not got a chance to look into it yet. The delete-stale-volume function is
in place to take care of wiping off volume configuration data which has
been deleted from the cluster. However, we need to revisit this code to see
if this function is still needed, given that we recently added a validation to
fail a delete request if one of the glusterds is down. I'll get back to you on
this.
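
(For readers following the RCA quoted below: the window under discussion comes
from a non-atomic "recreate handle, write temp file, rename" sequence. A much
simplified shell illustration of that pattern — not glusterd's actual code:)

# stale cleanup removes the old store file, then a new (empty) handle is created
rm -f info
touch info                   # gf_store_handle_new: info exists again, but empty

# new contents go to a temp file that is then renamed over info
echo "new volinfo contents" > info.tmp
mv info.tmp info             # if glusterd is killed before this rename,
                             # info is left behind empty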

On Mon, 21 Nov 2016 at 07:24, songxin  wrote:

> Hi Atin,
> Thank you for your support.
>
> And any conclusions about this issue?
>
> Thanks,
> Xin
>
>
>
>
>
> On 2016-11-16 20:59:05, "Atin Mukherjee"  wrote:
>
>
>
> On Tue, Nov 15, 2016 at 1:53 PM, songxin  wrote:
>
> ok, thank you.
>
>
>
>
> On 2016-11-15 16:12:34, "Atin Mukherjee"  wrote:
>
>
>
> On Tue, Nov 15, 2016 at 12:47 PM, songxin  wrote:
>
>
> Hi Atin,
>
> I think the root cause is in the function glusterd_import_friend_volume as
> below.
>
> int32_t
> glusterd_import_friend_volume (dict_t *peer_data, size_t count)
> {
> ...
> ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
> if (0 == ret) {
> (void) gd_check_and_update_rebalance_info (old_volinfo,
>new_volinfo);
> (void) glusterd_delete_stale_volume (old_volinfo,
> new_volinfo);
> }
> ...
> ret = glusterd_store_volinfo (new_volinfo,
> GLUSTERD_VOLINFO_VER_AC_NONE);
> if (ret) {
> gf_msg (this->name, GF_LOG_ERROR, 0,
> GD_MSG_VOLINFO_STORE_FAIL, "Failed to store "
> "volinfo for volume %s", new_volinfo->volname);
> goto out;
> }
> ...
> }
>
> glusterd_delete_stale_volume will remove the info and bricks/* and the
> glusterd_store_volinfo will create the new one.
> But if glusterd is killed before rename the info will is empty.
>
> And glusterd will start failed because the infois empty in the next time
> you start the glusterd.
>
> Any idea, Atin?
>
>
> Give me some time, will check it out, but reading at this analysis looks
> very well possible if a volume is changed when the glusterd was done on
> node a and when the same comes up during peer handshake we update the
> volinfo and during that time glusterd goes down once again. I'll confirm it
> by tomorrow.
>
>
> I checked the code and it does look like you have got the right RCA for
> the issue which you simulated through those two scripts. However this can
> happen even when you try to create a fresh volume and while glusterd tries
> to write the content into the store and goes down before renaming the
> info.tmp file you get into the same situation.
>
> I'd really need to think through if this can be fixed. Suggestions are
> always appreciated.
>
>
>
>
> BTW, excellent work Xin!
>
>
> Thanks,
> Xin
>
>
> On 2016-11-15 12:07:05, "Atin Mukherjee"  wrote:
>
>
>
> On Tue, Nov 15, 2016 at 8:58 AM, songxin  wrote:
>
> Hi Atin,
> I have some clues about this issue.
> I could reproduce this issue use the scrip that mentioned in
> https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
>
>
> I really appreciate your help in trying to nail down this issue. While I
> am at your email and going through the code to figure out the possible
> cause for it, unfortunately I don't see any script in the attachment of the
> bug.  Could you please cross check?
>
>
>
> After I added some debug print,which like below, in glusterd-store.c and I
> found that the /var/lib/glusterd/vols/xxx/info and 
> /var/lib/glusterd/vols/xxx/bricks/*
> are removed.
> But other files in /var/lib/glusterd/vols/xxx/ will not be remove.
>
> int32_t
> glusterd_store_volinfo (glusterd_volinfo_t *volinfo,
> glusterd_volinfo_ver_ac_t ac)
> {
> int32_t ret = -1;
>
> GF_ASSERT (volinfo)
>
> ret = access("/var/lib/glusterd/vols/gv0/info", F_OK);
> if(ret < 0)
> {
> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info is not
> exit(%d)", errno);
> }
> else
> {
> ret = stat("/var/lib/glusterd/vols/gv0/info", &buf);
> if(ret < 0)
> {
> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "stat info
> error");
> }
> else
> {
> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info size
> is %lu, inode num is %lu", buf.st_size, buf.st_ino);
> }
> }
>
> glusterd_perform_volinfo_version_action (volinfo, ac);
> ret = glusterd_store_create_volume_dir (volinfo);
> if (ret)
> goto out;
>
> ...
> }
>
> So it is easy to understand why  the info or
> 10.32.1.144.-opt-lvmdir-c2-brick sometimes is empty.
> It is becaue the info file is not exist, and it will be create by “fd =
> open (path, O_RDWR | O_CREAT | O_APPEND, 0600);” in function
> gf_store_handle_new.
> And the info file is empty before rename.
> So the info file is empty if glus

Re: [Gluster-users] question about info and info.tmp

2016-11-20 Thread songxin
Hi Atin,
Thank you for your support.


And any conclusions about this issue?


Thanks,
Xin






On 2016-11-16 20:59:05, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 1:53 PM, songxin  wrote:

ok, thank you.





On 2016-11-15 16:12:34, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 12:47 PM, songxin  wrote:



Hi Atin,


I think the root cause is in the function glusterd_import_friend_volume as 
below. 

int32_t 
glusterd_import_friend_volume (dict_t *peer_data, size_t count) 
{ 
... 
ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo); 
if (0 == ret) { 
(void) gd_check_and_update_rebalance_info (old_volinfo, 
   new_volinfo); 
(void) glusterd_delete_stale_volume (old_volinfo, new_volinfo); 
} 
... 
ret = glusterd_store_volinfo (new_volinfo, 
GLUSTERD_VOLINFO_VER_AC_NONE); 
if (ret) { 
gf_msg (this->name, GF_LOG_ERROR, 0, 
GD_MSG_VOLINFO_STORE_FAIL, "Failed to store " 
"volinfo for volume %s", new_volinfo->volname); 
goto out; 
} 
... 
} 

glusterd_delete_stale_volume will remove the info and bricks/* and the 
glusterd_store_volinfo will create the new one. 
But if glusterd is killed before rename the info will is empty. 


And glusterd will start failed because the infois empty in the next time you 
start the glusterd.


Any idea, Atin?


Give me some time, will check it out, but reading at this analysis looks very 
well possible if a volume is changed when the glusterd was done on node a and 
when the same comes up during peer handshake we update the volinfo and during 
that time glusterd goes down once again. I'll confirm it by tomorrow.



I checked the code and it does look like you have got the right RCA for the 
issue which you simulated through those two scripts. However this can happen 
even when you try to create a fresh volume and while glusterd tries to write 
the content into the store and goes down before renaming the info.tmp file you 
get into the same situation.


I'd really need to think through if this can be fixed. Suggestions are always 
appreciated.

 



BTW, excellent work Xin!




Thanks,
Xin



On 2016-11-15 12:07:05, "Atin Mukherjee"  wrote:





On Tue, Nov 15, 2016 at 8:58 AM, songxin  wrote:

Hi Atin,
I have some clues about this issue.
I could reproduce this issue use the scrip that mentioned in 
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .


I really appreciate your help in trying to nail down this issue. While I am at 
your email and going through the code to figure out the possible cause for it, 
unfortunately I don't see any script in the attachment of the bug.  Could you 
please cross check?
 



After I added some debug print,which like below, in glusterd-store.c and I 
found that the /var/lib/glusterd/vols/xxx/info and 
/var/lib/glusterd/vols/xxx/bricks/* are removed. 
But other files in /var/lib/glusterd/vols/xxx/ will not be remove.


int32_t
glusterd_store_volinfo (glusterd_volinfo_t *volinfo, glusterd_volinfo_ver_ac_t 
ac)
{
int32_t ret = -1;


GF_ASSERT (volinfo)


ret = access("/var/lib/glusterd/vols/gv0/info", F_OK);
if(ret < 0)
{
gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info is not exit(%d)", 
errno);
}
else
{
ret = stat("/var/lib/glusterd/vols/gv0/info", &buf);
if(ret < 0)
{
gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "stat info 
error");
}
else
{
gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info size is 
%lu, inode num is %lu", buf.st_size, buf.st_ino);
}
}


glusterd_perform_volinfo_version_action (volinfo, ac);
ret = glusterd_store_create_volume_dir (volinfo);
if (ret)
goto out;


...
}


So it is easy to understand why  the info or 10.32.1.144.-opt-lvmdir-c2-brick 
sometimes is empty.
It is becaue the info file is not exist, and it will be create by “fd = open 
(path, O_RDWR | O_CREAT | O_APPEND, 0600);” in function gf_store_handle_new.
And the info file is empty before rename.
So the info file is empty if glusterd shutdown before rename.
 



My questions are the following.
1. I did not find the point where the info file is removed. Could you tell me the
point where the info and /bricks/* files are removed?
2. Why are the info and bricks/* files removed, but other files in
/var/lib/glusterd/vols/xxx/ are not removed?

AFAIK, we never delete the info file and hence this file is opened with 
O_APPEND flag. As I said I will go back and cross check the code once again.






Thanks,
Xin



On 2016-11-11 20:34:05, "Atin Mukherjee"  wrote:





On Fri, Nov 11, 2016 at 4:00 PM, songxin  wrote:

Hi Atin,



Thank you for your support.
Sincerely wait for your reply.



Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Alexandr Porunov
I have installed it from rpm. No, that file isn't there. The folder
"/var/lib/glusterd/hooks/1/set/post/" is empty.

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan 
wrote:

> Did u install rpm or directly from sources. Can u check whether following
> script is present?
> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
>
> --
>
> Jiffin
>
>
> On 20/11/16 13:33, Alexandr Porunov wrote:
>
> To enable shared storage I used next command:
> # gluster volume set all cluster.enable-shared-storage enable
>
> But it seems that it doesn't create gluster_shared_storage automatically.
>
> # gluster volume status gluster_shared_storage
> Volume gluster_shared_storage does not exist
>
> Do I need to manually create a volume "gluster_shared_storage"? Do I need
> to manually create a folder "/var/run/gluster/shared_storage"? Do I need
> to manually mount it? Or something I don't need to do?
>
> If I use 6 cluster nodes and I need to have a shared storage on all of
> them then how to create a shared storage?
> It says that it have to be with replication 2 or replication 3. But if we
> use shared storage on all of 6 nodes then we have only 2 ways to create a
> volume:
> 1. Use replication 6
> 2. Use replication 3 with distribution.
>
> Which way I need to use?
>
> Sincerely,
> Alexandr
>
> On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan 
> wrote:
>
>>
>>
>> On 19/11/16 21:47, Alexandr Porunov wrote:
>>
>> Unfortunately I haven't this log file but I have
>> 'run-gluster-shared_storage.log' and it has errors I don't know why.
>>
>> Here is the content of the 'run-gluster-shared_storage.log':
>>
>>
>> Make sure shared storage is up and running using "gluster volume status
>> gluster_shared_storage"
>>
>> May be the issue is related to firewalld or iptables. Try it after
>> disabling them.
>>
>> --
>>
>> Jiffin
>>
>> [2016-11-19 10:37:01.581737] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
>> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>> [2016-11-19 10:37:01.641836] I [MSGID: 101190]
>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 1
>> [2016-11-19 10:37:01.642311] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
>> 0-glusterfs: failed to get the 'volume file' from server
>> [2016-11-19 10:37:01.642340] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
>> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>> [2016-11-19 10:37:01.642592] W [glusterfsd.c:1327:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f95cd309770]
>> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f95cda3afc6]
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f95cda34b4b] ) 0-:
>> received signum (0), shutting down
>> [2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse:
>> Unmounting '/run/gluster/shared_storage'.
>> [2016-11-19 10:37:18.798787] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
>> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>> [2016-11-19 10:37:18.813011] I [MSGID: 101190]
>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 1
>> [2016-11-19 10:37:18.813363] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
>> 0-glusterfs: failed to get the 'volume file' from server
>> [2016-11-19 10:37:18.813386] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
>> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>> [2016-11-19 10:37:18.813592] W [glusterfsd.c:1327:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f96ba4c7770]
>> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f96babf8fc6]
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f96babf2b4b] ) 0-:
>> received signum (0), shutting down
>>
>> [2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse:
>> Unmounting '/run/gluster/shared_storage'.
>> [2016-11-19 10:40:33.115685] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
>> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
>> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
>> [2016-11-19 10:40:33.124218] I [MSGID: 101190]
>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 1
>> [2016-11-19 10:40:33.124722] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
>> 0-glusterfs: failed to get the 'volume file' from server
>> [2016-11-19 10:40:33.124738] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
>> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>>
>>
>> [2016-11-19 10:40:33.124869] W [glusterfsd.c:1327:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f23576a9770]
>> -->/us

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan
Did you install from rpm or directly from sources? Can you check whether the 
following script is present?


/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
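
(A quick, illustrative way to check:)

# ls -l /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh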

--

Jiffin

On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume "gluster_shared_storage"? Do I 
need to manually create a folder "/var/run/gluster/shared_storage"? Do 
I need to manually mount it? Or something I don't need to do?


If I use 6 cluster nodes and I need to have a shared storage on all of 
them then how to create a shared storage?
It says that it have to be with replication 2 or replication 3. But if 
we use shared storage on all of 6 nodes then we have only 2 ways to 
create a volume:

1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan 
mailto:jthot...@redhat.com>> wrote:




On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I don't know why.

Here is the content of the 'run-gluster-shared_storage.log':



Make sure shared storage is up and running using "gluster volume
status  gluster_shared_storage"

May be the issue is related to firewalld or iptables. Try it after
disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f95cda34b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse:
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:37:18.798787] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:18.813011] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:37:18.813363] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:37:18.813386] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:18.813592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f96ba4c7770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f96babf8fc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f96babf2b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse:
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:40:33.115685] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:40:33.124218] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:40:33.124722] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:40:33.124738] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:40:33.124869] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f23576a9770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f2357ddafc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f2357dd4b4b] ) 0-: received s

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Alexandr Porunov
To enable shared storage I used the following command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume "gluster_shared_storage"? Do I need
to manually create the folder "/var/run/gluster/shared_storage"? Do I need to
manually mount it? Or is it all something I don't need to do?

If I use 6 cluster nodes and I need to have shared storage on all of them,
then how do I create the shared storage?
It says that it has to use replication 2 or replication 3. But if we
use shared storage on all 6 nodes, then we have only 2 ways to create such a
volume:
1. Use replication 6
2. Use replication 3 with distribution.

Which way do I need to use?
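
(For what it's worth, option 2 would look roughly like the following — host
names and brick paths are purely illustrative, not a recommendation:)

# gluster volume create my_shared_vol replica 3 \
    node1:/bricks/shared node2:/bricks/shared node3:/bricks/shared \
    node4:/bricks/shared node5:/bricks/shared node6:/bricks/shared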

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan 
wrote:

>
>
> On 19/11/16 21:47, Alexandr Porunov wrote:
>
> Unfortunately I haven't this log file but I have
> 'run-gluster-shared_storage.log' and it has errors I don't know why.
>
> Here is the content of the 'run-gluster-shared_storage.log':
>
>
> Make sure shared storage is up and running using "gluster volume status
> gluster_shared_storage"
>
> May be the issue is related to firewalld or iptables. Try it after
> disabling them.
>
> --
>
> Jiffin
>
> [2016-11-19 10:37:01.581737] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
> [2016-11-19 10:37:01.641836] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2016-11-19 10:37:01.642311] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
> 0-glusterfs: failed to get the 'volume file' from server
> [2016-11-19 10:37:01.642340] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
> [2016-11-19 10:37:01.642592] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f95cd309770]
> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f95cda3afc6]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f95cda34b4b] ) 0-:
> received signum (0), shutting down
> [2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse:
> Unmounting '/run/gluster/shared_storage'.
> [2016-11-19 10:37:18.798787] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
> [2016-11-19 10:37:18.813011] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2016-11-19 10:37:18.813363] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
> 0-glusterfs: failed to get the 'volume file' from server
> [2016-11-19 10:37:18.813386] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
> [2016-11-19 10:37:18.813592] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f96ba4c7770]
> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f96babf8fc6]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f96babf2b4b] ) 0-:
> received signum (0), shutting down
>
> [2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse:
> Unmounting '/run/gluster/shared_storage'.
> [2016-11-19 10:40:33.115685] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
> [2016-11-19 10:40:33.124218] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2016-11-19 10:40:33.124722] E [glusterfsd-mgmt.c:1586:mgmt_getspec_cbk]
> 0-glusterfs: failed to get the 'volume file' from server
> [2016-11-19 10:40:33.124738] E [glusterfsd-mgmt.c:1686:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:gluster_shared_storage)
>
>
> [2016-11-19 10:40:33.124869] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f23576a9770]
> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f2357ddafc6]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f2357dd4b4b] ) 0-:
> received signum (0), shutting down
>
> [2016-11-19 10:40:33.124896] I [fuse-bridge.c:5793:fini] 0-fuse:
> Unmounting '/run/gluster/shared_storage'.
> [2016-11-19 10:44:36.029838] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.5
> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1
> --volfile-id=gluster_shared_storage /run/gluster/shared_storage)
> [201