Abhishek,

rebooting the board does wipe the /var/lib/glusterd contents in your setup,
right (as per my earlier conversation with you)? In that case, how are you
ensuring that the same node gets back its old UUID? If you don't, then
this is bound to happen.
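To make the failure mode concrete: glusterd stores the node's identity (its UUID) in /var/lib/glusterd/glusterd.info. If that directory is lost on reboot, glusterd generates a fresh UUID on the next start, and the other peer then sees two entries for the same host. A minimal, runnable sketch of the save/restore idea is below; it uses temp directories as stand-ins for /var/lib/glusterd and persistent storage, and the glusterd.info content is a simplified example, so treat the exact paths and file layout as assumptions to verify on your system:

```shell
#!/bin/sh
# Sketch: preserve the glusterd UUID across a reboot that wipes
# /var/lib/glusterd. Temp dirs stand in for the real paths so this
# runs anywhere without touching a live gluster install.

GLUSTERD_DIR=$(mktemp -d)   # stand-in for /var/lib/glusterd
PERSIST_DIR=$(mktemp -d)    # stand-in for storage that survives reboot

# Simulate the identity file glusterd writes on first start
# (simplified; the real file carries additional keys).
printf 'UUID=5be8603b-18d0-4333-8590-38f918a22857\n' \
    > "$GLUSTERD_DIR/glusterd.info"

# Before reboot: copy the identity file somewhere persistent.
cp "$GLUSTERD_DIR/glusterd.info" "$PERSIST_DIR/glusterd.info"

# Simulate the reboot wiping /var/lib/glusterd.
rm -rf "$GLUSTERD_DIR"
mkdir -p "$GLUSTERD_DIR"

# After reboot, before glusterd starts: restore the identity file so
# the node comes back with the same UUID instead of minting a new one.
cp "$PERSIST_DIR/glusterd.info" "$GLUSTERD_DIR/glusterd.info"

grep '^UUID=' "$GLUSTERD_DIR/glusterd.info"
```

In a real deployment the restore step would have to run before the glusterd service starts (for example from an init script), otherwise glusterd will already have generated a new UUID.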

On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> Please look into this problem, as it is seen very widely in our
> system.
>
> We have a replicated volume with two bricks, but after
> restarting the second board I am getting a duplicate entry in the "gluster
> peer status" output, as shown below:
>
>
> # gluster peer status
> Number of Peers: 2
>
> Hostname: 10.32.0.48
> Uuid: 5be8603b-18d0-4333-8590-38f918a22857
> State: Peer in Cluster (Connected)
>
> Hostname: 10.32.0.48
> Uuid: 5be8603b-18d0-4333-8590-38f918a22857
> State: Peer in Cluster (Connected)
> #
>
> I am attaching all logs from both the boards and the command outputs as
> well.
>
> So could you please check why we end up in this situation? It occurs
> very frequently, in multiple cases.
>
> Also, we are not replacing any board in the setup, just rebooting it.
>
> --
>
> Regards
> Abhishek Paliwal
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 

~ Atin (atinm)