Re: [Gluster-users] Question about Healing estimated time ...

2019-08-29 Thread Anand Malagi
Can someone please respond ??

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 9552527199

From: Anand Malagi
Sent: Wednesday, August 28, 2019 5:13 PM
To: gluster-users@gluster.org; Gluster Devel 
Subject: Question about Healing estimated time ...

Hi Gluster Team,

I have a Distributed-Disperse Gluster volume which uses erasure coding. I 
brought down two of the bricks within a sub-volume (4+2 config), then generated 
some data, which obviously will not be written to the two bricks that are down.

However, before bringing them back up and letting them heal, is there a way to know how 
much time the heal will take, or a way to measure the healing time?
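
One rough approach (a sketch only, not a built-in ETA feature; the volume name 
"myvol" and the 60-second interval are placeholders) is to sample the pending 
heal count twice with "gluster volume heal <VOL> statistics heal-count" and 
extrapolate:

#!/bin/bash
# Rough heal-time estimate: sample the pending heal count twice and extrapolate.
# VOL and INTERVAL are placeholders; adjust to your setup.
VOL=${1:-myvol}
INTERVAL=${2:-60}   # seconds between samples

pending() {
    # Sum the per-brick "Number of entries" reported by heal-count.
    gluster volume heal "$VOL" statistics heal-count \
        | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}'
}

before=$(pending)
sleep "$INTERVAL"
after=$(pending)
healed=$(( before - after ))

if [ "$healed" -le 0 ]; then
    echo "No measurable progress in ${INTERVAL}s; try a longer interval."
else
    echo "Pending entries: $after (healed $healed in ${INTERVAL}s)"
    echo "Rough ETA: ~$(( after * INTERVAL / healed ))s, assuming a steady heal rate"
fi

Note that heal-count only reports the number of pending entries, not their 
sizes, so this is only a ballpark figure.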


Thanks and Regards,
--Anand
Extn : 6974
Mobile : 9552527199

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replace-brick operation issue...

2019-01-09 Thread Anand Malagi
Can I please get some help in understanding the issue mentioned ?

From: Anand Malagi
Sent: Monday, December 31, 2018 1:39 PM
To: 'Anand Malagi' ; gluster-users@gluster.org
Subject: RE: replace-brick operation issue...

Can someone please help here ??

From: gluster-users-boun...@gluster.org On Behalf Of Anand Malagi
Sent: Friday, December 21, 2018 3:44 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] replace-brick operation issue...

Hi Friends,

Please note that when a replace-brick operation was tried for one of the bad 
bricks in a distributed disperse (EC) volume, the command actually failed, 
but the brick daemon of the new replacement brick came online.
Please help me understand in which situations this issue may arise, and suggest 
a solution if possible:


glusterd.log  :



[2018-12-11 11:04:43.774120] I [MSGID: 106503] 
[glusterd-replace-brick.c:147:__glusterd_handle_replace_brick] 0-management: 
Received replace-brick commit force request.

[2018-12-11 11:04:44.784578] I [MSGID: 106504] 
[glusterd-utils.c:13079:rb_update_dstbrick_port] 0-glusterd: adding dst-brick 
port no 0

...

[2018-12-11 11:04:46.457537] E [MSGID: 106029] 
[glusterd-utils.c:7981:glusterd_brick_signal] 0-glusterd: Unable to open 
pidfile: 
/var/run/gluster/vols/AM6_HyperScale/am6sv0004sds.saipemnet.saipem.intranet-ws-disk3-ws_brick.pid
 [No such file or directory]

[2018-12-11 11:04:53.089810] I [glusterd-utils.c:5876:glusterd_brick_start] 
0-management: starting a fresh brick process for brick /ws/disk15/ws_brick

...

[2018-12-11 11:04:53.117935] W [socket.c:595:__socket_rwv] 0-socket.management: 
writev on 127.0.0.1:864 failed (Broken pipe)

[2018-12-11 11:04:54.014023] I [socket.c:2465:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now

[2018-12-11 11:04:54.273190] I [MSGID: 106005] 
[glusterd-handler.c:6120:__glusterd_brick_rpc_notify] 0-management: Brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick has disconnected 
from glusterd.

[2018-12-11 11:04:54.297603] E [MSGID: 106116] 
[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on 
am6sv0006sds.saipemnet.saipem.intranet. Please check log file for details.

[2018-12-11 11:04:54.350666] I [MSGID: 106143] 
[glusterd-pmap.c:278:pmap_registry_bind] 0-pmap: adding brick 
/ws/disk15/ws_brick on port 49164

[2018-12-11 11:05:01.137449] E [MSGID: 106123] 
[glusterd-mgmt.c:1519:glusterd_mgmt_v3_commit] 0-management: Commit failed on 
peers

[2018-12-11 11:05:01.137496] E [MSGID: 106123] 
[glusterd-replace-brick.c:660:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
 0-management: Commit Op Failed

[2018-12-11 11:06:12.275867] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale

[2018-12-11 13:35:51.529365] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale



gluster volume replace-brick AM6_HyperScale 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk3/ws_brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick commit force
Replace brick failure, brick [/ws/disk3], volume [AM6_HyperScale]

"gluster volume status" now shows a new disk active /ws/disk15

The replacement appears to be successful, looks like healing started
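
To confirm whether the commit actually went through on every peer (the log above 
shows "Commit failed on am6sv0006sds..."), a few quick checks along these lines 
may help. This is only a rough sketch; the volume and brick names are copied from 
the command above:

#!/bin/bash
# Post-mortem checks after a failed "replace-brick ... commit force".
VOL=AM6_HyperScale

# 1. Does the volume definition still reference the old brick (disk3),
#    or only the new one (disk15)? Run this on each node: a commit
#    failure on a peer often means the brick lists now differ between nodes.
gluster volume info "$VOL" | grep -E 'disk3|disk15'

# 2. Are all peers connected?
gluster peer status

# 3. Is self-heal actually repopulating the new brick?
gluster volume heal "$VOL" info

The glusterd.log on the peer where the commit failed (am6sv0006sds) is likely 
the place to look for the underlying reason.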



Thanks and Regards,
--Anand

Re: [Gluster-users] replace-brick operation issue...

2018-12-31 Thread Anand Malagi
Can someone please help here ??

From: gluster-users-boun...@gluster.org On Behalf Of Anand Malagi
Sent: Friday, December 21, 2018 3:44 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] replace-brick operation issue...

Hi Friends,

Please note that when a replace-brick operation was tried for one of the bad 
bricks in a distributed disperse (EC) volume, the command actually failed, 
but the brick daemon of the new replacement brick came online.
Please help me understand in which situations this issue may arise, and suggest 
a solution if possible:


glusterd.log  :



[2018-12-11 11:04:43.774120] I [MSGID: 106503] 
[glusterd-replace-brick.c:147:__glusterd_handle_replace_brick] 0-management: 
Received replace-brick commit force request.

[2018-12-11 11:04:44.784578] I [MSGID: 106504] 
[glusterd-utils.c:13079:rb_update_dstbrick_port] 0-glusterd: adding dst-brick 
port no 0

...

[2018-12-11 11:04:46.457537] E [MSGID: 106029] 
[glusterd-utils.c:7981:glusterd_brick_signal] 0-glusterd: Unable to open 
pidfile: 
/var/run/gluster/vols/AM6_HyperScale/am6sv0004sds.saipemnet.saipem.intranet-ws-disk3-ws_brick.pid
 [No such file or directory]

[2018-12-11 11:04:53.089810] I [glusterd-utils.c:5876:glusterd_brick_start] 
0-management: starting a fresh brick process for brick /ws/disk15/ws_brick

...

[2018-12-11 11:04:53.117935] W [socket.c:595:__socket_rwv] 0-socket.management: 
writev on 127.0.0.1:864 failed (Broken pipe)

[2018-12-11 11:04:54.014023] I [socket.c:2465:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now

[2018-12-11 11:04:54.273190] I [MSGID: 106005] 
[glusterd-handler.c:6120:__glusterd_brick_rpc_notify] 0-management: Brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick has disconnected 
from glusterd.

[2018-12-11 11:04:54.297603] E [MSGID: 106116] 
[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on 
am6sv0006sds.saipemnet.saipem.intranet. Please check log file for details.

[2018-12-11 11:04:54.350666] I [MSGID: 106143] 
[glusterd-pmap.c:278:pmap_registry_bind] 0-pmap: adding brick 
/ws/disk15/ws_brick on port 49164

[2018-12-11 11:05:01.137449] E [MSGID: 106123] 
[glusterd-mgmt.c:1519:glusterd_mgmt_v3_commit] 0-management: Commit failed on 
peers

[2018-12-11 11:05:01.137496] E [MSGID: 106123] 
[glusterd-replace-brick.c:660:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
 0-management: Commit Op Failed

[2018-12-11 11:06:12.275867] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale

[2018-12-11 13:35:51.529365] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale



gluster volume replace-brick AM6_HyperScale 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk3/ws_brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick commit force
Replace brick failure, brick [/ws/disk3], volume [AM6_HyperScale]

"gluster volume status" now shows a new disk active /ws/disk15

The replacement appears to be successful, looks like healing started



Thanks and Regards,
--Anand

[Gluster-users] replace-brick operation issue...

2018-12-21 Thread Anand Malagi
Hi Friends,

Please note that when a replace-brick operation was tried for one of the bad 
bricks in a distributed disperse (EC) volume, the command actually failed, 
but the brick daemon of the new replacement brick came online.
Please help me understand in which situations this issue may arise, and suggest 
a solution if possible:


glusterd.log  :



[2018-12-11 11:04:43.774120] I [MSGID: 106503] 
[glusterd-replace-brick.c:147:__glusterd_handle_replace_brick] 0-management: 
Received replace-brick commit force request.

[2018-12-11 11:04:44.784578] I [MSGID: 106504] 
[glusterd-utils.c:13079:rb_update_dstbrick_port] 0-glusterd: adding dst-brick 
port no 0

...

[2018-12-11 11:04:46.457537] E [MSGID: 106029] 
[glusterd-utils.c:7981:glusterd_brick_signal] 0-glusterd: Unable to open 
pidfile: 
/var/run/gluster/vols/AM6_HyperScale/am6sv0004sds.saipemnet.saipem.intranet-ws-disk3-ws_brick.pid
 [No such file or directory]

[2018-12-11 11:04:53.089810] I [glusterd-utils.c:5876:glusterd_brick_start] 
0-management: starting a fresh brick process for brick /ws/disk15/ws_brick

...

[2018-12-11 11:04:53.117935] W [socket.c:595:__socket_rwv] 0-socket.management: 
writev on 127.0.0.1:864 failed (Broken pipe)

[2018-12-11 11:04:54.014023] I [socket.c:2465:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now

[2018-12-11 11:04:54.273190] I [MSGID: 106005] 
[glusterd-handler.c:6120:__glusterd_brick_rpc_notify] 0-management: Brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick has disconnected 
from glusterd.

[2018-12-11 11:04:54.297603] E [MSGID: 106116] 
[glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Commit failed on 
am6sv0006sds.saipemnet.saipem.intranet. Please check log file for details.

[2018-12-11 11:04:54.350666] I [MSGID: 106143] 
[glusterd-pmap.c:278:pmap_registry_bind] 0-pmap: adding brick 
/ws/disk15/ws_brick on port 49164

[2018-12-11 11:05:01.137449] E [MSGID: 106123] 
[glusterd-mgmt.c:1519:glusterd_mgmt_v3_commit] 0-management: Commit failed on 
peers

[2018-12-11 11:05:01.137496] E [MSGID: 106123] 
[glusterd-replace-brick.c:660:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
 0-management: Commit Op Failed

[2018-12-11 11:06:12.275867] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale

[2018-12-11 13:35:51.529365] I [MSGID: 106499] 
[glusterd-handler.c:4370:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume AM6_HyperScale



gluster volume replace-brick AM6_HyperScale 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk3/ws_brick 
am6sv0004sds.saipemnet.saipem.intranet:/ws/disk15/ws_brick commit force
Replace brick failure, brick [/ws/disk3], volume [AM6_HyperScale]

"gluster volume status" now shows a new disk active /ws/disk15

The replacement appears to be successful, looks like healing started



Thanks and Regards,
--Anand

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bricks to sub-volume mapping

2018-01-09 Thread Anand Malagi
Ok. Thank you very much...

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173

From: Aravinda [mailto:avish...@redhat.com]
Sent: 09 January 2018 12:47
To: Anand Malagi ; gluster-users@gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping

No, we don't store the information separately, but it can easily be predicted 
from the Volume Info.

For example, in the Volume Info below, "Number of Bricks" is shown in the 
following format:

Number of Subvols x (Number of Data bricks + Number of Redundancy bricks) = 
Total Bricks

Note: sub-volumes are predictable without storing them as separate info, since 
we do not have a concept of mixing different sub-volume types within a single 
volume (except in the case of tiering). But in the future we may support 
sub-volumes with multiple types within a volume (see the Glusterd2 issue 
https://github.com/gluster/glusterd2/issues/388)
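
If needed, the grouping can also be derived mechanically from the Volume Info 
output. A small sketch (the volume name is a placeholder, and it assumes the 
"N x (K + R)" format described above):

#!/bin/bash
# Group the brick list into sub-volumes using the "N x (K + R)" figure
# from "gluster volume info". The volume name is a placeholder.
VOL=${1:-myvol}

gluster volume info "$VOL" | awk '
    /Number of Bricks:/ {
        # e.g. "Number of Bricks: 2 x (4 + 2) = 12" -> 4 + 2 = 6 bricks per sub-volume
        line = $0
        gsub(/[^0-9 ]/, " ", line)
        split(line, a, " ")
        per = a[2] + a[3]
    }
    /^Brick[0-9]+:/ {
        print "subvol-" int(n / per) ": " $2
        n++
    }'

For the quoted volume below, this would print subvol-0 for Brick1 through Brick6 
and subvol-1 for Brick7 through Brick12, matching the "first 6 / next 6" grouping.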


On Tuesday 09 January 2018 12:33 PM, Anand Malagi wrote:
But do we store this information somewhere as part of gluster metadata or 
something...

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173

From: Aravinda [mailto:avish...@redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping

The first 6 bricks belong to the first sub-volume and the next 6 bricks belong to the second.

On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
Hi Team,

Please let me know how I can find out which bricks are part of which sub-volumes in 
the case of a disperse volume. For example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2: pdchyperscale2sds:/ws/disk1/ws_brick
Brick3: pdchyperscale3sds:/ws/disk1/ws_brick
Brick4: pdchyperscale1sds:/ws/disk2/ws_brick
Brick5: pdchyperscale2sds:/ws/disk2/ws_brick
Brick6: pdchyperscale3sds:/ws/disk2/ws_brick
Brick7: pdchyperscale1sds:/ws/disk3/ws_brick
Brick8: pdchyperscale2sds:/ws/disk3/ws_brick
Brick9: pdchyperscale3sds:/ws/disk3/ws_brick
Brick10: pdchyperscale1sds:/ws/disk4/ws_brick
Brick11: pdchyperscale2sds:/ws/disk4/ws_brick
Brick12: pdchyperscale3sds:/ws/disk4/ws_brick

Please suggest how to know which bricks are part of the first and second sub-volumes.

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173





___

Gluster-users mailing list

Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users





--

regards

Aravinda VK




--

regards

Aravinda VK

http://aravindavk.in
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bricks to sub-volume mapping

2018-01-08 Thread Anand Malagi
But do we store this information somewhere as part of gluster metadata or 
something...

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173

From: Aravinda [mailto:avish...@redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi ; gluster-users@gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping

The first 6 bricks belong to the first sub-volume and the next 6 bricks belong to the second.

On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
Hi Team,

Please let me know how I can find out which bricks are part of which sub-volumes in 
the case of a disperse volume. For example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2: pdchyperscale2sds:/ws/disk1/ws_brick
Brick3: pdchyperscale3sds:/ws/disk1/ws_brick
Brick4: pdchyperscale1sds:/ws/disk2/ws_brick
Brick5: pdchyperscale2sds:/ws/disk2/ws_brick
Brick6: pdchyperscale3sds:/ws/disk2/ws_brick
Brick7: pdchyperscale1sds:/ws/disk3/ws_brick
Brick8: pdchyperscale2sds:/ws/disk3/ws_brick
Brick9: pdchyperscale3sds:/ws/disk3/ws_brick
Brick10: pdchyperscale1sds:/ws/disk4/ws_brick
Brick11: pdchyperscale2sds:/ws/disk4/ws_brick
Brick12: pdchyperscale3sds:/ws/disk4/ws_brick

Please suggest how to know which bricks are part of the first and second sub-volumes.

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173




___

Gluster-users mailing list

Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users




--

regards

Aravinda VK
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Bricks to sub-volume mapping

2018-01-08 Thread Anand Malagi
Hi Team,

Please let me know how I can find out which bricks are part of which sub-volumes in 
the case of a disperse volume. For example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2: pdchyperscale2sds:/ws/disk1/ws_brick
Brick3: pdchyperscale3sds:/ws/disk1/ws_brick
Brick4: pdchyperscale1sds:/ws/disk2/ws_brick
Brick5: pdchyperscale2sds:/ws/disk2/ws_brick
Brick6: pdchyperscale3sds:/ws/disk2/ws_brick
Brick7: pdchyperscale1sds:/ws/disk3/ws_brick
Brick8: pdchyperscale2sds:/ws/disk3/ws_brick
Brick9: pdchyperscale3sds:/ws/disk3/ws_brick
Brick10: pdchyperscale1sds:/ws/disk4/ws_brick
Brick11: pdchyperscale2sds:/ws/disk4/ws_brick
Brick12: pdchyperscale3sds:/ws/disk4/ws_brick

Please suggest how to know which bricks are part of the first and second sub-volumes.

Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users