Re: [Gluster-devel] [Gluster-users] Impact of force option in remove-brick

2016-03-22 Thread Gaurav Garg
>> I just want to know what is the difference in the following scenario: 

1. remove-brick without the force option 
2. remove-brick with the force option 


remove-brick without the force option will perform the task according to the
sub-command you give. For example, remove-brick start will start migrating
files from the given brick to the other available bricks in the cluster. You
can check the status of this remove-brick task by issuing the remove-brick
status command.
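
For reference, a minimal sketch of that workflow, assuming a distribute
volume named VOLNAME and a brick HOST:/brick1 (adjust the names to your
setup):

    # start migrating data off the brick
    gluster volume remove-brick VOLNAME HOST:/brick1 start

    # watch the migration until it reports "completed"
    gluster volume remove-brick VOLNAME HOST:/brick1 status

    # only then finalize the removal
    gluster volume remove-brick VOLNAME HOST:/brick1 commit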

But remove-brick with the force option will just forcefully remove the brick
from the cluster. It will result in data loss in the case of a distributed
volume, because it will not migrate files from the given brick to the other
available bricks in the cluster. In the case of a replicate volume you might
not have a problem with remove-brick force, because later, after adding a
new brick, you can issue the heal command and sync the files from the first
replica set to the newly added brick.
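
A hedged example of that replicate workflow, assuming a replica 2 volume
VOLNAME and hypothetical host/brick paths:

    # forcefully drop one replica brick, reducing the replica count to 1
    gluster volume remove-brick VOLNAME replica 1 HOST2:/brick2 force

    # later, add a new brick and restore replica 2
    gluster volume add-brick VOLNAME replica 2 HOST3:/brick3

    # trigger a full self-heal so the surviving copy syncs to the new brick
    gluster volume heal VOLNAME full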

Thanks,

~Gaurav

- Original Message -
From: "ABHISHEK PALIWAL" 
To: gluster-us...@gluster.org, gluster-devel@gluster.org
Sent: Tuesday, March 22, 2016 11:35:52 AM
Subject: [Gluster-users] Impact of force option in remove-brick

Hi Team, 

I have the following scenario: 

1. I have one replica 2 volume in which two bricks are available.
2. In a certain permutation and combination, the UUIDs of the peers got mismatched.
3. Because of the UUID mismatch, when I tried to remove the brick on the second
board I got an "Incorrect Brick" failure (see the commands just below for a way
to inspect the UUIDs).
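
As an aside, a quick way to inspect the UUIDs involved (standard glusterd
locations; paths may differ on your distribution):

    gluster peer status                  # UUIDs of the other peers
    cat /var/lib/glusterd/glusterd.info  # UUID of the local node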

Now, my question is: if I use the remove-brick command with the 'force'
option, does that mean it should remove the brick in any situation, whether
the brick is available or its UUID is mismatched?

I just want to know what is the difference in the following scenario: 

1. remove-brick without the force option 
2. remove-brick with the force option 


Regards 
Abhishek 

___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC on 8th March, 2016

2016-03-08 Thread Gaurav Garg
Hi All,

Following are the meeting minutes for today's Gluster community bug triage
meeting.


Meeting ended Tue Mar  8 12:58:43 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.html

Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.txt

Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.log.html


Meeting summary
---
* Roll call  (ggarg_, 12:02:27)
  * ACTION: kkeithley_ will come up with a proposal to reduce the number
of bugs against "mainline" in NEW state  (ggarg, 12:06:49)
  * LINK:
https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/
(ndevos, 12:08:31)
  * LINK:

https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/buildTimeTrend
(ndevos, 12:09:42)
  * ACTION: ndevos to continue work on  proposing  some test-cases for
minimal libgfapi test  (ggarg, 12:11:15)
  * ACTION: Manikandan and Nandaja will update on bug automation
(ggarg, 12:13:09)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage   (ggarg,
12:14:02)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1315422   (rafi,
12:18:41)
  * LINK: http://ur1.ca/om4jt   (rafi, 12:46:38)

Meeting ended at 12:58:43 UTC.



Action Items

* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* ndevos to continue work on  proposing  some test-cases for minimal
  libgfapi test
* Manikandan and Nandaja will update on bug automation



Action Items, by person
---
* Manikandan
  * Manikandan and Nandaja will update on bug automation
* ndevos
  * ndevos to continue work on  proposing  some test-cases for minimal
libgfapi test

  * kkeithley_ will come up with a proposal to reduce the number of bugs
against "mainline" in NEW state




People Present (lines said)
---
* ggarg (54)
* rafi (37)
* ndevos (15)
* obnox (15)
* ira (15)
* jiffin (13)
* Manikandan (9)
* Saravanakmr (6)
* glusterbot (6)
* zodbot (3)
* ggarg_ (3)
* hgowtham (1)


Thanks,

Regards,
Gaurav

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2016-03-08 Thread Gaurav Garg
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,

Regards,
Gaurav
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] test throws core intermittently: tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t

2015-12-09 Thread Gaurav Garg
Hi,

This issue has already been reported by the community, and it seems there is
a problem during cleanup when features.encryption is enabled.

Previous discussion on the same core:

http://nongnu.13855.n7.nabble.com/Upstream-regression-crash-https-build-gluster-org-job-rackspace-regression-2GB-triggered-16191-consol-td206079.html

I will look into this issue further.

Thanks,
Gaurav

- Original Message -
From: "Vijay Bellur" <vbel...@redhat.com>
To: "Michael Adam" <ob...@samba.org>, gluster-devel@gluster.org, "Gaurav Garg" 
<gg...@redhat.com>
Sent: Thursday, December 10, 2015 9:12:08 AM
Subject: Re: [Gluster-devel] test throws core intermittently: 
tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t

On 12/09/2015 07:33 PM, Michael Adam wrote:
> by
>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/16674/consoleFull
>
>

Gaurav - can you please check this test? It caused the baseline 
regression to fail as well:

https://build.gluster.org/job/regression-test-burn-in/47/console

Regards,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] net bsd regression test (tests/basic/mount-nfs-auth.t) failure in 3.7 branch

2015-11-20 Thread Gaurav Garg
Thank you Niels for this information :)

Thanx,

~Gaurav


- Original Message -
From: "Niels de Vos" <nde...@redhat.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Soumya Koduri" 
<skod...@redhat.com>
Sent: Friday, November 20, 2015 7:39:04 PM
Subject: Re: net bsd  regression test (tests/basic/mount-nfs-auth.t) failure in 
3.7 branch

On Fri, Nov 20, 2015 at 08:40:39AM -0500, Gaurav Garg wrote:
> Hi,
> 
> netbsd regression test (tests/basic/mount-nfs-auth.t) constantly failing on 
> GlusterFS Release-3.7 branch
> 
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/11889/consoleFull
> 
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/11878/consoleFull
> 
> It not related to this patch. Requesting nfs team to review it.

I've sent backports for this test case yesterday:

 - http://review.gluster.org/12663
 - http://review.gluster.org/12664

Once these have been merged, the test-case should be much more stable.

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] net bsd regression test (tests/basic/mount-nfs-auth.t) failure in 3.7 branch

2015-11-20 Thread Gaurav Garg
Hi,

The netbsd regression test (tests/basic/mount-nfs-auth.t) is consistently
failing on the GlusterFS release-3.7 branch:

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/11889/consoleFull

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/11878/consoleFull

It is not related to this patch. Requesting the NFS team to review it.

CCing the NFS team members.


Thanx,

Regards,

~Gaurav
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] spurious failure in ./tests/basic/ec/ec-readdir.t

2015-11-02 Thread Gaurav Garg
Hi,

The ./tests/basic/ec/ec-readdir.t test case seems to be failing spuriously in ec:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/15395/consoleFull

https://build.gluster.org/job/rackspace-regression-2GB-triggered/15388/consoleFull

https://build.gluster.org/job/rackspace-regression-2GB-triggered/15386/consoleFull



CCing the EC team members.


Thanx,

~Gaurav
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] not able to open gerrit (review.glustster.org)

2015-10-19 Thread Gaurav Garg
Yeah, it's working now.

- Original Message -
From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, October 19, 2015 1:29:30 PM
Subject: Re: [Gluster-devel] not able to open gerrit (review.glustster.org)

It's loading now.

- Original Message -
> From: "Gaurav Garg" <gg...@redhat.com>
> To: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Monday, October 19, 2015 11:11:37 AM
> Subject: [Gluster-devel] not able to open gerrit (review.glustster.org)
> 
> Anybody facing the same issue ?
> 
> Thanx,
> 
> ~Gaurav
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] not able to open gerrit (review.glustster.org)

2015-10-18 Thread Gaurav Garg
Is anybody facing the same issue?

Thanx,

~Gaurav
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t causing crash

2015-10-12 Thread Gaurav Garg
Hi, from the backtrace and log file it seems that the NFS process has
crashed. I would like to request the NFS team to look into it.

From the backtrace:

Core was generated by `/build/install/sbin/glusterfs -s localhost --volfile-id 
gluster/nfs -p /var/lib'.
Program terminated with signal SIGSEGV, Segmentation fault.

And from the NFS logs:

[2015-10-12 09:48:01.385455] I [MSGID: 112110] [nfs.c:1506:init] 0-nfs: NFS 
service started
[2015-10-12 09:48:01.386026] E [crypt.c:4298:master_set_master_vol_key] 
0-patchy-crypt: FATAL: missing master key
[2015-10-12 09:48:01.386057] E [MSGID: 101019] [xlator.c:424:xlator_init] 
0-patchy-crypt: Initialization of volume 'patchy-crypt' failed, review your 
volfile again
[2015-10-12 09:48:01.386091] E [MSGID: 101066] 
[graph.c:323:glusterfs_graph_init] 0-patchy-crypt: initializing translator 
failed
[2015-10-12 09:48:01.386128] E [MSGID: 101176] 
[graph.c:669:glusterfs_graph_activate] 0-graph: init failed
pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 
2015-10-12 09:48:01
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8dev
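
For anyone who wants to reproduce this analysis: the backtrace above comes
from loading the core into gdb against the same binary. A sketch, assuming
the regression build layout and a hypothetical core path:

    gdb /build/install/sbin/glusterfs /path/to/core
    (gdb) bt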




CCing the NFS team members.

Thanx,

Regards,

Gaurav

- Original Message -
From: "Atin Mukherjee" <amukh...@redhat.com>
To: "Gaurav Garg" <gg...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, October 12, 2015 3:28:29 PM
Subject: 
tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t
 causing crash

Mentioned test generated a core @
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14871/consoleFull

Please take a look.

Thanks,
Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t failing spuriously

2015-08-26 Thread Gaurav Garg
Hi Atin,

I will look into this issue.

Thanx,

Regards,
Gaurav Garg

- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, August 27, 2015 9:48:44 AM
Subject: tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t failing 
spuriously

Gaurav,

Could you look at the above test case? I can see it failing multiple
times now; one of the failures is at [1].
If you don't get to the RCA, I request you to move this to bad tests and
continue working on it. The same test is now added to the spurious failure
list [2].

[1]
https://build.gluster.org/job/rackspace-regression-2GB-triggered/13718/consoleFull

[2] https://public.pad.fsfe.org/p/gluster-spurious-failures

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] rpm build error in master

2015-08-24 Thread Gaurav Garg
Hi Anand,

I am also able to create RPMs on Fedora 22 with the latest GlusterFS code,
so I wonder why this erratic behavior is showing up on your system. I will
look into it.
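
One possible first step, purely an assumption and not a confirmed fix: the
"Installed (but unpackaged) file(s)" error below points at stale compiled
Python files, so clearing them and rebuilding may help:

    # remove leftover .pyc/.pyo files from a previous build, then retry
    find . -name '*.py[co]' -delete
    cd extras/LinuxRPM && make rpms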

Thanx,

Regards,
~Gaurav

- Original Message -
From: Anand Nekkunti anekk...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Monday, August 24, 2015 4:00:47 PM
Subject: [Gluster-devel] rpm build error in master

Hi,
  I am trying to build RPMs on master but it is failing with the below error:

RPM build errors:
 Installed (but unpackaged) file(s) found:
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.pyc
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.pyo
Makefile:573: recipe for target 'rpms' failed
make: *** [rpms] Error 1
make: Leaving directory '/home/blr/anekkunt/glusterfs/extras/LinuxRPM'

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] volfile change

2015-07-23 Thread Gaurav Garg
Hi Emmanuel,

ssl.dh-param (not yet committed) seems to restart the daemon while
ssl.cipher-list does not.

ssl.dh-param is an option that you are adding yourself, on top of the
current gluster code, based on your requirement.

So we need to check your patch first; only then can we say why the brick is
restarting. As of now, a brick restart is not expected after executing a
volume set command.

Thanx,
~Gaurav


- Original Message -
From: Emmanuel Dreyfus m...@netbsd.org
To: Gaurav Garg gg...@redhat.com
Cc: gluster-devel@gluster.org
Sent: Thursday, July 23, 2015 11:08:39 PM
Subject: Re: [Gluster-devel] volfile change

Gaurav Garg gg...@redhat.com wrote:

 could you tell me the name of the option which did brick restart?



-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] volfile change

2015-07-23 Thread Gaurav Garg
Hi Emmanuel,

Restarting an already running daemon is tied to things like restarting the
volume, restarting glusterd, or a change in the topology of the volfile
(e.g. removing/adding bricks). Performing these operations should not
restart the bricks; they should only restart the daemons.

Could you tell me the name of the option which did the brick restart?
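
A hedged way to check this from the CLI, using ssl.cipher-list (the option
mentioned elsewhere in this thread) with an example value, is to compare the
brick PIDs around the operation:

    gluster volume status VOLNAME   # note the brick PIDs
    gluster volume set VOLNAME ssl.cipher-list 'HIGH:!SSLv2'
    gluster volume status VOLNAME   # a changed PID means the brick restarted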

Thanx,
Gaurav

- Original Message -
From: Emmanuel Dreyfus m...@netbsd.org
To: gluster-devel@gluster.org
Sent: Thursday, July 23, 2015 1:42:45 PM
Subject: [Gluster-devel] volfile change

Hello

While testing, I noticed that some gluster volume set operations
caused a brick restart while others did not. Comparing the code
around both options, I see no difference.

How does it decide to restart a daemon?

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

2015-07-09 Thread Gaurav Garg
+nithya, +raghavendra,

- Original Message -
From: Gaurav Garg gg...@redhat.com
To: Joseph Fernandes josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 9, 2015 11:20:49 AM
Subject: Re: [Gluster-devel] Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 
1 new core files

Hi Joseph,

Looking at the backtrace, it seems that the rebalance process has crashed.


(gdb) bt
#0  0x0040e825 in glusterfs_rebalance_event_notify_cbk 
(req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
#1  0x7ff27abc66ab in saved_frames_unwind (saved_frames=0x1817ca0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:361
#2  0x7ff27abc674a in saved_frames_destroy (frames=0x1817ca0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:378
#3  0x7ff27abc6ba1 in rpc_clnt_connection_cleanup (conn=0x1816870)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:527
#4  0x7ff27abc75ad in rpc_clnt_notify (trans=0x1816cb0, mydata=0x1816870, 
event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:836
#5  0x7ff27abc3ad7 in rpc_transport_notify (this=0x1816cb0, 
event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
#6  0x7ff2703b3101 in socket_event_poll_err (this=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:1200
#7  0x7ff2703b7e2c in socket_event_handler (fd=9, idx=1, data=0x1816cb0, 
poll_in=1, poll_out=0, poll_err=24)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2405
#8  0x7ff27ae779f8 in event_dispatch_epoll_handler (event_pool=0x17dbc90, 
event=0x7ff26ed90e70)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
#9  0x7ff27ae77de6 in event_dispatch_epoll_worker (data=0x1817e70)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
#10 0x7ff27a0de9d1 in start_thread () from ./lib64/libpthread.so.0
#11 0x7ff279a488fd in clone () from ./lib64/libc.so.6
(gdb) f 0
#0  0x0040e825 in glusterfs_rebalance_event_notify_cbk 
(req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
1725        in 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c
(gdb) p myframe
$7 = (void *) 0x7ff25000175c
(gdb) p *myframe
Attempt to dereference a generic pointer.
(gdb) p $3.this
$8 = (xlator_t *) 0x174000
(gdb) p (xlator_t *)$3.this
$9 = (xlator_t *) 0x174000
(gdb) p *(xlator_t *)$3.this
Cannot access memory at address 0x174000
(gdb) p (call_frame_t *)myframe
$2 = (call_frame_t *) 0x7ff25000175c
(gdb) p *(call_frame_t *)myframe
$3 = {root = 0x174000, parent = 0xadc0de7ff250, frames = {next = 0x25c90de,
    prev = 0x307ff268}, local = 0xac00, this = 0x174000,
  ret = 0xadc0de7ff250, ref_count = 222, lock = 39620608,
  cookie = 0x307ff268, complete = _gf_false, op = 44032,
  begin = {tv_sec = 6544293208522752, tv_usec = -5926493018029821360},
  end = {tv_sec = 170169215607636190, tv_usec = 52776566518376},
  wind_from = 0xac00 <error: Cannot access memory at address 0xac00>,
  wind_to = 0x174000 <error: Cannot access memory at address 0x174000>,
  unwind_from = 0xff7ff250 <error: Cannot access memory at address 0xff7ff250>,
  unwind_to = 0x <error: Cannot access memory at address 0x>}
(gdb) p *iov
$4 = {iov_base = 0x0, iov_len = 0}
(gdb) p *req
$5 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0},
    {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0,
  rsp = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}},
  rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0,
    datalen = 0, authdata = '\000' <repeats 399 times>},
  prog = 0x615580 <clnt_handshake_prog>, procnum = 5,
  cbkfn = 0x40e7ce <glusterfs_rebalance_event_notify_cbk>,
  conn_private = 0x0}
(gdb) 
$6 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0},
    {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0,
  rsp = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}},
  rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0,
    datalen = 0, authdata = '\000' <repeats 399 times>},
  prog = 0x615580

Re: [Gluster-devel] Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

2015-07-08 Thread Gaurav Garg
Yeah, sure, I will look into this issue.

Regards,
Gaurav

- Original Message -
From: Joseph Fernandes josfe...@redhat.com
To: Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Atin Mukherjee 
amukh...@redhat.com
Sent: Wednesday, July 8, 2015 10:51:07 PM
Subject: Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

Hi Gaurav,

Could you please look into this

http://build.gluster.org/job/rackspace-regression-2GB-triggered/12126/consoleFull

Regards,
Joe
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

2015-07-08 Thread Gaurav Garg
Hi Joseph,

Looking at the backtrace, it seems that the rebalance process has crashed.


(gdb) bt
#0  0x0040e825 in glusterfs_rebalance_event_notify_cbk 
(req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
#1  0x7ff27abc66ab in saved_frames_unwind (saved_frames=0x1817ca0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:361
#2  0x7ff27abc674a in saved_frames_destroy (frames=0x1817ca0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:378
#3  0x7ff27abc6ba1 in rpc_clnt_connection_cleanup (conn=0x1816870)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:527
#4  0x7ff27abc75ad in rpc_clnt_notify (trans=0x1816cb0, mydata=0x1816870, 
event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:836
#5  0x7ff27abc3ad7 in rpc_transport_notify (this=0x1816cb0, 
event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
#6  0x7ff2703b3101 in socket_event_poll_err (this=0x1816cb0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:1200
#7  0x7ff2703b7e2c in socket_event_handler (fd=9, idx=1, data=0x1816cb0, 
poll_in=1, poll_out=0, poll_err=24)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2405
#8  0x7ff27ae779f8 in event_dispatch_epoll_handler (event_pool=0x17dbc90, 
event=0x7ff26ed90e70)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
#9  0x7ff27ae77de6 in event_dispatch_epoll_worker (data=0x1817e70)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
#10 0x7ff27a0de9d1 in start_thread () from ./lib64/libpthread.so.0
#11 0x7ff279a488fd in clone () from ./lib64/libc.so.6
(gdb) f 0
#0  0x0040e825 in glusterfs_rebalance_event_notify_cbk 
(req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
1725        in 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c
(gdb) p myframe
$7 = (void *) 0x7ff25000175c
(gdb) p *myframe
Attempt to dereference a generic pointer.
(gdb) p $3.this
$8 = (xlator_t *) 0x174000
(gdb) p (xlator_t *)$3.this
$9 = (xlator_t *) 0x174000
(gdb) p *(xlator_t *)$3.this
Cannot access memory at address 0x174000
(gdb) p (call_frame_t *)myframe
$2 = (call_frame_t *) 0x7ff25000175c
(gdb) p *(call_frame_t *)myframe
$3 = {root = 0x174000, parent = 0xadc0de7ff250, frames = {next = 0x25c90de,
    prev = 0x307ff268}, local = 0xac00, this = 0x174000,
  ret = 0xadc0de7ff250, ref_count = 222, lock = 39620608,
  cookie = 0x307ff268, complete = _gf_false, op = 44032,
  begin = {tv_sec = 6544293208522752, tv_usec = -5926493018029821360},
  end = {tv_sec = 170169215607636190, tv_usec = 52776566518376},
  wind_from = 0xac00 <error: Cannot access memory at address 0xac00>,
  wind_to = 0x174000 <error: Cannot access memory at address 0x174000>,
  unwind_from = 0xff7ff250 <error: Cannot access memory at address 0xff7ff250>,
  unwind_to = 0x <error: Cannot access memory at address 0x>}
(gdb) p *iov
$4 = {iov_base = 0x0, iov_len = 0}
(gdb) p *req
$5 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0},
    {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0,
  rsp = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}},
  rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0,
    datalen = 0, authdata = '\000' <repeats 399 times>},
  prog = 0x615580 <clnt_handshake_prog>, procnum = 5,
  cbkfn = 0x40e7ce <glusterfs_rebalance_event_notify_cbk>,
  conn_private = 0x0}
(gdb) 
$6 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0},
    {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0,
  rsp = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}},
  rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0,
    datalen = 0, authdata = '\000' <repeats 399 times>},
  prog = 0x615580 <clnt_handshake_prog>, procnum = 5,
  cbkfn = 0x40e7ce <glusterfs_rebalance_event_notify_cbk>,
  conn_private = 0x0}


This means that the frame (including its 'this' xlator pointer) has been
corrupted.

CCing the rebalance folks to look into this.

Regards,
Gaurav





- Original Message -
From: Gaurav Garg gg...@redhat.com
To: Joseph Fernandes josfe...@redhat.com

Re: [Gluster-devel] [Gluster-infra] Reduce regression runs wait time - New gerrit/review work flow

2015-06-15 Thread Gaurav Garg
Hi,

A somewhat cleaner way:

Can we have a small change in this flow ? 
==
What is proposed now: ( as per my understanding) 

Reviewer1 gives +1
Reviewer2 gives +1

Maintainer gives +2 (for merge)

Now, regression triggered = Regression failed. 

The idea is good, but I think it makes running a regression test more
dependent on the maintainer. A maintainer will give +2 only after reviewing
the full patch. If the maintainer is busy with other, higher-priority work,
then reviewing this patch (and giving +2 if the patch is good) might be
delayed, and that in turn delays the regression run.

It would be better if any reviewer's +1 (given after reviewing the patch)
triggered the regression test.

Another idea:

http://www.gluster.org/pipermail/gluster-devel/2014-May/040822.html

We could have Docker-based regression test runs to improve our regression
test time.

Thanx,

Regards,
Gaurav Garg




- Original Message -
From: Saravanakumar Arumugam sarum...@redhat.com
To: Kaushal M kshlms...@gmail.com, Atin Mukherjee amukh...@redhat.com
Cc: gluster-infra gluster-in...@gluster.org, Gluster Devel 
gluster-devel@gluster.org
Sent: Monday, June 15, 2015 10:08:46 PM
Subject: Re: [Gluster-devel] [Gluster-infra] Reduce regression runs wait time - 
New gerrit/review work flow

Hi,

 - Developer pushes change to Gerrit.
   - Zuul is notified by Gerrit of new change
 - Zuul runs pre-review checks on Jenkins. This will be the current smoke 
 tests.
   - Zuul reports back status of the checks to Gerrit.
 - If checks fail, developer will need to resend the change after
 the required fixes. The process starts once more.
 - If the checks pass, the change is now ready for review
 - The change is now reviewed by other developers and maintainers.
 Non-maintainers will be able to give only a +1 review.
   - On a negative review, the developer will need to rework the change
 and resend it. The process starts once more.
 - The maintainer give a +2 review once he/she is satisfied. The
 maintainers work is done here.
   - Zuul is notified of the +2 review
 - Zuul runs the regression runs and reports back the status.
   - If the regression runs fail, the process starts over again.
   - If the runs pass, the change is ready for acceptance.
 - Zuul will pick the change into the repository.
   - If the pick fails, Zuul will report back the failure, and the
 process starts once again.

+2 for the idea.

Can we have a small change in this flow ? 
==
What is proposed now: ( as per my understanding) 

Reviewer1 gives +1
Reviewer2 gives +1

Maintainer gives +2 (for merge)

Now, regression triggered = Regression failed. 

So, code is again changed by Developer.

Now, review needs to be done by Reviewer1/Reviewer2/Maintainer.
==
A small change in the proposal:

Reviewer1 gives +1

A single +1 is enough to get Regression Triggered.
  Lets say immediately Regression triggered and Failed.

So, developer Re-submit his/her changes.

Goes through Reviewer1, Reviewer2, Maintainer.

==

How this helps? 
It does not go through the process from the beginning (especially when there
is a regression failure).
  - If the regression runs fail, the process starts over again.

Thanks,
Saravanakumar

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] scrubber crash

2015-06-01 Thread Gaurav Garg


- Original Message -
From: Venky Shankar vshan...@redhat.com
To: gg...@redhat.com, anekk...@redhat.com
Cc: gluster-devel@gluster.org
Sent: Monday, June 1, 2015 3:28:21 PM
Subject: Re: [Gluster-devel] scrubber crash



On 06/01/2015 02:23 PM, Venky Shankar wrote:


 On 06/01/2015 01:09 PM, Anand Nekkunti wrote:
 Hi Venky
In one of the regression runs for my patch, I found a core dump from the
scrubber. Please have a look.

 Link 
 :http://build.gluster.org/job/rackspace-regression-2GB-triggered/9925/consoleFull

 bt for the core ...

 (gdb) bt
 #0  0x7f89d6224731 in gf_tw_mod_timer_pending (base=0xf2fbc0, 
 timer=0x0, expires=233889) at 
 /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/contrib/timer-wheel/timer-wheel.c:239
 #1  0x7f89c82ce7e8 in br_fsscan_reschedule (this=0x7f89c4008980, 
 child=0x7f89c4011238, fsscan=0x7f89c4012290, fsscrub=0x7f89c4010010, 
 pendingcheck=_gf_true)

 The crash happens when scrubber is paused as reconfigure() blindly 
 accesses scrubber specific data which is not available _after_ pause.

 Thanks for reporting. I'll send a fix for this.
OK. This is not a straightforward crash. The crash is due to a race 
between CHILD_UP (marking the subvolume as up and initializing 
essential structures _later_) and reconfigure(), which tries to access 
structures that are yet to be initialized.

For now we can induce delay before invoking reconfigure() {pause in 
the test case} and work on a proper fix for this.

In the test case we don't know how much delay we need, so one idea is to
wait for a few seconds in the reconfigure function and poll whether the
timer has been initialized. If it is initialized, proceed further; otherwise
skip.

Thoughts?

-Venky
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regression failure in tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t

2015-05-09 Thread Gaurav Garg
Thanks Atin for this fix.

~Gaurav

- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: ggarg  Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Saturday, May 9, 2015 12:28:42 PM
Subject: Re: [Gluster-devel] regression failure in 
tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t



On 05/09/2015 11:42 AM, Atin Mukherjee wrote:
 Gaurav,
 
 Can you quickly check [1]
 
 [1]
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull

http://review.gluster.org/10702 should fix all of these spurious
failures coming from bitrot.

Rafi,

You would need to rebase your patches on top of it and retrigger the run.

~Atin
 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regression failure in tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t

2015-05-09 Thread Gaurav Garg


Comments inline.

- Original Message -
From: Gaurav Garg gg...@redhat.com
To: Atin Mukherjee amukh...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Saturday, May 9, 2015 3:25:48 PM
Subject: Re: [Gluster-devel] regression failure in 
tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t

Thanx atin for this fix.

~gaurav. 

- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: ggarg  Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Saturday, May 9, 2015 12:28:42 PM
Subject: Re: [Gluster-devel] regression failure in 
tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t



On 05/09/2015 11:42 AM, Atin Mukherjee wrote:
 Gaurav,
 
 Can you quickly check [1]
 
 [1]
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull

http://review.gluster.org/10702 should fix all of these spurious
failures coming from bitrot.


Regarding the merge of http://review.gluster.org/#/c/10702/: I had a few
comments, but the patch is already merged. Those comments will now be taken
care of by the http://review.gluster.org/#/c/10707/ patch.



Rafi,

You would need to rebase your patches on top of it and retrigger the run.

~Atin
 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Gaurav Garg
Forgot to mention the patch URL: http://review.gluster.org/#/c/10475.

The owner should use the upstream bug ID when sending a patch upstream, and
the downstream bug ID when sending a patch to the downstream branch.

Thanks 

Regards
Gaurav

- Original Message -
From: Gaurav Garg gg...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Sakshi Bansal 
saban...@redhat.com
Sent: Tuesday, May 5, 2015 10:46:51 PM
Subject: Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

Hi Pranith,


Actually, the problem is in the sender's patch. It is intended behavior, not
a spurious failure. The current patch does not solve what the bug actually
asks for; in addition, the patch owner should look into the failing test
case and modify it if there is an actual need to, based on the current
patch.

I have posted a comment on the patch itself. Once those comments are
addressed, this problem will disappear.

CCing the patch owner.

Thank you

~Gaurav

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, May 5, 2015 8:52:16 PM
Subject: spurious failure in tests/bugs/cli/bug-1087487.t

Gaurav,
  Please look into 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Gaurav Garg
Hi Pranith,


Actually, the problem is in the sender's patch. It is intended behavior, not
a spurious failure. The current patch does not solve what the bug actually
asks for; in addition, the patch owner should look into the failing test
case and modify it if there is an actual need to, based on the current
patch.

I have posted a comment on the patch itself. Once those comments are
addressed, this problem will disappear.

CCing the patch owner.

Thank you

~Gaurav

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gaurav Garg gg...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, May 5, 2015 8:52:16 PM
Subject: spurious failure in tests/bugs/cli/bug-1087487.t

Gaurav,
  Please look into 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Configuration Error during gerrit login

2015-04-30 Thread Gaurav Garg
Hi,

I have also hit the same problem many times; I fixed it in the following way:

1. Go to https://github.com/settings/applications and revoke the authorization
for 'Gerrit Instance for Gluster Community'.
2. Clean up all cookies for GitHub and review.gluster.org.
3. Go to https://review.gluster.org/ and sign in again. You'll be asked to
sign in to GitHub again and provide authorization.


- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Friday, May 1, 2015 12:31:38 AM
Subject: [Gluster-devel] Configuration Error during gerrit login

Ran into Configuration Error several times today. The error message 
states:

The HTTP server did not provide the username in the GITHUB_USER header 
when it forwarded the request to Gerrit Code Review...

Switching browsers was useful for me to overcome the problem. Annoying 
for sure, but we seem to have a workaround :).

HTH,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] replace-brick command modification

2015-04-02 Thread Gaurav Garg
Hi all,

Since GlusterFS version 3.6.0, the gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK {start [force]|pause|abort|status|commit} command has
been deprecated. Only the gluster volume replace-brick VOLNAME SOURCE-BRICK
NEW-BRICK commit force command is supported.

For bug https://bugzilla.redhat.com/show_bug.cgi?id=1094119, patch
http://review.gluster.org/#/c/10101/ removes the cli/glusterd code for the
gluster volume replace-brick VOLNAME BRICK NEW-BRICK {start
[force]|pause|abort|status|commit} command, so only the commit force option
remains supported for replace-brick.

Should we have a new command, gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK, instead of gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK commit force?
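
For context, the only supported form today looks like this, a sketch with
hypothetical volume, host, and brick names:

    gluster volume replace-brick myvol host1:/bricks/b1 host2:/bricks/b1 commit force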


Thanks & Regards
Gaurav Garg
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] replace-brick command modification

2015-04-02 Thread Gaurav Garg
Hi all,

Thank you for your thoughts. force should be present in the command, so I
will keep it as commit force.
The replace-brick command will be: gluster volume replace-brick VOLNAME
SOURCE-BRICK NEW-BRICK commit force

Regards
Gaurav

- Original Message -
From: Raghavendra Talur raghavendra.ta...@gmail.com
To: Kaushal M kshlms...@gmail.com
Cc: gluster-us...@gluster.org
Sent: Thursday, 2 April, 2015 11:33:50 PM
Subject: Re: [Gluster-users] replace-brick command modification



On Thu, Apr 2, 2015 at 10:28 PM, Kaushal M  kshlms...@gmail.com  wrote: 



On Thu, Apr 2, 2015 at 7:20 PM, Kelvin Edmison 
 kelvin.edmi...@alcatel-lucent.com  wrote: 
 Gaurav, 
 
 I think that it is appropriate to keep the commit force options for 
 replace-brick, just to prevent less experienced admins from self-inflicted 
 data loss scenarios. 
 
 The add-brick/remove-brick pair of operations is not an intuitive choice for 
 admins who are trying to solve a problem with a specific brick. In this 
 situation, admins are generally thinking 'how can I move the data from this 
 brick to another one', and an admin that is casually surfing documentation 
 might infer that the replace-brick operation is the correct one, rather than 
 a sequence of commands that are somehow magically related. 
 
 I believe that keeping the mandatory commit force options for replace-brick 
 will help give these admins reason to pause and re-consider if this is the 
 right command for them to do, and prevent cases where new gluster admins 
 start shouting 'gluster lost my data'. 
 
 Regards, 
 Kelvin 
 
 
 
 On 04/02/2015 07:26 AM, Gaurav Garg wrote: 
 
 Hi all, 
 
 Since GlusterFs version 3.6.0 gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK {start [force]|pause|abort|status|commit } 
 command have deprecated. Only gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK commit force command supported. 
 
 for bug https://bugzilla.redhat.com/show_bug.cgi?id=1094119 , Patch 
 http://review.gluster.org/#/c/10101/ is removing cli/glusterd code for 
 gluster volume replace-brick VOLNAME BRICK NEW-BRICK {start 
 [force]|pause|abort|status|commit } command. so only we have commit force 
 option supported for replace-brick command. 
 
 Should we have new command gluster volume replace-brick VOLNAME 
 SOURCE-BRICK NEW-BRICK instead of having gluster volume replace-brick 
 VOLNAME SOURCE-BRICK NEW-BRICK commit force command. 
 
 
 Thanks  Regards 
 Gaurav Garg 
 ___ 
 Gluster-users mailing list 
 gluster-us...@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-users 
 
 
 
 
 ___ 
 Gluster-users mailing list 
 gluster-us...@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-users 

AFAIK, it was never the plan to remove 'replace-brick commit force'. 
The plan was always to retain it while removing the unsupported and 
unneeded options, ie 'replace-brick (start|pause|abort|status)'. 

Gaurav, your change is attempting to do the correct thing already and 
needs no changes (other than any that arise via the review process). 


I agree with Kelvin and Kaushal. 
We should retain commit force; force brings the implicit meaning 
that I fully understand what I am asking to be done is not the norm, 
but do proceed and I hold myself responsible for anything bad that 
happens. 




~kaushal 
___ 
Gluster-users mailing list 
gluster-us...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 



-- 
Raghavendra Talur 


___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel