- Original Message -
From: Avra Sengupta aseng...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Friday, May 29, 2015 9:11:22 AM
Subject: [Gluster-devel] Unable to send patches to release 3.7 branch.
Hi,
Usually when a patch is backported to release 3.7 branch it
On Fri, May 29, 2015 at 09:11:22AM +0530, Avra Sengupta wrote:
Hi,
Usually when a patch is backported to release 3.7 branch it contains the
following from the patch already merged in master:
Change-Id: Ib878f39814af566b9250cf6b8ed47da0ca5b1128
BUG: 1226120
Signed-off-by: Avra
Got it. Thanks Niels.
Regards,
Avra
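The convention described above (carrying the Change-Id, BUG, and Signed-off-by trailers over from the commit already merged in master) can be checked mechanically before a backport is pushed for review. A minimal sketch using only the Python standard library; the function name and exact patterns are illustrative, not part of any actual Gluster tooling:

```python
import re

# Trailers a backported patch is expected to carry over from the
# commit already merged in master (as described in the thread above).
REQUIRED = {
    "Change-Id": re.compile(r"^Change-Id: I[0-9a-f]{8,40}$"),
    "BUG": re.compile(r"^BUG: [0-9]+$"),
    "Signed-off-by": re.compile(r"^Signed-off-by: .+ <.+@.+>$"),
}

def missing_backport_trailers(commit_message):
    """Return the names of required trailers absent from a commit message."""
    found = {
        name
        for line in commit_message.splitlines()
        for name, pattern in REQUIRED.items()
        if pattern.match(line.strip())
    }
    return sorted(set(REQUIRED) - found)
```

Run over the example trailers quoted in this thread, this reports nothing missing, while a commit message lacking a BUG line would be flagged before pushing the backport.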
On 05/29/2015 01:44 PM, Niels de Vos wrote:
On Fri, May 29, 2015 at 09:11:22AM +0530, Avra Sengupta wrote:
Hi,
Usually when a patch is backported to release 3.7 branch it contains the
following from the patch already merged in master:
Change-Id:
Anand,
Could you check if your patch [1] fails this regression every time?
Otherwise I would request Avra to take a look at [2].
Snapshot delete failed with following error:
snapshot delete: failed: Snapshot patchy2_snap1_GMT-2015.05.29-13.40.17
might not be in an usable state.
volume delete:
Dear All,
I was wondering if someone can help me with the issue below, please.
I am running version 3.6.2 and it has been working OK for a few weeks
now; just today it went down twice within an hour, with the errors in
the log below taking the live environment down.
Restarting the gluster services
Hi all,
today we had a discussion about how to get the status of reported bugs
more correct and up to date. It is something that has come up several
times already, but now we have a BIG solution as Pranith calls it.
The goal is rather simple, but it requires some thinking about rules and
Hi all,
I am having an issue copying/moving files to my replicated volume residing
inside the AWS volume. I will explain my current setup. I have created two
instances in AWS and attached a 50GB volume to each of them. I have mounted
those volumes at the mount points /gluster_brick1 and
On 29 May 2015 21:05, Harmeet Kalsi kharm...@hotmail.com wrote:
Dear All,
I was wondering if someone can help me with the issue below, please.
I am running version 3.6.2 and it has been working OK for a few weeks now;
just today it went down twice within an hour, with the errors in the log below
taking
When similar automation was discussed, somebody had raised the concern about
when more than one patch is associated with a BZ. Either we keep a 1:1 mapping
between BZ and patch, or the workflow needs to be improved to inform Gerrit
when the last patch is submitted for a BZ so that the state can be
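The concern above boils down to an invariant: a BZ should only move forward once the last of its associated patches is merged. A small sketch of that decision, assuming a pre-collected mapping of bug IDs to patch states; the data shape and names are illustrative, not an actual Gerrit or Bugzilla API:

```python
def bugs_ready_to_transition(patches_by_bug):
    """Given {bug_id: [patch_state, ...]}, return the bug IDs whose every
    associated patch is MERGED -- only those are safe to move to MODIFIED."""
    return sorted(
        bug_id
        for bug_id, states in patches_by_bug.items()
        if states and all(state == "MERGED" for state in states)
    )
```

With a 1:1 BZ-to-patch mapping this degenerates to checking the single patch; with several patches per BZ it naturally waits for the last one.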
On 05/29/2015 12:51 PM, Niels de Vos wrote:
Hi all,
today we had a discussion about how to get the status of reported bugs
more correct and up to date. It is something that has come up several
times already, but now we have a BIG solution as Pranith calls it.
The goal is rather simple, but is
On 05/29/2015 10:41 PM, Nagaprasad Sathyanarayana wrote:
When similar automation was discussed, somebody had raised the concern about
when more than one patch is associated with a BZ. Either we keep a 1:1 mapping
between BZ and patch, or the workflow needs to be improved to inform Gerrit when the
On 05/29/2015 11:23 PM, Shyam wrote:
On 05/29/2015 12:51 PM, Niels de Vos wrote:
Hi all,
today we had a discussion about how to get the status of reported bugs
more correct and up to date. It is something that has come up several
times already, but now we have a BIG solution as Pranith calls
On 05/30/2015 08:10 AM, Pranith Kumar Karampuri wrote:
I see that kaleb already sent a patch for this:
http://review.gluster.org/#/c/11007 - master
http://review.gluster.org/#/c/11008 - NetBSD
I meant http://review.gluster.org/#/c/11008 for release-3.7 :-)
Pranith
I am going to abandon my
Niels,
As per git, you are the author of the test above. Could you please
help find the root cause (RC) of the failure. Log:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9812/consoleFull
I am going to re-trigger the build.
Pranith
hi,
I don't understand rpmbuild logs that well. But the following seems
to be the issue:
Start: build phase for glusterfs-3.8dev-0.314.git471b2e0.el6.src.rpm
Start: build setup for glusterfs-3.8dev-0.314.git471b2e0.el6.src.rpm
Finish: build setup for
On 05/30/2015 07:44 AM, Pranith Kumar Karampuri wrote:
On 05/30/2015 07:33 AM, Nagaprasad Sathyanarayana wrote:
It appears to me that glusterd-errno.h was added in the patch
http://review.gluster.org/10313, which was merged on 29th. Please
correct me if I am wrong.
I think it is supposed
On 30 May 2015 07:54, Pranith Kumar Karampuri pkara...@redhat.com wrote:
On 05/30/2015 07:44 AM, Pranith Kumar Karampuri wrote:
On 05/30/2015 07:33 AM, Nagaprasad Sathyanarayana wrote:
It appears to me that glusterd-errno.h was added in the patch
http://review.gluster.org/10313, which
Could it be due to the compilation errors?
http://build.gluster.org/job/glusterfs-devrpms-el6/9019/ :
glusterd-locks.c:24:28: error: glusterd-errno.h: No such file or directory
CC glusterd_la-glusterd-mgmt-handler.lo
glusterd-locks.c: In function 'glusterd_mgmt_v3_lock':
On 05/30/2015 07:33 AM, Nagaprasad Sathyanarayana wrote:
It appears to me that glusterd-errno.h was added in the patch
http://review.gluster.org/10313, which was merged on 29th. Please correct me if
I am wrong.
I think it is supposed to be added to Makefile as well. Let me do some
testing.
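For reference, the kind of Makefile change being discussed: a newly added header has to be listed in the component's Makefile.am, or it is not distributed with the tarball and builds from it fail. This is a sketch only; the exact file, variable, and neighboring entries are assumptions based on glusterd's source layout, not the actual patch:

```makefile
# xlators/mgmt/glusterd/src/Makefile.am (sketch; entries are illustrative)
noinst_HEADERS = glusterd.h glusterd-utils.h glusterd-locks.h \
	glusterd-errno.h
# the new header must be listed so "make dist" includes it
```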
On 05/30/2015 09:20 AM, Avra Sengupta wrote:
That is because the patch that introduces glusterd-errno.h is not yet
merged in 3.7. So glusterd-errno.h is still not present in release
3.7. I will update the patch introducing the header file itself with
the required change, and will abandon
Resent http://review.gluster.org/11011 with the Makefile changes for
release 3.7 branch. Unable to abandon http://review.gluster.org/11008 as
I don't think I have permissions to do so.
Regards,
Avra
On 05/30/2015 09:20 AM, Avra Sengupta wrote:
That is because the patch that introduces