point in time, no matter how valuable, will
end up being too little too late.
Regards,
Avra
On 02/01/2017 04:52 PM, Avra Sengupta wrote:
Hi,
Leadership election is an integral part of server side replication,
targeted for Gluster 4.0. The following is the design document for
Leadership Election
Hi,
Leadership election is an integral part of server side replication,
targeted for Gluster 4.0. The following is the design document for
Leadership Election Xlator (LEX). It is a modular translator designed to
work both with and independently of JBR. I would like to request you to
kindly
Works fine for us Sriram. Friday 2-3 pm.
Regards,
Avra
On 01/12/2017 11:54 AM, sri...@marirs.net.in wrote:
Hi Avra,
Sorry for the late reply, could we have the meeting tomorrow? 2-3 pm?
Sriram
On Wed, Jan 11, 2017, at 11:58 AM, Avra Sengupta wrote:
Hi,
We can have a discussion tomorrow
single patch, which I'd posted today. Is it ok to drop
all the previously posted patches and consider from the new
one? Please suggest.
Sriram
On Thu, Dec 15, 2016, at 12:45 PM, Avra Sengupta wrote:
Hi Srir
Hi,
Jeff gave us an overview and walkthrough of FDL and Reconciliation. The
presentation can be viewed in the link below:
https://bluejeans.com/s/B8_U3/
Regards,
Avra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
eed to do
anything else.
Could you have a look and let me know?
(Sorry for the delay in creating this)
Sriram
On Thu, Oct 13, 2016, at 12:15 PM, Avra Sengupta wrote:
Hi Sriram,
The point I was trying to make is that we want each patch to
compile by itself and pass regression.
Hi,
Essentially a snapshot of a volume, which was taken when the volume had
nfs.ganesha option enabled, has an export.conf file for it. This file
will be restored to the Ganesha Export Directory, which has been moved
to the shared storage. Currently we fail the snapshot restore in a
scenario,
little confusing, since it'd
changes with different intentions).
Sriram
On Mon, Oct 3, 2016, at 03:54 PM, Avra Sengupta wrote:
Hi Sriram,
I posted a comment on the first patch. It doesn't compile by
itself. We need to update the respective makefiles to be able to
compile it. Then we can int
e easy for a review
(accept/reject). Let me know if there is something off about the methods
followed with gluster devel. Thanks
Sriram
On Mon, Sep 19, 2016, at 10:58 PM, Avra Sengupta wrote:
Hi Sriram,
I have created a bug for this
(https://bugzilla.redhat.com/show_bug.cgi?id=1377437). Th
On 09/19/2016 11:34 PM, Jeff Darcy wrote:
I would like to collaborate in investigating the memory-management, and
also bringing multiplexing to snapshots. For starters, will be going
through your patch (1400+ lines of change; that's one big ass patch :p)
That's nothing. I've seen 7000-line
On 09/19/2016 06:56 PM, Jeff Darcy wrote:
I have brick multiplexing[1] functional to the point that it passes all basic
AFR, EC, and quota tests. There are still some issues with tiering, and I
wouldn't consider snapshots functional at all, but it seemed like a good point
to see how well it
:08 PM, Avra Sengupta wrote:
Hi Sriram,
Sorry for the delay in response. I started going through the commits
in the github repo. I finished going through the first commit, where
you create a plugin structure and move code. Following is the commit
link:
https://github.com/sriramster/glusterfs/
Hi,
I ran "make -j" on the latest master, followed by make install. The make
install, by itself, is doing a fresh compile every time (and totally
ignoring the make I did before it). Is there any recent change which
would cause this? Thanks.
Regards,
Avra
Hi Samikshan,
I found the crash on RHEL 6.5, and it's constantly reproducible on my
setup.
Regards,
Avra
On 09/15/2016 06:21 PM, Samikshan Bairagya wrote:
On 09/15/2016 03:48 PM, Avra Sengupta wrote:
Hi,
I was trying to run valgrind with glusterd using the following command:
valgrind
Hi,
I was trying to run valgrind with glusterd using the following command:
valgrind --leak-check=full --log-file=/tmp/glusterd.log glusterd
This command used to work seamlessly before, but now glusterd
crashes with the following bt:
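For reference, a sketch of the invocation (assuming glusterd supports the -N/--no-daemon flag, which recent releases do; without it, glusterd daemonizes and valgrind's report only covers the short-lived parent process):

```shell
# Run glusterd in the foreground under valgrind, so the leak report
# covers the actual daemon process rather than the parent that forks it.
valgrind --leak-check=full --track-origins=yes \
         --log-file=/tmp/glusterd.log glusterd -N
```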
Sep 8, 2016, at 01:18 PM, Avra Sengupta wrote:
Hi Sriram,
Rajesh is on a vacation, and will be available towards the end of
next week. He will be sharing his feedback once he is back.
Meanwhile I will have a look at the patch and share my feedback with
you. But it will take me some time to go t
Hi Pranith,
The following set of automated and manual tests need to pass before
doing a release for snapshot component:
1. The entire snapshot regression suite present in the source
repository, which as of now consist of:
a. ./basic/volume-snapshot.t
b. ./basic/volume-snapshot-clone.t
Hi,
I would like to propose the following talks:
1. Title: Server side replication
Theme: Gluster.Next
Abstract: Achieving availability and correctness without
compromising performance is always a challenge in designing new storage
replication technologies. In this talk I would talk
In that case it is expected behaviour. Thanks.
Regards,
Avra
On 08/05/2016 11:36 AM, Milind Changire wrote:
The bricks are NOT lvm mounted.
The bricks are just directories on the root file-system.
Milind
On 08/05/2016 11:25 AM, Avra Sengupta wrote:
Hi Milind,
Are the bricks lvm mounted
Hi Milind,
Are the bricks lvm mounted bricks? This field is populated for lvm
mounted bricks, and used by them. For regular bricks, which don't have a
mount point, this value is ignored.
Regards,
Avra
On 08/04/2016 07:44 PM, Atin Mukherjee wrote:
glusterd_get_brick_mount_dir () does a
<amukh...@redhat.com> wrote:
On Mon, Jul 25, 2016 at 5:37 PM, Atin Mukherjee
<amukh...@redhat.com> wrote:
On Mon, Jul 25, 2016 at 4:34 PM, Avra Sengupta
<aseng...@redhat.com
,
Avra
On 07/25/2016 02:33 PM, Avra Sengupta wrote:
The failure suggests that the port snapd is trying to bind to is
already in use. But snapd has been modified to use a new port
everytime. I am looking into this.
On 07/25/2016 02:23 PM, Nithya Balachandran wrote:
More failures:
https
The failure suggests that the port snapd is trying to bind to is already
in use. But snapd has been modified to use a new port everytime. I am
looking into this.
On 07/25/2016 02:23 PM, Nithya Balachandran wrote:
More failures:
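As an aside, the "port already in use" symptom is easy to reproduce outside gluster. This is a minimal, self-contained sketch (plain Python sockets, not gluster code) of the EADDRINUSE failure a second snapd would hit if handed a port that is not actually free:

```python
import errno
import socket

# Bind a first socket to an ephemeral port, standing in for the old snapd
# instance that is still holding its port.
old_snapd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
old_snapd.bind(("127.0.0.1", 0))
old_snapd.listen(1)
port = old_snapd.getsockname()[1]

# A second bind to the same port fails with EADDRINUSE -- the same class of
# error the regression run reports when the new snapd's port is not free.
new_snapd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    new_snapd.bind(("127.0.0.1", port))
    in_use = False
except OSError as e:
    in_use = (e.errno == errno.EADDRINUSE)
finally:
    new_snapd.close()
    old_snapd.close()

print(in_use)
```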
On 07/13/2016 02:37 AM, Niels de Vos wrote:
On Wed, Jul 13, 2016 at 12:37:17AM +0530, Avra Sengupta wrote:
Thanks Joe for the feedback. We are aware of the following issue, and we
will try and address this by going for a more generic approach, which will
not have platform dependencies.
I'm
by
systemd timers. We might want to consider using systemd.timer for
systemd distros and crontab for legacy distros.
On 07/08/2016 03:01 AM, Avra Sengupta wrote:
Hi,
Snapshots in gluster have a scheduler, which relies heavily on
crontab, and the shared storage. I would like people using
. I hacked the scheduler to use a location under /var/lib.
I also think there needs to be a way to schedule the removal of snapshots.
-Alastair
On 8 July 2016 at 06:01, Avra Sengupta <aseng...@redhat.com> wrote:
Hi,
Snapshots in gluster have
On Tue, Jul 12, 2016 at 4:36 PM, Avra Sengupta
<aseng...@redhat.com> wrote:
Hi Atin,
Please check the testcase result in the console. It
clearly states the reason for the failure. A quick search
Hi Atin,
Please check the testcase result in the console. It clearly states the
reason for the failure. A quick search of 30815, as shown in the testcase
shows that the error that is generated is a thinp issue, and we can see
fallocate failing and lvm not properly being setup in the
Hi,
Snapshots in gluster have a scheduler, which relies heavily on crontab,
and the shared storage. I would like people using this scheduler, or
people who would like to use it, to provide us feedback on their
experience. We are looking for feedback on ease of use, complexity of
features,
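For anyone trying it out, the scheduler is driven by the snap_scheduler.py CLI. The commands below are a sketch (the job name, cron schedule, and volume name are illustrative; check the exact argument order against your release):

```shell
# Initialise the scheduler on this node (the shared storage volume must
# already be mounted), then enable scheduling cluster-wide.
snap_scheduler.py init
snap_scheduler.py enable

# Add a job: a job name, a cron-style schedule, and the volume to snapshot.
snap_scheduler.py add "daily-backup" "0 2 * * *" "testvol"

# List the configured jobs.
snap_scheduler.py list
```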
Hi,
I have sent a patch(http://review.gluster.org/#/c/14226/1) in accordance
to lock/unlock fops in jbr-server and the discussion we had below.
Please feel free to review the same. Thanks.
Regards,
Avra
On 03/03/2016 12:21 PM, Avra Sengupta wrote:
On 03/03/2016 02:29 AM, Shyam wrote
Hi Raghavendra,
As part of the patch (http://review.gluster.org/#/c/13730/16), the
inode_ctx is not created in posix_acl_ctx_get(). Because of this the
testcase in http://review.gluster.org/#/c/13623/ breaks. It fails with
the following logs:
[2016-03-28 13:43:39.216168] D [MSGID: 0]
On 03/02/2016 02:02 PM, Venky Shankar wrote:
On Wed, Mar 02, 2016 at 01:40:08PM +0530, Avra Sengupta wrote:
Hi,
All fops in NSR, follow a specific workflow as described in this
UML(https://docs.google.com/presentation/d/1lxwox72n6ovfOwzmdlNCZBJ5vQcCaONvZva0aLWKUqk/edit?usp=sharing).
However
Hi,
Currently on a successful connection between protocol server and client,
the protocol client initiates a CHILD_UP event in the client stack. At
this point in time, only the connection between server and client is
established, and there is no guarantee that the server side stack is
ready
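To make the ordering problem concrete, here is a toy model in Python (not GlusterFS code; all names are invented for illustration) of why firing CHILD_UP as soon as the connection is up is premature:

```python
# Toy model: the protocol client fires CHILD_UP on connect, even though the
# server-side stack may still be initialising at that point.

class ServerStack:
    def __init__(self):
        self.ready = False      # xlators behind protocol/server not yet up

    def finish_init(self):
        self.ready = True

class ProtocolClient:
    def __init__(self, server):
        self.server = server
        self.events = []

    def on_connect_current(self):
        # Current behaviour: CHILD_UP on connect, regardless of server state.
        self.events.append(("CHILD_UP", self.server.ready))

    def on_connect_deferred(self):
        # Alternative: only fire CHILD_UP once the server signals readiness.
        if self.server.ready:
            self.events.append(("CHILD_UP", True))

server = ServerStack()
client = ProtocolClient(server)

client.on_connect_current()     # fires too early: server stack not ready
server.finish_init()
client.on_connect_deferred()    # fires only after the stack is ready

print(client.events)
```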
Well, we got quite a few suggestions. So I went ahead and created a
doodle poll. Please find the link below for the poll, and vote for the
name you think will be the best.
http://doodle.com/poll/h7gfdhswrbsxxiaa
Regards,
Avra
On 01/21/2016 12:21 PM, Avra Sengupta wrote:
On 01/21/2016 12:20
Hi,
We have two patches(mentioned below) for NSR client and NSR server
available. These patches provide the basic client and server
functionality as described in the design
(https://docs.google.com/document/d/1bbxwjUmKNhA08wTmqJGkVd_KNCyaAMhpzx4dswokyyA/edit?usp=sharing).
It would be great
a goody.
Feel free to add more than one entry.
Regards,
Avra
On 01/21/2016 10:08 AM, Pranith Kumar Karampuri wrote:
On 01/19/2016 08:00 PM, Avra Sengupta wrote:
Hi,
The leader election based replication has been called NSR or "New
Style Replication" for a while now. We would like to
On 01/21/2016 12:20 PM, Atin Mukherjee wrote:
Etherpad link please?
Oops, my bad. Here it is: https://public.pad.fsfe.org/p/NSR_name_suggestions
On 01/21/2016 12:19 PM, Avra Sengupta wrote:
Thanks for the suggestion Pranith. To make things interesting, we have
created an etherpad where people
Adding http://review.gluster.org/#/c/13119/ to the list. Hopefully it
will go in today.
On 01/20/2016 01:31 PM, Venky Shankar wrote:
Pranith Kumar Karampuri wrote:
https://public.pad.fsfe.org/p/glusterfs-3.7.7 is the final list of
patches I am waiting for before making 3.7.7 release.
Hi,
The leader election based replication has been called NSR or "New Style
Replication" for a while now. We would like to have a new name for the
same that's less generic. It can be something like "Leader Driven
Replication" or something more specific that would make sense a few
years down
On 01/07/2016 02:39 PM, Emmanuel Dreyfus wrote:
On Wed, Jan 06, 2016 at 05:49:04PM +0530, Ravishankar N wrote:
I re-triggered NetBSD regressions for http://review.gluster.org/#/c/13041/3
but they are being run in silent mode and are not completing. Can someone
from the infra-team take a look?
On 01/07/2016 07:24 PM, Jeff Darcy wrote:
I'd prefer a "defined level of effort" approach which *might* reduce the
benefit we derive from NetBSD testing but *definitely* keeps the cost
under control.
Did we identify the worst offenders within the spurious failing tests?
We could ignore their
Hi,
As almost all the components targeted for Gluster 4.0 have moved from
design phase to implementation phase on some level or another, I feel
it's time to get some consensus on the logging framework we are going to
use. Are we going to stick with the message ID formatted logging
framework
My 2 cents on this:
The decision we take on this should have certain rationale, and I see
two key things affecting that decision.
1. How much of the code we are planning to write now, is going to make
it to the final product. If we are sure that a sizeable amount of code
we are writing now,
Hi,
I am unable to access gluster.readdocs.org. Is anyone else facing the
same issue?
Regards,
Avra
On 10/14/2015 02:05 PM, Sankarshan Mukhopadhyay wrote:
On Wed, Oct 14, 2015 at 2:03 PM, Avra Sengupta <aseng...@redhat.com> wrote:
I am unable to access gluster.readdocs.org . Is anyone else facing the same
issue.
<https://gluster.readthedocs.org/en/latest/> ?
Thanks. Looks like
On 09/07/2015 05:50 PM, Kaushal M wrote:
Hi Richard,
Thanks a lot for your feedback. I've done my replies inline.
On Sat, Sep 5, 2015 at 5:46 AM, Richard Wareing wrote:
Hey Atin (and the wider community),
This looks interesting, though I have a couple questions:
1. Language
, Avra Sengupta wrote:
Hi,
I am having a look at the core. Will update shortly.
Regards,
Avra
On 09/04/2015 11:46 AM, Joseph Fernandes wrote:
./tests/bugs/snapshot/bug-1227646.t
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14021/consoleFull
Hi,
I am having a look at the core. Will update shortly.
Regards,
Avra
On 09/04/2015 11:46 AM, Joseph Fernandes wrote:
./tests/bugs/snapshot/bug-1227646.t
https://build.gluster.org/job/rackspace-regression-2GB-triggered/14021/consoleFull
Hi,
NetBSD regression runs are failing coz of a build error.
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/9867/consoleFull
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/9865/consoleFull
On 08/31/2015 12:10 PM, Emmanuel Dreyfus wrote:
Avra Sengupta <aseng...@redhat.com> wrote:
IOError: [Errno 1] Operation not permitted:
'/usr/pkg/lib/python2.7/site-packages/gluster/glupy/__init__.pyc'
My fault: this is the immutable flag that prevents installation. I
removed it from /u
Hi,
NetBSD regressions seem to be failing regularly with
./tests/basic/tier/tier_lookup_heal.t. Following are a couple of times it
has been encountered. Can you please look into this, as it is blocking
NetBSD regression runs.
On 08/22/2015 06:56 AM, Emmanuel Dreyfus wrote:
Emmanuel Dreyfus m...@netbsd.org wrote:
Yes, this is again a test corrupting random system files.
I started rebuild of nbslave7[149] from image...
Done.
Thanks a lot Emmanuel. The tests seem to be starting fine now.
Hi,
All NetBSD regression failures are again failing (more like refusing to
build), with the following error.
[2015-08-21 10:53:51.N]:++ G_LOG:./tests/basic/meta.t: TEST: 18
Started volinfo_field patchy Status ++
Is someone aware of this issue. Right now no NetBSD
+ Adding Vijaikumar
On 08/20/2015 04:19 PM, Niels de Vos wrote:
On Thu, Aug 20, 2015 at 03:05:56AM -0400, Susant Palai wrote:
Hi,
I tried running netbsd regression twice on a patch. And twice it failed at
the same point. Here is the error:
snip
Build GlusterFS
***
+
Lot of test runs are failing with the following:
+ '/opt/qa/build.sh'
File /usr/pkg/lib/python2.7/site.py, line 601
^
SyntaxError: invalid token
+ RET=1
[2015-08-19 05:45:06.N]:++ G_LOG:./tests/basic/quota-anon-fd-nfs.t:
TEST: 85 ! fd_write 3 content ++
Does it
Still hitting this on freebsd and netbsd smoke runs on release 3.6
branch. Are we merging patches on release 3.6 branch for now even with
these failures. I have two such patches that need to be merged.
Regards,
Avra
On 07/06/2015 02:32 PM, Niels de Vos wrote:
On Mon, Jul 06, 2015 at
+ Adding Raghavendra Bhat.
When is the next GA planned on this branch? And can we take patches in
this branch while this is being investigated.
Regards,
Avra
On 08/18/2015 12:07 PM, Avra Sengupta wrote:
Still hitting this on freebsd and netbsd smoke runs on release 3.6
branch. Are we
be a
good idea: would the data be readable ok? Any known side effect that
could cause issues?
On 17 Aug 2015 10:12 am, Avra Sengupta <aseng...@redhat.com> wrote:
Hi Thibault,
Instead of backing up, individual bricks or the entire thin
logical volume, you
On 08/18/2015 09:25 AM, Atin Mukherjee wrote:
On 08/17/2015 02:20 PM, Avra Sengupta wrote:
That patch itself might not pass all regressions as it might fail at the
geo-rep test. I have sent a patch (http://review.gluster.org/#/c/11934/)
with both the tests being moved to bad test. Talur could
Hi,
The NetBSD regression tests are continuously failing with errors in the
following tests:
./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t
Is there any recent change that is triggering this behaviour? Also
currently one machine is running NetBSD tests. Can someone with
On 08/17/2015 12:29 PM, Vijaikumar M wrote:
On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
Hi,
The NetBSD regression tests are continuously failing with errors in
the following tests:
./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t is known
Will send a patch moving ./tests/basic/mount-nfs-auth.t and
./tests/geo-rep/georep-basic-dr-rsync.t to bad test.
Regards,
Avra
On 08/17/2015 12:45 PM, Avra Sengupta wrote:
On 08/17/2015 12:29 PM, Vijaikumar M wrote:
On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
Hi,
The NetBSD
:
tests/basic/mount-nfs-auth.t has already been added to bad test by
http://review.gluster.org/11933
~Atin
On 08/17/2015 02:09 PM, Avra Sengupta wrote:
Will send a patch moving ./tests/basic/mount-nfs-auth.t and
./tests/geo-rep/georep-basic-dr-rsync.t to bad test.
Regards,
Avra
On 08/17/2015
Hi Jeff,
I am looking into the NSR feature for Gluster.Next. Currently I have
started going through the feature page, and the design
(http://review.gluster.org/#/c/8915/), and the current NSR
codebase(patches pointed to in the feature page). Could you point me to
any other documentation I
On 08/05/2015 03:06 PM, Atin Mukherjee wrote:
On 08/05/2015 02:58 PM, Avra Sengupta wrote:
Hi,
As reported in https://bugzilla.redhat.com/show_bug.cgi?id=1218732, in
the event where there is no opErrstr, some gluster commands'(like
snapshot status, volume status etc.) xml output shows
In that case will stick with <opErrstr/> for all the null elements.
On 08/05/2015 04:10 PM, Prashanth Pai wrote:
Having (null) is not common in xml convention. Usually, it's either
<opErrstr/>
or
<opErrstr></opErrstr>
Regards,
-Prashanth Pai
- Original Message -
From: Avra Sengupta
Hi,
I have a few queries. Some of them might be pre-mature given that this
is just the initial mail scoping the plan, but nevertheless please find
them inline
Regards,
Avra
On 08/04/2015 02:57 PM, Krishnan Parthasarathi wrote:
# GlusterD 2.0 plan (Aug-Oct '15)
[This text in this email
The particular slave (slave21) containing the cores is down. I however
have access to slave0, so trying to recreate it on that slave and will
analyze the core when I get it.
Regards,
Avra
On 07/20/2015 03:19 PM, Ravishankar N wrote:
One more core for volume-snapshot.t:
0x7f8f3cb379d1 in start_thread () from /lib64/libpthread.so.0
#22 0x7f8f3c4a18fd in clone () from /lib64/libc.so.6
Regards,
Avra
On 07/20/2015 04:38 PM, Avra Sengupta wrote:
The particular slave (slave21) containing the cores is down. I however
have access to slave0, so trying to recreate
] spuriously even after the fix got
merged. Could you look into this?
[1]:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12242/consoleFull
On Thu, Jul 9, 2015 at 10:15 AM, Avra Sengupta aseng...@redhat.com wrote:
Sent a patch for this yesterday. http://review.gluster.org/#/c
-1109889.t'
failing
Also failing on:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/12069/consoleFull
Regards,
Nithya
- Original Message -
From: Vijaikumar M vmall...@redhat.com
To: Gluster Devel gluster-devel@gluster.org, Avra Sengupta
aseng...@redhat.com, Rajesh Joseph
Hi,
Today, by enabling the volume set option
cluster.enable-shared-storage, we create a shared storage volume called
gluster_shared_storage for the user, and mount it on all the nodes in
the cluster. Currently this volume is used for features like
nfs-ganesha, snapshot scheduler and
On 06/17/2015 12:12 PM, Rajesh Joseph wrote:
- Original Message -
From: Kaushal M kshlms...@gmail.com
To: Emmanuel Dreyfus m...@netbsd.org
Cc: Gluster Devel gluster-devel@gluster.org, gluster-infra
gluster-in...@gluster.org
Sent: Wednesday, 17 June, 2015 11:59:22 AM
Subject: Re:
Hi,
Could you please merge the following release 3.7 patches
http://review.gluster.org/#/c/11151/
http://review.gluster.org/#/c/11159/
Regards,
Avra
On 06/11/2015 01:24 AM, Avra Sengupta wrote:
Hi,
Could you please merge the following patch:
http://review.gluster.org/#/c/11139/
Regards
Hi,
I am getting devrpm failure and smoke failures for the patch
http://review.gluster.org/#/c/11139/. The failures are of the nature
that the test itself doesn't initiate. For example:
http://build.gluster.org/job/glusterfs-devrpms-el7/2527/console
I tried to re-trigger just these tests,
Hi,
Can you please merge the following patches:
http://review.gluster.org/#/c/11087/
Regards,
Avra
On 06/09/2015 08:06 PM, Avra Sengupta wrote:
Thanks KP :)
On 06/09/2015 07:51 PM, Krishnan Parthasarathi wrote:
http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100
Hi,
Could you please merge the following patch:
http://review.gluster.org/#/c/11139/
Regards,
Avra
+Adding gluster-infra
On 06/11/2015 10:39 AM, Pranith Kumar Karampuri wrote:
Last time when this happened Kaushal/vijay fixed it if I remember
correctly.
+kaushal +Vijay
Pranith
On 06/11/2015 10:38 AM, Anoop C S wrote:
On 06/11/2015 10:33 AM, Ravishankar N wrote:
I'm unable to push a patch
Hi,
New patches being submitted are not getting NetBSD regressions run on
them. Even manually they are not getting triggered. Is anyone aware of this?
Regards,
Avra
Thanks KP :)
On 06/09/2015 07:51 PM, Krishnan Parthasarathi wrote:
http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/
Merged.
Hi,
Could you please merge the following patches in release 3.7 branch. It's
got code review +1s and all regressions have passed.
http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/
Regards,
Avra
Requesting again. Can I get access to one of the slave machines so that
I can investigate the failure in volume-snapshot.t
Regards,
Avra
On 06/03/2015 12:30 PM, Avra Sengupta wrote:
+ Adding gluster-infra
On 06/03/2015 12:16 PM, Avra Sengupta wrote:
Hi,
Can I get access to one of the slave
Aravinda has given it a +1
Regards,
Avra
On 06/01/2015 03:34 PM, Krishnan Parthasarathi wrote:
Could you get a +1 from Aravinda?
- Original Message -
Hi KP,
Can we have this patch in 3.7.1 before you make the release.
http://review.gluster.org/#/c/10993/
Regards,
Avra
On
Got it. Thanks Niels.
Regards,
Avra
On 05/29/2015 01:44 PM, Niels de Vos wrote:
On Fri, May 29, 2015 at 09:11:22AM +0530, Avra Sengupta wrote:
Hi,
Usually when a patch is backported to release 3.7 branch it contains the
following from the patch already merged in master:
Change-Id
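For context, a typical backport commit message footer in Gerrit looks something like the fragment below (the subject, Change-Id, bug number, and commit hash are placeholders, not real values; the Change-Id is kept identical to the master patch so Gerrit can link the two):

```
snapshot: example backported fix

Backport of the change already merged in master.

Change-Id: I0123456789abcdef0123456789abcdef01234567
BUG: 1234567
(cherry picked from commit deadbeefdeadbeefdeadbeefdeadbeefdeadbeef)
```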
Resent http://review.gluster.org/11011 with the Makefile changes for
release 3.7 branch. Unable to abandon http://review.gluster.org/11008 as
I don't think I have permissions to do so.
Regards,
Avra
On 05/30/2015 09:20 AM, Avra Sengupta wrote:
That is because the patch that introduces
Hi,
What you are referring to with glusterd being restarted might be true,
but I don't remember if the two different peer addresses were before and
after restarting glusterd, or after restarting glusterd and before/after
peer status. We might need to re-do this exercise on a rackspace vm to
Thanks Vijay.
Regards,
Avra
On 05/22/2015 11:46 AM, Vijay Bellur wrote:
On 05/22/2015 11:41 AM, Avra Sengupta wrote:
Hi,
Given that we are holding all patches till we get the regression tests
fixed, can't we get a couple more VMs? That will help increase the pace.
Have offlined slave23
on for a while longer for people to investigate with.
Regards and best wishes,
Justin Clift
On 21 May 2015, at 14:22, Avra Sengupta aseng...@redhat.com wrote:
Hi,
Can I get access to a rackspace VM so that I can debug this particular testcase
on it.
Regards,
Avra
Forwarded Message
Hi,
I am done using the machine. I have sent a patch for the issue
(http://review.gluster.org/#/c/10895/).
Regards,
Avra
On 05/22/2015 12:05 PM, Avra Sengupta wrote:
Thanks Vijay.
Regards,
Avra
On 05/22/2015 11:46 AM, Vijay Bellur wrote:
On 05/22/2015 11:41 AM, Avra Sengupta wrote:
Hi
The regressions have passed on http://review.gluster.org/#/c/10871/
Regards,
Avra
On 05/21/2015 12:29 PM, Atin Mukherjee wrote:
On 05/21/2015 12:27 PM, Vijay Bellur wrote:
On 05/21/2015 12:23 PM, Krishnan Parthasarathi wrote:
Smoke test has failed due to jenkins related issue. We need to
Thanks for merging the patch. I have backported
it(http://review.gluster.org/#/c/10871/) to release 3.7 branch as well.
Regards,
Avra
On 05/20/2015 05:52 PM, Avra Sengupta wrote:
I've sent a patch(http://review.gluster.org/#/c/10840/) to remove this
from the test-suite. Once it gets merged I
Hi,
I am not able to reproduce this failure in my set-up. I am aware that
Atin was able to do so successfully a few days back, and I tried
something similar with the following loop.
for i in {1..100}; do export DEBUG=1; prove -r
./tests/basic/volume-snapshot-clone.t > 1; lines=`less 1 | grep
...@redhat.com
To: Avra Sengupta aseng...@redhat.com, gluster Devel
gluster-devel@gluster.org, atin Mukherjee amukh...@redhat.com,
Krishnan Parthasarathi kpart...@redhat.com, rjosep Rajesh Joseph
rjos...@redhat.com
On 05/21/2015 02:44 PM, Avra Sengupta wrote:
Hi,
I am not able to reproduce
Given that the fix which is tested by this patch is no longer present, I
think we should remove this patch from the test-suite itself. Could
anyone confirm if there are any concerns in doing so? If not, I will send
a patch to do the same.
Regards,
Avra
On 05/08/2015 11:28 AM, Avra Sengupta
Message -
Given that the fix which is tested by this patch is no longer present, I
think we should remove this patch from the test-suite itself. Could
anyone confirm if there are any concerns in doing so? If not, I will send
a patch to do the same.
Regards,
Avra
On 05/08/2015 11:28 AM, Avra
be any impact due to new feature.
Kindly let us know if any other impact on console or we need to take
care of anything else as result of this feature.
Thanks and Regards,
Shubhendu
On 05/15/2015 07:30 PM, Avra Sengupta wrote:
Hi,
A shared storage meta-volume is currently being used
Hi,
A shared storage meta-volume is currently being used by
snapshot-scheduler, geo-replication, and nfs-ganesha. In order to
simplify the creation and set-up of the same, we are introducing a
global volume set option(cluster.enable-shared-storage).
On enabling this option, the system
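For reference, enabling it is a single volume-set on the special "all" target (a sketch; run from any node in the trusted storage pool):

```shell
# Creates the gluster_shared_storage volume and mounts it on all nodes.
gluster volume set all cluster.enable-shared-storage enable

# Disabling the option tears the shared storage volume down again.
gluster volume set all cluster.enable-shared-storage disable
```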
Hi,
I have run NetBSD regression on http://review.gluster.org/#/c/10358/
twice. http://build.gluster.org/job/netbsd6-smoke/6066/console and
http://build.gluster.org/job/netbsd6-smoke/6135/console. Both the times
the regressions have passed but the results are not getting reflected in
the
dont reflect on the patchset.
Regards,
Avra
On 05/10/2015 10:47 AM, Avra Sengupta wrote:
Hi,
I have run NetBSD regression on http://review.gluster.org/#/c/10358/
twice. http://build.gluster.org/job/netbsd6-smoke/6066/console and
http://build.gluster.org/job/netbsd6-smoke/6135/console. Both
Hi Pranith,
Could you please provide a regression instance where the snapshot tests
failed. I had a look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
but, the logs for bug-1162498.t are not present for that instance.
Similarly other instances recorded