- Original Message -
From: Krishnan Parthasarathi kpart...@redhat.com
To: Raghavendra Gowdappa rgowd...@redhat.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, Vijay Bellur vbel...@redhat.com, Vijaikumar M vmall...@redhat.com, Gluster Devel gluster-devel@gluster.org
On Thursday 02 July 2015 11:27 AM, Krishnan Parthasarathi wrote:
Yes. PROC_MAX is the maximum number of 'worker' threads that would be spawned for a given syncenv.
- Original Message -
From: Vijaikumar M vmall...@redhat.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel gluster-devel@gluster.org
Cc: Sachin Pandit span...@redhat.com
Sent: Thursday, July 2, 2015 12:01:03 PM
Subject: Re: Regression Failure:
Working fine for me now. In case someone hasn't checked, try using it.
- Original Message -
From: Anoop C S achir...@redhat.com
To: gluster-devel@gluster.org
Cc: Anuradha Talur ata...@redhat.com
Sent: Thursday, July 2, 2015 10:41:35 AM
Subject: Re: [Gluster-devel] Unable to send
Comments inline.
- Original Message -
From: Sachin Pandit span...@redhat.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 2, 2015 12:21:44 PM
Subject: Re: [Gluster-devel] Regression Failure:
Comments inline.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Susant Palai spa...@redhat.com
To: Sachin Pandit span...@redhat.com
Cc: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 2, 2015 12:35:08 PM
We will look into this issue.
Thanks,
Vijay
On Thursday 02 July 2015 11:46 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
I see a quota.t regression failure for the following, although the changes are related only to the example programs in libgfchangelog.
- Original Message -
From: Kotresh Hiremath Ravishankar khire...@redhat.com
To: Susant Palai spa...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 2, 2015 1:03:18 PM
Subject: Re: [Gluster-devel] Regression Failure: ./tests/basic/quota.t
Comments
Hi,
glusterfs-3.6.4beta1 has been released and the packages for RHEL/Fedora/CentOS can be found here:
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.4beta2/
Requesting people running 3.6.x to please try it out and let us know if
there are any issues.
This release
Thanks, Dan!
Pranith
On 07/02/2015 06:14 PM, Dan Lambright wrote:
I'll check on this.
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gluster Devel gluster-devel@gluster.org, Joseph Fernandes josfe...@redhat.com
Sent: Thursday, July 2, 2015 5:40:34 AM
On Thu, Jul 2, 2015 at 1:26 PM, Nithya Balachandran nbala...@redhat.com wrote:
Subject: [Gluster-devel] Failure in
hi,
When the glusterfs mount process is coming up, all cluster xlators wait for at least one event from all of their children before propagating the status upwards. Sometimes the client xlator takes up to 2 minutes to propagate this event (https://bugzilla.redhat.com/show_bug.cgi?id=1054694#c0). Due to
Hi Atin,
You are right! I was using version 3.5 in production, and when I checked the Gluster source code, I checked the wrong commit (not the latest commit on the master branch).
You've already implemented my proposed solution. It was done at the function
Not a problem at all. I am here to help, Rarylson :)
-Atin
Sent from one plus one
On Jul 2, 2015 7:23 PM, Rarylson Freitas raryl...@gmail.com wrote:
Hi Atin,
You are right! I was using version 3.5 in production, and when I checked the Gluster source code, I checked the wrong commit
I agree that a generic solution for all cluster xlators would be good.
The only question I have is whether parallel notifications are specially handled somewhere.
For example, if the client xlator sends EC_CHILD_DOWN after a timeout, it's possible that an immediate EC_CHILD_UP is sent if the brick
This is caused because when bind-insecure is turned on (which is now the default), it may happen that a brick is not able to bind to the port assigned by Glusterd, for example 49192-49195...
It seems to occur because the rpc_clnt connections are binding to ports in the same range, so the brick fails to
Or perhaps we could just get everyone to stop using 'inline'.
I agree that it would be a good thing to reduce/modify our use of 'inline' significantly. Any advantage gained from avoiding normal function-call entry/exit has to be weighed against cache pollution from having the same code repeated
Thanks Prasanna for the patches :)
-Atin
Sent from one plus one
On Jul 2, 2015 9:19 PM, Prasanna Kalever pkale...@redhat.com wrote:
This is caused because when bind-insecure is turned on (which is now the default), it may happen that a brick is not able to bind to the port assigned by Glusterd for
Joe,
Please refer to Prasanna's mail. He has uploaded a patch to solve it.
-Atin
Sent from one plus one
On Jul 2, 2015 9:42 PM, Joseph Fernandes josfe...@redhat.com wrote:
Hi All,
This is the same issue as the previous tiering regression failure.
Volume brick not able to start brick
Hi All,
This is the same issue as the previous tiering regression failure.
The volume is not able to start the brick because the port is busy:
[2015-07-02 10:20:20.601372] [run.c:190:runner_log] (-- /build/install/lib/libglusterfs.so.0(_gf_log_callingfn+0x240)[0x7f05e080bc32] (--
Pranith,
I understand the bug, and a more generic, layer-level solution would be desirable and apt, rather than repeating things at each xlator.
However, I am always confused about notifications and their processing, so I cannot state with conviction that this is fine and will work elegantly.
Will
I've reverted [1], which brought in the change to turn allow-insecure on by default. The patch seems to have issues, which will be addressed and merged later. The revert can be found at [2].
[1] http://review.gluster.org/11274
[2] http://review.gluster.org/11507
Please let me know if the regressions
On 07/02/2015 07:04 PM, Pranith Kumar Karampuri wrote:
hi,
When the glusterfs mount process is coming up, all cluster xlators wait for at least one event from all of their children before propagating the status upwards. Sometimes the client xlator takes up to 2 minutes to propagate this