On 05/21/2014 08:50 PM, Vijaikumar M wrote:
KP, Atin and I did some debugging and found that there was a
deadlock in glusterd.
When creating a volume snapshot, the back-end operation of 'taking a
lvm_snapshot and starting the brick' is executed in parallel for each
brick using a synctask.
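For context, the code path in question is the one driven by the snapshot CLI. A
minimal example of triggering it (the volume and snapshot names are made up, and
the bricks are assumed to sit on thinly provisioned LVM, which volume snapshots
require):

# Create a snapshot of an existing volume; for every brick, glusterd
# takes an LVM snapshot of the brick's thin LV and starts a brick
# process for the snapshotted volume.
gluster snapshot create snap1 testvol

# Inspect the result (names here are illustrative only).
gluster snapshot list
gluster snapshot info snap1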
It should be possible. I'll check and do the change.
~kaushal
On Thu, May 22, 2014 at 8:14 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Justin Clift jus...@gluster.org
Cc: Gluster Devel
The glusterds spawned using cluster.rc store their logs at
/d/backends/N/glusterd.log . But the cleanup() function cleans
/d/backends/, so those logs are lost before we can archive.
cluster.rc should be fixed to use a better location for the logs.
~kaushal
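A minimal sketch of the kind of change being suggested (the log directory and
the $i loop variable are illustrative, not the actual cluster.rc code):

# Illustrative only: keep each cluster glusterd's log outside
# /d/backends so cleanup() does not wipe it before it can be archived.
logdir="/var/log/glusterfs/cluster-test"
mkdir -p "$logdir"

# glusterd accepts the standard -l/--log-file option, so pointing the
# N-th daemon's log at the new directory should be enough; whatever
# other options cluster.rc already passes would stay unchanged.
glusterd -l "$logdir/glusterd-backend-$i.log"    # plus the existing options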
On Thu, May 22, 2014 at 11:45 AM,
somepath/glusterd-backend%N.log maybe?
On 22/05/2014, at 8:03 AM, Kaushal M wrote:
The glusterds spawned using cluster.rc store their logs at
/d/backends/N/glusterd.log . But the cleanup() function cleans
/d/backends/, so those logs are lost before we can archive.
cluster.rc should be
Kaushal,
The rebalance status command seems to be failing sometimes. I sent a mail about
such a spurious failure earlier today. Did you get a chance to look at the logs
and confirm that rebalance didn't fail and that it is indeed a timeout?
Pranith
- Original Message -
From: Kaushal M
[Adding the right alias for gluster-devel this time around]
On 05/22/2014 05:29 PM, Vijay Bellur wrote:
Hi All,
Given the addition of new sub-maintainers and release maintainers to the
community [1], I have felt the need to publish a set of guidelines for
all categories of maintainers to have a
Thanks Justin, I found the problem. The VM can be deleted now.
Turns out, there was more than enough time for the rebalance to complete.
But we hit a race, which caused a command to fail.
The particular test that failed waits for rebalance to finish. It does
this by running a 'gluster volume
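The quote is cut off here, but the wait is presumably a poll of the rebalance
status output. A rough sketch of that kind of loop (the volume name and timeout
are made up; the actual test presumably uses the regression framework's own
helpers):

# Illustrative wait loop, not the real test: poll rebalance status
# until the command reports completion, or give up after a timeout.
vol=testvol        # assumed volume name
timeout=120        # assumed timeout in seconds

for ((i = 0; i < timeout; i++)); do
    if gluster volume rebalance "$vol" status | grep -q completed; then
        echo "rebalance finished"
        break
    fi
    sleep 1
done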
On Wed, May 21, 2014 at 06:40:57PM +0200, Niels de Vos wrote:
A lot of work has been done on getting blockers resolved for the next
3.5 release. We're not there yet, but we're definitely getting close to
releasing a 1st beta.
Humble will follow-up with an email related to the documentation
On 05/22/2014 02:10 AM, Alex Pyrgiotis wrote:
On 02/17/2014 06:22 PM, Vijay Bellur wrote:
On 02/17/2014 05:11 PM, Alex Pyrgiotis wrote:
On 02/10/2014 07:06 PM, Vijay Bellur wrote:
On 02/05/2014 04:10 PM, Alex Pyrgiotis wrote:
Hi all,
Just wondering, do we have any news on that?
Hi Alex,
Here are the important locations in the XFS tree coming from 2.6.32 branch
STATIC int
xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
struct xfs_inode *ip = XFS_I(inode);
unsigned char *ea_name;
int error;
if (S_ISLNK(inode->i_mode))
http://review.gluster.com/#/c/7823/ - the fix here
On Thu, May 22, 2014 at 1:41 PM, Harshavardhana
har...@harshavardhana.net wrote:
Here are the important locations in the XFS tree coming from 2.6.32 branch
STATIC int
xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
- Original Message -
From: Kaushal M kshlms...@gmail.com
To: Justin Clift jus...@gluster.org, Gluster Devel
gluster-devel@gluster.org
Sent: Thursday, May 22, 2014 6:04:29 PM
Subject: Re: [Gluster-devel] bug-857330/normal.t failure
Thanks Justin, I found the problem. The VM can
- Original Message -
On 22/05/2014, at 1:34 PM, Kaushal M wrote:
Thanks Justin, I found the problem. The VM can be deleted now.
Done. :)
Turns out, there was more than enough time for the rebalance to complete.
But we hit a race, which caused a command to fail.
The