On Friday 27 June 2014 10:47 AM, Raghavendra Talur wrote:
Inline.
- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: Sachin Pandit span...@redhat.com, Gluster Devel
gluster-devel@gluster.org, gluster-us...@gluster.org
Sent: Thursday, June 26, 2014 3:30:31 PM
Subject: Re:
On Monday 30 June 2014 16:18:09 Shyamsundar Ranganathan wrote:
Will this rebalance-on-access feature be enabled always, or only during a brick addition/removal, to move files that do not go to the affected brick while the main rebalance is populating or removing files from the brick?
The
- Original Message -
From: Shyamsundar Ranganathan srang...@redhat.com
To: Xavier Hernandez xhernan...@datalab.es
Cc: gluster-devel@gluster.org
Sent: Tuesday, July 1, 2014 1:48:09 AM
Subject: Re: [Gluster-devel] Feature review: Improved rebalance performance
From: Xavier
- Original Message -
From: Xavier Hernandez xhernan...@datalab.es
To: Raghavendra Gowdappa rgowd...@redhat.com
Cc: Shyamsundar Ranganathan srang...@redhat.com, gluster-devel@gluster.org
Sent: Tuesday, July 1, 2014 3:10:29 PM
Subject: Re: [Gluster-devel] Feature review: Improved
On Tuesday 01 July 2014 05:55:51 Raghavendra Gowdappa wrote:
- Original Message -
Another thing to consider for future versions is to modify the current DHT to use consistent hashing, and even change the hash value (using the gfid instead of a hash of the name would solve the rename
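The gfid-keyed consistent hashing idea in the snippet above can be sketched as follows. This is an illustrative toy, not GlusterFS's actual DHT: ConsistentRing, vnodes, and _h are hypothetical names, and md5 merely stands in for whatever hash the ring would use.

```python
import bisect
import hashlib

def _h(key):
    # Stable position on the hash ring (md5 is just a stand-in here).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentRing:
    """Toy consistent-hash ring keyed on the gfid rather than the file name."""
    def __init__(self, bricks, vnodes=64):
        # Each brick owns several virtual points on the ring, so load
        # spreads evenly and adding a brick steals only small segments.
        self._ring = sorted((_h("%s#%d" % (b, v)), b)
                            for b in bricks for v in range(vnodes))

    def brick_for(self, gfid):
        # Walk clockwise from the gfid's position to the next ring point.
        keys = [pos for pos, _ in self._ring]
        i = bisect.bisect(keys, _h(gfid)) % len(self._ring)
        return self._ring[i][1]

# Placement depends only on the gfid, so renaming a file (which keeps its
# gfid) never changes which brick holds it, and adding a brick remaps only
# the keys falling into the new brick's ring segments.
```

Contrast this with name-hash DHT, where a rename can change the hash and force a migration or a linkto file.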
Hi everyone,
As everyone hopefully knows by now, improving the peer identification
mechanism within Glusterd is one of the features being targeted for
glusterfs-3.6. [0]
I had proposed this a while back, but had not been able to do much
work related to this till now. Varun (CCd) and I have been
Thank you all for the feedback.
Following will be the display shown to the user for snapshot delete command.
---
Case 1 : Single snap
[root@snapshot-24 glusterfs]# gluster snapshot delete snap-name
Deleting snap will erase all the information about
Hi,
while the erasure code xlator is being reviewed, I'm thinking about how to handle some kinds of errors.
In normal circumstances all bricks will give the same answers to the same requests; however, after some brick failures, underlying file system corruption or other factors, it's
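One common way to handle bricks that disagree is majority voting across the answers. The sketch below is hypothetical, not the ec xlator's actual logic: combine_answers is an invented helper, and fragments_needed plays the role of k in a k-of-n erasure code.

```python
from collections import Counter

def combine_answers(answers, fragments_needed):
    """Pick the answer reported by enough bricks to be trustworthy.

    answers: list of (brick, value) pairs, one per brick that responded.
    fragments_needed: minimum number of matching bricks (k in a k-of-n
    erasure code) required to consider an answer good.
    Returns (value, healthy_bricks, bad_bricks); raises if no quorum.
    """
    counts = Counter(v for _, v in answers)
    value, votes = counts.most_common(1)[0]
    if votes < fragments_needed:
        # Not enough agreement to reconstruct anything: report EIO.
        raise IOError("no consistent answer from enough bricks")
    healthy = [b for b, v in answers if v == value]
    bad = [b for b, v in answers if v != value]  # candidates for self-heal
    return value, healthy, bad
```

For example, with answers from three bricks where one disagrees, the majority value wins and the odd brick out is reported as a self-heal candidate.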
Hi,
the current implementation of the ec xlator takes an inodelk/entrylk before each operation to guarantee exclusive access to the inode. This blocks any other request to the same inode/entry until the previous operation has completed and unlocked it.
This adds a lot of latency to each
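The latency cost of a whole-inode lock can be illustrated with a toy byte-range lock. RangeLock below is a hypothetical sketch, not the ec xlator's locking code; it shows how writes to non-overlapping regions could proceed in parallel instead of serializing on a single inode lock.

```python
import threading

class RangeLock:
    """Byte-range lock: writers to disjoint regions do not block each
    other, unlike a whole-file inodelk that serializes every request."""
    def __init__(self):
        self._cv = threading.Condition()
        self._held = []  # list of (offset, length) ranges currently locked

    def _overlaps(self, off, length):
        # Two ranges overlap iff each starts before the other ends.
        return any(off < o + l and o < off + length for o, l in self._held)

    def acquire(self, off, length):
        with self._cv:
            while self._overlaps(off, length):
                self._cv.wait()  # block only on a conflicting range
            self._held.append((off, length))

    def release(self, off, length):
        with self._cv:
            self._held.remove((off, length))
            self._cv.notify_all()  # wake writers waiting on this range
```

Two writev calls touching disjoint offsets would both acquire immediately; with a whole-file lock the second would always wait for the first.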
hi Xavi,
Writev takes an inodelk on the whole file, so write speed is bad. With a ranged inodelk(offset, len), the IDA_KEY_SIZE xattr would not be consistent across bricks under out-of-order writev.
So how about using just IDA_KEY_VERSION and each brick's ia_size to check for data corruption? Drop IDA_KEY_SIZE, and lookup lock whole
On Tuesday 01 July 2014 21:37:57 haiwei.xie-soulinfo wrote:
hi Xavi,
Writev takes an inodelk on the whole file, so write speed is bad. With a ranged inodelk(offset, len), the IDA_KEY_SIZE xattr would not be consistent across bricks under out-of-order writev.
So how about using just IDA_KEY_VERSION and each brick's ia_size
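The version-plus-size check being proposed can be sketched as below. This is an illustrative fragment, not GlusterFS code: pick_consistent_copies and the dict layout are hypothetical, "version" stands in for the IDA_KEY_VERSION xattr, and "size" is each brick's ia_size from stat.

```python
def pick_consistent_copies(bricks, fragments_needed):
    """bricks: dict brick -> {"version": int, "size": int}, read from each
    brick's xattr and stat. A fragment is considered good when it carries
    the highest version AND agrees on file size; the per-brick stat size
    plus a shared version replaces a separately maintained size xattr."""
    latest = max(info["version"] for info in bricks.values())
    good = {b: i for b, i in bricks.items() if i["version"] == latest}
    sizes = {i["size"] for i in good.values()}
    if len(sizes) != 1 or len(good) < fragments_needed:
        # Disagreement even among up-to-date bricks: report EIO.
        raise IOError("cannot assemble a consistent view")
    return sorted(good), sizes.pop()
```

A brick left behind by an interrupted write shows a stale version (or a mismatched size) and is excluded, without every write having to update a size xattr under a whole-file lock.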
From: Xavier Hernandez xhernan...@datalab.es
On Monday 30 June 2014 16:18:09 Shyamsundar Ranganathan wrote:
Will this rebalance-on-access feature be enabled always, or only during a brick addition/removal, to move files that do not go to the affected brick while the main rebalance is
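For readers new to the idea, rebalance on access can be sketched as a lazy check at file-access time. This is a toy illustration under my own assumptions, not the proposed implementation: brick_for, on_access, and the md5 name hash are hypothetical stand-ins for DHT's real layout math.

```python
import hashlib

def brick_for(name, layout):
    """Map a file name to a brick in the given layout (a list of brick
    names), mimicking a DHT-style hash of the file name."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return layout[h % len(layout)]

def on_access(name, current_brick, new_layout, migrate):
    """Rebalance on access: when a file is touched, check whether it
    belongs elsewhere under the new layout and migrate it immediately,
    instead of waiting for the crawling rebalance to reach it."""
    target = brick_for(name, new_layout)
    if target != current_brick:
        migrate(name, current_brick, target)  # hot files move first
    return target
```

The question in the thread is whether this hook runs only while a brick add/remove rebalance is in flight, or permanently as a background policy.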
On 01/07/2014, at 11:30 AM, Kaushal M wrote:
snip
Varun (CCd) and I have been working on
this since last week, and are hoping to get at least the base
framework ready and merged into 3.6.
Cool. Personally, I reckon this is extremely important, as a lot
of future changes will rely on it being
13 matches