Hi,
Please find my comments inline.
> Hi all
>
> A 'Quota' problem occurs in Gluster-3.7.6 in the following test case.
>
> (It does not occur if the volume quota is not enabled.)
>
> A volume usage mismatch occurs when using glusterfs-3.7.6 on ZFS.
>
> Can you help with the following problems?
>
>
> 1. z
hi,
Background: the quick-read and open-behind xlators were developed to help
small-file read workloads (e.g. an Apache web server, tar) by returning the
data of the file in the lookup FOP itself. What happens is, when a lookup
FOP is executed, GF_CONTENT_KEY is added in xdata with the max-length, and
posix x
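For anyone not familiar with that code path, below is a minimal standalone
sketch (plain C, deliberately not the real GlusterFS dict/xlator API) of the
idea being described: a lookup that, for files under an assumed size cap,
also returns the file's content so that no separate read call is needed.
The names lookup_with_content() and MAX_CONTENT_LEN are hypothetical.

/* Standalone sketch, not GlusterFS code. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define MAX_CONTENT_LEN (64 * 1024)  /* assumed cap, analogous to the
                                        max-length sent with GF_CONTENT_KEY */

/* Stat the file; if it is a regular file small enough, also read its full
 * content into *content (caller frees). Returns 0 on successful lookup. */
static int lookup_with_content(const char *path, struct stat *stbuf,
                               char **content, size_t *content_len)
{
        *content = NULL;
        *content_len = 0;

        if (stat(path, stbuf) != 0)
                return -1;

        if (!S_ISREG(stbuf->st_mode) || stbuf->st_size > MAX_CONTENT_LEN)
                return 0;  /* lookup succeeds, data is fetched by a later read */

        int fd = open(path, O_RDONLY);
        if (fd < 0)
                return 0;

        char *buf = malloc(stbuf->st_size);
        ssize_t n = buf ? read(fd, buf, stbuf->st_size) : -1;
        close(fd);

        if (n >= 0 && n == stbuf->st_size) {
                *content = buf;
                *content_len = (size_t)n;
        } else {
                free(buf);
        }
        return 0;
}

int main(int argc, char **argv)
{
        if (argc < 2)
                return 1;
        struct stat st;
        char *data;
        size_t len;
        if (lookup_with_content(argv[1], &st, &data, &len) == 0)
                printf("size=%lld, content piggy-backed on lookup: %s\n",
                       (long long)st.st_size, data ? "yes" : "no");
        free(data);
        return 0;
}

In the real xlators the client asks for this via GF_CONTENT_KEY in the
lookup's xdata and posix fills the data in on the brick side, but the shape
of the optimisation is the same.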
Initially I suspected server-quorum to be the culprit, but that is not the
case. By any chance, is your network flaky?
On 02/01/2016 10:33 PM, Logan Barfield wrote:
> Volume Name: data02
> Type: Replicate
> Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
> Status: Started
> Number of Bricks: 1
On 02/01/2016 10:24 PM, Pranith Kumar Karampuri wrote:
On 02/01/2016 10:16 PM, Joe Julian wrote:
WTF?
if (!xattrs_list) {
        ret = -EINVAL;
        gf_msg (this->name, GF_LOG_ERROR, -ret,
                AFR_MSG_NO_CHANGELOG,
                "Unable to fetch afr pending c
Please take a look at the updated test results.
Test: find /mnt/volume -type d
RAM usage after "find" finishes: ~ 10.8G (see "ps" output [1]).
Statedump after "find" finishes: [2].
Then I did drop_caches, and RAM usage dropped to ~4.7G [3].
Statedump after drop_caches: [4].
Here is diff between s
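(For reference, the drop_caches step above is just a write of "2" to
/proc/sys/vm/drop_caches, which asks the kernel to reclaim dentries and
inodes; a minimal sketch, assuming the standard Linux /proc interface and
root privileges:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* Same effect as: echo 2 > /proc/sys/vm/drop_caches (as root). */
        int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        if (fd < 0) {
                perror("open /proc/sys/vm/drop_caches (needs root)");
                return 1;
        }
        if (write(fd, "2", 1) != 1) {
                perror("write");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}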
Volume Name: data02
Type: Replicate
Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-stor01:/export/data/brick02 <-- 10.1.1.10
Brick2: gluster-stor02:/export/data/brick02 <-- 10.1.1.11
Options Reconfigured:
s
WTF?
if (!xattrs_list) {
        ret = -EINVAL;
        gf_msg (this->name, GF_LOG_ERROR, -ret, AFR_MSG_NO_CHANGELOG,
                "Unable to fetch afr pending changelogs. Is op-version"
                " >= 30707?");
        goto out;
}
On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
Hi,
On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I haven't waited fo
Hi Oleksandr,
On 01/02/16 09:09, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while.
That's the expected behavior after applying the nlookup count patch. As
it's configured now, gluster won'
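To make the nlookup accounting concrete, here is a simplified standalone
model (plain C, not FUSE or GlusterFS code; all names are made up) of what
is being described: every successful lookup bumps a per-inode nlookup
count, a FORGET from the kernel (for example after drop_caches) decrements
it, and the inode is only purged from the table once the count reaches
zero, which is why a large burst of forgets keeps the client busy for a
while.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct model_inode {
        uint64_t ino;
        uint64_t nlookup;  /* outstanding kernel references */
};

/* Each successful lookup of 'ino' bumps the nlookup count. */
static struct model_inode *do_lookup(struct model_inode *in, uint64_t ino)
{
        if (!in) {
                in = calloc(1, sizeof(*in));
                if (!in)
                        abort();
                in->ino = ino;
        }
        in->nlookup++;
        return in;
}

/* The kernel forgets 'count' references; purge the inode when it hits 0. */
static struct model_inode *do_forget(struct model_inode *in, uint64_t count)
{
        if (!in)
                return NULL;
        in->nlookup = (in->nlookup > count) ? in->nlookup - count : 0;
        if (in->nlookup == 0) {
                printf("purging inode %llu\n", (unsigned long long)in->ino);
                free(in);
                return NULL;
        }
        return in;
}

int main(void)
{
        struct model_inode *in = NULL;
        in = do_lookup(in, 42);  /* e.g. find(1) stats the entry      */
        in = do_lookup(in, 42);  /* ...and stats it again             */
        in = do_forget(in, 2);   /* drop_caches: kernel forgets both  */
        return 0;
}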
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and the glusterfs process's CPU usage
goes to 100% for a while. I hadn't waited for it to drop to 0%, and
instead performed the unmount. It seems glusterfs is purging inodes, and that's
why it uses 100% of the CPU. I've re-tested it, waiting