- Original Message -
> From: "Saravanakumar Arumugam"
> To: "Raghavendra Gowdappa"
> Cc: "Gluster Devel"
> Sent: Wednesday, May 4, 2016 6:48:46 PM
> Subject: Re: Possible spurious failure -
Hi,
I changed cluster.min-free-inodes to "0" and remounted the volume on the
clients. The inode-full messages no longer appear in syslog, but I see
the disperse-56 subvolume is still not being used.
Is there anything I can do to resolve this issue? I could destroy and
recreate the volume, but I am not sure that would fix it.
I also checked the df output; all 20 bricks look the same, like below:
/dev/sdu1 7.3T 34M 7.3T 1% /bricks/20
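Since min-free-inodes is driven by inode counts rather than block usage, comparing bricks with `df -i` can be more telling than plain `df`. A minimal sketch; the /bricks/* mount layout is an assumption taken from the df output above:

```shell
# Report the percentage of free inodes on each brick mount.
# The /bricks/* layout is an assumption from the df output above.
pct_free_inodes() {
    # In `df -i` output, column 2 is total inodes, column 4 is free inodes.
    df -i "$1" | awk 'NR==2 { printf "%.1f\n", ($4 / $2) * 100 }'
}

for mnt in /bricks/*; do
    printf '%s: %s%% inodes free\n' "$mnt" "$(pct_free_inodes "$mnt")"
done
```

If one brick reports a noticeably lower percentage than the rest, that would explain the subvolume being skipped.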
On Tue, May 3, 2016 at 1:40 PM, Raghavendra G wrote:
>
>
> On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban wrote:
>>
>> >1. What is the output of
Hello folks,
I've just started this week at Red Hat. Over the next year or so, I'll be
helping with cleaning up the existing CI pipeline and improving it so that we
have much better confidence with releases.
Amy has been helping me get an overview of the infrastructure we have and we
decided
Hi,
I checked with this(1) C program whether the FS is returning something
wrong, but that is not the case.
I also tried to understand the source code; somehow, in the statement
below, subvol_filled_inodes is set to true.
if (conf->du_stats[i].avail_inodes < conf->min_free_inodes) {
        subvol_filled_inodes = _gf_true;
This seems spurious. I have verified on the master branch and the test
passed. Let me know if you see this again.
Thanks,
saravaana
On 05/04/2016 06:35 PM, Raghavendra Gowdappa wrote:
Please have a look at:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/20432/consoleFull
However, the test runs successfully on my local machine, making me wonder
whether it's a spurious failure.
[root@unused glusterfs]# prove ./tests/bugs/changelog/bug-1208470.t
On 04/05/16 14:47, Raghavendra Gowdappa wrote:
- Original Message -
From: "Xavier Hernandez"
To: "Raghavendra Gowdappa"
Cc: "Gluster Devel"
Sent: Wednesday, May 4, 2016 5:37:56 PM
Subject: Re: [Gluster-devel]
On 05/04/2016 06:18 PM, ABHISHEK PALIWAL wrote:
> I am talking about the time taken by GlusterD to mark the process
> offline, because here GlusterD is responsible for marking the brick
> online/offline.
>
> Is it configurable?
No, there is no such configuration.
>
> On Wed, May 4, 2016 at 5:53
I am talking about the time taken by GlusterD to mark the process
offline, because here GlusterD is responsible for marking the brick
online/offline.
Is it configurable?
On Wed, May 4, 2016 at 5:53 PM, Atin Mukherjee wrote:
> Abhishek,
>
> See the response inline.
>
>
> On
- Original Message -
> From: "Xavier Hernandez"
> To: "Raghavendra Gowdappa"
> Cc: "Gluster Devel"
> Sent: Wednesday, May 4, 2016 5:37:56 PM
> Subject: Re: [Gluster-devel] Possible bug in the communications layer ?
Abhishek,
See the response inline.
On 05/04/2016 05:43 PM, ABHISHEK PALIWAL wrote:
> Hi Atin,
>
> Please reply: is there a configurable timeout parameter for the brick
> process to go offline that we can increase?
>
> Regards,
> Abhishek
>
> On Thu, Apr 21, 2016 at 12:34 PM, ABHISHEK PALIWAL
Hi Atin,
Please reply: is there a configurable timeout parameter for the brick
process to go offline that we can increase?
Regards,
Abhishek
On Thu, Apr 21, 2016 at 12:34 PM, ABHISHEK PALIWAL
wrote:
> Hi Atin,
>
> Please answer following doubts as well:
>
> 1. If
I think I've found the problem.
1567          case SP_STATE_READING_PROC_HEADER:
1568                  __socket_proto_read (priv, ret);
1569
1570                  /* there can be 'xdata' in read response, figure it out */
1571                  xdrmem_create (&xdr, proghdr_buf, default_read_size,
Hi Soumya,
Please find attached nfs.log file.
Regards,
Abhishek
On Wed, May 4, 2016 at 11:45 AM, ABHISHEK PALIWAL
wrote:
> Hi Soumya,
>
> Thanks for reply.
>
> Yes, I am getting following error in /var/log/glusterfs/nfs.log file
>
> [2016-04-25 06:27:23.721851] E
Hi Soumya,
Thanks for reply.
Yes, I am getting following error in /var/log/glusterfs/nfs.log file
[2016-04-25 06:27:23.721851] E [MSGID: 112109] [nfs.c:1482:init] 0-nfs:
Failed to initialize protocols
Please suggest how I can resolve it.
Regards,
Abhishek
On Wed, May 4, 2016 at 11:33 AM,
Out of the three bugs, one is because of a mistake in a backport that I did.
I will mark the other two as bad tests in the 3.7 branch and re-trigger my
patch so that we can merge it.
On Tue, May 3, 2016 at 7:08 PM, Atin Mukherjee
wrote:
> Strange, there is no delta in
Hi Abhishek,
The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be
the reason the client is not able to set ACLs. Could you please check the log
file '/var/log/glusterfs/nfs.log' for any errors logged with respect to
protocol registration failures?
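As a quick check, the registration can be verified against the portmapper directly; program number 100227 is the NFS ACL sideband protocol (the name may appear as nfs_acl or nfsacl depending on /etc/rpc, and the localhost target is an assumption):

```shell
# Look for the NFS ACL program (100227) among portmapper registrations.
# Host and service-name spelling are assumptions; adjust as needed.
if rpcinfo -p localhost | grep -Eq 'nfs_?acl|100227'; then
    echo "nfsacl is registered"
else
    echo "nfsacl is NOT registered -- check nfs.log for init errors"
fi
```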
Thanks,
Soumya
On 05/04/2016