I think this is because of commit 96fb3562. These options, which already
existed in readdir-ahead but were not exposed via volume set, were exposed
as volume set options (allowed to be set with op-version GD_OP_VERSION_3_9_1):
rda-request-size, rda-low-wmark, rda-cache-limit, rda-high-wmark
Along wi
This is definitely a bug and we have broken the older client support. [1]
is the one which has caused it.
We changed the type of rda-request-size from int to size_t, because of
which the older clients are unable to understand the KB format. I remember
when this patch was in its implementation
Pranith, Ravi,
While testing 3.10 I hit a hung FUSE mount, and on further debugging
it looks like there are 2 problems here:
1) A deadlock in gf_timer locks, when invoked from a fault signal handler
2) The fault itself being caused due to AFR inode ctx pointer being
invalid, and hence a c
Poornima/Du,
I encountered a flood of logs from readdir-ahead when running an ls -l
performance benchmark, as follows:
[2017-02-16 18:57:10.358205] E [MSGID: 129006]
[readdir-ahead.c:576:rda_opendir] 0-testvolssd-readdir-ahead: Dict get
of key:readdir-filter-directories failed with :-2
The
On 02/16/2017 07:23 PM, Jeff Darcy wrote:
It's *primarily* a test tool. It could also be useful to sysadmins IMO,
hence its inclusion, but I guess I could live with it being left out.
Oh! We can add it, no problem; I thought it was a leftover by accident.
It might make more sense for gf_atta
> On an upgrade test, when upgrading clients, glusterfs-fuse RPM now
> needed libgfapi, this is due to gf_attach being packaged as a part of
> glusterfs-fuse.
>
> Jeff, we do not need to package gf_attach, right? This is a test tool,
> if I am correct.
It's *primarily* a test tool. It could also
On an upgrade test, when upgrading clients, glusterfs-fuse RPM now
needed libgfapi, this is due to gf_attach being packaged as a part of
glusterfs-fuse.
Jeff, we do not need to package gf_attach, right? This is a test tool,
if I am correct.
Niels, you did add this to .gitignore, but looks li
Hi,
Post upgrading brick nodes from 3.8.8 to 3.10rc0 (sort of), I tried to
mount from a 3.8.8 client (as I had not upgraded the client till then).
The mount failed with the following in the logs (at the end of the mail).
The issue was that I did an rpm -U to get the latest version, so all v
On Feb 15, 2017 5:39 PM, "Jeff Darcy" wrote:
One of the issues that has come up with multiplexing is that all of the
bricks in a process end up sharing a single log file. The reaction from
both of the people who have mentioned this is that we should find a way to
give each brick its own log even
On Thu, Feb 16, 2017 at 3:43 AM, Xavier Hernandez wrote:
> Hi everyone,
>
> I would need some reviews if you have some time:
>
> A memory leak fix in fuse:
> * Patch already merged in master and 3.10
> * Backport to 3.9: https://review.gluster.org/16402
> * Backport to 3.8: https://re
I may not be a code contributor, but I do tend to read the code a lot to
figure out how things work and to track down bugs. I find that format
changes unrelated to the code changes in the same commit just make the
whole commit more complex and harder to read.
My preference would be
On 02/16/2017 03:35 AM, Xavier Hernandez wrote:
- 1421649: Ashis/Niels, when can we expect a fix to land for this?
I think this will require more thinking and participation from experts
on security and selinux to come up with a good and clean solution. Not
sure if this can be done before 3.10
On 02/16/2017 05:07 AM, Niels de Vos wrote:
On Wed, Feb 15, 2017 at 08:47:20PM -0500, Shyam wrote:
Current bug list [2]:
- 1415226: Kaleb/Niels, do we need to do more for the python dependency,
or is the last fix in?
There was an email where these python2 changes were causing problems
with upg
On Thu, Feb 16, 2017 at 08:41:23AM -0500, Jeff Darcy wrote:
> In the last few days, I've seen both of these kinds of review comments
> (not necessarily on my own patches or from the same reviewers).
>
> (a) "Please fix the style in the entire function where you changed one line."
>
> (b) "This st
On 02/16/2017 05:27 AM, Rajesh Joseph wrote:
On Thu, Feb 16, 2017 at 9:46 AM, Ravishankar N wrote:
On 02/16/2017 04:09 AM, Jeff Darcy wrote:
One of the issues that has come up with multiplexing is that all of the
bricks in a process end up sharing a single log file. The reaction from
both of
In the last few days, I've seen both of these kinds of review comments (not
necessarily on my own patches or from the same reviewers).
(a) "Please fix the style in the entire function where you changed one line."
(b) "This style change should be in a separate patch."
It's clearly not helpful to
> Debugging will involve getting far more/bigger files from customers
> unless we have a script (?) to grep out only those messages pertaining
> to the volume in question. IIUC, this would just be grepping for the
> volname and then determining which brick each message pertains to
> based on the br
> What about the log levels? Each volume can configure different log
> levels. Will you carve
> out a separate process in case log levels are changed for a volume?
I don't think we need to go that far, but you do raise a good point.
Log levels would need to be fetched from a brick-specific locatio
> As for file descriptor count/memory usage, I think we should be okay
> as it is not any worse than that in the non-multiplexed approach we
> have today.
I don't think that's true. Each logging context allocates a certain
amount of memory. Let's call that X. With N bricks in separate
processes
On Thu, Feb 16, 2017 at 9:46 AM, Ravishankar N wrote:
> On 02/16/2017 04:09 AM, Jeff Darcy wrote:
>>
>> One of the issues that has come up with multiplexing is that all of the
>> bricks in a process end up sharing a single log file. The reaction from
>> both of the people who have mentioned this
On Wed, Feb 15, 2017 at 08:47:20PM -0500, Shyam wrote:
> Hi,
>
> The 3.10 release tracker [1], shows 6 bugs needing a fix in 3.10. We need to
> get RC1 out so that we can start tracking the same for a potential release.
>
> Request folks on these bugs to provide a date by when we can expect a fix
We had a well-attended and active (and productive) meeting this time.
Thank you everyone for your attendance.
We discussed 3.10, 3.8.9 and an upcoming infra downtime.
shyam once again reminded everyone about the very close release date
for 3.10. 3.10.0 is still expected on the 21st. An RC1 release
Hi everyone,
I would need some reviews if you have some time:
A memory leak fix in fuse:
* Patch already merged in master and 3.10
* Backport to 3.9: https://review.gluster.org/16402
* Backport to 3.8: https://review.gluster.org/16403
A safe fallback for dynamic code generation in E
Hi Shyam,
On 16/02/17 02:47, Shyam wrote:
Hi,
The 3.10 release tracker [1], shows 6 bugs needing a fix in 3.10. We
need to get RC1 out so that we can start tracking the same for a
potential release.
Request folks on these bugs to provide a date by when we can expect a
fix for these issues.
Re