Merged the patch from Mohit
https://review.gluster.org/#/c/glusterfs/+/21898/ is now merged. The issue
is not completely *fixed*, but the memory consumption has been RCA'd. We are
working on fixing the issue; in the meantime, the above merge helps unblock
the currently pending patches.
Please rebase your patches.
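For the pending changes, a minimal rebase sketch, assuming the usual Gerrit
workflow against master (the "Rebase" button in the Gerrit UI is equivalent):

  # Pull the latest master, which now contains the merged patch,
  # and rebase the local topic branch on top of it.
  git fetch origin
  git rebase origin/master

  # After resolving any conflicts, re-push the change for review.
  # refs/for/master is the standard Gerrit push target; adjust the
  # remote/branch names if your setup differs.
  git push origin HEAD:refs/for/master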
Yeah, but Pranith mentioned that the issue is seen even without the iobuf
patch, so wouldn't the test fail even after the thread count is fixed? Hence,
reducing the volume count as suggested may be a better option.
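To illustrate the volume-count suggestion, a hypothetical sketch of the kind
of change meant here, assuming the test drives volume creation from a single
count variable (the names NUM_VOLS and patchy_vol* below are illustrative,
not the actual contents of the .t file):

  # Lowering the count directly lowers the number of bricks multiplexed
  # into one glusterfsd process, and hence its peak memory.
  NUM_VOLS=10   # assumed variable name; lowered from a larger value

  for i in $(seq 1 "$NUM_VOLS"); do
      gluster volume create "patchy_vol$i" "$(hostname):/bricks/brick$i" force
      gluster volume start "patchy_vol$i"
  done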
Regards,
Poornima
On Thu, Dec 20, 2018 at 2:41 PM Amar Tumballi wrote:
Considering we have the effort to reduce the threads in progress, should
we mark it as a known issue till we get the other thread-reduction patch
merged?
-Amar
On Thu, Dec 20, 2018 at 2:38 PM Poornima Gurusiddaiah
wrote:
So, this failure is related to the iobuf patch [1]. Thanks to Pranith for
identifying this. The patch increases memory consumption in the brick-mux
use case (**) and causes an OOM kill, but it is not a problem with the
patch itself. The only way to rightly fix it is to fix issue [2]. That
said w
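For anyone hitting this locally, one way to confirm the OOM kill and watch
the brick-mux process memory while the test runs (standard Linux tooling,
not part of the test itself):

  # Did the kernel OOM-kill a brick process?
  dmesg | grep -iE 'out of memory|oom-kill'

  # Track resident memory of the multiplexed brick process (glusterfsd)
  # while the test is running.
  watch -n 5 'ps -C glusterfsd -o pid,rss,vsz,args'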
Since yesterday, at least 10+ patches have failed regression on
./tests/bugs/core/bug-1432542-mpx-restart-crash.t.
Help debugging them soon would be appreciated.
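To reproduce locally, a sketch assuming a built glusterfs source tree with
the usual regression harness in the repository root:

  # Run just the failing test through the regression harness.
  ./run-tests.sh ./tests/bugs/core/bug-1432542-mpx-restart-crash.t

  # Or run the single .t file directly via prove.
  prove -vf ./tests/bugs/core/bug-1432542-mpx-restart-crash.t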
Regards,
Amar
--
Amar Tumballi (amarts)