Re: [Gluster-devel] quota-rename.t core in netbsd

2016-10-06 Thread Manikandan Selvaganesh
Hi,

I looked at the test case and everything looks fine.

Is it possible to get a NetBSD machine to debug this issue? Alternatively,
could you attach the core and log files so we can debug?



--
Thanks & Regards,
Manikandan Selvaganesh.

On Wed, Oct 5, 2016 at 11:25 PM, Vijay Bellur wrote:

> Hi All,
>
> I observed a few crashes due to quota-rename.t in netbsd regression
> runs [1] [2].  Raghavendra - can you please take a look when you get a
> chance?
>
> The core files and logs cannot be downloaded from the URLs in jenkins
> job console history for NetBSD. I have logged a bug [3] on the
> infrastructure for that.
>
> Thanks,
> Vijay
>
> [1] https://build.gluster.org/job/netbsd7-regression/942/consoleFull
>
> [2]  https://build.gluster.org/job/netbsd7-regression/945/console
>
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1382097

Re: [Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-06 Thread Surabhi Bhalothia


On 10/06/2016 11:36 AM, Soumya Koduri wrote:



On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote:



On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri wrote:

Hi,

With http://review.gluster.org/#/c/15051/, performance.client-io-threads
is enabled by default. But with that we see a regression in the
nfs-ganesha application when trying to un-export or re-export any
glusterfs volume. The same applies to any gfapi application using
glfs_fini().

More details and the RCA can be found at [1].

In short, the iot-worker threads spawned (when the above option is
enabled) are not cleaned up as part of the io-threads xlator's ->fini(),
and those threads could end up accessing invalid/freed memory after
glfs_fini().
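
To make the failure window concrete, here is a minimal gfapi lifecycle
sketch; the volume name, server host, and port below are placeholders
(not details from the bug report). The crash described above happens
after glfs_fini() returns, if any iot-worker thread is still running
against the already torn-down graph:

    /* Minimal gfapi client: init, use, and tear down a volume handle.
     * "testvol" and "server1" are placeholder values for illustration. */
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("testvol");      /* create the virtual mount object */
        if (!fs)
            return 1;

        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) != 0) {              /* fetch volfile, build xlator graph */
            glfs_fini(fs);
            return 1;
        }

        /* ... application I/O via glfs_* calls ... */

        /* glfs_fini() tears down the graph and frees its memory; any
         * worker thread that io-threads leaves running past this point
         * may dereference freed structures and crash the process. */
        glfs_fini(fs);
        return 0;
    }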

The actual fix is to make the io-threads xlator's ->fini() clean up
those threads before exiting. But since those threads' IDs are
currently not stored, the fix could be very intricate and take a
while. So, until then, to avoid crashing all existing applications, I
suggest keeping this option disabled by default and documenting this as
a known issue with enabling this option in the release notes.
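
For reference, the kind of cleanup being proposed would look roughly
like the sketch below: record each worker's pthread_t when it is
spawned, then signal and join them from fini(). This is a hypothetical
illustration of the pattern only; none of the names (worker_pool_t,
stopping, etc.) come from the actual io-threads xlator.

    /* Hypothetical worker-pool teardown illustrating the proposed fini() fix. */
    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_t      *threads;   /* IDs recorded when each worker was spawned */
        int             count;
        pthread_mutex_t lock;
        pthread_cond_t  cond;      /* workers wait on this for new requests */
        bool            stopping;
    } worker_pool_t;

    /* Called from the xlator's fini(): ask every worker to exit, then
     * join it, so no thread can outlive the xlator's memory. */
    static void
    worker_pool_teardown(worker_pool_t *pool)
    {
        pthread_mutex_lock(&pool->lock);
        pool->stopping = true;                /* workers check this flag and return */
        pthread_cond_broadcast(&pool->cond);  /* wake any worker idling on the queue */
        pthread_mutex_unlock(&pool->lock);

        for (int i = 0; i < pool->count; i++)
            pthread_join(pool->threads[i], NULL);
    }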

I sent a patch to revert the commit -
http://review.gluster.org/#/c/15616/ [2]


Good catch! I think the correct fix would be to make sure all threads
die as part of PARENT_DOWN then?


From my understanding, these threads should be cleaned up as part of
xlator->fini(). I am not sure if it needs to be handled for PARENT_DOWN
as well. Do we re-spawn the threads as part of PARENT_UP then?


Till that part gets fixed, can we switch this option back to off by
default to avoid the regressions on master and the release-3.9 branch?
I agree with Soumya on switching this option back to off until we have
a full fix, as we are hitting regressions with both Samba and ganesha.
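
Until the revert (or the proper fix) lands, deployments already running
a build with the new default can presumably fall back to the old
behaviour per volume; <VOLNAME> below is a placeholder:

    # Disable client-side io-threads for a single volume (workaround
    # for the regression discussed in this thread).
    gluster volume set <VOLNAME> performance.client-io-threads off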


Thanks,
Soumya




Comments/Suggestions are welcome.

Thanks,
Soumya

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1380619#c11

[2] http://review.gluster.org/#/c/15616/





--
Pranith


