The failure looks similar to the issue I mentioned in [1].

In short, for some reason the cleanup function (the one we call in our .t
files) seems to be taking more time than before and also not cleaning up
properly. This causes problems in the 2nd iteration, where even basic
operations such as volume creation or volume start fail with ENODATA or
ENOENT errors.

The 2nd iteration of the uss.t run had the following errors:

"[2019-04-29 09:08:15.275773]:++++++++++ G_LOG:./tests/basic/uss.t: TEST:
39 gluster --mode=script --wignore volume set patchy nfs.disable false
++++++++++
[2019-04-29 09:08:15.390550]  : volume set patchy nfs.disable false :
SUCCESS
[2019-04-29 09:08:15.404624]:++++++++++ G_LOG:./tests/basic/uss.t: TEST: 42
gluster --mode=script --wignore volume start patchy ++++++++++
[2019-04-29 09:08:15.468780]  : volume start patchy : FAILED : Failed to
get extended attribute trusted.glusterfs.volume-id for brick dir
/d/backends/3/patchy_snap_mnt. Reason : No data available
"

These are the initial steps to create and start the volume. Why the
trusted.glusterfs.volume-id extended attribute is absent is not clear. The
analysis in [1] showed ENOENT errors instead (i.e. the export directory
itself was absent). I suspect this is caused by some issue with the cleanup
mechanism at the end of the tests.
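
One way to tell the two failure modes apart would be to inspect the brick
directory by hand between the two iterations. A minimal sketch, using the
path from the log above and getfattr, the standard tool for reading xattrs:

    # Does the export directory survive cleanup? (ENOENT case)
    ls -ld /d/backends/3/patchy_snap_mnt

    # Is the volume-id xattr still present on it? (ENODATA case)
    getfattr -n trusted.glusterfs.volume-id -e hex /d/backends/3/patchy_snap_mnt

If the directory exists but the xattr is gone, cleanup removed (or recreated)
the directory without glusterd re-stamping it; if the directory is missing
entirely, we are back to the ENOENT situation from [1].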

[1] https://lists.gluster.org/pipermail/gluster-devel/2019-April/056104.html

On Tue, Apr 30, 2019 at 8:37 AM Sanju Rakonde <srako...@redhat.com> wrote:

> Hi Raghavendra,
>
> ./tests/basic/uss.t is timing out in release-6 branch consistently. One
> such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can
> you please look into this?
>
> --
> Thanks,
> Sanju
>
