Hi All,
There is a good chance that the inode on which the unref came has already been
zero-refed and added to the purge list. This can happen when the inode table is
being destroyed (glfs_fini is what destroys the inode table). Consider a
directory 'a' which has a file 'b'. Now as part of ...
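To make the scenario above concrete, here is a minimal standalone sketch in C
(hypothetical names and structures, not the actual libglusterfs inode code) of
why a late unref has to recognise an inode that has already been zero-refed and
queued on the purge list while the table is being torn down:

/* Toy model of the race described above: table destruction force-drops all
 * references and parks the inode on the purge list, so an unref that arrives
 * afterwards must be a no-op rather than pushing the count below zero. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_inode {
    int             ref;       /* current reference count */
    bool            on_purge;  /* already queued for destruction? */
    pthread_mutex_t lock;
};

/* What table destruction (think glfs_fini) does in this toy model:
 * drop every reference at once and mark the inode as purged. */
static void table_retire(struct fake_inode *in)
{
    pthread_mutex_lock(&in->lock);
    in->ref = 0;
    in->on_purge = true;
    pthread_mutex_unlock(&in->lock);
}

/* A late unref must notice the purge state and bail out. */
static void safe_unref(struct fake_inode *in)
{
    pthread_mutex_lock(&in->lock);
    if (in->on_purge) {
        pthread_mutex_unlock(&in->lock);  /* already retired: nothing to do */
        return;
    }
    if (--in->ref == 0)
        in->on_purge = true;              /* would move to the purge list here */
    pthread_mutex_unlock(&in->lock);
}

int main(void)
{
    struct fake_inode in = { .ref = 2, .on_purge = false,
                             .lock = PTHREAD_MUTEX_INITIALIZER };

    table_retire(&in);  /* the table is being destroyed */
    safe_unref(&in);    /* late unref: safely ignored   */
    printf("ref=%d on_purge=%d\n", in.ref, (int)in.on_purge);
    return 0;
}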
To make things relatively easy for the cleanup() function in the test
framework, I think it would be better to ensure that uss.t itself deletes the
snapshots and the volume once the tests are done. Patch [1] has been
submitted for review.
[1] https://review.gluster.org/#/c/glusterfs/+/22649/
Regards
The failure looks similar to the issue I had mentioned in [1].
In short, for some reason the cleanup (the cleanup function that we call in
our .t files) seems to be taking more time and also not cleaning up
properly. This leads to problems for the 2nd iteration (where basic things
such as volume creation ...
Hi Raghavendra,
./tests/basic/uss.t is timing out consistently in the release-6 branch. One
such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can you
please look into this?
--
Thanks,
Sanju
Yes, please open GitHub issues for these RFEs and close the BZs.
Thanks
On Tue, Apr 30, 2019 at 6:46 AM Soumya Koduri wrote:
> Hi,
>
> To track any new feature or improvements we are currently using GitHub.
> I assume those issues refer to the ones which are actively being worked
> upon. How do we track backlogs which may not get addressed (at least in
> the near future)?
Hi,
To track any new feature or improvements we are currently using GitHub.
I assume those issues refer to the ones which are actively being worked
upon. How do we track backlogs which may not get addressed (at least in
the near future)?
For example, I am planning to close a couple of RFE BZs [1] ...
Thanks, Amar, for sharing the patch. I will test and share the result.
On Tue, Apr 30, 2019 at 2:23 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
> Shreyas/Kevin tried to address it some time back using
> https://bugzilla.redhat.com/show_bug.cgi?id=1428049 (
> https://review.gluster.org/16830)
Shreyas/Kevin tried to address it some time back using
https://bugzilla.redhat.com/show_bug.cgi?id=1428049
(https://review.gluster.org/16830).
I vaguely remember that the reason for keeping the hash value at 1 dates back
to the time when we had the dictionary itself sent as the on-wire protocol,
and in most other p...
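As a side note, here is a small self-contained sketch in C (hypothetical names,
not the real dict_t implementation) showing why a hash size of 1 matters: with a
single bucket every key collides, so each lookup degenerates into a linear scan
of the whole chain, while a larger bucket count keeps the chains short:

/* Toy hash table: compare lookup cost with 1 bucket vs. 64 buckets.
 * (Memory is intentionally not freed; this is only a demonstration.) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pair {
    char        *key;
    struct pair *next;
};

struct tiny_dict {
    unsigned      nbuckets;
    struct pair **buckets;
};

static unsigned hash_str(const char *s, unsigned nbuckets)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % nbuckets;  /* with nbuckets == 1 this is always 0 */
}

static struct tiny_dict *dict_new(unsigned nbuckets)
{
    struct tiny_dict *d = calloc(1, sizeof(*d));
    d->nbuckets = nbuckets;
    d->buckets  = calloc(nbuckets, sizeof(*d->buckets));
    return d;
}

static void dict_set(struct tiny_dict *d, const char *key)
{
    unsigned b = hash_str(key, d->nbuckets);
    struct pair *p = calloc(1, sizeof(*p));
    p->key  = strdup(key);
    p->next = d->buckets[b];   /* prepend to the bucket's chain */
    d->buckets[b] = p;
}

/* Returns how many key comparisons the lookup needed, or -1 if absent. */
static int dict_lookup_cost(struct tiny_dict *d, const char *key)
{
    int cost = 0;
    for (struct pair *p = d->buckets[hash_str(key, d->nbuckets)]; p; p = p->next) {
        cost++;
        if (strcmp(p->key, key) == 0)
            return cost;
    }
    return -1;
}

int main(void)
{
    struct tiny_dict *one  = dict_new(1);   /* the "hash value 1" case     */
    struct tiny_dict *many = dict_new(64);  /* a more typical bucket count */
    char key[32];

    for (int i = 0; i < 1000; i++) {
        snprintf(key, sizeof(key), "key-%d", i);
        dict_set(one, key);
        dict_set(many, key);
    }
    printf("lookups for key-0: 1 bucket -> %d comparisons, 64 buckets -> %d\n",
           dict_lookup_cost(one, "key-0"), dict_lookup_cost(many, "key-0"));
    return 0;
}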
Hi all,
Some of you folks may be familiar with the HA solution provided for
nfs-ganesha by Gluster using Pacemaker and Corosync.
That feature was removed in glusterfs 3.10 in favour of the common HA
project "Storhaug". Even Storhaug has not progressed
much in the last two years, and current developme...