Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-11-30 Thread Raghavendra Gowdappa
adding gluster-users and gluster-devel as the discussion has some generic points +Gluster-users +Gluster Devel On Mon, Mar 4, 2019 at 11:43 PM Raghavendra Gowdappa wrote: > > > On Mon, Mar 4, 2019 at 11:26 PM Yaniv Kaul wrote: > >> Is it that busy that it cannot reply f

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-08 Thread Raghavendra Gowdappa
Thanks!! On Thu, May 9, 2019 at 8:34 AM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi, > > Ok, It is posted to https://review.gluster.org/#/c/glusterfs/+/22687/ > > > > > > > > *From:* Raghavendra Gowdappa > *Sent:* We

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-08 Thread Raghavendra Gowdappa
On Wed, May 8, 2019 at 1:29 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi 'Milind Changire' , > > The leak is getting more and more clear to me now. the unsolved memory > leak is because of in gluterfs version 3.12.15 (in my env)the ssl context > is a shared

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-25 Thread Raghavendra Gowdappa
ng of the same socket > at the same time, but after my test with this patch this problem also > exists, so I think event_handled is still called too early to allow > concurrent handling of the same socket happen, and after move it to the end > of socket_event_poll this glusterd stuck

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-15 Thread Raghavendra Gowdappa
reading didn't find any issues with the way iobref is handled even when there is concurrent reading when the previous message was still not notified. I'll continue to investigate how objects are shared across two instances of pollin. Will post if I find anything interesting. cynthia > > *From:

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-09 Thread Raghavendra Gowdappa
On Mon, Apr 8, 2019 at 7:42 AM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi glusterfs experts, > > Good day! > > In my test env, sometimes glusterd stuck issue happened, and it is not > responding to any gluster commands, when I checked this issue I find that >
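A quick way to see where a hung glusterd is stuck is to dump backtraces of all its threads. A minimal sketch, assuming gdb is installed on the node and glusterd is still running:

    gdb -p $(pidof glusterd) -batch -ex "thread apply all bt" > /tmp/glusterd-backtrace.txt

The per-thread backtraces usually show whether the daemon is blocked on a mutex, a socket event, or something else, which narrows down the stuck code path.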

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-28 Thread Raghavendra Gowdappa
On Thu, Mar 28, 2019 at 2:37 PM Xavi Hernandez wrote: > On Thu, Mar 28, 2019 at 3:05 AM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar 27, 2019 at 2:20 PM Pranith

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
7, 2019 at 1:13 PM Pranith Kumar Karampuri < >>> pkara...@redhat.com> wrote: >>> >>>> >>>> >>>> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez >>>> wrote: >>>> >>>>> On Wed, Mar 27, 2019 at 1

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
On Wed, Mar 27, 2019 at 4:22 PM Raghavendra Gowdappa wrote: > > > On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez > wrote: > >> Hi Raghavendra, >> >> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa >> wrote: >> >>> All, >>> &g

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Raghavendra Gowdappa
On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez wrote: > Hi Raghavendra, > > On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa > wrote: > >> All, >> >> Glusterfs cleans up POSIX locks held on an fd when the client/mount >> through which those locks

[Gluster-devel] POSIX locks and disconnections between clients and bricks

2019-03-26 Thread Raghavendra Gowdappa
All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from the bricks/server. This helps Glusterfs avoid a stale lock problem later (for example, if the application unlocks while the connection was still down). However, this means

Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Raghavendra Gowdappa
On Thu, Mar 21, 2019 at 4:16 PM Atin Mukherjee wrote: > All, > > In the last few releases of glusterfs, with stability as a primary theme > of the releases, there has been lots of changes done on the code > optimization with an expectation that such changes will have gluster to > provide better

Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-21 Thread Raghavendra Gowdappa
gluster server. > > So, following your suggestions, I executed, on s04 node, the top command. > In attachment, you can find the related output. > top output doesn't contain cmd/thread names. Was there anything wrong. > Thank you very much for your help. > Regards, > Mauro
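For reference, per-thread CPU usage with command/thread names (the detail missing from the attached output) can be captured by running top in batch mode with threads enabled; a sketch, with the pid and sampling values as placeholders:

    top -bHd 5 -n 3 -p <pid-of-glusterfsd> > /tmp/top-threads.txt

The -H flag makes top list individual threads, so the COMMAND column then shows which threads (e.g. event/epoll threads) are consuming CPU.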

Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-14 Thread Raghavendra Gowdappa
oon as the activity load will be high. > Thank you, > Mauro > > On 14 Mar 2019, at 04:57, Raghavendra Gowdappa > wrote: > > > > On Wed, Mar 13, 2019 at 3:55 PM Mauro Tridici > wrote: > >> Hi Raghavendra, >> >> Yes, server.event-thread has been
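For completeness, the event-thread settings discussed in this thread are ordinary volume options; a sketch with purely illustrative values (the right numbers depend on the workload and were being worked out in this thread):

    gluster volume set <volname> server.event-threads 4
    gluster volume set <volname> client.event-threads 4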

Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-13 Thread Raghavendra Gowdappa
x6b) [0x55ef0101632b] ) 0-: > received signum (15), shutting down > > *CRITICALS:* > *CWD: /var/log/glusterfs * > *COMMAND: grep " C " *.log |grep "2019-03-13 06:"* > > no critical errors at 06:xx > only one critical error during the day > > *[root@s06

Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-11 Thread Raghavendra Gowdappa
f “s06" gluster server. > > I noticed a lot of intermittent warning and error messages. > > Thank you in advance, > Mauro > > > > On 4 Mar 2019, at 18:45, Raghavendra Gowdappa wrote: > > > +Gluster Devel , +Gluster-users > > > I would like to point

Re: [Gluster-devel] [Gluster-users] Experiences with FUSE in real world - Presentation at Vault 2019

2019-03-07 Thread Raghavendra Gowdappa
ards, > Strahil Nikolov > On Mar 7, 2019 08:54, Raghavendra Gowdappa wrote: > > Unfortunately, there is no recording. However, we are willing to discuss > our findings if you've specific questions. We can do that in this thread. > > On Thu, Mar 7, 2019 at 10:33

Re: [Gluster-devel] [Gluster-users] Experiences with FUSE in real world - Presentation at Vault 2019

2019-03-06 Thread Raghavendra Gowdappa
> On Mar 5, 2019 11:13, Raghavendra Gowdappa wrote: > > All, > > Recently me, Manoj and Csaba presented on positives and negatives of > implementing File systems in userspace using FUSE [1]. We had based the > talk on our experiences with Glusterfs having FUSE as the nat

Re: [Gluster-devel] [Gluster-users] "rpc_clnt_ping_timer_expired" errors

2019-03-04 Thread Raghavendra Gowdappa
change. > > Regards, > Mauro > > On 4 Mar 2019, at 16:55, Raghavendra Gowdappa wrote: > > > > On Mon, Mar 4, 2019 at 8:54 PM Mauro Tridici > wrote: > >> Hi Raghavendra, >> >> thank you for your reply. >> Yes, you are right. It is a problem

Re: [Gluster-devel] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Wed, Feb 13, 2019 at 11:16 AM Manoj Pillai wrote: > > > On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa > wrote: > >> >> >> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa >> wrote: >> >>> All, >>> >

Re: [Gluster-devel] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote: > All, > > We've found perf xlators io-cache and read-ahead not adding any > performance improvement. At best read-ahead is redundant due to kernel > read-ahead > One thing we are still figuring out is whether

Re: [Gluster-devel] [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
turn on for a class of applications or problems. Or are you just talking about the standard group settings for virt as a > custom profile? > > On Feb 12, 2019, at 7:22 AM, Raghavendra Gowdappa > wrote: > > https://review.gluster.org/22203 > > On Tue, Feb 12, 2019 at 5:38 PM Raghavend

Re: [Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Raghavendra Gowdappa
ure. > Yes. In my tests too, I saw these msgs. But, i thought they are not accounted in waiting time. > Regards, > Amar > > On Tue, 12 Feb 2019 at 19:30, Raghavendra Gowdappa >> wrote: >> >>> >>> >>> On Tue, Feb 12, 2019 at 7:16 PM Mohit Agraw

Re: [Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Raghavendra Gowdappa
On Tue, Feb 12, 2019 at 7:16 PM Mohit Agrawal wrote: > Hi, > > I have observed the test case ./tests/bugs/distribute/bug-1161311.t is > getting timed > I've seen failure of this too in some of my patches. out on build server at the time of running centos regression on one of my > patch

Re: [Gluster-devel] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
https://review.gluster.org/22203 On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote: > All, > > We've found perf xlators io-cache and read-ahead not adding any > performance improvement. At best read-ahead is redundant due to kernel > read-ahead and at worst io-cac

[Gluster-devel] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Raghavendra Gowdappa
All, We've found that the perf xlators io-cache and read-ahead do not add any performance improvement. At best read-ahead is redundant due to kernel read-ahead, and at worst io-cache degrades performance for workloads that don't involve re-reads. Given that VFS already has both these
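For anyone who wants to test the effect on an existing volume before any defaults change, both translators can be turned off per volume; a sketch assuming the stock option names:

    gluster volume set <volname> performance.read-ahead off
    gluster volume set <volname> performance.io-cache off
    gluster volume get <volname> all | grep -E 'read-ahead|io-cache'   # verify current values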

[Gluster-devel] Memory management, OOM kills and glusterfs

2019-02-04 Thread Raghavendra Gowdappa
All, Me, Csaba and Manoj are presenting our experiences with using FUSE as an interface for Glusterfs at Vault'19 [1]. One of the areas Glusterfs has faced difficulties is with memory management. One of the reasons for high memory consumption has been the amount of memory consumed by glusterfs

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2019-01-25 Thread Raghavendra Gowdappa
On Sat, Jan 26, 2019 at 8:03 AM Raghavendra Gowdappa wrote: > > > On Fri, Jan 11, 2019 at 8:09 PM Raghavendra Gowdappa > wrote: > >> Here is the update of the progress till now: >> * The client profile attached till now shows the tuple creation is >> dom

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2019-01-25 Thread Raghavendra Gowdappa
On Sat, Jan 26, 2019 at 8:03 AM Raghavendra Gowdappa wrote: > > > On Fri, Jan 11, 2019 at 8:09 PM Raghavendra Gowdappa > wrote: > >> Here is the update of the progress till now: >> * The client profile attached till now shows the tuple creation is >> dom

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2019-01-25 Thread Raghavendra Gowdappa
On Fri, Jan 11, 2019 at 8:09 PM Raghavendra Gowdappa wrote: > Here is the update of the progress till now: > * The client profile attached till now shows the tuple creation is > dominated by writes and fstats. Note that fstats are side-effects of writes > as writes invalidat

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2019-01-11 Thread Raghavendra Gowdappa
and Xavi, this workload is very sensitive to latency (than to concurrency). So, I am hopeful the above approaches will give positive results. [5] https://bugzilla.redhat.com/show_bug.cgi?id=1664934 regards, Raghavendra On Fri, Dec 28, 2018 at 12:44 PM Raghavendra Gowdappa wrote: > > >

Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2019-01-02 Thread Raghavendra Gowdappa
On Mon, Nov 12, 2018 at 10:48 AM Amar Tumballi wrote: > > > On Mon, Nov 12, 2018 at 10:39 AM Vijay Bellur wrote: > >> >> >> On Sun, Nov 11, 2018 at 8:25 PM Raghavendra Gowdappa >> wrote: >> >>> >>> >>> On Sun, Nov 11, 2018

[Gluster-devel] [DHT] serialized readdir(p) across subvols and effect on performance

2018-12-31 Thread Raghavendra Gowdappa
All, As many of us are aware, readdir(p)s are serialized across DHT subvols. One of the intuitive first reactions to this algorithm is that readdir(p) is going to be slow. However, this is only partly true, as reading the contents of a directory is normally split into multiple readdir(p) calls and

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-27 Thread Raghavendra Gowdappa
On Mon, Dec 24, 2018 at 6:05 PM Raghavendra Gowdappa wrote: > > > On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay < > sankarshan.mukhopadh...@gmail.com> wrote: > >> [pulling the conclusions up to enable better in-line] >> >> > Conclusion

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-27 Thread Raghavendra Gowdappa
On Mon, Dec 24, 2018 at 6:05 PM Raghavendra Gowdappa wrote: > > > On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay < > sankarshan.mukhopadh...@gmail.com> wrote: > >> [pulling the conclusions up to enable better in-line] >> >> > Conclusion

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-25 Thread Raghavendra Gowdappa
On Mon, Dec 24, 2018 at 6:05 PM Raghavendra Gowdappa wrote: > > > On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay < > sankarshan.mukhopadh...@gmail.com> wrote: > >> [pulling the conclusions up to enable better in-line] >> >> > Conclusion

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-24 Thread Raghavendra Gowdappa
On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > [pulling the conclusions up to enable better in-line] > > > Conclusions: > > > > We should never have a volume with caching-related xlators disabled. The > price we pay for it is too high. We

Re: [Gluster-devel] [master][FAILED] brick-mux-regression

2018-12-02 Thread Raghavendra Gowdappa
On Mon, Dec 3, 2018 at 8:25 AM Raghavendra Gowdappa wrote: > > > On Sat, Dec 1, 2018 at 11:02 AM Milind Changire > wrote: > >> failed brick-mux-regression job: >> https://build.gluster.org/job/regression-on-demand-multiplex/411/console >> >> patch: &g

Re: [Gluster-devel] [master][FAILED] brick-mux-regression

2018-12-02 Thread Raghavendra Gowdappa
On Sat, Dec 1, 2018 at 11:02 AM Milind Changire wrote: > failed brick-mux-regression job: > https://build.gluster.org/job/regression-on-demand-multiplex/411/console > > patch: > https://review.gluster.org/c/glusterfs/+/21719 > Does this happen only with the above patch? Does brick-mux

Re: [Gluster-devel] [Gluster-users] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote: > > > On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa > wrote: > >> All, >> >> There is a patch [1] from Kotresh, which makes ctime generator as default >> in stack. Currently ctime generator is being

[Gluster-devel] On making ctime generator enabled by default in stack

2018-11-05 Thread Raghavendra Gowdappa
All, There is a patch [1] from Kotresh which makes the ctime generator the default in the stack. Currently the ctime generator is recommended only for usecases where ctime is important (like for Elasticsearch). However, a reliable (c)(m)time can fix many consistency issues within the glusterfs stack too.

Re: [Gluster-devel] Release 5: GA and what are we waiting on

2018-10-11 Thread Raghavendra Gowdappa
On Thu, Oct 11, 2018 at 9:16 PM Krutika Dhananjay wrote: > > > On Thu, Oct 11, 2018 at 8:55 PM Shyam Ranganathan > wrote: > >> So we are through with a series of checks and tasks on release-5 (like >> ensuring all backports to other branches are present in 5, upgrade >> testing, basic

Re: [Gluster-devel] Glusterfs and Structured data

2018-10-07 Thread Raghavendra Gowdappa
+Gluster-users On Mon, Oct 8, 2018 at 9:34 AM Raghavendra Gowdappa wrote: > > > On Fri, Feb 9, 2018 at 4:30 PM Raghavendra Gowdappa > wrote: > >> >> >> - Original Message - >> > From: "Pranith Kumar Karampuri" >> > To:

Re: [Gluster-devel] Glusterfs and Structured data

2018-10-07 Thread Raghavendra Gowdappa
On Fri, Feb 9, 2018 at 4:30 PM Raghavendra Gowdappa wrote: > > > - Original Message - > > From: "Pranith Kumar Karampuri" > > To: "Raghavendra G" > > Cc: "Gluster Devel" > > Sent: Friday, February 9, 2018 2:30:59 PM

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-05 Thread Raghavendra Gowdappa
What is the agreed upon clang version for Glusterfs project? Is it clang-6? On Fri, Oct 5, 2018 at 1:58 PM Raghavendra Gowdappa wrote: > clang-4.0.1 pushes patch, but still doesn't understand some keys in > clang-format. > > [rgowdapp@rgowdapp glusterfs]$ ./rfc.sh > [detach

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-05 Thread Raghavendra Gowdappa
, 2018 at 10:47 AM Raghavendra Gowdappa wrote: > We should document (better still add checks in rfc.sh and warn user to > upgrade) that we need clang version x or greater. > > On Fri, Oct 5, 2018 at 10:45 AM Sachidananda URS wrote: > >> >> >> On Fri, Oct 5, 2018

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
We should document (better still, add checks in rfc.sh and warn the user to upgrade) that we need clang version x or greater. On Fri, Oct 5, 2018 at 10:45 AM Sachidananda URS wrote: > > > On Fri, Oct 5, 2018 at 10:41 AM, Raghavendra Gowdappa > wrote: > >> General o
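A minimal sketch of the kind of check proposed above for rfc.sh (the required major version here is an assumption; the follow-up mail asks whether it should be clang-6):

    required_major=6
    installed_major=$(clang-format --version | grep -oE '[0-9]+' | head -n1)
    if [ -z "$installed_major" ] || [ "$installed_major" -lt "$required_major" ]; then
        echo "clang-format >= $required_major is needed to format the patch; please upgrade."
    fi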

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
General options: -help - Display available options (-help-hidden for more) -help-list - Display list of available options (-help-list-hidden for more) -version - Display the version of this program [rgowdapp@rgowdapp ~]$ clang-format

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
On Fri, Oct 5, 2018 at 9:58 AM Sachidananda URS wrote: > > > On Fri, Oct 5, 2018 at 9:45 AM, Raghavendra Gowdappa > wrote: > >> >> >> On Fri, Oct 5, 2018 at 9:34 AM Raghavendra Gowdappa >> wrote: >> >>> >>> >>> On Fri, Oc

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
On Fri, Oct 5, 2018 at 9:34 AM Raghavendra Gowdappa wrote: > > > On Fri, Oct 5, 2018 at 9:11 AM Kaushal M wrote: > >> On Fri, Oct 5, 2018 at 9:05 AM Raghavendra Gowdappa >> wrote: >> > >> > >> > >> > On Fri, Oct 5, 2018 at 8:53 AM

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
On Fri, Oct 5, 2018 at 9:11 AM Kaushal M wrote: > On Fri, Oct 5, 2018 at 9:05 AM Raghavendra Gowdappa > wrote: > > > > > > > > On Fri, Oct 5, 2018 at 8:53 AM Amar Tumballi > wrote: > >> > >> Can you try below diff in your rfc, and let me

Re: [Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
assing smoke due to >> coding standard check" >> echo "Please install 'clang-format' to format the patch before >> submitting" >> fi >> +set -e >> >> if [ "$DRY_RUN" = 1 ]; then >> dri

[Gluster-devel] ./rfc.sh not pushing patch to gerrit

2018-10-04 Thread Raghavendra Gowdappa
All, [rgowdapp@rgowdapp glusterfs]$ ./rfc.sh + rebase_changes + GIT_EDITOR=./rfc.sh + git rebase -i origin/master [detached HEAD 34fabdd] cluster/dht: clang-format dht-common.c 1 file changed, 10674 insertions(+), 11166 deletions(-) rewrite xlators/cluster/dht/src/dht-common.c (88%) [detached

[Gluster-devel] Update of work on fixing POSIX compliance issues in Glusterfs

2018-10-01 Thread Raghavendra Gowdappa
All, There have been issues related to POSIX compliance, especially while running database workloads on Glusterfs. Recently we've worked on fixing some of them. This mail is an update on that effort. The issues themselves can be classified into

Re: [Gluster-devel] On making performance.parallel-readdir as a default option

2018-09-21 Thread Raghavendra Gowdappa
On Fri, Sep 21, 2018 at 11:25 PM Raghavendra Gowdappa wrote: > Hi all, > > We've a feature performance.parallel-readdir [1] that is known to improve > performance of readdir operations [2][3][4]. The option is especially > useful when distribute scale is relatively large (&g

[Gluster-devel] On making performance.parallel-readdir as a default option

2018-09-21 Thread Raghavendra Gowdappa
Hi all, We've a feature performance.parallel-readdir [1] that is known to improve performance of readdir operations [2][3][4]. The option is especially useful when distribute scale is relatively large (>10) and is known to improve performance of readdir operations even on smaller scale of
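For anyone wanting to try it ahead of a default change, the option can be enabled per volume; a sketch assuming the option name from [1]:

    gluster volume set <volname> performance.parallel-readdir on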

[Gluster-devel] [regression tests] seeing files from previous test run

2018-08-13 Thread Raghavendra Gowdappa
All, I was consistently seeing failures for test https://review.gluster.org/#/c/glusterfs/+/20639/12/tests/bugs/readdir-ahead/bug-1390050.t TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M0 rm -rf $M0/* TEST mkdir -p $DIRECTORY #rm -rf $DIRECTORY/* TEST touch $DIRECTORY/file{0..10}

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down: RCA for tests ( ./tests/bugs/quick-read/bug-846240.t)

2018-08-12 Thread Raghavendra Gowdappa
Failure is tracked by bz: https://bugzilla.redhat.com/show_bug.cgi?id=1615096 Earlier this test did the following things on M0 and M1 mounted on the same volume: 1 create file M0/testfile 2 open an fd on M0/testfile 3 remove the file from M1, M1/testfile 4 echo "data" >>
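A rough shell rendering of those steps, assuming $M0 and $M1 are two FUSE mounts of the same volume (this is a sketch of the scenario, not the actual .t file; step 4 is read here as a write through the open fd):

    touch $M0/testfile        # 1. create the file through M0
    exec 5>>$M0/testfile      # 2. keep an fd open on it via M0
    rm -f $M1/testfile        # 3. remove the file through the other mount
    echo "data" >&5           # 4. write through the still-open fd
    exec 5>&-                 # close the fd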

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down: RCA for tests (bug-1368312.t)

2018-08-12 Thread Raghavendra Gowdappa
Failure of this test is tracked by bz https://bugzilla.redhat.com/show_bug.cgi?id=1608158. I was trying to debug regression failures on [1] and observed that split-brain-resolution.t was failing consistently. = TEST 45 (line 88): 0 get_pending_heal_count patchy

Re: [Gluster-devel] [Gluster-Maintainers] bug-1368312.t

2018-08-12 Thread Raghavendra Gowdappa
Failure of this test is tracked by bz https://bugzilla.redhat.com/show_bug.cgi?id=1608158. I was trying to debug regression failures on [1] and observed that split-brain-resolution.t was failing consistently. = TEST 45 (line 88): 0 get_pending_heal_count patchy

Re: [Gluster-devel] Master branch lock down status (Wed, August 08th)

2018-08-12 Thread Raghavendra Gowdappa
On Sun, Aug 12, 2018 at 9:11 AM, Raghavendra Gowdappa wrote: > > > On Sat, Aug 11, 2018 at 10:33 PM, Shyam Ranganathan > wrote: > >> On 08/09/2018 10:58 PM, Raghavendra Gowdappa wrote: >> > >> > >> > On Fri, Aug 10, 2018 at 1:38 AM, Shyam Ran

Re: [Gluster-devel] Master branch lock down status (Wed, August 08th)

2018-08-11 Thread Raghavendra Gowdappa
On Sat, Aug 11, 2018 at 10:33 PM, Shyam Ranganathan wrote: > On 08/09/2018 10:58 PM, Raghavendra Gowdappa wrote: > > > > > > On Fri, Aug 10, 2018 at 1:38 AM, Shyam Ranganathan > <mailto:srang...@redhat.com>> wrote: > > > > On 08/08/2018 09:04 P

Re: [Gluster-devel] ./tests/basic/afr/metadata-self-heal.t core dumped

2018-08-09 Thread Raghavendra Gowdappa
On Fri, Aug 10, 2018 at 11:21 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, Aug 10, 2018 at 8:54 AM Raghavendra Gowdappa > wrote: > >> All, >> >> Details can be found at: >> https://build.gluster.org/job/centos7-regressio

Re: [Gluster-devel] ./tests/basic/afr/metadata-self-heal.t core dumped

2018-08-09 Thread Raghavendra Gowdappa
This looks to be from the code change https://review.gluster.org/#/c/glusterfs/+/20639/4/libglusterfs/src/gf-dirent.c I've reverted the changes and retriggered tests. Sorry about the confusion. On Fri, Aug 10, 2018 at 8:54 AM, Raghavendra Gowdappa wrote: > All, > > Details can

[Gluster-devel] ./tests/basic/afr/metadata-self-heal.t core dumped

2018-08-09 Thread Raghavendra Gowdappa
All, Details can be found at: https://build.gluster.org/job/centos7-regression/2190/console Process that core dumped: glfs_shdheal Note that the patch on which this regression failed is on readdir-ahead, which is not loaded in the xlator graph of the self-heal daemon. From bt, *23:53:24*
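For reference, the backtrace mentioned above can be regenerated from the core with gdb; a sketch with illustrative paths:

    gdb /build/install/sbin/glusterfs /path/to/the/core -batch -ex "thread apply all bt" > /tmp/shd-core-bt.txt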

Re: [Gluster-devel] Master branch lock down status (Wed, August 08th)

2018-08-09 Thread Raghavendra Gowdappa
On Fri, Aug 10, 2018 at 1:38 AM, Shyam Ranganathan wrote: > On 08/08/2018 09:04 PM, Shyam Ranganathan wrote: > > Today's patch set 7 [1], included fixes provided till last evening IST, > > and its runs can be seen here [2] (yay! we can link to comments in > > gerrit now). > > > > New failures:

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 10:43 AM, Raghavendra Gowdappa wrote: > > > On Thu, Aug 9, 2018 at 10:36 AM, huting3 wrote: > >> grep count will output nothing, so I grep size, the results are: >> >> $ grep itable glusterdump.109182.dump.153

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On 08/9/2018 12:36, Raghavendra Gowdappa > wrote: > > Can you get the output of following cmds? > > # grep itable | grep lru | grep count > > # grep itable | grep active | grep count > > On Thu, Aug 9, 2018

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa

Re: [Gluster-devel] gluster fuse consumes huge memory

2018-08-08 Thread Raghavendra Gowdappa
On Thu, Aug 9, 2018 at 8:55 AM, huting3 wrote: > Hi expert: > > I met a problem when I use glusterfs. The problem is that the fuse client > consumes huge memory when writing a lot of files (>1 million) to the gluster volume, > eventually getting killed by the OS OOM killer. The memory the fuse process consumes >
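To check whether the growth is in the fuse client's inode table (which is what the grep commands elsewhere in this thread look at), a statedump of the mount process can be taken and inspected; a sketch assuming the default statedump directory /var/run/gluster:

    kill -USR1 <pid-of-glusterfs-fuse-mount>     # glusterfs writes a statedump on SIGUSR1
    grep itable /var/run/gluster/glusterdump.<pid>.dump.* | grep -E 'lru|active'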

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-04 Thread Raghavendra Gowdappa
On Fri, Aug 3, 2018 at 5:03 PM, Raghavendra Gowdappa wrote: > Will take a look. > Patch https://review.gluster.org/16419 indeed had a bug. It used to zero out stats just retaining ia_gfid and ia_type. However, fuse_readdirp_cbk would pass the attributes as valid to kernel causing the bug

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Raghavendra Gowdappa
On Fri, Aug 3, 2018 at 5:58 PM, Yaniv Kaul wrote: > Why not revert, fix and resubmit (unless you can quickly fix it)? > Y. > https://review.gluster.org/20634 > > On Fri, Aug 3, 2018, 5:04 PM Raghavendra Gowdappa > wrote: > >> Will take a look. >> >>

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Raghavendra Gowdappa
On Fri, Aug 3, 2018 at 4:01 PM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi Du/Poornima, > > I was analysing bitrot and geo-rep failures and I suspect there is a bug > in some perf xlator > that was one of the cause. I was seeing following behaviour in few runs. > > 1. Geo-rep

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-03 Thread Raghavendra Gowdappa
Will take a look. On Fri, Aug 3, 2018 at 3:08 PM, Krutika Dhananjay wrote: > Adding Raghavendra G who actually restored and reworked on this after it > was abandoned. > > -Krutika > > On Fri, Aug 3, 2018 at 2:38 PM, Nithya Balachandran > wrote: > >> Using git bisect, the patch that introduced

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Raghavendra Gowdappa
On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > I am facing different issue in softserve machines. The fuse mount itself > is failing. > I tried day before yesterday to debug geo-rep failures. I discussed with > Raghu, > but could not root cause it. >

Re: [Gluster-devel] ./tests/bugs/snapshot/bug-1167580-set-proper-uid-and-gid-during-nfs-access.t fails if non-anonymous fds are used in read path

2018-08-02 Thread Raghavendra Gowdappa
t; send, I agree to mark the tests as bad. > Thanks Rafi. > > Regards > Rafi KC > > > - Original Message - > From: "Raghavendra Gowdappa" > To: "Sunny Kumar" , "Rafi" > Cc: "Gluster Devel" > Sent: Thursday, Augus

Re: [Gluster-devel] ./tests/bugs/snapshot/bug-1167580-set-proper-uid-and-gid-during-nfs-access.t fails if non-anonymous fds are used in read path

2018-08-02 Thread Raghavendra Gowdappa
I've filed a bug to track this failure: https://bugzilla.redhat.com/show_bug.cgi?id=1611532 As a stop gap measure I propose to mark the test as Bad to unblock patches [1][2]. Are maintainers of snapshot in agreement with this? regards, Raghavendra On Wed, Aug 1, 2018 at 10:28 AM, Raghavendra

[Gluster-devel] ./tests/bugs/snapshot/bug-1167580-set-proper-uid-and-gid-during-nfs-access.t fails if non-anonymous fds are used in read path

2018-07-31 Thread Raghavendra Gowdappa
Sunny/Rafi, I was trying to debug regression failures on [1]. Note that patch [1] only disables usage of anonymous fds on readv. So, I tried the same test disabling performance.open-behind [root@rhs-client27 glusterfs]# git diff diff --git

Re: [Gluster-devel] ./tests/00-geo-rep/georep-basic-dr-rsync.t fails

2018-07-28 Thread Raghavendra Gowdappa
A few failures were seen on https://review.gluster.org/#/c/20576/ too. On Sat, Jul 28, 2018 at 2:47 PM, Raghavendra Gowdappa wrote: > Kotresh, > > The test failed on master (without the patch) too. I've seen failures on > this earlier too. > https://build.gluster.org/job/centos7-

[Gluster-devel] ./tests/00-geo-rep/georep-basic-dr-rsync.t fails

2018-07-28 Thread Raghavendra Gowdappa
Kotresh, The test failed on master (without the patch) too. I've seen failures on this earlier too. regards, Raghavendra ___ Gluster-devel mailing list Gluster-devel@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
On Wed, Jul 25, 2018 at 10:25 AM, Ravishankar N wrote: > > > On 07/24/2018 08:45 PM, Raghavendra Gowdappa wrote: > > > I tried higher values of attribute-timeout and its not helping. Are there > any other similar split brain related tests? Can I mark these tests bad for &g

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
On Tue, Jul 24, 2018 at 6:54 PM, Ravishankar N wrote: > > > On 07/24/2018 06:30 PM, Ravishankar N wrote: > > > > On 07/24/2018 02:56 PM, Raghavendra Gowdappa wrote: > > All, > > I was trying to debug regression failures on [1] and observed that > split-brain-r

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
On Tue, Jul 24, 2018 at 8:36 PM, Raghavendra Gowdappa wrote: > > > On Tue, Jul 24, 2018 at 8:35 PM, Raghavendra Gowdappa > wrote: > >> >> >> On Tue, Jul 24, 2018 at 6:30 PM, Ravishankar N >> wrote: >> >>> >>> >>> On

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
On Tue, Jul 24, 2018 at 8:35 PM, Raghavendra Gowdappa wrote: > > > On Tue, Jul 24, 2018 at 6:30 PM, Ravishankar N > wrote: > >> >> >> On 07/24/2018 02:56 PM, Raghavendra Gowdappa wrote: >> >> All, >> >> I was trying to debug re

Re: [Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
On Tue, Jul 24, 2018 at 6:30 PM, Ravishankar N wrote: > > > On 07/24/2018 02:56 PM, Raghavendra Gowdappa wrote: > > All, > > I was trying to debug regression failures on [1] and observed that > split-brain-resolution.t was failing consistently. > > ==

[Gluster-devel] regression failures on afr/split-brain-resolution

2018-07-24 Thread Raghavendra Gowdappa
All, I was trying to debug regression failures on [1] and observed that split-brain-resolution.t was failing consistently. = TEST 45 (line 88): 0 get_pending_heal_count patchy ./tests/basic/afr/split-brain-resolution.t .. 45/45 RESULT 45: 1
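For context, get_pending_heal_count in the test framework boils down to the entry counts reported by heal info; a rough manual equivalent on the test volume (patchy) would be:

    gluster volume heal patchy info | grep "Number of entries" | awk '{ sum += $NF } END { print sum }'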

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-22 Thread Raghavendra Gowdappa
think more about this. > > Best Regards, > > George > > > > > > > > *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@ > gluster.org] *On Behalf Of *Raghavendra Gowdappa > *Sent:* Monday, July 23, 2018 10:37 AM > > *To:* Lian, Geo

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-22 Thread Raghavendra Gowdappa
On Sun, Jul 22, 2018 at 1:41 PM, Raghavendra Gowdappa wrote: > George, > > Sorry. I sent you a version of the fix which was stale. Can you try with: > https://review.gluster.org/20549 > > This patch passes the test case you've given. > Patchset 1 solves this problem. Howeve

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-22 Thread Raghavendra Gowdappa
`/' from member names > > /mnt/test/file.txt > > tar: /mnt/test/file.txt: file changed as we read it > > File: /mnt/test/file.txt > > Size: 512 Blocks: 1 IO Block: 131072 regular file > > Device: 33h/51d Inode: 13569976446871695205 Links: 1 > > Acc

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-20 Thread Raghavendra Gowdappa
> >>> Can you let us know whether this helps? > > > > > > The patch can resolve this issue, I have verified it in Gluster 4.2 (master > trunk branch) and Gluster 3.12.3! > Thanks, we'll merge it. > > Thanks & Best Regards, > > George > > > > *From:* gluster-d

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-19 Thread Raghavendra Gowdappa
On Thu, Jul 19, 2018 at 2:29 PM, Lian, George (NSB - CN/Hangzhou) < george.l...@nokia-sbell.com> wrote: > Hi, Gluster Experts, > > > > In glusterfs version 3.12.3, There seems a “fstat” issue for ctime after > we use fsync, > > We have a demo execute binary which write some data and then do fsync
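A rough shell reproduction of the reported sequence (the path and size come from the tar/stat output quoted in this thread; the original report used a small binary that writes and then calls fsync, which dd's conv=fsync approximates):

    dd if=/dev/urandom of=/mnt/test/file.txt bs=512 count=1 conv=fsync   # write + fsync through the glusterfs mount
    stat /mnt/test/file.txt                                              # note the ctime reported here
    tar -C /mnt -cf /tmp/test.tar test/file.txt                          # reports "file changed as we read it" when ctime later moves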

Re: [Gluster-devel] Storing list of dentries of children in parent inode

2018-07-02 Thread Raghavendra Gowdappa
On Fri, Jun 29, 2018 at 1:02 PM, Amar Tumballi wrote: > > > On Fri, Jun 29, 2018 at 12:25 PM, Vijay Bellur wrote: > >> >> >> On Wed, Jun 27, 2018 at 10:15 PM Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> All, >>> &

Re: [Gluster-devel] Storing list of dentries of children in parent inode

2018-06-29 Thread Raghavendra Gowdappa
On Fri, Jun 29, 2018 at 12:25 PM, Vijay Bellur wrote: > > > On Wed, Jun 27, 2018 at 10:15 PM Raghavendra Gowdappa > wrote: > >> All, >> >> There is a requirement in write-behind where during readdirp we may have >> to invalidate iatts/stats of s

Re: [Gluster-devel] Storing list of dentries of children in parent inode

2018-06-29 Thread Raghavendra Gowdappa
On Fri, Jun 29, 2018 at 1:02 PM, Amar Tumballi wrote: > > > On Fri, Jun 29, 2018 at 12:25 PM, Vijay Bellur wrote: > >> >> >> On Wed, Jun 27, 2018 at 10:15 PM Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> All, >>> &

[Gluster-devel] Storing list of dentries of children in parent inode

2018-06-27 Thread Raghavendra Gowdappa
All, There is a requirement in write-behind where during readdirp we may have to invalidate iatts/stats of some of the children of the directory [1]. For lack of better alternatives I added a dentry list to parent inode which contains all children that've been linked (through lookup or readdirp

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Raghavendra Gowdappa
On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa wrote: > > > On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez > wrote: > >> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa >> wrote: >> >>> Krutika, >>> >>> This pat

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Raghavendra Gowdappa
On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez wrote: > On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa > wrote: > >> Krutika, >> >> This patch doesn't seem to be getting counts per domain, like number of >> inodelks or entrylks acquired in a domain

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Raghavendra Gowdappa
n Wed, Jun 20, 2018 at 12:58 PM, Raghavendra Gowdappa wrote: > > > On Wed, Jun 20, 2018 at 12:06 PM, Krutika Dhananjay > wrote: > >> We do already have a way to get inodelk and entrylk count from a bunch of >> fops, introduced in http://review.gluster.org/10880.

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Raghavendra Gowdappa
; >> >> On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> All, >>> >>> We've a requirement in DHT [1] to query the number of locks granted on >>> an inode in lookup fop. I am planning to us

[Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-19 Thread Raghavendra Gowdappa
All, We've a requirement in DHT [1] to query the number of locks granted on an inode in lookup fop. I am planning to use xdata_req in lookup to pass down the relevant arguments for this query. I am proposing following signature: In lookup request path following key value pairs will be passed in
