Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread FNU Raghavendra Manjunath
+1 to the idea. On Mon, Aug 26, 2019 at 2:41 PM Niels de Vos wrote: > On Mon, Aug 26, 2019 at 08:08:36AM -0700, Joe Julian wrote: > > You can also see diffs between force pushes now. > > That is great! It is the feature that I was looking for. I have not > noticed it yet, will pay attention to it

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-08 Thread FNU Raghavendra Manjunath
I have sent an RFC patch [1] for review. https://review.gluster.org/#/c/glusterfs/+/23011/ On Thu, Jul 4, 2019 at 1:13 AM Pranith Kumar Karampuri wrote: > On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath < > rab...@redhat.com> wrote:

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-03 Thread FNU Raghavendra Manjunath
On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri wrote: > On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N > wrote: >> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: >> Hi All, >> In glusterfs, there is an issue regarding the fallocate behavior.

[Gluster-devel] fallocate behavior in glusterfs

2019-07-02 Thread FNU Raghavendra Manjunath
Hi All, In glusterfs, there is an issue regarding the fallocate behavior. In short, if someone does fallocate from the mount point with some size that is greater than the available size in the backend filesystem where the file is present, then fallocate can fail with a subset of the required number of blocks allocated.
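
An application-side illustration of the behavior described in this thread (this is not code from the thread itself; the mount path and requested size below are made up): fallocate() asking for more space than the backend has can return ENOSPC after some of the requested blocks were already allocated.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical mount path; any glusterfs mount would do. */
    int fd = open("/mnt/glusterfs/testfile", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Request more space than the backend filesystem has available. */
    if (fallocate(fd, 0, 0, (off_t)10 * 1024 * 1024 * 1024) != 0) {
        /* Typically fails with ENOSPC; some blocks may already have
         * been allocated by the time the error is returned. */
        fprintf(stderr, "fallocate: %s\n", strerror(errno));
    }

    close(fd);
    return 0;
}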

Re: [Gluster-devel] [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-03 Thread FNU Raghavendra Manjunath
Yes. I have sent this patch [1] for review. It is now not failing in regression tests (i.e. uss.t is not failing). [1] https://review.gluster.org/#/c/glusterfs/+/22728/ Regards, Raghavendra On Sat, Jun 1, 2019 at 7:25 AM Atin Mukherjee wrote: > subdir-mount.t has started failing in brick mux regression

Re: [Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-24 Thread FNU Raghavendra Manjunath
The idea looks OK. One of the things that probably needs to be considered (more of an implementation detail, though) is how to generate frame->root->unique. For fuse, frame->root->unique is obtained from finh->unique, which IIUC comes from the incoming fop from the kernel itself. For protocol/server
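
For the non-fuse case this thread is discussing, a minimal sketch of one obvious option, a process-wide atomic counter, is below (hypothetical names, not glusterfs source):

#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical counter name; not the glusterfs implementation. */
static _Atomic uint64_t frame_unique_counter;

static uint64_t
frame_unique_next(void)
{
    /* An atomic fetch-and-add keeps the id unique across threads. */
    return atomic_fetch_add(&frame_unique_counter, 1) + 1;
}

int main(void)
{
    printf("unique = %" PRIu64 "\n", frame_unique_next());
    printf("unique = %" PRIu64 "\n", frame_unique_next());
    return 0;
}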

Re: [Gluster-devel] ./tests/basic/uss.t is timing out in release-6 branch

2019-05-22 Thread FNU Raghavendra Manjunath
snapview-server and also log level being changed to TRACE in the .t file) [1] https://review.gluster.org/#/c/glusterfs/+/22649/ [2] https://review.gluster.org/#/c/glusterfs/+/22728/ Regards, Raghavendra On Wed, May 1, 2019 at 11:11 AM Sanju Rakonde wrote: > Thank you Raghavendra.

Re: [Gluster-devel] ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t generating core very often

2019-05-16 Thread FNU Raghavendra Manjunath
I am working on another uss issue, i.e. the occasional failure of uss.t due to delays in the brick-mux regression. Rafi, can you please look into this? Regards, Raghavendra On Thu, May 16, 2019 at 9:48 AM Sanju Rakonde wrote: > In most of the regression jobs > ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t

Re: [Gluster-devel] [Gluster-users] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-07 Thread FNU Raghavendra Manjunath
+1 to this. There is also one more thing: for some reason, the community meeting is not visible in my calendar (especially the NA region). I am not sure if anyone else is also facing this issue. Regards, Raghavendra On Tue, May 7, 2019 at 5:19 AM Ashish Pandey wrote: > Hi, > > While we send a mail o

[Gluster-devel] inode table destruction

2019-04-30 Thread FNU Raghavendra Manjunath
Hi All, There is a good chance that the inode on which the unref came has already been zero-refed and added to the purge list. This can happen when the inode table is being destroyed (glfs_fini is something which destroys the inode table). Consider a directory 'a' which has a file 'b'. Now as part of
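
A toy model of the race being described, with all names invented (this is not glusterfs code): a late unref that arrives after the inode has already been zero-refed and queued on the purge list must be a no-op.

#include <stdbool.h>
#include <stdio.h>

struct inode {
    int  ref;
    bool on_purge_list;
};

static void
inode_unref(struct inode *in)
{
    if (in->on_purge_list) {
        /* Already zero-refed and queued for destruction; a second
         * unref here would corrupt the refcount. */
        return;
    }
    if (--in->ref == 0)
        in->on_purge_list = true;  /* queue for purge, do not free yet */
}

int main(void)
{
    struct inode dir = { .ref = 1, .on_purge_list = false };
    inode_unref(&dir);  /* moves it to the purge list */
    inode_unref(&dir);  /* late unref during table destruction: ignored */
    printf("ref=%d purge=%d\n", dir.ref, dir.on_purge_list);
    return 0;
}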

Re: [Gluster-devel] ./tests/basic/uss.t is timing out in release-6 branch

2019-04-30 Thread FNU Raghavendra Manjunath
Regards, Raghavendra On Tue, Apr 30, 2019 at 10:42 AM FNU Raghavendra Manjunath < rab...@redhat.com> wrote: > > The failure looks similar to the issue I had mentioned in [1] > > In short, for some reason the cleanup (the cleanup function that we call in > our .t files) seems to be

Re: [Gluster-devel] ./tests/basic/uss.t is timing out in release-6 branch

2019-04-30 Thread FNU Raghavendra Manjunath
The failure looks similar to the issue I had mentioned in [1] In short, for some reason the cleanup (the cleanup function that we call in our .t files) seems to be taking more time and also not cleaning up properly. This leads to problems for the 2nd iteration (where basic things such as volume creation

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-22 Thread FNU Raghavendra Manjunath
Hi, This is the agenda for tomorrow's community meeting for the NA/EMEA timezone. https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both On Thu, Apr 11, 2019 at 4:56 AM Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > Hi All, > > Below are the final details of our community meeting, and I will

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-11 Thread FNU Raghavendra Manjunath
directory issues in the next iteration cause the failure of uss.t in the 2nd iteration. Regards, Raghavendra On Wed, Apr 10, 2019 at 4:07 PM FNU Raghavendra Manjunath wrote: > > > On Wed, Apr 10, 2019 at 9:59 AM Atin Mukherjee > wrote: > >> And now for last 15 days

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-10 Thread FNU Raghavendra Manjunath
On Wed, Apr 10, 2019 at 9:59 AM Atin Mukherjee wrote: > And now for last 15 days: > > https://fstat.gluster.org/summary?start_date=2019-03-25&end_date=2019-04-10 > > ./tests/bitrot/bug-1373520.t 18 ==> Fixed through > https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this > fail

Re: [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread FNU Raghavendra Manjunath
On Thu, Oct 4, 2018 at 2:47 PM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > > > On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan > wrote: > >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: >> > RC1 would be around 24th of Sep. with final release tagging around 1st >> > of Oct.

Re: [Gluster-devel] [Gluster-Maintainers] Memory overwrites due to processing vol files???

2018-10-03 Thread FNU Raghavendra Manjunath
On Fri, Sep 28, 2018 at 4:01 PM Shyam Ranganathan wrote: > We tested with ASAN and without the fix at [1], and it consistently > crashes at the mdcache xlator when brick mux is enabled. > On 09/28/2018 03:50 PM, FNU Raghavendra Manjunath wrote: > > > > I was looking into the issue

Re: [Gluster-devel] [Gluster-Maintainers] Memory overwrites due to processing vol files???

2018-09-28 Thread FNU Raghavendra Manjunath
I was looking into the issue, and this is what I could find while working with Shyam. There are 2 things here. 1) The multiplexed brick process for the snapshot(s) getting the client volfile (I suspect it happened when the restore operation was performed). 2) Memory corruption happening while t

Re: [Gluster-devel] Need inputs on patch #17985

2017-09-07 Thread FNU Raghavendra Manjunath
From the snapview client perspective, one important thing to note: for building the context for the entry point (by default ".snaps"), an explicit lookup has to be done on it. The dentry for ".snaps" is not returned when readdir is done on its parent directory (not even when ls -a is done). So for building
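
A small standalone program shows the behavior (illustration only; it assumes a gluster mount with USS enabled at /mnt/glusterfs, which is a made-up path): readdir() on the parent never lists ".snaps", but an explicit lookup via stat() succeeds.

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    /* Scan the parent directory: ".snaps" is never returned here. */
    DIR *d = opendir("/mnt/glusterfs");
    struct dirent *e;
    int listed = 0;

    while (d && (e = readdir(d)) != NULL)
        if (strcmp(e->d_name, ".snaps") == 0)
            listed = 1;
    if (d)
        closedir(d);

    /* An explicit lookup on the entry point still succeeds. */
    struct stat st;
    int found = (stat("/mnt/glusterfs/.snaps", &st) == 0);

    printf("listed by readdir: %d, found by lookup: %d\n", listed, found);
    return 0;
}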

Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-11-04 Thread FNU Raghavendra Manjunath
Tested Bitrot related aspects. Created data, enabled bitrot and created more data. The files were signed by the bitrot daemon. Simulated the corruption by editing a file directly in the backend. Triggered scrubbing (on demand). Found that the corrupted files were marked bad by the scrubber. Also r

Re: [Gluster-devel] Checklist for Bitrot component for upstream release

2016-09-09 Thread FNU Raghavendra Manjunath
I would recommend the following tests. 1) The tests in our regression test suite 2) Creating a data set of many files (of different sizes) and then enabling bit-rot on the volume 3) Scrubber throttling 4) Tests such as open + write + close and again open + write + close (i.e. before the bit-rot daemon

Re: [Gluster-devel] Checklist for FUSE bridge component for upstream release

2016-09-06 Thread FNU Raghavendra Manjunath
+1 On Fri, Sep 2, 2016 at 11:30 PM, Raghavendra Gowdappa wrote: > Checking for inode/fd leaks should be top priority. We have seen a bunch > of high memory consumption issues recently. [1] is first step towards that. > > [1] http://review.gluster.org/15318 > > - Original Message - > > Fr

Re: [Gluster-devel] Fuse client hangs on doing multithreading IO tests

2016-06-24 Thread FNU Raghavendra Manjunath
Hi, Any idea how big were the files that were being read? Can you please attach the logs from all the gluster server and client nodes? (the logs can be found in /var/log/glusterfs) Also please provide the /var/log/messages from all the server and client nodes. Regards, Raghavendra On Fri, Jun

Re: [Gluster-devel] [Gluster-Maintainers] Release Management Process change - proposal

2016-05-12 Thread FNU Raghavendra Manjunath
+1. On Tue, May 10, 2016 at 2:58 AM, Kaushal M wrote: > On Tue, May 10, 2016 at 12:01 AM, Vijay Bellur wrote: > > Hi All, > > > > We are blocked on 3.7.12 owing to this proposal. Appreciate any > > feedback on this! > > > > Thanks, > > Vijay > > > > On Thu, Apr 28, 2016 at 11:58 PM, Vijay Bel

Re: [Gluster-devel] Testcase broken due to posix iatt commit

2016-03-28 Thread FNU Raghavendra Manjunath
lookup on root inode, then we need to create inode-ctx in > a posix_acl_ctx_get() function. > > Thanks, > Vijay > On 28-Mar-2016 7:37 PM, "FNU Raghavendra Manjunath" > wrote: > >> CCing Vijay Kumar who made the acl related changes in that patch.

Re: [Gluster-devel] Testcase broken due to posix iatt commit

2016-03-28 Thread FNU Raghavendra Manjunath
CCing Vijay Kumar who made the acl related changes in that patch. Vijay? Can you please look into it? Regards, Raghavendra On Mon, Mar 28, 2016 at 9:57 AM, Avra Sengupta wrote: > Hi Raghavendra, > > As part of the patch (http://review.gluster.org/#/c/13730/16), the > inode_ctx is not created i

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-28 Thread FNU Raghavendra Manjunath
.openstack.org/show/335506/ > > Regards, > -Prashanth Pai > > - Original Message - > > From: "FNU Raghavendra Manjunath" > > To: "Soumya Koduri" > > Cc: "Ira Cooper" , "Gluster Devel" < > gluster-devel@gluster.org>

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-24 Thread FNU Raghavendra Manjunath
Just my 2 cents. Regards, Raghavendra On Thu, Mar 24, 2016 at 10:31 AM, Soumya Koduri wrote: > Thanks for the information. > > On 03/24/2016 07:34 PM, FNU Raghavendra Manjunath wrote: > >> Yes. I think the caching example mentioned by Shyam is a good example of

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-24 Thread FNU Raghavendra Manjunath
Yes. I think the caching example mentioned by Shyam is a good example of ESTALE error. Also User Serviceable Snapshots (USS) relies heavily on ESTALE errors. Because the files/directories from the snapshots are assigned a virtual gfid on the fly when being looked up. If those inodes are purged out
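
The proposal being debated in this thread, reporting ESTALE as ENOENT before the error reaches applications, boils down to an errno translation like the following sketch (the helper name is made up; this is not the merged change):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the idea under discussion: translate ESTALE to ENOENT
 * before handing the error back to applications that do not expect
 * ESTALE. */
static int
map_estale_to_enoent(int op_errno)
{
    return (op_errno == ESTALE) ? ENOENT : op_errno;
}

int main(void)
{
    printf("ESTALE -> %s\n", strerror(map_estale_to_enoent(ESTALE)));
    printf("EIO -> %s\n", strerror(map_estale_to_enoent(EIO)));
    return 0;
}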

Re: [Gluster-devel] Regression: Bitrot core generated by distribute/bug-1117851.t

2016-03-07 Thread FNU Raghavendra Manjunath
Hi, I have raised a bug for it (https://bugzilla.redhat.com/show_bug.cgi?id=1315465). A patch has been sent for review (http://review.gluster.org/#/c/13628/). Regards, Raghavendra On Mon, Mar 7, 2016 at 11:04 AM, Poornima Gurusiddaiah wrote: > Hi, > > I see a bitrot crash caused by a dht test

[Gluster-devel] glusterfs-3.6.9 released

2016-03-04 Thread FNU Raghavendra Manjunath
Hi, glusterfs-3.6.9 has been released and the packages for RHEL/Fedora/CentOS can be found here: http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/ Requesting people running 3.6.x to please try it out and let us know if there are any issues. This release supposedly fixes the bugs listed

Re: [Gluster-devel] Community Meeting - March changes proposed

2016-03-02 Thread FNU Raghavendra Manjunath
+1 On Wed, Mar 2, 2016 at 12:48 AM, Venky Shankar wrote: > On Wed, Mar 02, 2016 at 10:47:03AM +0530, Kaushal M wrote: > > Couldn't reply earlier as I was asleep at the time. > > > > The time change should have announced during last weeks meeting, but > > no one around remembered this (I'd forgot

Re: [Gluster-devel] Need help with bitrot

2016-02-25 Thread FNU Raghavendra Manjunath
the translators, the way > they are stacked on client & server side, how control flows between them. > Can somebody please help? > > - Ajil > > > On Thu, Feb 25, 2016 at 7:27 AM, FNU Raghavendra Manjunath < > rab...@redhat.com> wrote: > >> Hi Ajil, >>

Re: [Gluster-devel] Need help with bitrot

2016-02-24 Thread FNU Raghavendra Manjunath
Hi Ajil, The expiry policy tells the signer (bit-rot daemon) to wait for a specific period of time before signing an object. Whenever an object is modified, a notification is sent to the signer by the brick process (the bit-rot-stub xlator sitting in the I/O path) upon getting a release (i.e. when all the fds
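
A toy model of that expiry policy (not glusterfs source; the 120-second default below is an assumption for illustration): the signer acts on an object only once a quiet period has passed since the last fd release.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define SIGNING_EXPIRY_SECS 120  /* assumed default, for illustration */

struct object_state {
    time_t last_release;  /* set by the brick on fd release */
};

static bool
ready_to_sign(const struct object_state *obj, time_t now)
{
    /* Waiting out the expiry window avoids re-signing an object
     * that is still being actively modified and reopened. */
    return difftime(now, obj->last_release) >= SIGNING_EXPIRY_SECS;
}

int main(void)
{
    struct object_state obj = { .last_release = time(NULL) - 300 };
    printf("sign now? %s\n", ready_to_sign(&obj, time(NULL)) ? "yes" : "no");
    return 0;
}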

Re: [Gluster-devel] glusterfs-3.6.9 release plans

2016-02-24 Thread FNU Raghavendra Manjunath
On Wed, Feb 17, 2016 at 7:18 PM, Kaushal M wrote: > > I'm online now. We can figure out what the problem is. > > > > On Feb 17, 2016 7:17 PM, "FNU Raghavendra Manjunath" > > wrote: > >> > >> Hi, Kaushal, > >> > >> I have been

Re: [Gluster-devel] glusterfs-3.6.9 release plans

2016-02-17 Thread FNU Raghavendra Manjunath
Hi Kaushal, I have been trying to merge a few patches, but every time I try (i.e. do a cherry-pick in gerrit), a new patch set gets submitted. I need some help in resolving it. Regards, Raghavendra On Wed, Feb 17, 2016 at 8:31 AM, Kaushal M wrote: > Hey Johnny, > > Could you please provide an

Re: [Gluster-devel] Bitrot stub forget()

2016-02-16 Thread FNU Raghavendra Manjunath
Venky, Yes, you are right. We should not remove the quarantine entry in forget. We have to remove it upon getting negative lookups in bit-rot-stub and upon getting an unlink. I have attached a patch for it. Unfortunately, rfc.sh is failing for me with the below error: ssh: connect to host git.glust
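
The rule stated above can be summarized in a small sketch (all names are illustrative, not the actual bit-rot-stub code): keep the quarantine entry on forget, drop it only on a negative lookup or an unlink.

#include <stdbool.h>
#include <stdio.h>

enum event { EV_FORGET, EV_NEG_LOOKUP, EV_UNLINK };

static bool
should_remove_quarantine_entry(enum event ev)
{
    switch (ev) {
    case EV_NEG_LOOKUP:  /* lookup says the object no longer exists */
    case EV_UNLINK:      /* object explicitly deleted */
        return true;
    case EV_FORGET:      /* inode merely dropped from the cache; the
                          * object may still exist, so keep the entry */
    default:
        return false;
    }
}

int main(void)
{
    printf("forget -> remove? %d\n", should_remove_quarantine_entry(EV_FORGET));
    printf("unlink -> remove? %d\n", should_remove_quarantine_entry(EV_UNLINK));
    return 0;
}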