Re: [Gluster-devel] rm -rf issues in Geo-replication

2015-05-22 Thread Kotresh Hiremath Ravishankar
Hi Aravinda, I think it's a good idea for now to solve the problem in geo-replication. But as an application, geo-replication should not be doing this. The fix needs to be in DHT. Thanks and Regards, Kotresh H R - Original Message - > From: "Aravinda" > To: "Gluster Devel" > Sent: Friday, Ma

[Gluster-devel] Regression Spurious Failure: inode-quota.t

2015-06-16 Thread Kotresh Hiremath Ravishankar
Hi, I see 'inode-quota.t' failed for my glusterd patch. It's not related to the patch. Could someone look into it? Thanks and Regards, Kotresh H R ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-

Re: [Gluster-devel] Regression Spurious Failure: inode-quota.t

2015-06-16 Thread Kotresh Hiremath Ravishankar
Sorry, here is the link. http://build.gluster.org/job/rackspace-regression-2GB-triggered/10820/consoleFull Thanks and Regards, Kotresh H R - Original Message - > From: "Sachin Pandit" > To: "Kotresh Hiremath Ravishankar" > Cc: "Gluster Devel"

Re: [Gluster-devel] Regression tests and improvement ideas

2015-06-17 Thread Kotresh Hiremath Ravishankar
Hi All, Another thing to be considered is that every patch automatically triggers regressions. It is very unlikely that the very first Patch Set submitted would be a merge candidate; there would be some review comments or other to be addressed. Considering that, I think it would be a good idea to t

Re: [Gluster-devel] weighted-rebalance.t failure

2015-06-18 Thread Kotresh Hiremath Ravishankar
It failed for my patch as well. http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7043/consoleFull Thanks and Regards, Kotresh H R - Original Message - > From: "Atin Mukherjee" > To: "Nithya Balachandran" > Cc: "Gluster Devel" > Sent: Friday, June 19, 2015 10:05:49 A

[Gluster-devel] Regression Failure: bug-1134822-read-only-default-in-graph.t

2015-06-24 Thread Kotresh Hiremath Ravishankar
Hi All, The above-mentioned test case failed for me; the failure is not related to the patch. Could someone look into it? http://build.gluster.org/job/rackspace-regression-2GB-triggered/11267/consoleFull Thanks and Regards, Kotresh H R ___ Gluster-devel maili

[Gluster-devel] Regression Failure (3.7 branch): afr-quota-xattr-mdata-heal.t

2015-06-24 Thread Kotresh Hiremath Ravishankar
Hi, I see the above test case failing for my patch; the failure is not related to it. Could someone from the AFR team look into it? http://build.gluster.org/job/rackspace-regression-2GB-triggered/11332/consoleFull Thanks and Regards, Kotresh H R ___ Gluster-devel mai

Re: [Gluster-devel] Regression Failure (3.7 branch): afr-quota-xattr-mdata-heal.t

2015-06-24 Thread Kotresh Hiremath Ravishankar
Ok, Thanks. I have re-triggered it. Thanks and Regards, Kotresh H R - Original Message - > From: "Pranith Kumar Karampuri" > To: "Kotresh Hiremath Ravishankar" , "Gluster Devel" > > Sent: Thursday, June 25, 2015 11:55:22 AM > Subject: Re

[Gluster-devel] Build and Regression failure in master branch!

2015-06-27 Thread Kotresh Hiremath Ravishankar
Hi, the rpm build is consistently failing for the patch (http://review.gluster.org/#/c/11443/) with the following error, whereas it passes in my local setup. ... Making all in performance Making all in write-behind Making all in src CC write-behind.lo write-behind.c:24:35: fatal error: write-be

Re: [Gluster-devel] Build and Regression failure in master branch!

2015-06-28 Thread Kotresh Hiremath Ravishankar
logging framework. Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" > To: "Gluster Devel" > Sent: Sunday, June 28, 2015 12:01:22 PM > Subject: [Gluster-devel] Build and Regression failure in master branch! > &

Re: [Gluster-devel] Build and Regression failure in master branch!

2015-06-28 Thread Kotresh Hiremath Ravishankar
Message - > From: "Atin Mukherjee" > To: "Kotresh Hiremath Ravishankar" > Cc: "Gluster Devel" > Sent: Sunday, June 28, 2015 12:56:21 PM > Subject: Re: [Gluster-devel] Build and Regression failure in master branch! > > -Atin > Sent from one

[Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-01 Thread Kotresh Hiremath Ravishankar
Hi, I see a quota.t regression failure for the following. The changes are related to example programs in libgfchangelog. http://build.gluster.org/job/rackspace-regression-2GB-triggered/11785/consoleFull Could someone from the quota team take a look at it? Thanks and Regards, Kotresh H R _

Re: [Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-02 Thread Kotresh Hiremath Ravishankar
Comments inline. Thanks and Regards, Kotresh H R - Original Message - > From: "Susant Palai" > To: "Sachin Pandit" > Cc: "Kotresh Hiremath Ravishankar" , "Gluster Devel" > > Sent: Thursday, July 2, 2015 12:35:08 PM > Subj

[Gluster-devel] NetBSD regression tests not Initializing...

2015-07-03 Thread Kotresh Hiremath Ravishankar
Hi, NetBSD regressions are consistently failing to initialize with the following error across multiple re-triggers. I see the same error for quite a few patches. http://review.gluster.org/#/c/11443/ Building remotely on nbslave72.cloud.gluster.org (netbsd7_regression) in workspace /home/jenkins/root/

Re: [Gluster-devel] NetBSD regression tests not Initializing...

2015-07-05 Thread Kotresh Hiremath Ravishankar
Thanks Emmanuel. Thanks and Regards, Kotresh H R - Original Message - > From: "Emmanuel Dreyfus" > To: "Kotresh Hiremath Ravishankar" , "Gluster Devel" > > Sent: Sunday, July 5, 2015 12:52:23 AM > Subject: Re: [Gluster-devel] NetBSD regr

Re: [Gluster-devel] NetBSD regression tests not Initializing...

2015-07-07 Thread Kotresh Hiremath Ravishankar
Hi Emmanuel, We are seeing these issues again on nbslave7h.cloud.gluster.org http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7974/console Thanks and Regards, Kotresh H R - Original Message - > From: "Emmanuel Dreyfus" > To: "Kotresh Hiremath Ra

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-21 Thread Kotresh Hiremath Ravishankar
Hi Shyam, Rafi and I are proposing the consistent-time-across-replicas feature for 4.1: https://github.com/gluster/glusterfs/issues/208 Thanks, Kotresh H R On Wed, Mar 21, 2018 at 2:05 PM, Ravishankar N wrote: > > > On 03/20/2018 07:07 PM, Shyam Ranganathan wrote: > >> On 03/12/2018 09:37 PM, Shya

Re: [Gluster-devel] [Gluster-Maintainers] Update: Gerrit review system has one more command now

2018-05-21 Thread Kotresh Hiremath Ravishankar
This will be very useful. Thank you. On Mon, May 21, 2018 at 11:45 PM, Vijay Bellur wrote: > > > On Mon, May 21, 2018 at 2:29 AM, Amar Tumballi > wrote: > >> Hi all, >> >> As a push towards more flexibility to our developers, and options to run >> more tests without too much effort, we are movi

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 11:43 AM, Xavi Hernandez wrote: > On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote: > >> >> >> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee >> wrote: >> >>> I just went through the nightly regression report of brick mux runs and >>> here's what I can summarize. >>>

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez wrote: > On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote: > >> >> >> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee >> wrote: >> >>> I just went through the nightly regression report of brick mux runs and >>> here's what I can summarize. >>> >

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 4:50 PM, Amar Tumballi wrote: > > > On Thu, Aug 2, 2018 at 4:37 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> >> >> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez >> wrote: >> >>&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 2, 2018 at 5:05 PM, Atin Mukherjee wrote: > > > On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> >> >> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez >> wrote: >> >>&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
] E [fuse-bridge.c:4382:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected) - On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu wrote: > On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar < > khire...@r

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
Raised the infra bug https://bugzilla.redhat.com/show_bug.cgi?id=1611635 On Thu, Aug 2, 2018 at 6:27 PM, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar > wrote: > > I am facing different

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Kotresh Hiremath Ravishankar
Have attached it in the bug: https://bugzilla.redhat.com/show_bug.cgi?id=1611635 On Thu, 2 Aug 2018, 22:21 Raghavendra Gowdappa, wrote: > > > On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> I am facing different issue

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-03 Thread Kotresh Hiremath Ravishankar
Hi Du/Poornima, I was analysing the bitrot and geo-rep failures, and I suspect a bug in some perf xlator was one of the causes. I was seeing the following behaviour in a few runs. 1. Geo-rep synced data to the slave. It creates an empty file and then rsync syncs the data. But the test does "stat --format "

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-08 Thread Kotresh Hiremath Ravishankar
Hi Atin/Shyam, Regarding the geo-rep test retrials: could you take this instrumentation patch [1] and give it a run? I have tried thrice on the patch, with brick mux enabled and without, but couldn't hit the geo-rep failure. Maybe it is some race that does not occur with the instrumentation patch. [1] https://review.gl

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-10 Thread Kotresh Hiremath Ravishankar
Hi Shyam/Atin, I have posted the patch [1] for the geo-rep test case failures: tests/00-geo-rep/georep-basic-dr-rsync.t tests/00-geo-rep/georep-basic-dr-tarssh.t tests/00-geo-rep/00-georep-verify-setup.t Please include patch [1] while triggering tests. The instrumentation patch [2] which w

Re: [Gluster-devel] [Gluster-infra] Setting up machines from softserve in under 5 mins

2018-08-14 Thread Kotresh Hiremath Ravishankar
In /etc/hosts, I think it is adding a different IP. On Mon, Aug 13, 2018 at 5:59 PM, Rafi Kavungal Chundattu Parambil < rkavu...@redhat.com> wrote: > This is so nice. I tried it and successfully created a test machine. It > would be great if there is a provision to extend the lifetime of vm's > b

Re: [Gluster-devel] Cloudsync with AFR

2018-09-16 Thread Kotresh Hiremath Ravishankar
Hi Anuradha, To enable the ctime (consistent time) feature, please enable the following two options: gluster vol set utime on gluster vol set ctime on Thanks, Kotresh HR On Fri, Sep 14, 2018 at 12:18 PM, Rafi Kavungal Chundattu Parambil < rkavu...@redhat.com> wrote: > Hi Anuradha, > > We have a
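The commands in the preview above are truncated and omit the volume name and full option keys. Assuming a volume named `myvol` and the option names used by gluster-6 era releases (`features.utime` and `features.ctime` — treat both as assumptions to verify against your release's `gluster volume set help`), the full form would look like:

```shell
# Sketch: enable the consistent-time feature on a volume.
# "myvol" is a placeholder volume name; the features.* option keys
# are assumed from gluster-6 era documentation.
gluster volume set myvol features.utime on   # client-side utime xlator
gluster volume set myvol features.ctime on   # posix-layer mdata xattr updates
```

Both options need to be on together for consistent time attributes across replicas.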

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Kotresh Hiremath Ravishankar
I have a different problem. clang is complaining on the 4.1 backport of a patch that was merged in master before clang-format was brought in. Is there a way I can get smoke +1 for 4.1? It won't be neat to have clang changes in 4.1 and not in master for the same patch. It might further affect the clea

Re: [Gluster-devel] Clang-Formatter for GlusterFS.

2018-09-18 Thread Kotresh Hiremath Ravishankar
On Tue, Sep 18, 2018 at 2:44 PM, Amar Tumballi wrote: > > > On Tue, Sep 18, 2018 at 2:33 PM, Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> I have a different problem. clang is complaining on the 4.1 back port of >> a patch which is merged in

Re: [Gluster-devel] Python3 build process

2018-09-27 Thread Kotresh Hiremath Ravishankar
On Thu, Sep 27, 2018 at 5:38 PM Kaleb S. KEITHLEY wrote: > On 9/26/18 8:28 PM, Shyam Ranganathan wrote: > > Hi, > > > > With the introduction of default python 3 shebangs and the change in > > configure.ac to correct these to py2 if the build is being attempted on > > a machine that does not have

Re: [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Kotresh Hiremath Ravishankar
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan wrote: > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > > RC1 would be around 24th of Sep. with final release tagging around 1st > > of Oct. > > RC1 now stands to be tagged tomorrow, and patches that are being > targeted for a back port include

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-07 Thread Kotresh Hiremath Ravishankar
On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan wrote: > On 10/05/2018 10:59 AM, Shyam Ranganathan wrote: > > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote: > >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > >>> RC1 would be around 24th of Sep. with final release tagging around 1st > >>>

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-07 Thread Kotresh Hiremath Ravishankar
Had forgotten to add Milind; CCing. On Mon, Oct 8, 2018 at 11:41 AM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > > > On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan > wrote: > >> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote: >> > On 10/04/20

Re: [Gluster-devel] Geo-rep tests failing on master Cent7-regressions

2018-12-04 Thread Kotresh Hiremath Ravishankar
On Tue, Dec 4, 2018 at 10:02 PM Amar Tumballi wrote: > Looks like that is correct, but that also is failing in another regression > shard/zero-flag.t > It's not related to this, as it doesn't involve any code changes; the changes are restricted to tests. > On Tue, Dec 4, 2018 at 7:40 PM Shyam Ranga

Re: [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-01 Thread Kotresh Hiremath Ravishankar
Interesting observation! But as discussed in the thread, the bitrot signing process depends on a 2-minute timeout (by default) after the last fd closes. It doesn't have any correlation with the size of the file. Did you happen to verify that the fd was still open for large files for some reason? On Fri, Mar 1,
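The open-fd question above can be checked directly on the brick node. A rough diagnostic sketch — the brick path and filename are hypothetical, and the exact signature xattr name should be verified on your version:

```shell
# Hypothetical brick path/file; run on the brick node as root.
# Is any process still holding the file open?
lsof /bricks/brick1/bigfile.bin

# After the last close plus the default 2-minute timeout, the bitrot
# signature xattr should appear on the brick copy of the file:
getfattr -m . -d -e hex /bricks/brick1/bigfile.bin | grep -i signature
```

If `lsof` keeps reporting an open fd on the large file, the delayed signing would be explained by the timeout never starting, not by the file size itself.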

Re: [Gluster-devel] Bitrot: Time of signing depending on the file size???

2019-03-05 Thread Kotresh Hiremath Ravishankar
no fds and they already had a signature. I don't know the > reason for this. Maybe the client still keeps the fd open? I opened a bug for > this: > https://bugzilla.redhat.com/show_bug.cgi?id=1685023 > > Regards > David > > Am Fr., 1. März 2019 um 18:29 Uhr schrieb

[Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-17 Thread Kotresh Hiremath Ravishankar
Hi All, The ctime feature is enabled by default from release gluster-6. But as explained in bug [1] there is a known issue with legacy files, i.e., the files which were created before the ctime feature was enabled. These files would not have the "trusted.glusterfs.mdata" xattr, which maintains the time attributes
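As a quick way to tell legacy files apart, one can probe for the xattr directly. A minimal Python sketch (Linux-only, since it relies on `os.getxattr`; the broad errno handling is deliberate because reading `trusted.*` xattrs normally requires privilege, and an unprivileged read behaves as if the attribute is absent):

```python
import errno
import os
import tempfile

def has_mdata(path):
    """Return True if the file carries the trusted.glusterfs.mdata xattr.

    trusted.* xattrs live in a privileged namespace: they are meaningful on
    the brick and readable only with CAP_SYS_ADMIN, so an unprivileged probe
    (or a probe on a non-gluster file) reports the attribute as missing.
    """
    try:
        os.getxattr(path, "trusted.glusterfs.mdata")
        return True
    except OSError as e:
        if e.errno in (errno.ENODATA, errno.ENOTSUP, errno.EPERM, errno.EACCES):
            return False
        raise

# A plain local file (not on a gluster brick) has no such xattr:
with tempfile.NamedTemporaryFile() as f:
    print(has_mdata(f.name))  # → False
```

On a real volume this check would be run against the brick-side path; files created before the feature was enabled are exactly the ones for which it returns False.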

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-17 Thread Kotresh Hiremath Ravishankar
Hi Xavi, Reply inline. On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez wrote: > Hi Kotresh, > > On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi All, >> >> The ctime feature is enabled by default from re

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-18 Thread Kotresh Hiremath Ravishankar
Hi Xavi, On Tue, Jun 18, 2019 at 12:28 PM Xavi Hernandez wrote: > Hi Kotresh, > > On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar < > khire...@redhat.com> wrote: > >> Hi Xavi, >> >> Reply inline. >> >> On Mon, Jun 17, 2019 at

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-11 Thread Kotresh Hiremath Ravishankar
protection of updating the time only if it's greater, but that would open up a race when two clients are updating the same file, resulting in keeping an older time than the latest. This requires a code change, and I don't think that should be done. Thanks, Kotresh On Wed, Mar 11, 20
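The race described above can be shown in a few lines. This is purely illustrative Python, not gluster code: both clients pass the "only update if greater" check against the same stale stored value, so whichever write lands last wins — even when its timestamp is older than the other client's:

```python
# Illustrative lost-update race for a "set time only if greater" rule.
stored = 100          # current mdata time on the brick

# Two clients race; each compares its new time against the value it read:
a_read = b_read = stored
a_new, b_new = 150, 140

# Both checks pass, because both clients read the same old value...
assert a_new > a_read and b_new > b_read

# ...so write order alone decides the outcome. If B's write lands after A's:
stored = a_new        # client A writes 150
stored = b_new        # client B overwrites with 140
print(stored)         # → 140, older than the latest update (150)
```

Avoiding this would require the check and the write to be atomic on the server side, which is the code change the message argues against.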

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-11 Thread Kotresh Hiremath Ravishankar
> > *From:* Zhou, Cynthia (NSB - CN/Hangzhou) > *Sent:* March 12, 2020 12:53 > *To:* 'Kotresh Hiremath Ravishankar' > *Cc:* 'Gluster Devel' > *Subject:* RE: could you help to check about a glusterfs issue seems to > be related to ctime > > > > Fr

Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-17 Thread Kotresh Hiremath Ravishankar
> cynthia > > *From:* Amar Tumballi > *Sent:* March 17, 2020 13:18 > *To:* Zhou, Cynthia (NSB - CN/Hangzhou) > *Cc:* Kotresh Hiremath Ravishankar ; Gluster Devel < > gluster-devel@gluster.org> > *Subject:* Re: [Gluster-devel] could you help to check about a glust

Re: [Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Kotresh Hiremath Ravishankar
+1 On Wed, Jul 22, 2020 at 2:34 PM Ravishankar N wrote: > Hi, > > The gluster code base has some words and terminology (blacklist, > whitelist, master, slave etc.) that can be considered hurtful/offensive > to people in a global open source setting. Some of words can be fixed > trivially but the
