Hi Aravinda,
I think it's a good idea, for now, to solve the problem in geo-replication.
But as an application, geo-replication should not be doing this.
The proper fix needs to be in DHT.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Aravinda"
> To: "Gluster Devel"
> Sent: Friday, Ma
Hi,
I see 'inode-quota.t' failed for my glusterd patch.
It's not related to the patch. Could someone look into it?
Thanks and Regards,
Kotresh H R
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-
Sorry, here is the link.
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10820/consoleFull
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Sachin Pandit"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Gluster Devel"
Hi All,
Another thing to consider is that every patch automatically triggers
regressions.
It is very unlikely that the very first Patch Set submitted would be a merge
candidate.
There are usually some review comments to be addressed. Considering
that,
I think it would be a good idea to t
It failed for my patch as well.
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7043/consoleFull
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Atin Mukherjee"
> To: "Nithya Balachandran"
> Cc: "Gluster Devel"
> Sent: Friday, June 19, 2015 10:05:49 A
Hi All,
The above-mentioned test case failed for me; the failure is not related to the patch.
Could someone look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11267/consoleFull
Thanks and Regards,
Kotresh H R
Hi,
I see the above test case failing for my patch; the failure is not related to it.
Could someone from the AFR team look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11332/consoleFull
Thanks and Regards,
Kotresh H R
OK, thanks. I have re-triggered it.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Kotresh Hiremath Ravishankar" , "Gluster Devel"
>
> Sent: Thursday, June 25, 2015 11:55:22 AM
> Subject: Re
Hi,
The rpm build is consistently failing for the patch
(http://review.gluster.org/#/c/11443/)
with the following error, whereas it is passing in my local setup.
...
Making all in performance
Making all in write-behind
Making all in src
CC write-behind.lo
write-behind.c:24:35: fatal error: write-be
logging framework.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar"
> To: "Gluster Devel"
> Sent: Sunday, June 28, 2015 12:01:22 PM
> Subject: [Gluster-devel] Build and Regression failure in master branch!
>
&
Message -
> From: "Atin Mukherjee"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Gluster Devel"
> Sent: Sunday, June 28, 2015 12:56:21 PM
> Subject: Re: [Gluster-devel] Build and Regression failure in master branch!
>
> -Atin
> Sent from one
Hi,
I see a quota.t regression failure for the following run. The changes in my
patch are related to example programs in libgfchangelog.
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11785/consoleFull
Could someone from the quota team take a look at it?
Thanks and Regards,
Kotresh H R
Comments inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Susant Palai"
> To: "Sachin Pandit"
> Cc: "Kotresh Hiremath Ravishankar" , "Gluster Devel"
>
> Sent: Thursday, July 2, 2015 12:35:08 PM
> Subj
Hi
NetBSD regressions are failing to initialize because of the following error,
consistently across multiple re-triggers.
I see the same error for quite a few patches.
http://review.gluster.org/#/c/11443/
Building remotely on nbslave72.cloud.gluster.org (netbsd7_regression) in
workspace /home/jenkins/root/
Thanks Emmanuel.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Emmanuel Dreyfus"
> To: "Kotresh Hiremath Ravishankar" , "Gluster Devel"
>
> Sent: Sunday, July 5, 2015 12:52:23 AM
> Subject: Re: [Gluster-devel] NetBSD regr
Hi Emmanuel,
We are seeing these issues again on nbslave7h.cloud.gluster.org
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7974/console
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Emmanuel Dreyfus"
> To: "Kotresh Hiremath Ra
Hi Shyam,
Rafi and I are proposing the consistent time across replicas feature for 4.1:
https://github.com/gluster/glusterfs/issues/208
Thanks,
Kotresh H R
On Wed, Mar 21, 2018 at 2:05 PM, Ravishankar N
wrote:
>
>
> On 03/20/2018 07:07 PM, Shyam Ranganathan wrote:
>
>> On 03/12/2018 09:37 PM, Shya
This will be very useful. Thank you.
On Mon, May 21, 2018 at 11:45 PM, Vijay Bellur wrote:
>
>
> On Mon, May 21, 2018 at 2:29 AM, Amar Tumballi
> wrote:
>
>> Hi all,
>>
>> As a push towards more flexibility to our developers, and options to run
>> more tests without too much effort, we are movi
On Thu, Aug 2, 2018 at 11:43 AM, Xavi Hernandez
wrote:
> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote:
>
>>
>>
>> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee
>> wrote:
>>
>>> I just went through the nightly regression report of brick mux runs and
>>> here's what I can summarize.
>>>
On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
wrote:
> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote:
>
>>
>>
>> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee
>> wrote:
>>
>>> I just went through the nightly regression report of brick mux runs and
>>> here's what I can summarize.
>>>
>
On Thu, Aug 2, 2018 at 4:50 PM, Amar Tumballi wrote:
>
>
> On Thu, Aug 2, 2018 at 4:37 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>>
>>
>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
>> wrote:
>>
>>&
On Thu, Aug 2, 2018 at 5:05 PM, Atin Mukherjee
wrote:
>
>
> On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>>
>>
>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
>> wrote:
>>
>>&
] E [fuse-bridge.c:4382:fuse_first_lookup]
0-fuse: first lookup on root failed (Transport endpoint is not connected)
-
On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu wrote:
> On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar <
> khire...@r
Raised the infra bug
https://bugzilla.redhat.com/show_bug.cgi?id=1611635
On Thu, Aug 2, 2018 at 6:27 PM, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:
> On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar
> wrote:
> > I am facing different
Have attached in the Bug https://bugzilla.redhat.com/show_bug.cgi?id=1611635
On Thu, 2 Aug 2018, 22:21 Raghavendra Gowdappa, wrote:
>
>
> On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> I am facing different issue
Hi Du/Poornima,
I was analysing the bitrot and geo-rep failures, and I suspect there is a bug in
some perf xlator
that is one of the causes. I was seeing the following behaviour in a few runs.
1. Geo-rep synced data to the slave. It creates an empty file and then rsync
syncs the data.
But the test does "stat --format "
Hi Atin/Shyam
Regarding the geo-rep test retrials: could you take this instrumentation patch [1]
and give it a run?
I have tried thrice on the patch, with brick mux enabled and without, but
couldn't hit the
geo-rep failure. Maybe it's some race, and it's not happening with the
instrumentation patch.
[1] https://review.gl
Hi Shyam/Atin,
I have posted the patch [1] for the geo-rep test case failures:
tests/00-geo-rep/georep-basic-dr-rsync.t
tests/00-geo-rep/georep-basic-dr-tarssh.t
tests/00-geo-rep/00-georep-verify-setup.t
Please include patch [1] while triggering tests.
The instrumentation patch [2] which w
In /etc/hosts, I think it is adding a different IP.
On Mon, Aug 13, 2018 at 5:59 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:
> This is so nice. I tried it and successfully created a test machine. It
> would be great if there is a provision to extend the lifetime of vm's
> b
Hi Anuradha,
To enable the ctime (consistent time) feature, please enable the following two
options:
gluster vol set <VOLNAME> utime on
gluster vol set <VOLNAME> ctime on
Thanks,
Kotresh HR
On Fri, Sep 14, 2018 at 12:18 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:
> Hi Anuradha,
>
> We have a
I have a different problem. clang is complaining on the 4.1 backport of a
patch which was merged in master before
clang-format was brought in. Is there a way I can get smoke +1 for 4.1? It
won't be neat to have clang changes
in 4.1 and not in master for the same patch. It might further affect the clea
On Tue, Sep 18, 2018 at 2:44 PM, Amar Tumballi wrote:
>
>
> On Tue, Sep 18, 2018 at 2:33 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> I have a different problem. clang is complaining on the 4.1 back port of
>> a patch which is merged in
On Thu, Sep 27, 2018 at 5:38 PM Kaleb S. KEITHLEY
wrote:
> On 9/26/18 8:28 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > With the introduction of default python 3 shebangs and the change in
> > configure.ac to correct these to py2 if the build is being attempted on
> > a machine that does not have
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan
wrote:
> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> > RC1 would be around 24th of Sep. with final release tagging around 1st
> > of Oct.
>
> RC1 now stands to be tagged tomorrow, and patches that are being
> targeted for a back port include
On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan
wrote:
> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
> > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:
> >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> >>> RC1 would be around 24th of Sep. with final release tagging around 1st
> >>>
Had forgotten to add Milind; CCing him.
On Mon, Oct 8, 2018 at 11:41 AM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
>
>
> On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan
> wrote:
>
>> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
>> > On 10/04/20
On Tue, Dec 4, 2018 at 10:02 PM Amar Tumballi wrote:
> Looks like that is correct, but that also is failing in another regression
> shard/zero-flag.t
>
It's not related to this, as it doesn't involve any code changes. Changes
are restricted to tests.
> On Tue, Dec 4, 2018 at 7:40 PM Shyam Ranga
Interesting observation! But as discussed in the thread, the bitrot signing
process depends on a 2-minute timeout (by default) after the last fd closes. It
doesn't have any correlation with the size of the file.
Did you happen to verify that the fd was still open for the large files for
some reason?
On Fri, Mar 1,
no fds and they already had a signature. I don't know the
> reason for this. Maybe the client still keeps the fd open? I opened a bug for
> this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1685023
>
> Regards
> David
>
> Am Fr., 1. März 2019 um 18:29 Uhr schrieb
Hi All,
The ctime feature is enabled by default from release gluster-6. But as
explained in bug [1], there is a known issue with legacy files, i.e., the
files which were created before the ctime feature was enabled. These files would
not have the "trusted.glusterfs.mdata" xattr, which maintains the time attributes
Hi Xavi,
Reply inline.
On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez wrote:
> Hi Kotresh,
>
> On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi All,
>>
>> The ctime feature is enabled by default from re
Hi Xavi,
On Tue, Jun 18, 2019 at 12:28 PM Xavi Hernandez wrote:
> Hi Kotresh,
>
> On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Xavi,
>>
>> Reply inline.
>>
>> On Mon, Jun 17, 2019 at
protection of updating the time only if
it's greater, but that would open up a race when two clients are updating
the same file.
This could result in keeping an older time than the latest one. This requires a
code change, and I don't think that should be done.
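A toy sketch of why "update only if greater" can pin the older time when client clocks are skewed (purely illustrative; `apply_update` is a made-up helper, not gluster code):

```python
def apply_update(stored_mtime, new_mtime, only_if_greater):
    """Return the mtime a file would keep after an update."""
    if only_if_greater and new_mtime <= stored_mtime:
        return stored_mtime  # the incoming stamp is dropped
    return new_mtime

# Client A's clock is ahead: its write stamps time 1000.
m = apply_update(0, 1000, only_if_greater=True)
# Client B writes *later* in real time, but its clock is behind: stamp 950.
m = apply_update(m, 950, only_if_greater=True)
# The latest write's time is lost; the file keeps the older client's stamp.
assert m == 1000
```

Without the "only if greater" guard the last writer would win (here 950), which is why the guard trades one inconsistency for another rather than fixing the race.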
Thanks,
Kotresh
On Wed, Mar 11, 20
>
> *From:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Sent:* 2020年3月12日 12:53
> *To:* 'Kotresh Hiremath Ravishankar'
> *Cc:* 'Gluster Devel'
> *Subject:* RE: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> Fr
> cynthia
>
>
>
> *From:* Amar Tumballi
> *Sent:* 2020年3月17日 13:18
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Cc:* Kotresh Hiremath Ravishankar ; Gluster Devel <
> gluster-devel@gluster.org>
> *Subject:* Re: [Gluster-devel] could you help to check about a glust
+1
On Wed, Jul 22, 2020 at 2:34 PM Ravishankar N
wrote:
> Hi,
>
> The gluster code base has some words and terminology (blacklist,
> whitelist, master, slave etc.) that can be considered hurtful/offensive
> to people in a global open source setting. Some of words can be fixed
> trivially but the