Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Pranith Kumar Karampuri
On Thu, Jun 21, 2018 at 7:14 AM, Raghavendra Gowdappa wrote: > > > On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez >> wrote: >> >>> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa < >>> rgowd...@redhat.com> wrote:

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-10 Thread Pranith Kumar Karampuri
*From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@ > gluster.org] *On Behalf Of *Pranith Kumar Karampuri > *Sent:* Wednesday, January 10, 2018 8:08 PM > *To:* Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com> > *Cc:* Zhou, Cynthia (NSB - CN/

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-10 Thread Pranith Kumar Karampuri
On Wed, Jan 10, 2018 at 11:09 AM, Lian, George (NSB - CN/Hangzhou) < george.l...@nokia-sbell.com> wrote: > Hi, Pranith Kumar, > > > > I has create a bug on Bugzilla https://bugzilla.redhat.com/ > show_bug.cgi?id=1531457 > > After my investigation for this link issue, I suppose your changes on >

Re: [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Pranith Kumar Karampuri
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice out your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] >

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-19 Thread Pranith Kumar Karampuri
On Fri, Jan 19, 2018 at 6:19 AM, Shyam Ranganathan wrote: > On 01/18/2018 07:34 PM, Ravishankar N wrote: > > > > > > On 01/18/2018 11:53 PM, Shyam Ranganathan wrote: > >> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote: > >>> Hi, > >>> > >>> As release 3.13.1 is announced,

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-17 Thread Pranith Kumar Karampuri
-- > -- > > There are no active volume tasks > > > > root@dhcp35-190 - ~ > > 16:44:38 :) ⚡ kill -9 5309 5351 5393 > > > > Best Regards, > > George > > *From:* gluster-devel-boun...@gluster.org [mailto:gluster-d

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-18 Thread Pranith Kumar Karampuri
*ls: cannot access '/mnt/test': No such file or directory* > > *root@ubuntu:~# mkdir -p /mnt/test* > > *root@ubuntu:~# mount -t glusterfs ubuntu:/test /mnt/test* > > > > *root@ubuntu:~# cd /mnt/test* > > *root@ubuntu:/mnt/test# echo "abc">aaa* > > *

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-15 Thread Pranith Kumar Karampuri
egards, > > George > > > > *From:* Lian, George (NSB - CN/Hangzhou) > *Sent:* Thursday, January 11, 2018 2:01 PM > *To:* Pranith Kumar Karampuri <pkara...@redhat.com> > *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; > Gluster-devel@g

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
s Sent: Wednesday, January 24, 2018 7:43 PM To: Pranith Kumar Karampuri <pkara...@redhat.com> Cc: Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com>; Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; Li, Deqian (NSB - CN/Hangzhou) <deqian...@nokia-s

Re: [Gluster-devel] [FAILED][master] tests/basic/afr/durability-off.t

2018-01-25 Thread Pranith Kumar Karampuri
On Thu, Jan 25, 2018 at 3:09 PM, Milind Changire wrote: > could AFR engineers check why tests/basic/afr/durability-off.t fails in > brick-mux mode; > Issue seems to be something with connections to the bricks at the time of mount. *09:30:04* dd: opening

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-28 Thread Pranith Kumar Karampuri
+Ravi, +Raghavendra G On 25 Jan 2018 8:49 am, "Pranith Kumar Karampuri" <pkara...@redhat.com> wrote: > > > On 25 Jan 2018 8:43 am, "Lian, George (NSB - CN/Hangzhou)" < > george.l...@nokia-sbell.com> wrote: > > Hi, > > I suppose

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-06 Thread Pranith Kumar Karampuri
On Sun, Feb 4, 2018 at 5:09 PM, Raghavendra Gowdappa wrote: > All, > > One of our users pointed out to the documentation that glusterfs is not > good for storing "Structured data" [1], while discussing an issue [2]. Does > any of you have more context on the feasibility of

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Pranith Kumar Karampuri
On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G wrote: > > > On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur wrote: > >> >> >> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa > > wrote: >> >>> All, >>> >>> One of our users

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Pranith Kumar Karampuri
On Tue, Jan 30, 2018 at 3:40 AM, Shyam Ranganathan wrote: > Hi, > > I have posted an initial draft version of the release notes here [1]. > > I would like to *suggest* the following contributors to help improve and > finish the release notes by 06th Feb, 2017. As you read

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Pranith Kumar Karampuri
On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote: > New addition - tests/basic/volume.t - failed twice atleast with shd core. > > One such ref - > https://build.gluster.org/job/centos7-regression/2058/console > I will take a look. > > > On Thu, Aug 2, 2018 at 6:28 PM Sankarshan

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Pranith Kumar Karampuri
On Thu, Aug 2, 2018 at 10:03 PM Pranith Kumar Karampuri wrote: > > > On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote: > >> New addition - tests/basic/volume.t - failed twice atleast with shd core. >> >> One such ref - >> https://build.gluster.org/job/cent

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
n that we haven't designed (or even listed) all the potential > action()s, I can't give you a list of everything to query. I guarantee > we'll need to know the up/down status, heal counts, and free capacity for > each brick and node. > Thanks for the detailed explanation. This h

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
On Thu, Jul 26, 2018 at 9:59 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Wed, Jul 25, 2018 at 10:48 PM, John Strunk wrote: > >> I have not put together a list. Perhaps the following will help w/ the >> context though... >> >> The

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-26 Thread Pranith Kumar Karampuri
I think EC/AFR/Quota components will definitely be affected with this approach. CCing them. Please feel free to CC anyone who works on commands that require a mount to give status. > > On Thu, Jul 26, 2018 at 12:57 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > &

[Gluster-devel] Spurious smoke failure in build rpms

2018-08-09 Thread Pranith Kumar Karampuri
https://build.gluster.org/job/devrpm-el7/10441/console *10:12:42* Wrote: /home/jenkins/root/workspace/devrpm-el7/extras/LinuxRPM/rpmbuild/SRPMS/glusterfs-4.2dev-0.240.git4657137.el7.src.rpm*10:12:42* mv rpmbuild/SRPMS/* .*10:12:44* INFO: mock.py version 1.4.11 starting (python version =

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-09 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan wrote: > Today's patch set 7 [1], included fixes provided till last evening IST, > and its runs can be seen here [2] (yay! we can link to comments in > gerrit now). > > New failures: (added to the spreadsheet) >

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-09 Thread Pranith Kumar Karampuri
On Wed, Aug 8, 2018 at 5:08 AM Shyam Ranganathan wrote: > Deserves a new beginning, threads on the other mail have gone deep enough. > > NOTE: (5) below needs your attention, rest is just process and data on > how to find failures. > > 1) We are running the tests using the patch [2]. > > 2) Run

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down for stabilization (unlocking the same)

2018-08-13 Thread Pranith Kumar Karampuri
On Mon, Aug 13, 2018 at 10:55 PM Shyam Ranganathan wrote: > On 08/13/2018 02:20 AM, Pranith Kumar Karampuri wrote: > > - At the end of 2 weeks, reassess master and nightly test status, and > > see if we need another drive towards stabilizing master by locking > dow

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down for stabilization (unlocking the same)

2018-08-13 Thread Pranith Kumar Karampuri
On Mon, Aug 13, 2018 at 6:05 AM Shyam Ranganathan wrote: > Hi, > > So we have had master locked down for a week to ensure we only get fixes > for failing tests in order to stabilize the code base, partly for > release-5 branching as well. > > As of this weekend, we (Atin and myself) have been

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Thu, August 09th)

2018-08-10 Thread Pranith Kumar Karampuri
On Fri, Aug 10, 2018 at 6:34 AM Shyam Ranganathan wrote: > Today's test results are updated in the spreadsheet in sheet named "Run > patch set 8". > > I took in patch https://review.gluster.org/c/glusterfs/+/20685 which > caused quite a few failures, so not updating new failures as issue yet. >

Re: [Gluster-devel] Spurious smoke failure in build rpms

2018-08-09 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 4:19 PM Nigel Babu wrote: > Infra issue. Please file a bug. > https://bugzilla.redhat.com/show_bug.cgi?id=1614631 Thanks! > On Thu, Aug 9, 2018 at 3:57 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> https://build.glust

Re: [Gluster-devel] ./tests/basic/afr/metadata-self-heal.t core dumped

2018-08-09 Thread Pranith Kumar Karampuri
On Fri, Aug 10, 2018 at 8:54 AM Raghavendra Gowdappa wrote: > All, > > Details can be found at: > https://build.gluster.org/job/centos7-regression/2190/console > > Process that core dumped: glfs_shdheal > > Note that the patch on which this regression failures is on readdir-ahead > which is not

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-10 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 4:02 PM Pranith Kumar Karampuri wrote: > > > On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan > wrote: > >> Today's patch set 7 [1], included fixes provided till last evening IST, >> and its runs can be seen here [2] (yay! we can link t

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-19 Thread Pranith Kumar Karampuri
+Ravi On Thu, Jul 19, 2018 at 2:29 PM, Lian, George (NSB - CN/Hangzhou) < george.l...@nokia-sbell.com> wrote: > Hi, Gluster Experts, > > > > In glusterfs version 3.12.3, There seems a “fstat” issue for ctime after > we use fsync, > > We have a demo execute binary which write some data and then

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri > wrote: > > hi, > > Quite a few commands to monitor gluster at the moment take almost a &

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
understand what this might entail already... > > -John > > > On Wed, Jul 25, 2018 at 5:45 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay < >> sankarshan.muk

[Gluster-devel] How long should metrics collection on a cluster take?

2018-07-24 Thread Pranith Kumar Karampuri
hi, Quite a few commands to monitor gluster at the moment take almost a second to give output. Some categories of these commands: 1) Any command that needs to do some sort of mount/glfs_init. Examples: 1) heal info family of commands 2) statfs to find space-availability etc (On my

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
Which version of gluster is this? On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi glusterfs experts: > >Good day! > >Recently when I do some test on my gluster env, I found that there > are some remaining entries in

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi glusterfs experts: > >Good day! > >Recently when I do some test on my gluster env, I found that there > are some remaining entries in command “gluster v heal mstate info”

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
nt/export/testdir/common.txt on sn-0 node > > 8> Gluster v heal export info will show following and keep for long time > > # gluster v heal export info > > Brick sn-0.local:/mnt/bricks/export/brick > > /testdir > > Status: Connected > > Number of entries: 1 &g

Re: [Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
Thanks a lot! On Wed, Sep 5, 2018 at 4:55 PM Anoop C S wrote: > On Wed, 2018-09-05 at 16:08 +0530, Anoop C S wrote: > > On Wed, 2018-09-05 at 15:44 +0530, Pranith Kumar Karampuri wrote: > > > It also failed on 4.1 > https://build.gluster.org/job/fedora-smoke/1665/console &

[Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
https://build.gluster.org/job/fedora-smoke/1668/console I think it is happening because of missing tirpc changes in configure.ac? There are a series of patches for libtirpc starting with https://review.gluster.org/c/glusterfs/+/19235, I am not very good at reading configure.ac except for the

Re: [Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
It also failed on 4.1 https://build.gluster.org/job/fedora-smoke/1665/console Looks like quite a few changes need to be ported for them to pass? On Wed, Sep 5, 2018 at 3:41 PM Pranith Kumar Karampuri wrote: > https://build.gluster.org/job/fedora-smoke/1668/console > > I think it is

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0)) > > if (buf->ia_ctime == ) > > return 1; > > > > return 0; > > } > > > > void > > gf_zero_fill_stat (struct iatt *buf) > > { > > // buf->ia

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
On Wed, Jan 24, 2018 at 2:24 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, >In the same commit you mentioned earlier, there was this code > earlier: > -/* Returns 1 if the stat seems to be filled with zeroes. */ > -int > -nfs_zero_fille

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
void gf_zero_fill_stat (struct iatt *buf) { // buf->ia_nlink = 0; buf->ia_ctime = 0; } Thanks & Best Regards George *From:* Lian, George (NSB - CN/Hangzhou) *Sent:* Friday, January 19, 2018 10:03 AM *To:* Pranith Kumar Karampuri <pkara...@redhat.com>; Zhou, Cynth
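The previews above quote the zero-filled-stat convention under discussion: a stat is marked "not yet filled" by clearing both `ia_nlink` and `ia_ctime`, and is treated as zero-filled only when both fields are zero, since a real file always has a non-zero link count. A minimal sketch of that pattern, using a simplified stand-in for gluster's `struct iatt` (only the two relevant fields are kept):

```c
#include <stdint.h>

/* Simplified stand-in for gluster's struct iatt; only the two
 * fields used by the zero-fill convention are modeled here. */
struct iatt {
    uint32_t ia_nlink;
    int64_t  ia_ctime;
};

/* Mark a stat as "zero-filled": clear both sentinel fields. */
void gf_zero_fill_stat(struct iatt *buf)
{
    buf->ia_nlink = 0;
    buf->ia_ctime = 0;
}

/* Detect a zero-filled stat: both fields must be zero. Checking a
 * single field alone risks false positives (e.g. a legitimate file
 * whose ctime happens to be 0). */
int gf_is_zero_filled_stat(struct iatt *buf)
{
    return (buf->ia_nlink == 0) && (buf->ia_ctime == 0);
}
```

This mirrors the check visible in the quoted snippet; the real struct iatt carries many more fields, which are omitted here.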

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-17 Thread Pranith Kumar Karampuri
On Mon, Jan 15, 2018 at 1:55 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Jan 15, 2018 at 8:46 AM, Lian, George (NSB - CN/Hangzhou) < > george.l...@nokia-sbell.com> wrote: > >> Hi, >> >> >> >> Have you reprod

Re: [Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Pranith Kumar Karampuri
Thanks Atin! On Mon, Mar 12, 2018 at 9:49 AM, Atin Mukherjee <amukh...@redhat.com> wrote: > Mohit is aware of this issue and currently working on a patch. > > On Mon, Mar 12, 2018 at 9:47 AM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> hi, >

[Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Pranith Kumar Karampuri
hi, In https://build.gluster.org/job/centos7-regression/274/consoleFull, the test in $SUBJECT generated core. It seems to be segfaulting in quota. But I didn't take a closer look. -- Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan wrote: > Hi, > > As we wind down on 4.0 activities (waiting on docs to hit the site, and > packages to be available in CentOS repositories before announcing the > release), it is time to start preparing for the 4.1 release.

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 4:26 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Tue, Mar 13, 2018 at 1:51 PM, Amar Tumballi <atumb...@redhat.com> > wrote: > >> >> >>> >> Further, as we hit end of March, we would make it m

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Pranith Kumar Karampuri
On Wed, Mar 14, 2018 at 8:27 PM, Amye Scavarda <a...@redhat.com> wrote: > Responding on the architects question: > > On Tue, Mar 13, 2018 at 9:57 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Tue, Mar 13, 2018 a

Re: [Gluster-devel] Split brain after replacing a brick

2018-04-06 Thread Pranith Kumar Karampuri
Hey Manu, Long time! Sorry for the delay. On Sat, Mar 31, 2018 at 8:12 PM, Emmanuel Dreyfus wrote: > Hello > > After doing a replace-brick and a full heal, I am left with: > > Brick bidon:/export/wd0e > Status: Connected > Number of entries: 0 > > Brick

Re: [Gluster-devel] [Gluster-users] Any feature allow to add lock on a file between different apps?

2018-04-06 Thread Pranith Kumar Karampuri
You can use posix-locks i.e. fnctl based advisory locks on glusterfs just like any other fs. On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote: > Hello there, > > > > I want to know if there is a feature allow user to add lock on a file when > their app is modifying that file, so
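As the reply notes, glusterfs honors POSIX advisory locks through the standard `fcntl` interface, so an application locks a file on a gluster mount exactly as it would on a local filesystem. A minimal sketch (the file descriptor would come from opening a file under the gluster mount point in a real deployment):

```c
#include <fcntl.h>
#include <unistd.h>

/* Take (or release) an advisory lock over an entire open file using
 * fcntl, exactly as on any POSIX filesystem. lock_type is F_WRLCK,
 * F_RDLCK, or F_UNLCK. Returns 0 on success, -1 on failure. */
int set_whole_file_lock(int fd, short lock_type)
{
    struct flock fl = {
        .l_type   = lock_type,
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0, /* 0 means "to end of file" */
    };
    /* F_SETLK fails immediately if the lock conflicts;
     * use F_SETLKW instead to block until the lock is available. */
    return fcntl(fd, F_SETLK, &fl);
}
```

Note these are advisory locks: they coordinate only between processes that also call `fcntl`, and do not stop an uncooperative process from writing the file.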

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
Shyam, Do let me know if there is anything that needs to be done on the process front. On Mon, Mar 5, 2018 at 8:18 AM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > hi, > We found that compound fops is not giving better performance in > replicate and I am think

[Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
hi, We found that compound fops is not giving better performance in replicate and I am thinking of removing that code. Sent the patch at https://review.gluster.org/19655 -- Pranith

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi wrote: > Pranith, > > > >> We found that compound fops is not giving better performance in >> replicate and I am thinking of removing that code. Sent the patch at >> https://review.gluster.org/19655 >> >> > If I understand

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-05 Thread Pranith Kumar Karampuri
On Mon, Mar 5, 2018 at 7:10 PM, Shyam Ranganathan <srang...@redhat.com> wrote: > On 03/04/2018 10:15 PM, Pranith Kumar Karampuri wrote: > > Shyam, > > Do let me know if there is anything that needs to be done on the > > process front. > > I see that use-compo

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-28 Thread Pranith Kumar Karampuri
I found the following memory leak present in 3.13, 4.0 and master: https://bugzilla.redhat.com/show_bug.cgi?id=1550078 I will clone/port to 4.0 as soon as the patch is merged. On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero wrote: > Hi all, > > Have tested on CentOS Linux

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Pranith Kumar Karampuri
On Thu, Oct 4, 2018 at 2:15 PM Xavi Hernandez wrote: > On Thu, Oct 4, 2018 at 9:47 AM Amar Tumballi wrote: > >> >> >> On Thu, Oct 4, 2018 at 12:54 PM Xavi Hernandez >> wrote: >> >>> On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal < >>> dkhan...@redhat.com> wrote: >>> Hello folks,

Re: [Gluster-devel] index_lookup segfault in glusterfsd brick process

2018-10-04 Thread Pranith Kumar Karampuri
On Wed, Oct 3, 2018 at 11:20 PM 김경표 wrote: > Hello folks. > > Few days ago I found my EC(4+2) volume was degraded. > I am using 3.12.13-1.el7.x86_64. > One brick was down, below is bricklog > I am suspicious loc->inode bug in index.c (see attached picture) > In GDB, loc->inode is null > >>

Re: [Gluster-devel] [Gluster-users] Crash in glusterfs!!!

2018-09-24 Thread Pranith Kumar Karampuri
that the RC is correct and then I will send out the fix. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL >> wr

Re: [Gluster-devel] [Gluster-users] Crash in glusterfs!!!

2018-09-25 Thread Pranith Kumar Karampuri
t in between. > But the crash happened inside exit() code for which will be in libc which doesn't access any data structures in glusterfs. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >>

Re: [Gluster-devel] Issue with posix locks

2019-03-31 Thread Pranith Kumar Karampuri
On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri wrote: > > > On 3/29/19 11:55 PM, Xavi Hernandez wrote: > > Hi all, > > > > there is one potential problem with posix locks when used in a > > replicated or dispersed volume. > > > > Some background: > > > > Posix locks allow any process to lock a

Re: [Gluster-devel] The state of lock heal and inodelk/entrylk heal ?

2019-03-21 Thread Pranith Kumar Karampuri
On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee wrote: > Hello folks, > > Lock self healing (recovery or replay) is added at > https://review.gluster.org/#/c/glusterfs/+/2766/ > > But it is removed at > https://review.gluster.org/#/c/glusterfs/+/12363/ > > I found some information about it at >

Re: [Gluster-devel] The state of lock heal and inodelk/entrylk heal ?

2019-03-21 Thread Pranith Kumar Karampuri
On Thu, Mar 21, 2019 at 11:50 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee > wrote: > >> Hello folks, >> >> Lock self healing (recovery or replay) is added at >> ht

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez >> wrote: >> >>> Hi Raghavendra, >>> >>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa < >>>

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar 2

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-15 Thread Pranith Kumar Karampuri
e first problem is why the client gets disconnected and the server > doesn't get any notification. The script is stopping bricks 2 and 3 when > this happens. Brick 0 shouldn't fail here. It seems related to the > > The second problem is that when we receive a new connection from a cli

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Pranith Kumar Karampuri
patch and then get it merged again. > > -Amar > > On Wed, Apr 17, 2019 at 8:04 AM Atin Mukherjee > wrote: > >> >> >> On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri < >> pkara...@redhat.com> wrote: >> >>> >>> >>

Re: [Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-27 Thread Pranith Kumar Karampuri
On Sat, May 25, 2019 at 10:22 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, May 24, 2019 at 10:57 PM FNU Raghavendra Manjunath < > rab...@redhat.com> wrote: > >> >> The idea looks OK. One of the things that probably need to be c

Re: [Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-24 Thread Pranith Kumar Karampuri
transport_t structure (and from my understanding, the transport->xid is > just incremented by everytime a > new rpc request is created). > > Overall the suggestion looks fine though. > I am planning to do the same thing transport->xid does. I will send out the patch > Regards

[Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-24 Thread Pranith Kumar Karampuri
Hi, At the moment new stack doesn't populate frame->root->unique in all cases. This makes it difficult to debug hung frames by examining successive state dumps. Fuse and server xlator populate it whenever they can, but other xlators won't be able to assign one when they need to create a
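A later reply in this thread says the plan is to do "the same thing transport->xid does", i.e. increment a counter for every new request. A sketch of that idea for stamping each new stack with a unique id (names here are illustrative, not gluster's actual API):

```c
#include <stdint.h>

/* Hypothetical sketch: a process-wide counter bumped once per new
 * call stack, mirroring how transport->xid is incremented for every
 * new RPC request. The atomic increment keeps ids unique even when
 * stacks are created concurrently from multiple threads. */
static uint64_t next_unique;

uint64_t assign_stack_unique(void)
{
    /* GCC/Clang builtin: atomically increment and return the new value. */
    return __atomic_add_fetch(&next_unique, 1, __ATOMIC_RELAXED);
}
```

With every frame carrying such an id, the same in-flight operation can be matched across successive statedumps, which is exactly what debugging hung frames needs.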

Re: [Gluster-devel] questions on callstubs and "link-count" in index translator

2019-04-28 Thread Pranith Kumar Karampuri
On Fri, Apr 26, 2019 at 10:55 PM Junsong Li wrote: > Hello list, > > > > I have a couple of questions on index translator implementation. > >- Why does gluster need callstub and a different worker queue (and >thread) to process those call stubs? Is it just to lower the priority of >

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-03 Thread Pranith Kumar Karampuri
On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N wrote: > > On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: > > > Hi All, > > In glusterfs, there is an issue regarding the fallocate behavior. In > short, if someone does fallocate from the mount point with some size that > is greater than the

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-03 Thread Pranith Kumar Karampuri
On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath wrote: > > > On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N >> wrote: >> >>&
