Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-03 Thread Pranith Kumar Karampuri
On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath wrote: > > > On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N >> wrote: >> >>&

Re: [Gluster-devel] fallocate behavior in glusterfs

2019-07-03 Thread Pranith Kumar Karampuri
On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N wrote: > > On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: > > > Hi All, > > In glusterfs, there is an issue regarding the fallocate behavior. In > short, if someone does fallocate from the mount point with some size that > is greater than the

Re: [Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-27 Thread Pranith Kumar Karampuri
On Sat, May 25, 2019 at 10:22 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Fri, May 24, 2019 at 10:57 PM FNU Raghavendra Manjunath < > rab...@redhat.com> wrote: > >> >> The idea looks OK. One of the things that probably need to be c

Re: [Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-24 Thread Pranith Kumar Karampuri
transport_t structure (and from my understanding, the transport->xid is > just incremented every time a > new rpc request is created). > > Overall the suggestion looks fine though. > I am planning to do the same thing transport->xid does. I will send out the patch. > Regards

[Gluster-devel] making frame->root->unique more effective in debugging hung frames

2019-05-24 Thread Pranith Kumar Karampuri
Hi, At the moment a new stack doesn't populate frame->root->unique in all cases. This makes it difficult to debug hung frames by examining successive state dumps. The fuse and server xlators populate it whenever they can, but other xlators won't be able to assign one when they need to create a
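The approach this thread converges on (mirroring what transport->xid does) can be sketched roughly as follows. The names and the single global counter here are illustrative assumptions, not the actual GlusterFS implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only -- names and layout are assumed, not real
 * GlusterFS code. Every newly created call stack gets a distinct id
 * from one monotonically increasing counter, the same way
 * transport->xid is incremented for each new RPC request. */
static uint64_t next_unique;

static uint64_t assign_frame_unique(void)
{
    /* a real implementation would need an atomic increment */
    return ++next_unique;
}
```

With one shared counter, the same unique value appearing in two successive state dumps points at the same (possibly hung) call stack.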

Re: [Gluster-devel] questions on callstubs and "link-count" in index translator

2019-04-28 Thread Pranith Kumar Karampuri
On Fri, Apr 26, 2019 at 10:55 PM Junsong Li wrote: > Hello list, > > > > I have a couple of questions on index translator implementation. > >- Why does gluster need callstub and a different worker queue (and >thread) to process those call stubs? Is it just to lower the priority of >

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Pranith Kumar Karampuri
patch and then get it merged again. > > -Amar > > On Wed, Apr 17, 2019 at 8:04 AM Atin Mukherjee > wrote: > >> >> >> On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri < >> pkara...@redhat.com> wrote: >> >>> >>> >>

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-15 Thread Pranith Kumar Karampuri
e first problem is why the client gets disconnected and the server > doesn't get any notification. The script is stopping bricks 2 and 3 when > this happens. Brick 0 shouldn't fail here. It seems related to the > > The second problem is that when we receive a new connection from a cli

Re: [Gluster-devel] Issue with posix locks

2019-03-31 Thread Pranith Kumar Karampuri
On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri wrote: > > > On 3/29/19 11:55 PM, Xavi Hernandez wrote: > > Hi all, > > > > there is one potential problem with posix locks when used in a > > replicated or dispersed volume. > > > > Some background: > > > > Posix locks allow any process to lock a

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar 2

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez >> wrote: >> >>> On Wed, Mar

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Pranith Kumar Karampuri
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote: > On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez >> wrote: >> >>> Hi Raghavendra, >>> >>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa < >>>

Re: [Gluster-devel] The state of lock heal and inodelk/entrylk heal ?

2019-03-21 Thread Pranith Kumar Karampuri
On Thu, Mar 21, 2019 at 11:50 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee > wrote: > >> Hello folks, >> >> Lock self healing (recovery or replay) is added at >> ht

Re: [Gluster-devel] The state of lock heal and inodelk/entrylk heal ?

2019-03-21 Thread Pranith Kumar Karampuri
On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee wrote: > Hello folks, > > Lock self healing (recovery or replay) is added at > https://review.gluster.org/#/c/glusterfs/+/2766/ > > But it is removed at > https://review.gluster.org/#/c/glusterfs/+/12363/ > > I found some information about it at >

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Pranith Kumar Karampuri
On Thu, Oct 4, 2018 at 2:15 PM Xavi Hernandez wrote: > On Thu, Oct 4, 2018 at 9:47 AM Amar Tumballi wrote: > >> >> >> On Thu, Oct 4, 2018 at 12:54 PM Xavi Hernandez >> wrote: >> >>> On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal < >>> dkhan...@redhat.com> wrote: >>> Hello folks,

Re: [Gluster-devel] index_lookup segfault in glusterfsd brick process

2018-10-04 Thread Pranith Kumar Karampuri
On Wed, Oct 3, 2018 at 11:20 PM 김경표 wrote: > Hello folks. > > A few days ago I found my EC(4+2) volume was degraded. > I am using 3.12.13-1.el7.x86_64. > One brick was down; the brick log is below. > I suspect a loc->inode bug in index.c (see attached picture) > In GDB, loc->inode is null > >>

Re: [Gluster-devel] [Gluster-users] Crash in glusterfs!!!

2018-09-25 Thread Pranith Kumar Karampuri
t in between. > But the crash happened inside exit() code for which will be in libc which doesn't access any data structures in glusterfs. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >>

Re: [Gluster-devel] [Gluster-users] Crash in glusterfs!!!

2018-09-24 Thread Pranith Kumar Karampuri
that the RC is correct and then I will send out the fix. > > Regards, > Abhishek > > On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL >> wr

Re: [Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
Thanks a lot! On Wed, Sep 5, 2018 at 4:55 PM Anoop C S wrote: > On Wed, 2018-09-05 at 16:08 +0530, Anoop C S wrote: > > On Wed, 2018-09-05 at 15:44 +0530, Pranith Kumar Karampuri wrote: > > > It also failed on 4.1 > https://build.gluster.org/job/fedora-smoke/1665/console &

Re: [Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
It also failed on 4.1 https://build.gluster.org/job/fedora-smoke/1665/console Looks like quite a few changes need to be ported for them to pass? On Wed, Sep 5, 2018 at 3:41 PM Pranith Kumar Karampuri wrote: > https://build.gluster.org/job/fedora-smoke/1668/console > > I think it is

[Gluster-devel] fedora smoke failure on 3.12

2018-09-05 Thread Pranith Kumar Karampuri
https://build.gluster.org/job/fedora-smoke/1668/console I think it is happening because of missing tirpc changes in configure.ac? There are a series of patches for libtirpc starting with https://review.gluster.org/c/glusterfs/+/19235, I am not very good at reading configure.ac except for the

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
nt/export/testdir/common.txt on sn-0 node > > 8> Gluster v heal export info will show following and keep for long time > > # gluster v heal export info > > Brick sn-0.local:/mnt/bricks/export/brick > > /testdir > > Status: Connected > > Number of entries: 1 &g

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi glusterfs experts: > >Good day! > >Recently when I do some test on my gluster env, I found that there > are some remaining entries in command “gluster v heal mstate info”

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-05 Thread Pranith Kumar Karampuri
Which version of gluster is this? On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi glusterfs experts: > >Good day! > >Recently when I do some test on my gluster env, I found that there > are some remaining entries in

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down for stabilization (unlocking the same)

2018-08-13 Thread Pranith Kumar Karampuri
On Mon, Aug 13, 2018 at 10:55 PM Shyam Ranganathan wrote: > On 08/13/2018 02:20 AM, Pranith Kumar Karampuri wrote: > > - At the end of 2 weeks, reassess master and nightly test status, and > > see if we need another drive towards stabilizing master by locking > dow

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down for stabilization (unlocking the same)

2018-08-13 Thread Pranith Kumar Karampuri
On Mon, Aug 13, 2018 at 6:05 AM Shyam Ranganathan wrote: > Hi, > > So we have had master locked down for a week to ensure we only get fixes > for failing tests in order to stabilize the code base, partly for > release-5 branching as well. > > As of this weekend, we (Atin and myself) have been

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-10 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 4:02 PM Pranith Kumar Karampuri wrote: > > > On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan > wrote: > >> Today's patch set 7 [1], included fixes provided till last evening IST, >> and its runs can be seen here [2] (yay! we can link t

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Thu, August 09th)

2018-08-10 Thread Pranith Kumar Karampuri
On Fri, Aug 10, 2018 at 6:34 AM Shyam Ranganathan wrote: > Today's test results are updated in the spreadsheet in sheet named "Run > patch set 8". > > I took in patch https://review.gluster.org/c/glusterfs/+/20685 which > caused quite a few failures, so not updating new failures as issue yet. >

Re: [Gluster-devel] Spurious smoke failure in build rpms

2018-08-09 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 4:19 PM Nigel Babu wrote: > Infra issue. Please file a bug. > https://bugzilla.redhat.com/show_bug.cgi?id=1614631 Thanks! > On Thu, Aug 9, 2018 at 3:57 PM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> https://build.glust

Re: [Gluster-devel] ./tests/basic/afr/metadata-self-heal.t core dumped

2018-08-09 Thread Pranith Kumar Karampuri
On Fri, Aug 10, 2018 at 8:54 AM Raghavendra Gowdappa wrote: > All, > > Details can be found at: > https://build.gluster.org/job/centos7-regression/2190/console > > Process that core dumped: glfs_shdheal > > Note that the patch on which this regression failures is on readdir-ahead > which is not

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-09 Thread Pranith Kumar Karampuri
On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan wrote: > Today's patch set 7 [1], included fixes provided till last evening IST, > and its runs can be seen here [2] (yay! we can link to comments in > gerrit now). > > New failures: (added to the spreadsheet) >

[Gluster-devel] Spurious smoke failure in build rpms

2018-08-09 Thread Pranith Kumar Karampuri
https://build.gluster.org/job/devrpm-el7/10441/console *10:12:42* Wrote: /home/jenkins/root/workspace/devrpm-el7/extras/LinuxRPM/rpmbuild/SRPMS/glusterfs-4.2dev-0.240.git4657137.el7.src.rpm*10:12:42* mv rpmbuild/SRPMS/* .*10:12:44* INFO: mock.py version 1.4.11 starting (python version =

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-09 Thread Pranith Kumar Karampuri
On Wed, Aug 8, 2018 at 5:08 AM Shyam Ranganathan wrote: > Deserves a new beginning, threads on the other mail have gone deep enough. > > NOTE: (5) below needs your attention, rest is just process and data on > how to find failures. > > 1) We are running the tests using the patch [2]. > > 2) Run

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Pranith Kumar Karampuri
On Thu, Aug 2, 2018 at 10:03 PM Pranith Kumar Karampuri wrote: > > > On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote: > >> New addition - tests/basic/volume.t - failed twice atleast with shd core. >> >> One such ref - >> https://build.gluster.org/job/cent

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Pranith Kumar Karampuri
On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote: > New addition - tests/basic/volume.t - failed twice atleast with shd core. > > One such ref - > https://build.gluster.org/job/centos7-regression/2058/console > I will take a look. > > > On Thu, Aug 2, 2018 at 6:28 PM Sankarshan

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-26 Thread Pranith Kumar Karampuri
I think EC/AFR/Quota components will definitely be affected by this approach. CCing them. Please feel free to CC anyone who works on commands that require a mount to give status. > > On Thu, Jul 26, 2018 at 12:57 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > &

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
On Thu, Jul 26, 2018 at 9:59 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Wed, Jul 25, 2018 at 10:48 PM, John Strunk wrote: > >> I have not put together a list. Perhaps the following will help w/ the >> context though... >> >> The

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
n that we haven't designed (or even listed) all the potential > action()s, I can't give you a list of everything to query. I guarantee > we'll need to know the up/down status, heal counts, and free capacity for > each brick and node. > Thanks for the detailed explanation. This h

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
understand what this might entail already... > > -John > > > On Wed, Jul 25, 2018 at 5:45 AM Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay < >> sankarshan.muk

Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Pranith Kumar Karampuri
On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri > wrote: > > hi, > > Quite a few commands to monitor gluster at the moment take almost a &

[Gluster-devel] How long should metrics collection on a cluster take?

2018-07-24 Thread Pranith Kumar Karampuri
hi, Quite a few commands to monitor gluster at the moment take almost a second to give output. Some categories of these commands: 1) Any command that needs to do some sort of mount/glfs_init. Examples: 1) heal info family of commands 2) statfs to find space-availability etc (On my

Re: [Gluster-devel] The ctime of fstat is not correct which lead to "tar" utility error

2018-07-19 Thread Pranith Kumar Karampuri
+Ravi On Thu, Jul 19, 2018 at 2:29 PM, Lian, George (NSB - CN/Hangzhou) < george.l...@nokia-sbell.com> wrote: > Hi, Gluster Experts, > > > > In glusterfs version 3.12.3, There seems a “fstat” issue for ctime after > we use fsync, > > We have a demo execute binary which write some data and then

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Pranith Kumar Karampuri
On Thu, Jun 21, 2018 at 7:14 AM, Raghavendra Gowdappa wrote: > > > On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez >> wrote: >> >>> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa < >>> rgowd...@redhat.com> wrote:

Re: [Gluster-devel] Split brain after replacing a brick

2018-04-06 Thread Pranith Kumar Karampuri
Hey Manu, Long time! Sorry for the delay. On Sat, Mar 31, 2018 at 8:12 PM, Emmanuel Dreyfus wrote: > Hello > > After doing a replace-brick and a full heal, I am left with: > > Brick bidon:/export/wd0e > Status: Connected > Number of entries: 0 > > Brick

Re: [Gluster-devel] [Gluster-users] Any feature allow to add lock on a file between different apps?

2018-04-06 Thread Pranith Kumar Karampuri
You can use posix-locks, i.e. fcntl-based advisory locks, on glusterfs just like on any other fs. On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote: > Hello there, > > > > I want to know if there is a feature allow user to add lock on a file when > their app is modifying that file, so
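For reference, a minimal sketch of such fcntl-based advisory locking (standard POSIX API, nothing GlusterFS-specific):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Minimal sketch: take or release a POSIX advisory lock on the whole
 * file with fcntl(). On a GlusterFS mount this behaves as it does on
 * any local filesystem. */
static int lock_whole_file(int fd, short type)
{
    struct flock fl = {
        .l_type   = type,      /* F_WRLCK, F_RDLCK or F_UNLCK */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 means "to end of file" */
    };
    return fcntl(fd, F_SETLK, &fl);  /* non-blocking; -1 on conflict */
}
```

Note that advisory locks only coordinate cooperating processes: a process that never calls fcntl() is not prevented from writing to the file.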

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Pranith Kumar Karampuri
On Wed, Mar 14, 2018 at 8:27 PM, Amye Scavarda <a...@redhat.com> wrote: > Responding on the architects question: > > On Tue, Mar 13, 2018 at 9:57 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Tue, Mar 13, 2018 a

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 4:26 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Tue, Mar 13, 2018 at 1:51 PM, Amar Tumballi <atumb...@redhat.com> > wrote: > >> >> >>> >> Further, as we hit end of March, we would make it m

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan wrote: > Hi, > > As we wind down on 4.0 activities (waiting on docs to hit the site, and > packages to be available in CentOS repositories before announcing the > release), it is time to start preparing for the 4.1 release.

Re: [Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Pranith Kumar Karampuri
Thanks Atin! On Mon, Mar 12, 2018 at 9:49 AM, Atin Mukherjee <amukh...@redhat.com> wrote: > Mohit is aware of this issue and currently working on a patch. > > On Mon, Mar 12, 2018 at 9:47 AM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> hi, >

[Gluster-devel] tests/bugs/core/bug-1432542-mpx-restart-crash.t generated core

2018-03-11 Thread Pranith Kumar Karampuri
hi, In https://build.gluster.org/job/centos7-regression/274/consoleFull, the test in $SUBJECT generated a core. It seems to be segfaulting in quota, but I didn't take a closer look. -- Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-05 Thread Pranith Kumar Karampuri
On Mon, Mar 5, 2018 at 7:10 PM, Shyam Ranganathan <srang...@redhat.com> wrote: > On 03/04/2018 10:15 PM, Pranith Kumar Karampuri wrote: > > Shyam, > > Do let me know if there is anything that needs to be done on the > > process front. > > I see that use-compo

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi wrote: > Pranith, > > > >> We found that compound fops is not giving better performance in >> replicate and I am thinking of removing that code. Sent the patch at >> https://review.gluster.org/19655 >> >> > If I understand

Re: [Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
Shyam, Do let me know if there is anything that needs to be done on the process front. On Mon, Mar 5, 2018 at 8:18 AM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > hi, > We found that compound fops is not giving better performance in > replicate and I am think

[Gluster-devel] Removal of use-compound-fops option in afr

2018-03-04 Thread Pranith Kumar Karampuri
hi, We found that compound fops is not giving better performance in replicate and I am thinking of removing that code. Sent the patch at https://review.gluster.org/19655 -- Pranith

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-28 Thread Pranith Kumar Karampuri
I found the following memory leak present in 3.13, 4.0 and master: https://bugzilla.redhat.com/show_bug.cgi?id=1550078 I will clone/port to 4.0 as soon as the patch is merged. On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero wrote: > Hi all, > > Have tested on CentOS Linux

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: Release notes (please read and contribute)

2018-02-09 Thread Pranith Kumar Karampuri
On Tue, Jan 30, 2018 at 3:40 AM, Shyam Ranganathan wrote: > Hi, > > I have posted an initial draft version of the release notes here [1]. > > I would like to *suggest* the following contributors to help improve and > finish the release notes by 06th Feb, 2017. As you read

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-09 Thread Pranith Kumar Karampuri
On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G wrote: > > > On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur wrote: > >> >> >> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa > > wrote: >> >>> All, >>> >>> One of our users

Re: [Gluster-devel] Glusterfs and Structured data

2018-02-06 Thread Pranith Kumar Karampuri
On Sun, Feb 4, 2018 at 5:09 PM, Raghavendra Gowdappa wrote: > All, > > One of our users pointed out to the documentation that glusterfs is not > good for storing "Structured data" [1], while discussing an issue [2]. Does > any of you have more context on the feasibility of

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-28 Thread Pranith Kumar Karampuri
+Ravi, +Raghavendra G On 25 Jan 2018 8:49 am, "Pranith Kumar Karampuri" <pkara...@redhat.com> wrote: > > > On 25 Jan 2018 8:43 am, "Lian, George (NSB - CN/Hangzhou)" < > george.l...@nokia-sbell.com> wrote: > > Hi, > > I suppose

Re: [Gluster-devel] [FAILED][master] tests/basic/afr/durability-off.t

2018-01-25 Thread Pranith Kumar Karampuri
On Thu, Jan 25, 2018 at 3:09 PM, Milind Changire wrote: > could AFR engineers check why tests/basic/afr/durability-off.t fails in > brick-mux mode; > The issue seems to be something with connections to the bricks at the time of mount. *09:30:04* dd: opening

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
s Sent: Wednesday, January 24, 2018 7:43 PM To: Pranith Kumar Karampuri <pkara...@redhat.com> Cc: Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com>; Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; Li, Deqian (NSB - CN/Hangzhou) <deqian...@nokia-s

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
On Wed, Jan 24, 2018 at 2:24 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, >In the same commit you mentioned earlier, there was this code > earlier: > -/* Returns 1 if the stat seems to be filled with zeroes. */ > -int > -nfs_zero_fille

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0)) > > if (buf->ia_ctime == ) > > return 1; > > > > return 0; > > } > > > > void > > gf_zero_fill_stat (struct iatt *buf) > > { > > // buf->ia

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
void gf_zero_fill_stat (struct iatt *buf) { // buf->ia_nlink = 0; buf->ia_ctime = 0; } Thanks & Best Regards George *From:* Lian, George (NSB - CN/Hangzhou) *Sent:* Friday, January 19, 2018 10:03 AM *To:* Pranith Kumar Karampuri <pkara...@redhat.com>; Zhou, Cynth
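A simplified sketch of the zero-filled-stat convention being debated in this thread: the writer clears a pair of fields that no valid stat would have zero together, and the reader treats such a stat as "not yet known" (so the NFS client will not cache it). The struct here is abbreviated and the helpers are illustrative; the real GlusterFS struct iatt and functions have more fields and detail:

```c
#include <assert.h>
#include <stdint.h>

/* Abbreviated stand-in for GlusterFS's struct iatt; only the two
 * fields involved in the zero-fill convention are shown. */
struct iatt {
    uint32_t ia_nlink;
    int64_t  ia_ctime;
};

/* Mark a stat as "zero-filled" so downstream consumers ignore it. */
static void gf_zero_fill_stat(struct iatt *buf)
{
    buf->ia_nlink = 0;
    buf->ia_ctime = 0;
}

/* A live file always has ia_nlink > 0, so both fields being zero is
 * taken to mean "this stat was deliberately blanked". */
static int gf_is_zero_filled_stat(const struct iatt *buf)
{
    return (buf->ia_nlink == 0) && (buf->ia_ctime == 0);
}
```

The thread's proposed change (commenting out the ia_nlink assignment and checking only ia_ctime) narrows this convention to a single field.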

Re: [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

2018-01-19 Thread Pranith Kumar Karampuri
On Fri, Jan 19, 2018 at 6:19 AM, Shyam Ranganathan wrote: > On 01/18/2018 07:34 PM, Ravishankar N wrote: > > > > > > On 01/18/2018 11:53 PM, Shyam Ranganathan wrote: > >> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote: > >>> Hi, > >>> > >>> As release 3.13.1 is announced,

Re: [Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-18 Thread Pranith Kumar Karampuri
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice out your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] >

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-18 Thread Pranith Kumar Karampuri
*ls: cannot access '/mnt/test': No such file or directory* > > *root@ubuntu:~# mkdir -p /mnt/test* > > *root@ubuntu:~# mount -t glusterfs ubuntu:/test /mnt/test* > > > > *root@ubuntu:~# cd /mnt/test* > > *root@ubuntu:/mnt/test# echo "abc">aaa* > > *

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-17 Thread Pranith Kumar Karampuri
-- > -- > > There are no active volume tasks > > > > root@dhcp35-190 - ~ > > 16:44:38 :) ⚡ kill -9 5309 5351 5393 > > > > Best Regards, > > George > > *From:* gluster-devel-boun...@gluster.org [mailto:gluster-d

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-17 Thread Pranith Kumar Karampuri
On Mon, Jan 15, 2018 at 1:55 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Mon, Jan 15, 2018 at 8:46 AM, Lian, George (NSB - CN/Hangzhou) < > george.l...@nokia-sbell.com> wrote: > >> Hi, >> >> >> >> Have you reprod

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-15 Thread Pranith Kumar Karampuri
egards, > > George > > > > *From:* Lian, George (NSB - CN/Hangzhou) > *Sent:* Thursday, January 11, 2018 2:01 PM > *To:* Pranith Kumar Karampuri <pkara...@redhat.com> > *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; > Gluster-devel@g

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-10 Thread Pranith Kumar Karampuri
*From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@ > gluster.org] *On Behalf Of *Pranith Kumar Karampuri > *Sent:* Wednesday, January 10, 2018 8:08 PM > *To:* Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com> > *Cc:* Zhou, Cynthia (NSB - CN/

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-10 Thread Pranith Kumar Karampuri
On Wed, Jan 10, 2018 at 11:09 AM, Lian, George (NSB - CN/Hangzhou) < george.l...@nokia-sbell.com> wrote: > Hi, Pranith Kumar, > > > > I has create a bug on Bugzilla https://bugzilla.redhat.com/ > show_bug.cgi?id=1531457 > > After my investigation for this link issue, I suppose your changes on >

Re: [Gluster-devel] reflink support for glusterfs and gluster-block using it for taking snapshots

2017-12-08 Thread Pranith Kumar Karampuri
On Thu, Nov 9, 2017 at 8:26 PM, Niels de Vos <nde...@redhat.com> wrote: > On Tue, Nov 07, 2017 at 05:59:32PM +0530, Pranith Kumar Karampuri wrote: > > On Tue, Nov 7, 2017 at 5:16 PM, Niels de Vos <nde...@redhat.com> wrote: > > > > > On Tue, Nov 07, 201

Re: [Gluster-devel] Need help figuring out the reason for test failure

2017-11-27 Thread Pranith Kumar Karampuri
hanks for the pointer. Able to get the logs now! > > On Tue, Nov 28, 2017 at 8:06 AM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> One of my patches(https://review.gluster.org/18857) is consistently >> leading to a failure for the test: >> >> tes

[Gluster-devel] Need help figuring out the reason for test failure

2017-11-27 Thread Pranith Kumar Karampuri
One of my patches(https://review.gluster.org/18857) is consistently leading to a failure for the test: tests/bugs/core/bug-1432542-mpx-restart-crash.t https://build.gluster.org/job/centos6-regression/7676/consoleFull Jeff/Atin, Do you know anything about these kinds of failures for this

Re: [Gluster-devel] Tie-breaker logic for blocking inodelks/entrylks

2017-11-09 Thread Pranith Kumar Karampuri
:56 AM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > This github issue <https://github.com/gluster/glusterfs/issues/354> talks > about the logic for implementing tie-breaker logic for blocking > inodelks/entrylks. Your comments are welcome on the issue. > > --

[Gluster-devel] Tie-breaker logic for blocking inodelks/entrylks

2017-11-09 Thread Pranith Kumar Karampuri
This github issue (https://github.com/gluster/glusterfs/issues/354) describes the proposed tie-breaker logic for blocking inodelks/entrylks. Your comments are welcome on the issue. -- Pranith

[Gluster-devel] Thin-arbiter design to solve stretch cluster usecase

2017-11-07 Thread Pranith Kumar Karampuri
hi, I created a new issue which shares a link to the design document for this feature. Please add comments to the design document itself rather than the github issue, so that all comments are in one place. Thanks in advance. -- Pranith

Re: [Gluster-devel] reflink support for glusterfs and gluster-block using it for taking snapshots

2017-11-07 Thread Pranith Kumar Karampuri
On Tue, Nov 7, 2017 at 5:16 PM, Niels de Vos <nde...@redhat.com> wrote: > On Tue, Nov 07, 2017 at 07:43:17AM +0530, Pranith Kumar Karampuri wrote: > > hi, > > I just created a github issue for reflink support > > <https://github.com/gluster/glusterfs/issu

[Gluster-devel] reflink support for glusterfs and gluster-block using it for taking snapshots

2017-11-06 Thread Pranith Kumar Karampuri
hi, I just created a github issue for reflink support (#349) in glusterfs. We are intending to use this feature to do block snapshots in gluster-block. Please let us know your comments on the github issue. I have added the changes we may need

Re: [Gluster-devel] tests/basic/pump.t - what is it used for?

2017-09-08 Thread Pranith Kumar Karampuri
It was used for testing pump xlator functionality. When replace-brick is done on a distribute volume, it would lead to the pump xlator migrating data from the source brick to the destination brick. I guess we can delete this test. I don't think we support the pump xlator anymore. On Fri, Sep 8, 2017 at 10:02 AM,

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-23 Thread Pranith Kumar Karampuri
gt; > *To: *"Ashish Pandey" <aspan...@redhat.com> > *Cc: *"Pranith Kumar Karampuri" <pkara...@redhat.com>, "Xavier Hernandez" > <xhernan...@datalab.es>, "Gluster Devel" <gluster-devel@gluster.org> > *Sent: *Wednesday, August

Re: [Gluster-devel] Changing the relative order of read-ahead and open-behind

2017-07-24 Thread Pranith Kumar Karampuri
On Mon, Jul 24, 2017 at 5:11 PM, Raghavendra G wrote: > > > On Fri, Jul 21, 2017 at 6:39 PM, Vijay Bellur wrote: > >> >> On Fri, Jul 21, 2017 at 3:26 AM, Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> Hi all, >>> >>> We've a bug [1],

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.12: Status of features (Require responses!)

2017-07-24 Thread Pranith Kumar Karampuri
On Sat, Jul 22, 2017 at 1:36 AM, Shyam wrote: > Hi, > > Prepare for a lengthy mail, but needed for the 3.12 release branching, so > here is a key to aid the impatient, > > Key: > 1) If you asked for an exception to a feature (meaning delayed backport to > 3.12 branch post

Re: [Gluster-devel] Error while mounting gluster volume

2017-07-20 Thread Pranith Kumar Karampuri
The following generally means it is not able to connect to any of the glusterds in the cluster. [1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140 (Success) [1970-01-02 10:54:04.420422] I [MSGID: 101190]

Re: [Gluster-devel] GlusterFS v3.12 - Nearing deadline for branch out

2017-07-17 Thread Pranith Kumar Karampuri
hi, Status of the following features targeted for 3.12: 1) Need a way to resolve split-brain (#135) : Mostly will be merged in a day. 2) Halo Hybrid mode (#217): Unfortunately didn't get time to follow up on this, so will not make it to the release. 3) Implement heal throttling (#255):

Re: [Gluster-devel] [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #162

2017-07-17 Thread Pranith Kumar Karampuri
Ah! sorry, I need to keep this in mind next time when I review. On Mon, Jul 17, 2017 at 6:30 PM, Atin Mukherjee wrote: > kill $pid0 > > kill $pid1 > > EXPECT_WITHIN $CHILD_UP_TIMEOUT "4" ec_child_up_count $V0 0 > > *21:03:33* not ok 17 Got "0" instead of "4",

Re: [Gluster-devel] Regarding positioning of nl-cache in gluster client stack

2017-07-17 Thread Pranith Kumar Karampuri
On Mon, Jul 17, 2017 at 11:31 AM, Krutika Dhananjay wrote: > Hi Poornima and Pranith, > > I see that currently glusterd loads nl-cache between stat-prefetch and > open-behind on the client stack. Were there any specific considerations for > selecting this position for

Re: [Gluster-devel] [Gluster-users] Replicated volume, one slow brick

2017-07-15 Thread Pranith Kumar Karampuri
Adding gluster-devel. Raghavendra, I remember us discussing handling these kinds of errors via ping-timer expiry? I may have missed the final decision on how this was to be handled, so asking you again ;-) On Thu, Jul 13, 2017 at 2:14 PM, Øyvind Krosby wrote:

Re: [Gluster-devel] Forgot how to download NetBSD logfiles

2017-07-14 Thread Pranith Kumar Karampuri
On Fri, Jul 14, 2017 at 9:22 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, >I am not able to download the "Logs archived in > http://nbslave74.cloud.gluster.org/archives/logs/ > glusterfs-logs-20170713080233.tgz" > > I get the followin

[Gluster-devel] Forgot how to download NetBSD logfiles

2017-07-14 Thread Pranith Kumar Karampuri
hi, I am not able to download the "Logs archived in http://nbslave74.cloud.gluster.org/archives/logs/glusterfs-logs-20170713080233.tgz " I get the following error: archives/logs/glusterfs-logs-20170713080233.tgz We had to modify something in the URL to get the correct link but now either

[Gluster-devel] crash in tests/bugs/core/bug-1432542-mpx-restart-crash.t

2017-07-13 Thread Pranith Kumar Karampuri
I just observed that https://build.gluster.org/job/centos6-regression/5433/consoleFull failed because of this .t failure. -- Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-13 Thread Pranith Kumar Karampuri
this out and let you know. > > > > Thanks and Regards, > > Ram > > *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com] > *Sent:* Monday, July 10, 2017 8:31 AM > *To:* Sanoj Unnikrishnan > *Cc:* Ankireddypalle Reddy; Gluster Devel (gluster-devel@gluster.org)

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Pranith Kumar Karampuri
> - > Taehwa Lee > Gluesys Co.,Ltd. > alghost@gmail.com > +82-10-3420-6114, +82-70-8785-6591 > - > > On 13 Jul 2017, at 1:06 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: > > hey, &g

Re: [Gluster-devel] create restrictions xlator

2017-07-12 Thread Pranith Kumar Karampuri
hey, I went through the patch. I see that statfs is always wound for the create fop, so the number of network operations increases and performance suffers even in the normal case. I think similar functionality already exists in DHT; maybe you should take a look at that? Check dht_get_du_info() which is used

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
glusterfs6sds.commvault.com:/ws/disk8/ws_brick > > Brick58: glusterfs4sds.commvault.com:/ws/disk9/ws_brick > > Brick59: glusterfs5sds.commvault.com:/ws/disk9/ws_brick > > Brick60: glusterfs6sds.commvault.com:/ws/disk9/ws_brick > > Options Reconfigured: > > perf

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
e issue. The bricks were > mounted after the reboot. One more thing that I noticed was when the > attributes were manually set when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd set attributes > and then start glusterd. After that the volume sta

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd set attributes > and then start glusterd. After that the volume start succeeded. > Which version is this? > > > Thanks and Regards, > > Ram > > > > *From:* Pra

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 7, 2017 at 9:15 PM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > Did anything special happen on these two bricks? It can't happen in the > I/O path: > posix_removexattr() has: > 0 if (!strcmp (GFID_XATTR_KEY, name)) > { > > > 1

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has: 0 if (!strcmp (GFID_XATTR_KEY, name)) { 1 gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED, 2 "Remove xattr called on gfid
