On Thu, Jun 21, 2018 at 7:14 AM, Raghavendra Gowdappa
wrote:
>
>
> On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa wrote:
>
>>
>>
>> On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez
>> wrote:
>>
>>> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa <
>>> rgowd...@redhat.com> wrote:
From: gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@gluster.org] On Behalf Of Pranith Kumar Karampuri
> Sent: Wednesday, January 10, 2018 8:08 PM
> To: Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com>
> Cc: Zhou, Cynthia (NSB - CN/
On Wed, Jan 10, 2018 at 11:09 AM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:
> Hi, Pranith Kumar,
>
>
>
> I have created a bug on Bugzilla:
> https://bugzilla.redhat.com/show_bug.cgi?id=1531457
>
> After my investigation of the issue in this link, I suppose your changes on
>
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa
wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>
On Fri, Jan 19, 2018 at 6:19 AM, Shyam Ranganathan
wrote:
> On 01/18/2018 07:34 PM, Ravishankar N wrote:
> >
> >
> > On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:
> >> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:
> >>> Hi,
> >>>
> >>> As release 3.13.1 is announced,
--
> --
>
> There are no active volume tasks
>
>
>
> root@dhcp35-190 - ~
>
> 16:44:38 :) ⚡ kill -9 5309 5351 5393
>
>
>
> Best Regards,
>
> George
>
> From: gluster-devel-boun...@gluster.org [mailto:gluster-d
ls: cannot access '/mnt/test': No such file or directory
>
> root@ubuntu:~# mkdir -p /mnt/test
>
> root@ubuntu:~# mount -t glusterfs ubuntu:/test /mnt/test
>
>
>
> root@ubuntu:~# cd /mnt/test
>
> root@ubuntu:/mnt/test# echo "abc">aaa
>
> Regards,
>
> George
>
>
>
> From: Lian, George (NSB - CN/Hangzhou)
> Sent: Thursday, January 11, 2018 2:01 PM
> To: Pranith Kumar Karampuri <pkara...@redhat.com>
> Cc: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>;
> Gluster-devel@g
Sent: Wednesday, January 24, 2018 7:43 PM
To: Pranith Kumar Karampuri <pkara...@redhat.com>
Cc: Lian, George (NSB - CN/Hangzhou) <george.l...@nokia-sbell.com>; Zhou,
Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>; Li, Deqian (NSB
- CN/Hangzhou) <deqian...@nokia-s
On Thu, Jan 25, 2018 at 3:09 PM, Milind Changire
wrote:
> Could AFR engineers check why tests/basic/afr/durability-off.t fails in
> brick-mux mode?
>
Issue seems to be something with connections to the bricks at the time of
mount.
09:30:04 dd: opening
+Ravi, +Raghavendra G
On 25 Jan 2018 8:49 am, "Pranith Kumar Karampuri" <pkara...@redhat.com>
wrote:
>
>
> On 25 Jan 2018 8:43 am, "Lian, George (NSB - CN/Hangzhou)" <
> george.l...@nokia-sbell.com> wrote:
>
> Hi,
>
> I suppose
On Sun, Feb 4, 2018 at 5:09 PM, Raghavendra Gowdappa
wrote:
> All,
>
> One of our users pointed to the documentation, which says that glusterfs is
> not good for storing "Structured data" [1], while discussing an issue [2]. Do
> any of you have more context on the feasibility of
On Thu, Feb 8, 2018 at 12:05 PM, Raghavendra G
wrote:
>
>
> On Tue, Feb 6, 2018 at 8:15 PM, Vijay Bellur wrote:
>
>>
>>
>> On Sun, Feb 4, 2018 at 3:39 AM, Raghavendra Gowdappa wrote:
>>
>>> All,
>>>
>>> One of our users
On Tue, Jan 30, 2018 at 3:40 AM, Shyam Ranganathan
wrote:
> Hi,
>
> I have posted an initial draft version of the release notes here [1].
>
> I would like to *suggest* the following contributors to help improve and
> finish the release notes by 6th Feb, 2018. As you read
On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote:
> New addition - tests/basic/volume.t - failed at least twice with shd core.
>
> One such ref -
> https://build.gluster.org/job/centos7-regression/2058/console
>
I will take a look.
>
>
> On Thu, Aug 2, 2018 at 6:28 PM Sankarshan
On Thu, Aug 2, 2018 at 10:03 PM Pranith Kumar Karampuri
wrote:
>
>
> On Thu, Aug 2, 2018 at 7:19 PM Atin Mukherjee wrote:
>
>> New addition - tests/basic/volume.t - failed at least twice with shd core.
>>
>> One such ref -
>> https://build.gluster.org/job/cent
n that we haven't designed (or even listed) all the potential
> action()s, I can't give you a list of everything to query. I guarantee
> we'll need to know the up/down status, heal counts, and free capacity for
> each brick and node.
>
Thanks for the detailed explanation. This h
On Thu, Jul 26, 2018 at 9:59 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Wed, Jul 25, 2018 at 10:48 PM, John Strunk wrote:
>
>> I have not put together a list. Perhaps the following will help w/ the
>> context though...
>>
>> The
I think EC/AFR/Quota components will definitely be affected with this
approach. CCing them.
Please feel free to CC anyone who works on commands that require a mount to
give status.
>
> On Thu, Jul 26, 2018 at 12:57 AM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
https://build.gluster.org/job/devrpm-el7/10441/console
10:12:42 Wrote: /home/jenkins/root/workspace/devrpm-el7/extras/LinuxRPM/rpmbuild/SRPMS/glusterfs-4.2dev-0.240.git4657137.el7.src.rpm
10:12:42 mv rpmbuild/SRPMS/* .
10:12:44 INFO: mock.py version 1.4.11 starting
(python version =
On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan
wrote:
> Today's patch set 7 [1], included fixes provided till last evening IST,
> and its runs can be seen here [2] (yay! we can link to comments in
> gerrit now).
>
> New failures: (added to the spreadsheet)
>
On Wed, Aug 8, 2018 at 5:08 AM Shyam Ranganathan
wrote:
> Deserves a new beginning, threads on the other mail have gone deep enough.
>
> NOTE: (5) below needs your attention, rest is just process and data on
> how to find failures.
>
> 1) We are running the tests using the patch [2].
>
> 2) Run
On Mon, Aug 13, 2018 at 10:55 PM Shyam Ranganathan
wrote:
> On 08/13/2018 02:20 AM, Pranith Kumar Karampuri wrote:
> > - At the end of 2 weeks, reassess master and nightly test status, and
> > see if we need another drive towards stabilizing master by locking
> dow
On Mon, Aug 13, 2018 at 6:05 AM Shyam Ranganathan
wrote:
> Hi,
>
> So we have had master locked down for a week to ensure we only get fixes
> for failing tests in order to stabilize the code base, partly for
> release-5 branching as well.
>
> As of this weekend, we (Atin and myself) have been
On Fri, Aug 10, 2018 at 6:34 AM Shyam Ranganathan
wrote:
> Today's test results are updated in the spreadsheet in sheet named "Run
> patch set 8".
>
> I took in patch https://review.gluster.org/c/glusterfs/+/20685 which
> caused quite a few failures, so not updating new failures as issue yet.
>
On Thu, Aug 9, 2018 at 4:19 PM Nigel Babu wrote:
> Infra issue. Please file a bug.
>
https://bugzilla.redhat.com/show_bug.cgi?id=1614631
Thanks!
> On Thu, Aug 9, 2018 at 3:57 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> https://build.glust
On Fri, Aug 10, 2018 at 8:54 AM Raghavendra Gowdappa
wrote:
> All,
>
> Details can be found at:
> https://build.gluster.org/job/centos7-regression/2190/console
>
> Process that core dumped: glfs_shdheal
>
> Note that the patch on which this regression failures is on readdir-ahead
> which is not
On Thu, Aug 9, 2018 at 4:02 PM Pranith Kumar Karampuri
wrote:
>
>
> On Thu, Aug 9, 2018 at 6:34 AM Shyam Ranganathan
> wrote:
>
>> Today's patch set 7 [1], included fixes provided till last evening IST,
>> and its runs can be seen here [2] (yay! we can link t
+Ravi
On Thu, Jul 19, 2018 at 2:29 PM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:
> Hi, Gluster Experts,
>
>
>
> In glusterfs version 3.12.3, there seems to be a “fstat” ctime issue after
> we use fsync.
>
> We have a demo executable which writes some data and then
On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:
> On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri
> wrote:
> > hi,
> > Quite a few commands to monitor gluster at the moment take almost a
understand
what this might entail already...
>
> -John
>
>
> On Wed, Jul 25, 2018 at 5:45 AM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay <
>> sankarshan.muk
hi,
Quite a few commands to monitor gluster at the moment take almost a
second to give output.
Some categories of these commands:
1) Any command that needs to do some sort of mount/glfs_init.
Examples: 1) heal info family of commands 2) statfs to find
space-availability etc (On my
Which version of gluster is this?
On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi glusterfs experts:
>
>Good day!
>
> Recently, when I did some tests on my gluster env, I found that there
> are some remaining entries in
On Wed, Sep 5, 2018 at 1:27 PM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi glusterfs experts:
>
>Good day!
>
> Recently, when I did some tests on my gluster env, I found that there
> are some remaining entries in the output of the command “gluster v heal mstate info”
nt/export/testdir/common.txt on sn-0 node
>
> 8> "gluster v heal export info" will show the following and keep it for a long time
>
> # gluster v heal export info
>
> Brick sn-0.local:/mnt/bricks/export/brick
>
> /testdir
>
> Status: Connected
>
> Number of entries: 1
Thanks a lot!
On Wed, Sep 5, 2018 at 4:55 PM Anoop C S wrote:
> On Wed, 2018-09-05 at 16:08 +0530, Anoop C S wrote:
> > On Wed, 2018-09-05 at 15:44 +0530, Pranith Kumar Karampuri wrote:
> > > It also failed on 4.1
> https://build.gluster.org/job/fedora-smoke/1665/console
https://build.gluster.org/job/fedora-smoke/1668/console
I think it is happening because of missing tirpc changes in configure.ac?
There are a series of patches for libtirpc starting with
https://review.gluster.org/c/glusterfs/+/19235, I am not very good at
reading configure.ac except for the
It also failed on 4.1
https://build.gluster.org/job/fedora-smoke/1665/console
Looks like quite a few changes need to be ported for them to pass?
On Wed, Sep 5, 2018 at 3:41 PM Pranith Kumar Karampuri
wrote:
> https://build.gluster.org/job/fedora-smoke/1668/console
>
> I think it is
if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
>
> if (buf->ia_ctime == 0)
>
> return 1;
>
>
>
> return 0;
>
> }
>
>
>
> void
>
> gf_zero_fill_stat (struct iatt *buf)
>
> {
>
> // buf->ia
On Wed, Jan 24, 2018 at 2:24 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
>In the same commit you mentioned earlier, there was this code
> earlier:
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_fille
void
gf_zero_fill_stat (struct iatt *buf)
{
        // buf->ia_nlink = 0;
        buf->ia_ctime = 0;
}
Thanks & Best Regards
George
*From:* Lian, George (NSB - CN/Hangzhou)
*Sent:* Friday, January 19, 2018 10:03 AM
*To:* Pranith Kumar Karampuri <pkara...@redhat.com>; Zhou, Cynth
On Mon, Jan 15, 2018 at 1:55 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Mon, Jan 15, 2018 at 8:46 AM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
>> Hi,
>>
>>
>>
>> Have you reprod
Thanks Atin!
On Mon, Mar 12, 2018 at 9:49 AM, Atin Mukherjee <amukh...@redhat.com> wrote:
> Mohit is aware of this issue and currently working on a patch.
>
> On Mon, Mar 12, 2018 at 9:47 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>
hi,
In https://build.gluster.org/job/centos7-regression/274/consoleFull,
the test in $SUBJECT generated core. It seems to be segfaulting in quota.
But I didn't take a closer look.
--
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan
wrote:
> Hi,
>
> As we wind down on 4.0 activities (waiting on docs to hit the site, and
> packages to be available in CentOS repositories before announcing the
> release), it is time to start preparing for the 4.1 release.
On Tue, Mar 13, 2018 at 4:26 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Tue, Mar 13, 2018 at 1:51 PM, Amar Tumballi <atumb...@redhat.com>
> wrote:
>
>> >>
>>> >> Further, as we hit end of March, we would make it m
On Wed, Mar 14, 2018 at 8:27 PM, Amye Scavarda <a...@redhat.com> wrote:
> Responding on the architects question:
>
> On Tue, Mar 13, 2018 at 9:57 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Tue, Mar 13, 2018 a
Hey Manu,
Long time! Sorry for the delay.
On Sat, Mar 31, 2018 at 8:12 PM, Emmanuel Dreyfus wrote:
> Hello
>
> After doing a replace-brick and a full heal, I am left with:
>
> Brick bidon:/export/wd0e
> Status: Connected
> Number of entries: 0
>
> Brick
You can use posix-locks i.e. fnctl based advisory locks on glusterfs just
like any other fs.
On Wed, Apr 4, 2018 at 8:30 AM, Lei Gong wrote:
> Hello there,
>
>
>
> I want to know if there is a feature allow user to add lock on a file when
> their app is modifying that file, so
Shyam,
Do let me know if there is anything that needs to be done on the
process front.
On Mon, Mar 5, 2018 at 8:18 AM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:
> hi,
> We found that compound fops is not giving better performance in
> replicate and I am think
hi,
We found that compound fops is not giving better performance in
replicate and I am thinking of removing that code. Sent the patch at
https://review.gluster.org/19655
--
Pranith
On Mon, Mar 5, 2018 at 9:19 AM, Amar Tumballi wrote:
> Pranith,
>
>
>
>> We found that compound fops is not giving better performance in
>> replicate and I am thinking of removing that code. Sent the patch at
>> https://review.gluster.org/19655
>>
>>
> If I understand
On Mon, Mar 5, 2018 at 7:10 PM, Shyam Ranganathan <srang...@redhat.com>
wrote:
> On 03/04/2018 10:15 PM, Pranith Kumar Karampuri wrote:
> > Shyam,
> > Do let me know if there is anything that needs to be done on the
> > process front.
>
> I see that use-compo
I found the following memory leak present in 3.13, 4.0 and master:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078
I will clone/port to 4.0 as soon as the patch is merged.
On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero wrote:
> Hi all,
>
> Have tested on CentOS Linux
On Thu, Oct 4, 2018 at 2:15 PM Xavi Hernandez wrote:
> On Thu, Oct 4, 2018 at 9:47 AM Amar Tumballi wrote:
>
>>
>>
>> On Thu, Oct 4, 2018 at 12:54 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal <
>>> dkhan...@redhat.com> wrote:
>>>
Hello folks,
On Wed, Oct 3, 2018 at 11:20 PM 김경표 wrote:
> Hello folks.
>
> A few days ago I found my EC (4+2) volume was degraded.
> I am using 3.12.13-1.el7.x86_64.
> One brick was down, below is bricklog
> I suspect a loc->inode bug in index.c (see attached picture).
> In GDB, loc->inode is null.
>
>>
that the RC is correct
and then I will send out the fix.
>
> Regards,
> Abhishek
>
> On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL
>> wr
t in between.
>
But the crash happened inside exit(), whose code is in libc and which
doesn't access any data structures in glusterfs.
>
> Regards,
> Abhishek
>
> On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri wrote:
>
>
> On 3/29/19 11:55 PM, Xavi Hernandez wrote:
> > Hi all,
> >
> > there is one potential problem with posix locks when used in a
> > replicated or dispersed volume.
> >
> > Some background:
> >
> > Posix locks allow any process to lock a
On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee wrote:
> Hello folks,
>
> Lock self healing (recovery or replay) is added at
> https://review.gluster.org/#/c/glusterfs/+/2766/
>
> But it is removed at
> https://review.gluster.org/#/c/glusterfs/+/12363/
>
> I found some information about it at
>
On Thu, Mar 21, 2019 at 11:50 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Thu, Mar 21, 2019 at 9:15 AM Kinglong Mee
> wrote:
>
>> Hello folks,
>>
>> Lock self healing (recovery or replay) is added at
>> ht
On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa
> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez
>> wrote:
>>
>>> Hi Raghavendra,
>>>
>>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa <
>>>
On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Mar
On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez wrote:
> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Mar 2
e first problem is why the client gets disconnected and the server
> doesn't get any notification. The script is stopping bricks 2 and 3 when
> this happens. Brick 0 shouldn't fail here. It seems related to the
>
> The second problem is that when we receive a new connection from a cli
patch and
then get it merged again.
>
> -Amar
>
> On Wed, Apr 17, 2019 at 8:04 AM Atin Mukherjee
> wrote:
>
>>
>>
>> On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>
On Sat, May 25, 2019 at 10:22 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, May 24, 2019 at 10:57 PM FNU Raghavendra Manjunath <
> rab...@redhat.com> wrote:
>
>>
>> The idea looks OK. One of the things that probably need to be c
transport_t structure (and from my understanding, the transport->xid is
> just incremented every time a
> new rpc request is created).
>
> Overall the suggestion looks fine though.
>
I am planning to do the same thing transport->xid does. I will send out the
patch
> Regards
Hi,
At the moment, a new stack doesn't populate frame->root->unique in all
cases. This makes it difficult to debug hung frames by examining successive
state dumps. Fuse and server xlator populate it whenever they can, but
other xlators won't be able to assign one when they need to create a
On Fri, Apr 26, 2019 at 10:55 PM Junsong Li wrote:
> Hello list,
>
>
>
> I have a couple of questions on index translator implementation.
>
>- Why does gluster need callstub and a different worker queue (and
>thread) to process those call stubs? Is it just to lower the priority of
>
On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N
wrote:
>
> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote:
>
>
> Hi All,
>
> In glusterfs, there is an issue regarding the fallocate behavior. In
> short, if someone does fallocate from the mount point with some size that
> is greater than the
On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath
wrote:
>
>
> On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N
>> wrote:
>>
>>&