On 03/08/2016 08:09 PM, Pranith Kumar Karampuri wrote:
Sorry for the delay in responding. I am looking at this core. Will
update with my findings/patches.
I think this is happening because the data inside the dict is not guaranteed
to have refs at the time it is accessed just because we hold a ref on the
dict itself.
On 03/11/2016 10:16 PM, Jeff Darcy wrote:
Tier does send lookups serially, which fail on the hashed subvolumes of
the dhts. Both of them trigger lookup_everywhere, which is executed in epoll
threads, thus they are executed in parallel.
According to your earlier description, items are being
On 03/09/2016 10:40 AM, Kaushal M wrote:
On Tue, Mar 8, 2016 at 11:58 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
On 03/08/2016 07:32 PM, Pranith Kumar Karampuri wrote:
hi,
Late last week I sent a solution for how to achieve
subdirectory-mount support with access-co
hi,
I think this is the RCA for the issue:
Basically with distributed ec + distributed replicate as the cold and hot
tiers, tier sends a lookup which fails on ec. (By this time the dict already
contains ec xattrs.) After this the lookup_everywhere code path is hit in
tier, which triggers
hi Raghavendra,
Krutika showed me this code in write-behind about not honoring
O_DIRECT. 1) Is there any reason why we do flush-behind by default and
2) Not honor O_DIRECT in write-behind by default (strict-O_DIRECT option)?
Pranith
flush-behind default is probably fine as without fsync there is no
guarantee of file contents syncing to disk.
Pranith
On 03/16/2016 04:16 PM, Pranith Kumar Karampuri wrote:
hi Raghavendra,
Krutika showed me this code in write-behind about not honoring
O_DIRECT. 1) Is there any reason
On 04/13/2016 11:58 AM, Pranith Kumar Karampuri wrote:
On 04/13/2016 09:15 AM, Vijay Bellur wrote:
On 04/08/2016 10:25 PM, Vijay Bellur wrote:
Hey Pranith, Ashish -
We have broken support for group virt after the following commit in
release-3.7:
commit
On 04/13/2016 05:10 PM, Vijay Bellur wrote:
On 04/13/2016 03:20 AM, Pranith Kumar Karampuri wrote:
On 04/13/2016 11:58 AM, Pranith Kumar Karampuri wrote:
On 04/13/2016 09:15 AM, Vijay Bellur wrote:
On 04/08/2016 10:25 PM, Vijay Bellur wrote:
Hey Pranith, Ashish -
We have broken
On 04/12/2016 05:54 PM, Jeff Darcy wrote:
tier can lead to parallel lookups in two different epoll threads on
hot/cold tiers. The race window to hit the use-after-free on the common
dictionary in lookup is too low without dict_copy_with_ref() in either ec/afr.
On either the afr/ec side one thread should
Sorry for the delay in responding. I am looking at this core. Will
update with my findings/patches.
Pranith
On 03/08/2016 12:29 PM, Kotresh Hiremath Ravishankar wrote:
Hi All,
The regression run has caused the core to generate for below patch.
hi,
Late last week I sent a solution for how to achieve
subdirectory-mount support with access-controls
(http://www.gluster.org/pipermail/gluster-devel/2016-March/048537.html).
What follows here is a short description of how other features of
gluster volumes are implemented for
hi,
So far default quorum for 2-way replication is 'none' (i.e.
files/directories may go into split-brain) and for 3-way replication and
arbiter based replication it is 'auto' (files/directories won't go into
split-brain). There are requests to make default as 'auto' for 2-way
hat others have to say about this as well. If the majority of
users say they need it to be auto, you will definitely see a patch :-).
Pranith
Thanks,
Bipin Kunal
On Fri, Mar 4, 2016 at 5:43 PM, Ravishankar N <ravishan...@redhat.com> wrote:
On 03/04/2016 05:26 PM, Pranith Kumar Karampuri wrot
Hi,
This mail explains the initial design about how this will happen.
Administrators are going to create a directory on the volume with normal
fuse-mount(Or any other mounts) let's call it 'subdir1'.
Administrator will create auth-allow/reject options with the
ip/addresses he chooses to
On 03/04/2016 10:05 PM, Diego Remolina wrote:
I run a few two-node glusterfs instances, but always have a third
machine acting as an arbiter. I am with Jeff on this one, better safe
than sorry.
Setting up a 3rd system without bricks to achieve quorum is very easy.
This is server side
On 03/04/2016 08:36 PM, Shyam wrote:
On 03/04/2016 07:30 AM, Pranith Kumar Karampuri wrote:
On 03/04/2016 05:47 PM, Bipin Kunal wrote:
HI Pranith,
Thanks for starting this mail thread.
Looking from a user perspective most important is to get a "good copy"
of data. I agree t
On 03/04/2016 09:10 PM, Jeff Darcy wrote:
I like the default to be 'none'. Reason: if we have 'auto' as quorum for
2-way replication and the first brick dies, there is no HA. If users are
fine with that, it is better to use a plain distribute volume
"Availability" is a tricky word. Does it mean
On tests/bugs/tier/bug-1286974.t, I see the following crash for the run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/19421/consoleFull
#1 0x7f41e15ad2bb in syncenv_task (proc=0xca5540) at
I meant *Do* we. Sorry for the typo!
On 04/05/2016 11:33 AM, Pranith Kumar Karampuri wrote:
On tests/bugs/tier/bug-1286974.t, I see the following crash for the
run:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/19421/consoleFull
#1 0x7f41e15ad2bb in syncenv_task (proc
+Krutika
- Original Message -
> From: "Anoop C S" <anoo...@redhat.com>
> To: "Atin Mukherjee" <amukh...@redhat.com>
> Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Ravishankar N"
> <ravishan...@redhat.com&g
That sounds better. I will wait till evening for more suggestions and
change the name :-).
Pranith
On Thu, May 12, 2016 at 8:38 AM, Paul Cuzner <pcuz...@redhat.com> wrote:
> cluster.shd-max-heals ... would work for me :)
>
> On Thu, May 12, 2016 at 3:04 PM, Pranith Kumar Kara
hi
For multi-threaded self-heal, we have introduced a new option called
cluster.shd-max-threads, which is confusing people who think that many new
threads are going to be launched to perform heals, whereas all it does is
increase the number of parallel heals multi-tasked by the syncop framework. So
On Fri, Apr 29, 2016 at 12:37 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Pranith,
>
> You had a concern of consuming I/O threads when bit-rot uses rchecksum
> interface to
> signing, normal scrubbing and on-demand scrubbing with tiering.
>
>
>
On Fri, Jul 22, 2016 at 7:39 PM, Nithya Balachandran
wrote:
>
>
> On Fri, Jul 22, 2016 at 7:31 PM, Jeff Darcy wrote:
>
>> > I attempted to get us more space on NetBSD by creating a new partition
>> called
>> > /data and putting /build as a symlink to
On Fri, Jul 22, 2016 at 4:46 PM, Nigel Babu wrote:
> Hello,
>
> I attempted to get us more space on NetBSD by creating a new partition
> called
> /data and putting /build as a symlink to /data/build. This has caused
> problems
> with tests/basic/quota.t. It's marked as bad for
On Fri, Jul 22, 2016 at 8:12 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> I am playing with the following diff, let me see.
>
> diff --git a/tests/volume.rc b/tests/volume.rc
> index 331a802..b288508 100644
> --- a/tests/volume.rc
> +++ b/tests/volu
On Fri, Jul 22, 2016 at 7:42 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Fri, Jul 22, 2016 at 7:39 PM, Nithya Balachandran <nbala...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Jul 22, 2016 at 7:31 PM, Jeff Darcy <jda...@redhat
16 at 7:44 PM, Nithya Balachandran <nbala...@redhat.com>
wrote:
>
>
> On Fri, Jul 22, 2016 at 7:42 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, Jul 22, 2016 at 7:39 PM, Nithya Balachandran <nbala...@redhat.com
>>
On Fri, Jul 22, 2016 at 7:07 PM, Jeff Darcy wrote:
> > Gah! sorry sorry, I meant to send the mail as SIGTERM. Not SIGKILL. So
> xavi
> > and I were wondering why cleanup_and_exit() is not sending GF_PARENT_DOWN
> > event.
>
> OK, then that grinding sound you hear is my brain
hi,
Does anyone have a complete understanding of the keepalive timeout vs TCP
user timeout (UTO) options? For both afr and EC, when the server reboots it
takes 42 seconds for the fops to fail with ENOTCONN
(saved_frames_unwind()). I am wondering if there is any way to reduce this
time by playing
h a
> ref in the options dictionary of old xlator.
>
> Regards
> Rafi KC
>
>
> On 07/14/2016 08:28 AM, Pranith Kumar Karampuri wrote:
>
> hi,
> I wanted to remove 'get_new_dict()', 'dict_destroy()' usage
> throughout the code base to prevent people from using
On Fri, Jul 8, 2016 at 7:27 PM, Jeff Darcy wrote:
> (combining replies to multiple people)
>
> Pranith:
> > I agree about encouraging specific kind of review. At the same time we
> need
> > to make reviewing, helping users in the community as important as sending
> > patches
On Thu, Jul 14, 2016 at 11:29 PM, Joe Julian <j...@julianfamily.org> wrote:
> On 07/07/2016 08:58 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy <jda...@redhat.com> wrote:
>
>> > What gets measured gets man
On Fri, Jul 15, 2016 at 1:09 AM, Jeff Darcy wrote:
> > The feedback I got is, "it is not motivating to review patches that are
> > already merged by maintainer."
>
> I can totally understand that. I've been pretty active reviewing lately,
> and it's an *awful* demotivating
On Fri, Jul 15, 2016 at 1:25 AM, Jeff Darcy wrote:
> > I absolutely hate what '-1' means though, it says 'I would prefer you
> > didn't submit this'. Somebody who doesn't know what he/she is doing still
> > goes ahead and sends his/her first patch and we say 'I would prefer
On Fri, Jul 22, 2016 at 10:25 PM, Jeff Darcy wrote:
> > Based on what I saw in code, this seems to get the job done. Comments
> > welcome:
> > http://review.gluster.org/14988
>
> Good thinking. Thanks, Pranith!
>
Nithya clarified my doubts as well on IRC :-).
--
Pranith
On Sat, Jul 23, 2016 at 8:02 PM, Emmanuel Dreyfus <m...@netbsd.org> wrote:
> Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
>
> > So should we do readdir() with external locks for everything instead?
>
> readdir() with a per-directory lock is safe. However, it
Thanks Atin
On Sat, Jul 23, 2016 at 7:29 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
>
> On Saturday 23 July 2016, Pranith Kumar Karampuri <pkara...@redhat.com>
> wrote:
>
>> If someone could give +1 on 3.7 backport
>> http://review.gluster.org/#/c/14
On Sat, Jul 23, 2016 at 10:17 AM, Nithya Balachandran <nbala...@redhat.com>
wrote:
>
>
> On Sat, Jul 23, 2016 at 9:45 AM, Nithya Balachandran <nbala...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Jul 22, 2016 at 9:07 PM, Pranith Kumar Karampuri <
>
Emmanuel,
I procrastinated too long on this :-/. It is July already :-(. I
just looked at the man page on Linux and it is a bit confusing, so I am not
sure how to go ahead.
For readdir_r(), I see:
DESCRIPTION
This function is deprecated; use readdir(3) instead.
The
I see both of your names in git blame output.
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22439/console
has more information about the failure. This failure happened on
http://review.gluster.org/#/c/14985/ which changes only .t files so I
believe the reason for the failure to
On Sat, Jul 30, 2016 at 7:05 AM, Kaushal Madappa <kaus...@redhat.com> wrote:
> On 29 Jul 2016 23:16, "Pranith Kumar Karampuri" <pkara...@redhat.com>
> wrote:
> >
> > Krutika RC'd that I missed a patch which broke virt usecase.
> http://review.gluster.org
On Tue, Aug 2, 2016 at 4:57 PM, Aravinda wrote:
> Hi,
>
> As many of you aware, Gluster Eventing feature is available in Master. To
> add support to listen to the Events from GlusterFS Clients following
> changes are identified
>
> - Change in Eventsd to listen to tcp socket
hi Nigel,
When we rebase a patch by changing only the
commit message/description it rightly doesn't re-trigger the regression,
but the regression results still come from the patchset before the rebase, so
the +1s are not appearing. You can see http://review.gluster.org/#/c/15070/
as an
On Tue, Aug 2, 2016 at 8:21 PM, Vijay Bellur wrote:
> On 08/02/2016 07:27 AM, Aravinda wrote:
>
>> Hi,
>>
>> As many of you aware, Gluster Eventing feature is available in Master.
>> To add support to listen to the Events from GlusterFS Clients following
>> changes are
e problem, so I am not sure by when it
will be ready. Let me build the rpms, I have a meeting now for around an
hour. I will start building rpms after that.
>
> Serkan
>
> On Thu, Aug 11, 2016 at 5:52 AM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
> >
> >
I will inform you after finding that out. Give me a day.
>
> On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
> >
> >
> > On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri
> > <pkara...@redhat.com> wrote:
&g
c configuration.
>
> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
> >
> >
> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
> >>
> >> Hi,
> >>
> >
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote:
> Hi,
>
> May I ask if multi-threaded self heal for distributed disperse volumes
> implemented in this release?
>
Serkan,
At the moment I am a bit busy with different work. Is it possible
for you to help test the
, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
> >
> >
> > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
> >>
> >> I have two of my storage servers free, I think I can use them fo
We would definitely love patches. I think you should follow how the
ftruncate fop is implemented when implementing a similar fop.
Take a look at:
1) ec_gf_ftruncate
2) ec_ftruncate
3) ec_wind_ftruncate
4) ec_manager_truncate(truncate, ftruncate reuse this function)
Feel free to send any more
RCA for this crawl problem. Let me know your decision. If you
are okay with testing progressive versions of this feature, that would be
great. We can compare how each patch improved the performance.
Pranith
>
> On Thu, Aug 4, 2016 at 10:16 AM, Pranith Kumar Karampuri
> <pkara...@redhat.
hi,
I wanted to remove 'get_new_dict()', 'dict_destroy()' usage
throughout the code base to prevent people from using it wrong. Regression for
that patch http://review.gluster.org/13183 kept failing and I found that
the 'xl->options' dictionary is created using get_new_dict() i.e. it
On Mon, Jul 18, 2016 at 4:18 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi,
>
> The above mentioned test has failed for the patch
> http://review.gluster.org/#/c/14927/1
> and is not related to my patch. Can someone from AFR team look into it?
>
>
>
Just went through the commit message. I think similar to attaching, if we
also have detaching, then we can simulate killing of bricks in afr using
this approach maybe? Even remove-brick can do the same, I guess.
On Sat, Jul 16, 2016 at 12:09 AM, Jeff Darcy wrote:
> For those
Cool
On Sat, Jul 16, 2016 at 8:13 AM, Jeff Darcy wrote:
> > Just went through the commit message. I think similar to attaching if we
> also
> > have detaching, then we can simulate killing of bricks in afr using this
> > approach may be?
>
> Yes, that's pretty much the plan.
On Mon, Jun 27, 2016 at 2:38 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>
>
> - Original Message -
> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > Cc: &quo
Is it possible to share the test you are running? As per your volume info,
o-direct is not enabled on your volume, i.e. the file shouldn't be opened
with o-direct, but as per the logs it is giving Invalid Argument as if there
is something wrong with the arguments when we do an o-direct write with wrong
+Nigel
On Fri, Jul 8, 2016 at 7:42 AM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:
> What gets measured gets managed. It is good that you started this thread.
> Problem is two fold. We need a way to first find people who are reviewing a
> lot and give them mo
What gets measured gets managed. It is good that you started this thread.
Problem is two fold. We need a way to first find people who are reviewing a
lot and give them more karma points in the community by encouraging that
behaviour(making these stats known to public lets say in monthly news
On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy wrote:
> > What gets measured gets managed.
>
> Exactly. Reviewing is part of everyone's job, but reviews aren't tracked
> in any way that matters. Contrast that with the *enormous* pressure most
> of us are under to get our own
Could you take in http://review.gluster.org/#/c/14598/ as well? It is ready
for merge.
On Thu, Jul 7, 2016 at 3:02 PM, Atin Mukherjee wrote:
> Can you take in http://review.gluster.org/#/c/14861 ?
>
>
> On Thursday 7 July 2016, Kaushal M wrote:
>
>> On
On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah
wrote:
>
> Completely agree with your concern here. Keeping aside the regression
> part, few observations and suggestions:
> As per the Maintainers guidelines (
>
On Wed, Jul 6, 2016 at 12:24 AM, Shyam wrote:
> On 07/01/2016 01:45 AM, B.K.Raghuram wrote:
>
>> I have not gone through this implementation nor the new iscsi
>> implementation being worked on for 3.9 but I thought I'd share the
>> design behind a distributed iscsi
chandran <
>>> nbala...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jul 22, 2016 at 9:07 PM, Pranith Kumar Karampuri <
>>>> pkara...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>&
Does anyone know why GF_PARENT_DOWN is not triggered on SIGKILL? It will
give a chance for xlators to do any cleanup they need to do. For example ec
can complete the delayed xattrops.
--
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
Gah! sorry sorry, I meant to send the mail as SIGTERM. Not SIGKILL. So xavi
and I were wondering why cleanup_and_exit() is not sending GF_PARENT_DOWN
event.
On Fri, Jul 22, 2016 at 6:24 PM, Jeff Darcy wrote:
> > Does anyone know why GF_PARENT_DOWN is not triggered on SIGKILL?
It is only calling fini(); apart from that, not much.
On Fri, Jul 22, 2016 at 6:36 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> Gah! sorry sorry, I meant to send the mail as SIGTERM. Not SIGKILL. So
> xavi and I were wondering why cleanup_and_exit() is not sending
>
found that in cleanup_and_exit() we don't send this event. We are only
calling 'fini()'. So wondering if anyone knows why this is so.
On Fri, Jul 22, 2016 at 6:37 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> It is only calling fini() apart from that not much.
>
> On Fr
I'm interested in this as well.
On Wed, Aug 17, 2016 at 12:00 AM, Amye Scavarda wrote:
> Hi all,
> As we get closer to the CfP wrapping up (August 31, per
> http://www.gluster.org/pipermail/gluster-users/2016-August/028002.html) -
> we'll be looking for 3-4 people for the
volume start $V0 force
>> > 82 EXPECT_WITHIN $CHILD_UP_TIMEOUT "3" ec_child_up_count $V0 0
>> > 83 TEST chown root:root $M0/{a,b,c,d}
>> > 84 TEST $CLI volume set $V0 disperse.background-heals 0
>> > 85 EXPECT_NOT "0" mo
It is a bug in the EC name-heal code path. I sent a fix, but review.gluster.org
is not accessible right now to paste the link here. I will send a mail again
once it is accessible.
On Fri, Jan 27, 2017 at 5:41 PM, Jeff Darcy wrote:
> > Few of the failure links:
> >
> >
Xavi, Ashish,
https://review.gluster.org/#/c/16468/ is the patch. I found that
ec_need_heal is not considering size/permission changes on the backends, which
is causing spurious failures as well. I will be sending out a patch to fix
them all.
On Sat, Jan 28, 2017 at 3:56 PM, Pranith Kumar
On Mon, Feb 20, 2017 at 10:26 PM, Shyam <srang...@redhat.com> wrote:
> On 02/20/2017 11:29 AM, Pranith Kumar Karampuri wrote:
>
>>
>>
>> On Mon, Feb 20, 2017 at 7:57 PM, Pranith Kumar Karampuri
>> <pkara...@redhat.com <mailto:pkara...@redhat.com>>
On Mon, Feb 20, 2017 at 7:57 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Mon, Feb 20, 2017 at 8:25 AM, Shyam <srang...@redhat.com> wrote:
>
>> Hi,
>>
>> RC1 tagging is *tentatively* scheduled for 21st Feb, 2017
>>
>> T
On Mon, Feb 20, 2017 at 8:25 AM, Shyam wrote:
> Hi,
>
> RC1 tagging is *tentatively* scheduled for 21st Feb, 2017
>
> The intention is that RC1 becomes the release, hence we would like to
> chase down all blocker bugs [1] and get them fixed before RC1 is tagged.
>
> This
On Thu, Feb 16, 2017 at 2:13 PM, Xavier Hernandez
wrote:
> Hi everyone,
>
> I would need some reviews if you have some time:
>
> A memory leak fix in fuse:
> * Patch already merged in master and 3.10
> * Backport to 3.9: https://review.gluster.org/16402
> *
Proposals:
1) Design of glfstrace tool
What happens in a file operation has been a bit difficult to figure out by
looking at the workload, so we need a tool similar to strace which shows
the fops that are being wound/unwound through the clients and servers. We
can use the eventing infra by Aravinda
On Mon, Aug 22, 2016 at 5:15 PM, Jeff Darcy wrote:
> Two proposals, both pretty developer-focused.
>
> (1) Gluster: The Ugly Parts
> Like any code base of its size and age, Gluster has accumulated its share of
> dead, redundant, or simply inelegant code. This code makes us more
This is being tracked @ https://bugzilla.redhat.com/show_bug.cgi?id=1427404,
Krutika posted a patch to move it to bad tests until we find why a lookup
on one file is leading to a lookup on the hardlink instead.
On Tue, Feb 28, 2017 at 2:56 PM, Susant Palai wrote:
> Hi,
>
>>
>> I've posted https://review.gluster.org/#/c/16787 to revert the change.
>> Can this be merged once it passes regression?
>>
>>
>>>
>>> On Tue, Feb 28, 2017 at 4:16 PM, Atin Mukherjee <amukh...@redhat.com>
>>> wrote:
>>>
>>
For some reason Ravi's mail is not coming through on the lists, not sure why. Here
is his mail:
Hello,
Here is a proposal I'd like to make.
Title: Throttling in gluster (https://github.com/gluster/gl
usterfs-specs/blob/master/accepted/throttling.md)
Theme: Performance and scalability.
The talk/
Just resending in case you missed this mail.
On Tue, Aug 23, 2016 at 2:31 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi Aravinda,
> I was wondering what your opinion is on sending selected logs as
> events instead of treating them specially. Is this som
for others' inputs here as well
>
> regards
> Aravinda
>
> On Wednesday 24 August 2016 05:15 PM, Pranith Kumar Karampuri wrote:
>
> Just resending in case you missed this mail.
>
> On Tue, Aug 23, 2016 at 2:31 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote
I am seeing a pause when the .t runs that seems to last close to however
much time we put in EXPECT_WITHIN
[2016-09-01 03:24:21.852744] I
[common.c:1134:pl_does_monkey_want_stuck_lock] 0-patchy-locks: stuck lock
[2016-09-01 03:24:21.852775] W [inodelk.c:659:pl_inode_setlk]
0-patchy-locks: MONKEY
hi Raghavendra,
I feel running
https://github.com/avati/perf-test/blob/master/perf-test.sh is good enough
for testing these. Do you feel anything more needs to be done before the
release?
I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist
--
Pranith
On Fri, Sep 2, 2016 at 11:39 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
> Did you get a chance to decide on the tests that need to be done
> before doing a release for Bitrot component? Could you let me know who will
> be providing with the list
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for FUSE bridge component? Could you let me know who
will be providing the list?
I can update it at https://public.pad.fsfe.org/p/
gluster-component-release-checklist
--
Aravinda & Pranith
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for glusterd component? Could you let me know who
will be providing the list?
I think just the cases that cover the infra part should be good enough.
Component based commands should come in
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for Quota+Marker component?
I can update it at https://public.pad.fsfe.org/p/
gluster-component-release-checklist
--
Aravinda & Pranith
hi,
Did you get a chance to decide on the tests that need to be
done before doing a release for Tier component? Could you let me know who
will be providing the list?
I can update it at https://public.pad.fsfe.org/p/
gluster-component-release-checklist
--
Pranith
hi,
Did you get a chance to decide on the tests that need to be
done before doing a release for georep family of components? Could you let
me know who will be providing the list?
I think changelog, marker, georep are the features that should come under
this bucket right? Are
hi,
I think most of this testing will be covered in nfsv4, smb testing.
But I could be wrong. Could you let me know who will be providing the
list if you think there are more tests that need to be run?
I can update it at https://public.pad.fsfe.org/p/
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for Bitrot component? Could you let me know who will
be providing the list?
I can update it at https://public.pad.fsfe.org/p/
gluster-component-release-checklist
--
Pranith
On Fri, Sep 2, 2016 at 11:52 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
>Did you get a chance to decide on the tests that need to be
> done before doing a release for Tier component? Could you let me know who
> will be providing with the list
On Fri, Sep 2, 2016 at 11:42 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi Raghavendra,
>I feel running https://github.com/avati/perf-
> test/blob/master/perf-test.sh is good enough for testing these. Do you
> feel anything more needs to be done before
On Fri, Sep 2, 2016 at 11:31 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
> Did you get a chance to decide on the tests that need to be done
> before doing a release for DHT component? Could you let me know who will be
> providing with the list
On Fri, Sep 2, 2016 at 11:36 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> hi,
> Did you get a chance to decide on the tests that need to be
> done before doing a release for georep family of components? Could you let
> me know who will be providing wi
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for Upcall component? Could you let me know who will
be providing the list?
I can update it at https://public.pad.fsfe.org/p/
gluster-component-release-checklist
--
Aravinda & Pranith
hi,
I think this should be covered as part of other component testing,
but if you think any more tests need to be added, please let us know.
I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist
--
Aravinda & Pranith