On 04/08/2015 07:08 PM, Justin Clift wrote:
On 8 Apr 2015, at 14:13, Pranith Kumar Karampuri pkara...@redhat.com wrote:
On 04/08/2015 06:20 PM, Justin Clift wrote:
Hi Pranith,
Hagarth mentioned in the weekly IRC meeting that you have an
idea what might be causing the regression tests to generate
cores?
Can you outline that quickly, as Jeff has some time and might
be able to help narrow it down further. :)
hi,
As I am not able to spend much time on sharding, Kritika is handling it
completely now. I am only doing reviews. Just letting
everyone know so that future communication will happen directly with the
active developer :-).
Pranith
On 04/01/2015 03:27 PM, Emmanuel Dreyfus wrote:
Hi
crypt.t was recently broken in NetBSD regression. glusterfs returns a
node with an invalid file type to FUSE, and that breaks the test.
After running a git bisect, I found the offending commit after which
this behavior appeared:
On 04/02/2015 12:27 AM, Raghavendra Talur wrote:
On Wed, Apr 1, 2015 at 10:34 PM, Justin Clift jus...@gluster.org wrote:
On 1 Apr 2015, at 10:57, Emmanuel Dreyfus m...@netbsd.org wrote:
Hi
crypt.t was recently broken
On 04/02/2015 07:27 PM, Raghavendra Bhat wrote:
On Thursday 02 April 2015 05:50 PM, Jeff Darcy wrote:
I think, crypt xlator should do a mem_put of local after doing
STACK_UNWIND
like other xlators which also use mem_get for local (such as AFR). I am
suspecting crypt not doing mem_put might be
On 03/30/2015 06:01 PM, Emmanuel Dreyfus wrote:
On Mon, Mar 30, 2015 at 05:44:23PM +0530, Pranith Kumar Karampuri wrote:
Problem here is that 'inode_forget' is coming even before it gets to
inspect the file. We initially thought we should 'ref' the inode when the
user specifies the choice
On 03/30/2015 06:34 PM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Since spb_choice is not saved as an attribute for the file on the
bricks, it cannot be recovered when the context is reallocated. Either
that save feature has been forgotten, or going
On 03/28/2015 02:08 PM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Emmanuel,
What can we do to make it vote -2 when it fails? Things will
automatically fall in place if it gives -2.
I will do this once I have recovered. The changelog change
On 03/28/2015 09:51 PM, Emmanuel Dreyfus wrote:
Hi
I see split-brain-resolution.t uses attribute replica.split-brain-choice
to choose which replica should be used. This attribute is not in
privileged space (trusted.-prefixed). Is it on purpose?
Yes, these are used as internal commands to make
On 03/25/2015 07:18 PM, Emmanuel Dreyfus wrote:
On Wed, Mar 25, 2015 at 02:04:10PM +0100, Niels de Vos wrote:
1. Who is going to maintain the new features?
2. Maintainers should be active in responding to users
3. What about reported bugs, there is the Bug Triaging in place?
4. Maintainers
On 03/04/2015 09:57 AM, Justin Clift wrote:
Ran 20 x regression tests on our GlusterFS master branch code
as of a few hours ago, commit 95d5e60afb29aedc29909340e7564d54a6a247c2.
5 of them were successful (25%), 15 of them failed in various ways
(75%).
We need to get this down to about 5% or
On 02/26/2015 02:54 AM, Justin Clift wrote:
Anyone have an interest in a regression test VM that's (presently) hung on
tests/basic/afr/split-brain-healing.t? Likely to be a spurious error.
I can either reboot the VM and put it back into service, or I can leave it
for someone to log into and
or upgrade/installation process better.
On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote:
https://bugzilla.redhat.com/show_bug.cgi?id=1113778
https://bugzilla.redhat.com/show_bug.cgi?id=1191176
How can we make the process of giving good packages for things other than
RPMs
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing). Probably there
will be many factors that can influence on
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing
On 02/12/2015 03:05 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied
over 10-TB
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB
and it took the tar extraction from 1-minute to 7-minutes.
On 02/13/2015 12:07 AM, Niels de Vos wrote:
On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote:
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems
On 02/10/2015 03:42 AM, Jeff Darcy wrote:
Interest in 4.0 seems to be increasing. So is developer activity, but
all of the developers involved in 4.0 are stretched a bit thin. As a
result, some sub-projects still don't have anyone who's working on them
often enough to make significant
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Benjamin
Turner bennytu...@gmail.com; Pranith Kumar Karampuri
pkara...@redhat.com
Cc: gluster-us...@gluster.org gluster-us
are not compatible with versions &lt;= 3.5.3 and &lt;= 3.6.1, that is the
reason. From 3.5.4 and releases &gt;= 3.6.2 it should work fine.
Pranith
David
-- Original Message --
From: Pranith Kumar Karampuri pkara...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com
On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal. I deleted the
files from all of the bricks
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
heal using 'gluster volume heal homegfs full'. Even after the full
heal,
On 02/03/2015 12:13 PM, Raghavendra Bhat wrote:
On Monday 02 February 2015 09:07 PM, David F. Robinson wrote:
I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer
do a 'gluster volume heal homegfs info'. It hangs and never returns
any information.
I was trying to ensure that
hi,
I get following errors, when I try to do git fetch
pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1)
15:48:44 :( ⚡ git fetch &amp;&amp; git rebase origin/master
fatal: internal server error
remote: internal server error
fatal: protocol error: bad pack header
What is worrisome is that when I click
On 01/31/2015 10:49 PM, Dennis Schafroth wrote:
Hi
when compiling dht-common.c with clang (on mac, but I dont think that
matters) some warnings seem to reveal an error:
CC dht-common.lo
dht-common.c:2997:57: warning: size argument in 'strncmp' call is
a comparison
Thanks vijay, it works now.
Pranith
On 02/01/2015 04:27 PM, Vijay Bellur wrote:
On 02/01/2015 11:24 AM, Pranith Kumar Karampuri wrote:
hi,
I get following errors, when I try to do git fetch
pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1)
15:48:44 :( ⚡ git fetch &amp;&amp; git rebase origin/master
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson david.robin...@corvidtec.com
wrote:
I have a server with v3.6.2 from which I cannot mount using NFS. The FUSE
mount works, however, I cannot get the NFS mount to work. From /var/log/message:
Jan 26
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson
david.robin...@corvidtec.com wrote:
I have a server with v3.6.2 from which I cannot mount using NFS.
The FUSE mount works, however, I cannot get
On 01/15/2015 10:53 PM, Xavier Hernandez wrote:
Hi,
currently eager locking is implemented by checking the open-fd-count
special xattr for each write. If there's more than one open on the
same file, eager locking is disabled to avoid starvation.
This works quite well for file writes, but
On 01/07/2015 12:48 PM, Raghavendra Bhat wrote:
Hi,
As per the design discussion it was mentioned that there will be one
BitD running per node which will take care of all the bricks of all
the volumes running on that node. But, here one thing that becomes
important is doing graph changes
On 11/21/2014 09:04 PM, Nux! wrote:
Hi,
I deleted a file by mistake in a brick. I never managed to find out its gfid so
now I have a rogue symlink in .glusterfs pointing to it (if I got how it works).
Any way I can discover which is this file and get rid of it?
symlinks exist in .glusterfs
On 11/21/2014 09:50 PM, Ben England wrote:
Nux,
Those thousands of entries all would match -links 2 but not -links 1. The only entry in
.glusterfs that would match is the entry where you deleted the file from the brick. That's how hardlinks work - when
you create a regular file, the link
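The hardlink behaviour Ben describes can be demonstrated without a real volume. This is a hedged sketch using a throwaway directory; the layout and the "fake-gfid" name are invented for illustration, not a real brick's gfid path:

```shell
# Toy brick layout (all names invented) showing why 'find -links 1'
# pinpoints the orphaned .glusterfs entry after a brick-side delete.
demo=$(mktemp -d)
mkdir -p "$demo/.glusterfs"
echo data > "$demo/file"                      # file as seen on the brick
ln "$demo/file" "$demo/.glusterfs/fake-gfid"  # gfid hardlink: both now have 2 links
rm "$demo/file"                               # the accidental brick-side delete
find "$demo/.glusterfs" -type f -links 1      # only the orphan remains
rm -rf "$demo"
```

Entries still backed by a real file keep two links, so they match -links 2 and are skipped.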
On 11/22/2014 12:10 PM, Pranith Kumar Karampuri wrote:
On 11/21/2014 09:50 PM, Ben England wrote:
Nux,
Those thousands of entries all would match -links 2 but not -links
1 The only entry in .glusterfs that would match is the entry where
you deleted the file from the brick. That's how
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools named d and mnt and they appear in the filesystem
as follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr)
Debug enabled output of quota.t testcase is at http://ur1.ca/irbt1.
CC vijaikumar
On Wed, Nov 12, 2014
On 11/19/2014 10:30 AM, Atin Mukherjee wrote:
On 11/18/2014 10:35 PM, Pranith Kumar Karampuri wrote:
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools named d and mnt and they appear in the filesystem
as follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr
Kumar Karampuri
pkara...@redhat.com wrote:
On 11/11/2014 03:13 PM, Kiran Patil wrote:
Test Summary Report
--
./tests/basic/quota-anon-fd-nfs.t (Wstat: 0 Tests: 16 Failed: 1)
Failed test: 16
This is a spurious
On 11/13/2014 03:23 AM, Justin Clift wrote:
Hi all,
At the moment, our smoke tests in Jenkins only run on a
replicated volume. Extending that out to other volume types
should (in theory :) help catch other simple gotchas.
Xavi has put together a patch for doing just this, which I'd
like to
On 11/10/2014 11:21 AM, Vijay Bellur wrote:
On 11/08/2014 08:19 AM, Peter Auyeung wrote:
I have a node down while gfs still open for writing.
Got tons of heal-failed on a replicated volume showing as gfid.
Tried gfid-resolver and got the following:
# ./gfid-resolver.sh /brick02/gfs/
On 11/10/2014 01:04 AM, Emmanuel Dreyfus wrote:
Justin Clift jus...@gluster.org wrote:
I've just used that page to disconnect slave25, so you're fine to
investigate there (same login credentials as before). Please reconnect
it when you're done. :)
Since I could spot nothing from, I
On 11/10/2014 10:58 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Since I could spot nothing from, I reconnected it. I will try by
submitting a change with set -x for that script.
It was consistently happening with my change just on regression machine.
So I
hi,
The following tests keep failing spuriously nowadays. I CCed glusterd
folks and original author(Kritika) and Last change author (Emmanuel).
You can check
http://build.gluster.org/job/rackspace-regression-2GB-triggered/2497/consoleFull
for full logs.
volume create: patchy: failed:
On 11/04/2014 03:30 PM, Anders Blomdell wrote:
On 2014-11-04 10:38, Emmanuel Dreyfus wrote:
Hi
FWIW, there are still memory leaks in glusterfs 3.6.0. My favourite test is
building NetBSD on a replicated volume, and it fails because the machine
runs out of swap.
After building for 14 hours
hi Kaleb,
I went through the logs. I don't see anything significant. What
is the test case that recreates the mem-leak? May be I can try it on my
setup and get back to you?
Pranith
On 10/15/2014 08:57 PM, Kaleb S. KEITHLEY wrote:
As mentioned in the Gluster Community Meeting on irc
On 10/08/2014 02:15 PM, justgluste...@gmail.com wrote:
Hi all:
I do the following test:
I create a glusterfs replica volume (replica count is 2 ) with two
server node(server A and server B),use XFS as the underlying
filesystem, then mount the volume in client node,
then, I shut
On 10/08/2014 07:50 PM, Joe Julian wrote:
To the author: You're cross posting user questions in the devel
mailing list. You're not asking development questions. Please don't do
that.
To Pranith et al:
On 10/8/2014 1:45 AM, justgluste...@gmail.com wrote:
then I config:
On 10/13/2014 01:14 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 12:39:33PM +0530, Pranith Kumar Karampuri wrote:
Op_errno is valid only if 'op_ret &lt; 0', so that doesn't say much. After the
last readdir call with op_ret &gt; 0, there will be one more readdir call for
which op_ret will come
On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote:
No bug here, just suboptimal behavior, both in glusterfs and NetBSD FUSE.
oh!, but shouldn't it get op_ret = 0 instead of op_ret = -1, op_errno EINVAL?
It happens because
On 10/13/2014 02:37 PM, Pranith Kumar Karampuri wrote:
On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote:
No bug here, just suboptimal behavior, both in glusterfs and NetBSD
FUSE.
oh!, but shouldn't it get op_ret = 0
On 10/13/2014 02:45 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 02:37:12PM +0530, Pranith Kumar Karampuri wrote:
I am not aware of backend filesystems that much, may be someone with that
knowledge can comment here, what happens when new entries are created in the
directory after
On 10/13/2014 07:27 PM, Shyam wrote:
On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote:
hi,
Why are we moving away from this coding style?:
if (x) {
/*code*/
} else {
/* code */
}
This patch (in master) introduces the same and explains why,
commit
On 10/13/2014 07:43 PM, Shyam wrote:
On 10/13/2014 10:08 AM, Pranith Kumar Karampuri wrote:
On 10/13/2014 07:27 PM, Shyam wrote:
On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote:
hi,
Why are we moving away from this coding style?:
if (x) {
/*code*/
} else {
/* code
On 10/13/2014 09:45 AM, Emmanuel Dreyfus wrote:
Emmanuel Dreyfus m...@netbsd.org wrote:
Erratum: it happens because it attempts to seekdir to the offset
obtained for last record. But I still have to find what code is sending
the request.
This is a find(1) probably forked by quota-crawld
On 09/18/2014 03:13 PM, Niels de Vos wrote:
On Thu, Sep 18, 2014 at 07:43:00AM +0530, Pranith Kumar Karampuri wrote:
hi,
Till now the only method I used to find ref leaks effectively is to find
what operation is causing ref leaks and read the code to find if there is a
ref-leak somewhere
On 09/18/2014 07:48 PM, Shyam wrote:
On 09/17/2014 10:13 PM, Pranith Kumar Karampuri wrote:
hi,
Till now the only method I used to find ref leaks effectively is to
find what operation is causing ref leaks and read the code to find if
there is a ref-leak somewhere. Valgrind doesn't solve
processname)
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Shyam srang...@redhat.com, gluster-devel@gluster.org
Sent: Thursday, September 18, 2014 11:34:28 AM
Subject: Re: [Gluster-devel] how do you debug ref leaks?
On 09/18/2014 07:48 PM, Shyam wrote:
On 09/17
hi,
Till now the only method I used to find ref leaks effectively is to
find what operation is causing ref leaks and read the code to find if
there is a ref-leak somewhere. Valgrind doesn't solve this problem
because it is reachable memory from inode-table etc. I am just wondering
if
be helpful is allocator info for generic objects
like dict, inode, fd etc. That way we wouldn't have to sift through large
amount of code.
Could you elaborate the idea please.
Pranith
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gluster Devel gluster-devel
On 09/14/2014 10:39 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Just to make sure I understand the problem, the issue is happening
because self-heal-daemon uses anonymous fds to perform readdirs? i.e.
there is no explicit opendir on the directory. Everytime
On 09/14/2014 10:41 PM, Emmanuel Dreyfus wrote:
'Pranith Kumar Karampuri pkara...@redhat.com wrote:
I can do that.
That will teach me about that anonymous fd. Reading the code it seems
afr-self-heald.c code does opendir and uses the fd for readdir syncop,
which suggests the underlying xlator
On 09/14/2014 12:32 AM, Emmanuel Dreyfus wrote:
In 1lrx1si.n8tms1igmi5pm%m...@netbsd.org I explained why NetBSD
currently fails self-heald.t, but since the subjet is burried deep in a
thread, it might be worth starting a new one to talk about how to fix.
In 3 places within glusterfs code
On 09/05/2014 03:51 PM, Kaushal M wrote:
GlusterD performs the following functions as the management daemon for
GlusterFS:
- Peer membership management
- Maintains consistency of configuration data across nodes
(distributed configuration store)
- Distributed command execution (orchestration)
Guys who work with glfs_*, could you guys reply to this question.
Pranith
On 08/27/2014 03:16 PM, ABC-new wrote:
hi~:
while I run the glusterfs example via libgfapi, gcc -c
glusterfs_example -o glfs -luuid,
the method glfs_creat hangs up.
I want to generate the uuid for
CC new gluster-devel@gluster.org mailing list.
Pranith
On 08/21/2014 12:42 PM, Lalatendu Mohanty wrote:
I hope the subject line has increased your curiosity to go through
the email :).
As a community, we are looking for contributors for GlusterFS bug
triage and hopefully this mail will give
On 08/12/2014 11:29 AM, Harshavardhana wrote:
This is a standard problem where there are split-brains in distributed
systems. For example even in git there are cases where it gives up asking
users to fix the file i.e. merge conflicts. If the user doesn't want
split-brains they should move to
hi,
Welcome to glusterfs :-). Could you please follow the instructions
here to send the patch.
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
Pranith.
On 08/12/2014 08:52 AM, Ruoyu wrote:
fix dangerous usage of volname because strncpy does not always
On 08/07/2014 02:05 PM, Ravishankar N wrote:
Manual resolution of split-brains [1] has been a tedious task
involving understanding and modifying AFR's changelog extended
attributes. To simplify and to an extent automate this task, we are
proposing a new CLI command with which the user can
/homegfs
Brick2: gfsib01b.corvidtec.com:/data/brick01b/homegfs
Brick3: gfsib01a.corvidtec.com:/data/brick02a/homegfs
Brick4: gfsib01b.corvidtec.com:/data/brick02b/homegfs/
David
-- Forwarded Message --
From: Pranith Kumar Karampuri pkara...@redhat.com
On 08/07/2014 06:48 AM, Anand Avati wrote:
On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
We checked this performance with plain distribute as well and on
nfs it gave 25 minutes whereas on nfs it gave around 90
PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 08/07/2014 06:48 AM, Anand Avati wrote:
On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
We checked this performance
hi,
Could you guys review http://review.gluster.com/#/c/8402. This fixes
crash reported by JoeJulian. We are yet to find why fd-migration failed.
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
lookup.
Anyway this is a different change compared to the one I sent in this patch.
Pranith.
Thanks!
On Wed, Aug 6, 2014 at 9:16 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
hi,
Could you guys review http://review.gluster.com/#/c/8402
hi,
Does anyone know why there is different code for resolution in
fuse vs server? There are some differences too, like server asserts
about the resolution types like RESOLVE_MUST/RESOLVE_NOT etc. whereas
fuse doesn't do any such thing. Wondering if there is any reason why the
code is
Yes, even I saw the following leaks, when I tested it a week back. These
were the leaks:
You should probably take a statedump and see what datatypes are leaking.
root@localhost - /usr/local/var/run/gluster
14:10:26 ? awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
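For readers without Pranith's mem-leaks.awk script, a rough equivalent can be improvised. This is a sketch only: the section/key layout (a "usage-type" header followed by size=/num_allocs= lines) mimics a typical glusterfs statedump, and the sample dump below is fabricated:

```shell
# Fabricated statedump excerpt for illustration.
cat > /tmp/sample.dump <<'EOF'
[global.glusterfs - usage-type gf_common_mt_char memusage]
size=1024
num_allocs=8
[global.glusterfs - usage-type gf_common_mt_asprintf memusage]
size=96
num_allocs=2
EOF
# Rank allocation types by total size, biggest first - the ones that keep
# growing between dumps are the leak candidates.
awk '/usage-type/ { type = $4 }
     /^size=/ { split($0, kv, "="); sum[type] += kv[2] }
     END { for (t in sum) print sum[t], t }' /tmp/sample.dump | sort -rn | head
```

Taking two statedumps some time apart and diffing the ranked output narrows down which datatypes are leaking.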
hi,
If there are no more comments, could we take
http://review.gluster.com/#/c/8343 in.
Pranith
hi Anders,
Generally new test cases are submitted along with the fix to prevent
this situation. You should either submit the fix along with the test
case or wait until the fix is submitted by someone else in case you are
not actively working on it and then we can re-trigger the regression
On 07/26/2014 11:06 AM, Pranith Kumar Karampuri wrote:
On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can I
prevent it from happening?
[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd]
0-fuse-resolve: migration
On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can I
prevent it from happening?
[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd]
0-fuse-resolve: migration of basefd
(ptr:0x7f17cb846444
On 07/23/2014 02:44 PM, Anders Blomdell wrote:
When migrating approx 1 GB of data by doing
gluster volume add-brick test new-host1:/path/to/new/brick ...
gluster volume remove-brick old-host1:/path/to/old/brick ... start
... wait for removal to finish
gluster volume
On 07/22/2014 11:56 AM, Joe Julian wrote:
On 07/21/2014 11:20 PM, Pranith Kumar Karampuri wrote:
On 07/22/2014 11:39 AM, Joe Julian wrote:
On 07/17/2014 07:30 PM, Pranith Kumar Karampuri wrote:
On 07/18/2014 03:05 AM, Joe Julian wrote:
What impact, if any, does starting profiling
Here is my first draft of mem-pool data structure for review:
http://review.gluster.org/8343
Please don't laugh at the ascii art ;-).
Pranith
On 07/17/2014 04:10 PM, Ravishankar N wrote:
On 07/15/2014 04:39 PM, Pranith Kumar Karampuri wrote:
hi,
Please respond if you guys volunteer
On 07/21/2014 02:08 PM, Jiri Moskovcak wrote:
On 07/19/2014 08:58 AM, Pranith Kumar Karampuri wrote:
On 07/19/2014 11:25 AM, Andrew Lau wrote:
On Sat, Jul 19, 2014 at 12:03 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 07/18/2014 05:43 PM
On 07/21/2014 05:03 PM, Anders Blomdell wrote:
On 2014-07-19 04:43, Pranith Kumar Karampuri wrote:
On 07/18/2014 07:57 PM, Anders Blomdell wrote:
During testing of a 3*4 gluster (from master as of yesterday), I encountered
two major weirdnesses:
1. A 'rm -rf some_dir' needed several
On 07/21/2014 05:17 PM, Anders Blomdell wrote:
On 2014-07-21 13:36, Pranith Kumar Karampuri wrote:
On 07/21/2014 05:03 PM, Anders Blomdell wrote:
On 2014-07-19 04:43, Pranith Kumar Karampuri wrote:
On 07/18/2014 07:57 PM, Anders Blomdell wrote:
During testing of a 3*4 gluster (from master
On 07/19/2014 11:25 AM, Andrew Lau wrote:
On Sat, Jul 19, 2014 at 12:03 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 07/18/2014 05:43 PM, Andrew Lau wrote:
On Fri, Jul 18, 2014 at 10:06 PM, Vijay Bellur
vbel...@redhat.com
On 07/18/2014 07:57 PM, Anders Blomdell wrote:
During testing of a 3*4 gluster (from master as of yesterday), I encountered
two major weirdnesses:
1. A 'rm -rf some_dir' needed several invocations to finish, each time
reporting a number of lines like these:
rm: cannot remove
On 07/18/2014 03:05 AM, Joe Julian wrote:
What impact, if any, does starting profiling (gluster volume profile
$vol start) have on performance?
Joe,
According to the code the only extra things it does is calling
gettimeofday() call at the beginning and end of the FOP to calculate
On 07/17/2014 07:25 PM, Kaushal M wrote:
I came across mediawiki's developer documentation and guides when
browsing. These docs felt really good to me, and easy to approach.
I feel that we should take inspiration from them and start enhancing
our docs. (Outright copying with modifications as
runner framework, I can handle that too.
- Original Message -
From: Krutika Dhananjay kdhan...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, July 16, 2014 10:41:28 AM
Subject: Re: [Gluster-devel] Developer Documentation
hi,
Please respond if you guys volunteer to add documentation for any
of the following things that are not already taken.
client_t - pranith
integration with statedump - pranith
mempool - Pranith
event-history + circ-buff - Raghavendra Bhat
inode - Raghavendra Bhat
call-stub
fd
iobuf
, 2014 at 4:39 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
hi,
Please respond if you guys volunteer to add documentation for any of
the following things that are not already taken.
client_t - pranith
integration with statedump - pranith
mempool - Pranith
event-history + circ-buff
On 07/15/2014 07:22 PM, Niels de Vos wrote:
On Tue, Jul 15, 2014 at 08:45:45AM -0400, Jeff Darcy wrote:
Please respond if you guys volunteer to add documentation for any
of the following things that are not already taken.
I think the most important thing to describe for each of these
hi,
We have 4 tests failing once in a while causing problems:
1) tests/bugs/bug-1087198.t - Author: Varun
2) tests/basic/mgmt_v3-locks.t - Author: Avra
3) tests/basic/fops-sanity.t - Author: Pranith
Please take a look at them and post updates.
Pranith
CC gluster-devel, Anuradha who committed the test.
Pranith
On 07/15/2014 01:58 AM, Harshavardhana wrote:
Mr Spurious is here again!
Patch Set 2: Verified-1
http://build.gluster.org/job/rackspace-regression-2GB-triggered/351/consoleFull
: FAILED
Test Summary Report
---
On 07/11/2014 07:05 PM, Justin Clift wrote:
On 11/07/2014, at 11:36 AM, Anders Blomdell wrote:
In
http://build.gluster.org/job/rackspace-regression-2GB-triggered/297/consoleFull,
I have
one failure:
No volumes present
read failed: No data available
read returning junk
fd based file
hi,
I wanted to document the core data structures and debugging infra
in gluster. This is the first patch in that series. Please review and
provide comments.
I am not very familiar with iobuf infra. Please feel free to provide
comments in the patch for that section as well. I can amend
hi Harsha,
Know anything about the following warnings on latest master?
In file included from msg-nfs3.h:20:0,
from msg-nfs3.c:22:
nlm4-xdr.h:6:14: warning: extra tokens at end of #ifndef directive
[enabled by default]
#ifndef _NLM4-XDR_H_RPCGEN
^
hi,
I sent the following patch to change the output of EXPECT_WITHIN:
http://review.gluster.org/8263
Patch got one +1 and regressions passed. Merge it please :-).
Test:
#!/bin/bash
. $(dirname $0)/../include.rc
EXPECT_WITHIN 10 abc echo def
EXPECT_WITHIN 10 def echo def
EXPECT_WITHIN 10 abc
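For context, the semantics being tested can be sketched as a plain retry loop. This is a simplified stand-in, not Gluster's actual include.rc implementation:

```shell
# expect_within TIMEOUT EXPECTED CMD...: rerun CMD until its output
# equals EXPECTED or TIMEOUT seconds elapse.
expect_within() {
    local timeout=$1 expected=$2; shift 2
    local deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -le "$deadline" ]; do
        [ "$("$@")" = "$expected" ] && return 0
        sleep 1
    done
    echo "expect_within: got '$("$@")', wanted '$expected'" >&2
    return 1
}

expect_within 10 def echo def    # matches on the first try, returns 0
```

The patch under review only changes what such a helper prints on failure, not this retry behaviour.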
On 07/06/2014 07:58 PM, Pranith Kumar Karampuri wrote:
On 07/06/2014 02:53 AM, Benjamin Turner wrote:
Hi all. I have been running FS sanity on daily builds(glusterfs
mounts only at this point) for a few days for a few days and I have
been hitting a couple of problems
On 07/07/2014 03:11 PM, Justin Clift wrote:
On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote:
On 07/06/2014 11:05 PM, Vijay Bellur wrote:
On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote:
hi Justin/Vijay,
I always felt '-1' saying 'I prefer you didn't submit this' is a
bit