/homegfs
Brick2: gfsib01b.corvidtec.com:/data/brick01b/homegfs
Brick3: gfsib01a.corvidtec.com:/data/brick02a/homegfs
Brick4: gfsib01b.corvidtec.com:/data/brick02b/homegfs/
David
-- Forwarded Message --
From: Pranith Kumar Karampuri pkara...@redhat.com
On 08/07/2014 06:48 AM, Anand Avati wrote:
On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri
pkara...@redhat.com mailto:pkara...@redhat.com wrote:
We checked this performance with plain distribute as well, and on
nfs it gave 25 minutes whereas on nfs it gave around 90
hi,
Could you guys review http://review.gluster.com/#/c/8402. This fixes the
crash reported by JoeJulian. We are yet to find why fd-migration failed.
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
lookup.
Anyway this is a different change compared to the one I sent in this patch.
Pranith.
Thanks!
On Wed, Aug 6, 2014 at 9:16 PM, Pranith Kumar Karampuri
pkara...@redhat.com mailto:pkara...@redhat.com wrote:
hi,
Could you guys review http://review.gluster.com/#/c/8402
On 08/07/2014 02:05 PM, Ravishankar N wrote:
Manual resolution of split-brains [1] has been a tedious task
involving understanding and modifying AFR's changelog extended
attributes. To simplify and to an extent automate this task, we are
proposing a new CLI command with which the user can
On 08/12/2014 11:29 AM, Harshavardhana wrote:
This is a standard problem where there are split-brains in distributed
systems. For example even in git there are cases where it gives up asking
users to fix the file i.e. merge conflicts. If the user doesn't want
split-brains they should move to
hi,
Welcome to glusterfs :-). Could you please follow the instructions
here to send the patch.
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
Pranith.
On 08/12/2014 08:52 AM, Ruoyu wrote:
fix dangerous usage of volname because strncpy does not always
CC new gluster-devel@gluster.org mailing list.
Pranith
On 08/21/2014 12:42 PM, Lalatendu Mohanty wrote:
I hope the subject line has increased your curiosity to go through
the email :).
As a community, we are looking for contributors for GlusterFS bug
triage and hopefully this mail will give
Guys who work with glfs_*, could you guys reply to this question.
Pranith
On 08/27/2014 03:16 PM, ABC-new wrote:
hi~:
while I run the glusterfs example via libgfapi (gcc -c
glusterfs_example -o glfs -luuid),
the method glfs_creat hangs up.
I want to generate the uuid for
On 09/05/2014 03:51 PM, Kaushal M wrote:
GlusterD performs the following functions as the management daemon for
GlusterFS:
- Peer membership management
- Maintains consistency of configuration data across nodes
(distributed configuration store)
- Distributed command execution (orchestration)
On 09/14/2014 12:32 AM, Emmanuel Dreyfus wrote:
In 1lrx1si.n8tms1igmi5pm%m...@netbsd.org I explained why NetBSD
currently fails self-heald.t, but since the subject is buried deep in a
thread, it might be worth starting a new one to talk about how to fix.
In 3 places within glusterfs code
On 09/14/2014 10:39 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Just to make sure I understand the problem, the issue is happening
because self-heal-daemon uses anonymous fds to perform readdirs? i.e.
there is no explicit opendir on the directory. Everytime
On 09/14/2014 10:41 PM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
I can do that.
That will teach me about that anonymous fd. Reading the code it seems
afr-self-heald.c code does opendir and uses the fd for readdir syncop,
which suggests the underlying xlator
hi,
Till now the only method I used to find ref leaks effectively is to
find what operation is causing ref leaks and read the code to find if
there is a ref-leak somewhere. Valgrind doesn't solve this problem
because it is reachable memory from inode-table etc. I am just wondering
if
be helpful is allocator info for generic objects
like dict, inode, fd etc. That way we wouldn't have to sift through large
amounts of code.
Could you elaborate the idea please.
Pranith
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Gluster Devel gluster-devel
On 09/18/2014 03:13 PM, Niels de Vos wrote:
On Thu, Sep 18, 2014 at 07:43:00AM +0530, Pranith Kumar Karampuri wrote:
hi,
Till now the only method I used to find ref leaks effectively is to find
what operation is causing ref leaks and read the code to find if there is a
ref-leak somewhere
On 09/18/2014 07:48 PM, Shyam wrote:
On 09/17/2014 10:13 PM, Pranith Kumar Karampuri wrote:
hi,
Till now the only method I used to find ref leaks effectively is to
find what operation is causing ref leaks and read the code to find if
there is a ref-leak somewhere. Valgrind doesn't solve
processname)
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Shyam srang...@redhat.com, gluster-devel@gluster.org
Sent: Thursday, September 18, 2014 11:34:28 AM
Subject: Re: [Gluster-devel] how do you debug ref leaks?
On 09/18/2014 07:48 PM, Shyam wrote:
On 09/17
On 10/13/2014 09:45 AM, Emmanuel Dreyfus wrote:
Emmanuel Dreyfus m...@netbsd.org wrote:
Erratum: it happens because it attempts to seekdir to the offset
obtained for last record. But I still have to find what code is sending
the request.
This is a find(1) probably forked by quota-crawld
On 10/13/2014 01:14 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 12:39:33PM +0530, Pranith Kumar Karampuri wrote:
Op_errno is valid only if 'op_ret < 0', so that doesn't say much. After the
last readdir call with op_ret > 0, there will be one more readdir call for
which op_ret will come
On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote:
No bug here, just suboptimal behavior, both in glusterfs and NetBSD FUSE.
oh!, but shouldn't it get op_ret = 0 instead of op_ret = -1, op_errno EINVAL?
It happens because
On 10/13/2014 02:37 PM, Pranith Kumar Karampuri wrote:
On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote:
No bug here, just suboptimal behavior, both in glusterfs and NetBSD
FUSE.
oh!, but shouldn't it get op_ret = 0
On 10/13/2014 02:45 PM, Emmanuel Dreyfus wrote:
On Mon, Oct 13, 2014 at 02:37:12PM +0530, Pranith Kumar Karampuri wrote:
I am not aware of backend filesystems that much, may be someone with that
knowledge can comment here, what happens when new entries are created in the
directory after
On 10/13/2014 07:27 PM, Shyam wrote:
On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote:
hi,
Why are we moving away from this coding style?:
if (x) {
/*code*/
} else {
/* code */
}
This patch (in master) introduces the same and explains why,
commit
On 10/13/2014 07:43 PM, Shyam wrote:
On 10/13/2014 10:08 AM, Pranith Kumar Karampuri wrote:
On 10/13/2014 07:27 PM, Shyam wrote:
On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote:
hi,
Why are we moving away from this coding style?:
if (x) {
/*code*/
} else {
/* code
On 10/08/2014 02:15 PM, justgluste...@gmail.com wrote:
Hi all:
I do the following test:
I create a glusterfs replica volume (replica count is 2) with two
server nodes (server A and server B), use XFS as the underlying
filesystem, then mount the volume in the client node,
then, I shut
On 10/08/2014 07:50 PM, Joe Julian wrote:
To the author: You're cross-posting user questions on the devel
mailing list. You're not asking development questions. Please don't do
that.
To Pranith et al:
On 10/8/2014 1:45 AM, justgluste...@gmail.com wrote:
then I config:
hi Kaleb,
I went through the logs. I don't see anything significant. What
is the test case that recreates the mem-leak? Maybe I can try it on my
setup and get back to you?
Pranith
On 10/15/2014 08:57 PM, Kaleb S. KEITHLEY wrote:
As mentioned in the Gluster Community Meeting on irc
On 11/04/2014 03:30 PM, Anders Blomdell wrote:
On 2014-11-04 10:38, Emmanuel Dreyfus wrote:
Hi
FWIW, there are still memory leaks in glusterfs 3.6.0. My favourite test is
building NetBSD on a replicated volume, and it fails because the machine
runs out of swap.
After building for 14 hours
hi,
The following tests keep failing spuriously nowadays. I CCed the glusterd
folks, the original author (Kritika), and the last-change author (Emmanuel).
You can check
http://build.gluster.org/job/rackspace-regression-2GB-triggered/2497/consoleFull
for full logs.
volume create: patchy: failed:
On 11/10/2014 01:04 AM, Emmanuel Dreyfus wrote:
Justin Clift jus...@gluster.org wrote:
I've just used that page to disconnect slave25, so you're fine to
investigate there (same login credentials as before). Please reconnect
it when you're done. :)
Since I could spot nothing from, I
On 11/10/2014 10:58 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Since I could spot nothing from, I reconnected it. I will try by
submitting a change with set -x for that script.
It was consistently happening with my change just on regression machine.
So I
On 11/10/2014 11:21 AM, Vijay Bellur wrote:
On 11/08/2014 08:19 AM, Peter Auyeung wrote:
I have a node down while gfs still open for writing.
Got tons of heal-failed on a replicated volume showing as gfid.
Tried gfid-resolver and got the following:
# ./gfid-resolver.sh /brick02/gfs/
Kumar Karampuri
pkara...@redhat.com mailto:pkara...@redhat.com wrote:
On 11/11/2014 03:13 PM, Kiran Patil wrote:
Test Summary Report
--
./tests/basic/quota-anon-fd-nfs.t(Wstat: 0 Tests: 16 Failed: 1)
Failed test: 16
This is a spurious
On 11/13/2014 03:23 AM, Justin Clift wrote:
Hi all,
At the moment, our smoke tests in Jenkins only run on a
replicated volume. Extending that out to other volume types
should (in theory :) help catch other simple gotchas.
Xavi has put together a patch for doing just this, which I'd
like to
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools with names d and mnt, and they appear in the
filesystem as follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr)
Debug enabled output of quota.t testcase is at http://ur1.ca/irbt1.
CC vijaikumar
On Wed, Nov 12, 2014
On 11/19/2014 10:30 AM, Atin Mukherjee wrote:
On 11/18/2014 10:35 PM, Pranith Kumar Karampuri wrote:
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have create zpool with name d and mnt and they appear in filesystem
as follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr
On 11/21/2014 09:04 PM, Nux! wrote:
Hi,
I deleted a file by mistake in a brick. I never managed to find out its gfid so
now I have a rogue symlink in .glusterfs pointing to it (if I got how it works).
Any way I can discover which is this file and get rid of it?
symlinks exist in .glusterfs
On 11/21/2014 09:50 PM, Ben England wrote:
Nux,
Those thousands of entries all would match -links 2 but not -links 1. The only entry in
.glusterfs that would match is the entry where you deleted the file from the brick. That's how hardlinks work: when
you create a regular file, the link
On 11/22/2014 12:10 PM, Pranith Kumar Karampuri wrote:
On 11/21/2014 09:50 PM, Ben England wrote:
Nux,
Those thousands of entries all would match -links 2 but not -links
1 The only entry in .glusterfs that would match is the entry where
you deleted the file from the brick. That's how
On 02/02/2015 03:34 AM, David F. Robinson wrote:
I have several files that gluster says it cannot heal. I deleted the
files from all of the bricks
(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
heal using 'gluster volume heal homegfs full'. Even after the full
heal,
On 02/03/2015 12:13 PM, Raghavendra Bhat wrote:
On Monday 02 February 2015 09:07 PM, David F. Robinson wrote:
I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer
do a 'gluster volume heal homegfs info'. It hangs and never returns
any information.
I was trying to ensure that
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Benjamin
Turner bennytu...@gmail.com; Pranith Kumar Karampuri
pkara...@redhat.com
Cc: gluster-us...@gluster.org gluster-us
On 01/15/2015 10:53 PM, Xavier Hernandez wrote:
Hi,
currently eager locking is implemented by checking the open-fd-count
special xattr for each write. If there's more than one open on the
same file, eager locking is disabled to avoid starvation.
This works quite well for file writes, but
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing). Probably there
will be many factors that can influence on
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing
On 02/12/2015 03:05 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied
over 10-TB
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB
and it took the tar extraction from 1-minute to 7-minutes.
On 02/13/2015 12:07 AM, Niels de Vos wrote:
On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote:
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems
or upgrade/installation process better.
On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote:
https://bugzilla.redhat.com/show_bug.cgi?id=1113778
https://bugzilla.redhat.com/show_bug.cgi?id=1191176
How can we make the process of giving good packages for things other than
RPMs
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson david.robin...@corvidtec.com
wrote:
I have a server with v3.6.2 from which I cannot mount using NFS. The FUSE
mount works, however, I cannot get the NFS mount to work. From /var/log/message:
Jan 26
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson
david.robin...@corvidtec.com wrote:
I have a server with v3.6.2 from which I cannot mount using NFS.
The FUSE mount works, however, I cannot get
hi,
I get following errors, when I try to do git fetch
pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1)
15:48:44 :( ⚡ git fetch && git rebase origin/master
fatal: internal server error
remote: internal server error
fatal: protocol error: bad pack header
What is worrisome is that when I click
On 01/31/2015 10:49 PM, Dennis Schafroth wrote:
Hi
when compiling dht-common.c with clang (on mac, but I don't think that
matters) some warnings seem to reveal an error:
CC dht-common.lo
dht-common.c:2997:57: warning: size argument in 'strncmp' call is
a comparison
Thanks vijay, it works now.
Pranith
On 02/01/2015 04:27 PM, Vijay Bellur wrote:
On 02/01/2015 11:24 AM, Pranith Kumar Karampuri wrote:
hi,
I get following errors, when I try to do git fetch
pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1)
15:48:44 :( ⚡ git fetch && git rebase origin/master
On 01/07/2015 12:48 PM, Raghavendra Bhat wrote:
Hi,
As per the design dicussion it was mentioned that, there will be one
BitD running per node which will take care of all the bricks of all
the volumes running on that node. But, here once thing that becomes
important is doing graph changes
On 02/10/2015 03:42 AM, Jeff Darcy wrote:
Interest in 4.0 seems to be increasing. So is developer activity, but
all of the developers involved in 4.0 are stretched a bit thin. As a
result, some sub-projects still don't have anyone who's working on them
often enough to make significant
are not compatible with versions <= 3.5.3 and 3.6.1, that is the
reason. From 3.5.4 and releases >= 3.6.2 it should work fine.
Pranith
David
-- Original Message --
From: Pranith Kumar Karampuri pkara...@redhat.com
mailto:pkara...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com
On 03/30/2015 06:01 PM, Emmanuel Dreyfus wrote:
On Mon, Mar 30, 2015 at 05:44:23PM +0530, Pranith Kumar Karampuri wrote:
Problem here is that ' inode_forget' is coming even before it gets to
inspect the file. We initially thought we should 'ref' the inode when the
user specifies the choice
On 03/30/2015 06:34 PM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Since spb_choice is not saved as an attribute for the file on the
bricks, it cannot be recovered when the context is reallocated. Either
that save feature has been forgotten, or going
On 03/28/2015 02:08 PM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Emmanuel,
What can we do to make it vote -2 when it fails? Things will
automatically fall in place if it gives -2.
I will do this once I have recovered. The changelog change
On 03/28/2015 09:51 PM, Emmanuel Dreyfus wrote:
Hi
I see split-brain-resolution.t uses the attribute replica.split-brain-choice
to choose which replica should be used. This attribute is not in
privileged space (trusted.-prefixed). Is it on purpose?
Yes, these are used as internal commands to make
On 02/26/2015 02:54 AM, Justin Clift wrote:
Anyone have an interest in a regression test VM that's (presently) hung on
tests/basic/afr/split-brain-healing.t? Likely to be a spurious error.
I can either reboot the VM and put it back into service, or I can leave it
for someone to log into and
On 03/04/2015 09:57 AM, Justin Clift wrote:
Ran 20 x regression tests on our GlusterFS master branch code
as of a few hours ago, commit 95d5e60afb29aedc29909340e7564d54a6a247c2.
5 of them were successful (25%), 15 of them failed in various ways
(75%).
We need to get this down to about 5% or
On 03/25/2015 07:18 PM, Emmanuel Dreyfus wrote:
On Wed, Mar 25, 2015 at 02:04:10PM +0100, Niels de Vos wrote:
1. Who is going to maintain the new features?
2. Maintainers should be active in responding to users
3. What about reported bugs, there is the Bug Triaging in place?
4. Maintainers
On 04/02/2015 12:27 AM, Raghavendra Talur wrote:
On Wed, Apr 1, 2015 at 10:34 PM, Justin Clift jus...@gluster.org
mailto:jus...@gluster.org wrote:
On 1 Apr 2015, at 10:57, Emmanuel Dreyfus m...@netbsd.org
mailto:m...@netbsd.org wrote:
Hi
crypt.t was recently broken
On 04/02/2015 07:27 PM, Raghavendra Bhat wrote:
On Thursday 02 April 2015 05:50 PM, Jeff Darcy wrote:
I think, crypt xlator should do a mem_put of local after doing
STACK_UNWIND
like other xlators which also use mem_get for local (such as AFR). I am
suspecting crypt not doing mem_put might be
On 04/01/2015 03:27 PM, Emmanuel Dreyfus wrote:
Hi
crypt.t was recently broken in NetBSD regression. The glusterfs returns
a node with file type invalid to FUSE, and that breaks the test.
After running a git bisect, I found the offending commit after which
this behavior appeared:
hi,
For implementing directory healing in ec I needed to generalize the
cluster syncop implementation done in afr-v2 which makes things easy for
implementing something like self-heal. The patch is at
http://review.gluster.org/10240
Please feel free to let me know your comments.
in 5-10 minutes. Very consistent
failure :-)
Pranith
Regards,
Nithya
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Shyam srang...@redhat.com, Raghavendra Gowdappa rgowd...@redhat.com,
Nithya Balachandran
nbala...@redhat.com, Susant Palai spa
hi,
As per the etherpad:
https://public.pad.fsfe.org/p/gluster-spurious-failures
* tests/basic/afr/sparse-file-self-heal.t (Wstat: 0 Tests: 64 Failed: 35)
* Failed tests: 1-6, 11, 20-30, 33-34, 36, 41, 50-61, 64
* Happens in master (Mon 30th March - git commit id
Seems like glusterd failure from the looks of it: +glusterd folks.
Running tests in file ./tests/basic/cdc.t
volume delete: patchy: failed: Another transaction is in progress for patchy.
Please try again after sometime.
[18:16:40] ./tests/basic/cdc.t ..
not ok 52
not ok 53 Got Started instead
On 05/02/2015 10:14 AM, Krishnan Parthasarathi wrote:
If glusterd itself fails to come up, of course the test will fail :-). Is it
still happening?
Pranith,
Did you get a chance to see glusterd logs and find why glusterd didn't come up?
Please paste the relevant logs in this thread.
No :-(.
hi,
Found the reason for this too:
ok 8
not ok 9 Got in instead of completed
FAILED COMMAND: completed remove_brick_status_completed_field patchy
pranithk-laptop:/d/backends/patchy0
volume remove-brick commit: failed: use 'force' option as migration is
in progress
not ok 10
FAILED
hi Rajesh/Avra,
I do not have a good understanding of snapshots, so I couldn't
investigate any of the snapshot-related spurious failures present in
https://public.pad.fsfe.org/p/gluster-spurious-failures. Could you guys
help out?
Pranith
for around half hour.
Pranith
Regards,
Avra
On 05/04/2015 11:27 AM, Pranith Kumar Karampuri wrote:
hi Rajesh/Avra,
I do not have good understanding of snapshot, so couldn't
investigate any of the snapshot related spurious failures present in
https://public.pad.fsfe.org/p/gluster-spurious
hi Vijai,
I am not sure if you are maintaining this now, but git blame
gives your name, so sending the mail to you. Could you please take a
look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
where the failure happened. If someone else is looking
hi,
I see that ec tests are failing because of a 'df -h' test failure.
It is failing because df -h fails on a stale quota aux mount, from the looks
of it.
df: `/var/run/gluster/patchy': Transport endpoint is not connected
-
[07:38:37] ./tests/basic/ec/ec-3-1.t .. not ok 11
On 04/30/2015 08:44 AM, Emmanuel Dreyfus wrote:
Hi
Here is NetBSD regression status update for broken tests:
- tests/basic/afr/split-brain-resolution.t
Anuradha Talur is working on it, the change being still under review
http://review.gluster.org/10134
- tests/basic/ec/
This works but with
On 04/30/2015 10:02 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
On a related note, I see glupy is failing spuriously as well:
Know anything about it?
glupy.t used to work and was broken quite recently. My investigation led
to a free on an invalid pointer
hi,
I see the following logs when the failure happens:
[2015-05-01 10:37:44.157477] E
[dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
(null): failed to get the 'linkto' xattr No data available
[2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken tests or race
conditions showing up) in the recent past [1] and I am inclined to block 3.7.0
GA along with
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken
tests or race conditions showing up) in the recent
On 05/05/2015 08:10 AM, Jeff Darcy wrote:
Jeff's patch failed again with same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit infrastructure.
Sorry for the
hi Vijai/Sachin,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
Doesn't seem like an obvious failure. Know anything about it?
Pranith
On 05/05/2015 08:39 AM, Pranith Kumar Karampuri wrote:
Vijai/Sachin,
Did you get a chance to work on this?
http://review.gluster.com/10166 failed Just now again in ec because
http://review.gluster.org/10069 is merged yesterday which can lead to
same problem. I sent http
hi,
Doesn't seem like an obvious failure. It does say there is a
version mismatch; I wonder how? Could you look into it?
Gluster version mismatch between master and slave.
Geo-replication session between master and slave21.cloud.gluster.org::slave
does not exist.
[08:27:15]
Just saw two more failures in the same place for NetBSD regressions. I
am ignoring NetBSD status for the test fixes for now. I am not sure how
this needs to be fixed. Please help!
Pranith
On 05/05/2015 07:17 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri
into this.
Pranith
On 05/01/2015 11:30 AM, Pranith Kumar Karampuri wrote:
hi,
I see that ec tests are failing because of a 'df -h' test failure.
It is failing because df -h fails on a stale quota aux mount, from the looks
of it.
df: `/var/run/gluster/patchy': Transport endpoint is not connected
hi,
I fixed it along with the patch on which this test failed
@http://review.gluster.org/10391. Letting everyone know in case they
face the same issue.
Pranith
.
Is there any scenario in which the above command gives an empty string?
Thanks and Regards,
Kotresh H R
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Aravinda Vishwanathapura Krishna Murthy avish...@redhat.com, Kotresh
Hiremath Ravishankar
khire...@redhat.com
Cc: Gluster
-clone.t
(Wstat: 0 Tests: 41 Failed: 3)
Failed tests: 36, 38, 40
Pranith
Regards,
Avra
On 05/05/2015 09:01 AM, Pranith Kumar Karampuri wrote:
hi Avra/Rajesh,
Any update on this test?
* tests/basic/volume-snapshot-clone.t
* http
On 05/05/2015 10:48 AM, Avra Sengupta wrote:
On 05/05/2015 10:43 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 10:32 AM, Avra Sengupta wrote:
Hi,
As already discussed, if you encounter this or any other snapshot
tests, it would be great to provide the regression run instance so
that we
hi,
Please backport the patches that fix spurious regressions to 3.7
as well. This is the status of regressions now:
* ./tests/bugs/quota/bug-1035576.t (Wstat: 0 Tests: 24 Failed: 2)
* Failed tests: 20-21
*
Looping in kotresh and aravinda
Pranith
On 05/06/2015 08:39 AM, Jeff Darcy wrote:
Could you please look at this issue:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull
I looked at this one for a while. It looks like a brick failed to
start because
Niels,
Any ideas?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8462/consoleFull
mount.nfs: access denied by server while mounting
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting
slave46.cloud.gluster.org:/patchy
mount.nfs: access
Execute the following command on replicate volume:
root@pranithk-laptop - ~
17:23:02 :( ⚡ gluster v set r2 cluster.client-log-level 0
Connection failed. Please check if gluster daemon is operational.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0038e480c860 in
Gaurav,
Please look into
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console
Pranith
hi,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8621/consoleFull
failed regression. Could you please look into it
Pranith