Re: [Gluster-devel] HEAD of release-3.7 branch is broken

2015-06-04 Thread Raghavendra Gowdappa
Apologies. It was my mistake.

- Original Message -
> From: "Shyam" 
> To: gluster-devel@gluster.org
> Sent: Thursday, June 4, 2015 10:18:52 PM
> Subject: Re: [Gluster-devel] HEAD of release-3.7 branch is broken
> 
> On 06/04/2015 12:26 PM, Shyam wrote:
> > http://review.gluster.org/#/c/10967/ request is the one that has these
> > changes.
> 
> This is now merged and the compile issue should be resolved.
> 
> Patches affected by this would need to be rebased.
> (list that I see that have already failed)
> - http://review.gluster.org/#/c/11034/
> 
> Need to check the current run queue as well.
> 
> >
> > Doing a final review and merging the same.
> >
> > Shyam
> >
> > On 06/04/2015 12:22 PM, Kaleb KEITHLEY wrote:
> >>
> >> Recent commits to xlators/cluster/dht/src/dht-common.c call functions
> >> that are not defined.
> >>
> >> Was a file with dht_inode_ctx_get_mig_info() and
> >> dht_mig_info_is_invalid() definitions omitted in a change set?
> >>
> >> Right now the release-3.7 branch does not compile in jenkins smoke tests
> >> and on fedora 22.
> >>
> >> --
> >>
> >> Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] release-3.7 regression tests stability

2015-06-04 Thread Atin Mukherjee
I still see a lot of patches in release-3.7 failing regression, be it
Linux or NetBSD. Does this mean all the spurious-failure fixes we did in
mainline are yet to be backported to 3.7? If so, what are we waiting for?

[1] failed in glupy.t on NetBSD, which used to be the case a month ago,
but I thought it had been fixed. Any update on this?

[1]
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6245/consoleFull



~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure with sparse-file-heal.t test

2015-06-04 Thread Krishnan Parthasarathi


- Original Message -
> 
> > This seems to happen because of a race between STACK_RESET and stack
> > statedump. Still thinking about how to fix it without taking locks around
> > writing to the file.
> 
> Why should we still keep the stack being reset as part of the pending pool of
> frames? Even if we had to (I can't guess why), when we remove it we should do
> the following to prevent gf_proc_dump_pending_frames from crashing.
> 
> ...
> 
> call_frame_t *toreset = NULL;
> 
> LOCK (&stack->pool->lock);
> {
>   toreset = stack->frames;
>   stack->frames = NULL;
> }
> UNLOCK (&stack->pool->lock);
> 
> ...
> 
> Now, perform all operations that are done on stack->frames on toreset
> instead. Thoughts?

Is there a reason you want to avoid locks here? STACK_DESTROY uses the
call_pool lock to remove the stack from the list of pending frames.
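
For reference, a minimal sketch of the lock-based removal that STACK_DESTROY
already performs (loosely based on the macro in stack.h; the all_frames and
cnt fields belong to call_stack_t/call_pool_t and should be checked against
the current tree before relying on this):

LOCK (&stack->pool->lock);
{
        /* drop the stack from the pool's list of pending frames */
        list_del_init (&stack->all_frames);
        stack->pool->cnt--;
}
UNLOCK (&stack->pool->lock);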

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure with sparse-file-heal.t test

2015-06-04 Thread Krishnan Parthasarathi

> This seems to happen because of a race between STACK_RESET and stack
> statedump. Still thinking about how to fix it without taking locks around
> writing to the file.

Why should we still keep the stack being reset as part of the pending pool of
frames? Even if we had to (I can't guess why), when we remove it we should do
the following to prevent gf_proc_dump_pending_frames from crashing.

...

call_frame_t *toreset = NULL;

LOCK (&stack->pool->lock);
{
  toreset = stack->frames;
  stack->frames = NULL;
}
UNLOCK (&stack->pool->lock);

...

Now, perform all operations that are done on stack->frames on toreset
instead. Thoughts?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures? (master)

2015-06-04 Thread Pranith Kumar Karampuri



On 06/05/2015 02:12 AM, Shyam wrote:
> Just checking,
>
> This review request: http://review.gluster.org/#/c/11073/
>
> Failed in the following tests:
>
> 1) Linux
> [20:20:16] ./tests/bugs/replicate/bug-880898.t ..
> not ok 4

This seems to be the same RC as in self-heald.t, where heal info sometimes
does not fail when the brick is down.

> Failed 1/4 subtests
> [20:20:16]
>
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10088/consoleFull
>
> 2) NetBSD (Du seems to have faced the same)
> [11:56:45] ./tests/basic/afr/sparse-file-self-heal.t ..
> not ok 52 Got "" instead of "1"
> not ok 53 Got "" instead of "1"
> not ok 54
> not ok 55 Got "2" instead of "0"
> not ok 56 Got "d41d8cd98f00b204e9800998ecf8427e" instead of
> "b6d81b360a5672d80c27430f39153e2c"
>
> not ok 60 Got "0" instead of "1"
> Failed 6/64 subtests
> [11:56:45]
>
> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6233/consoleFull

There is a bug in the statedump code path: if it races with STACK_RESET,
shd seems to crash. I see the following output indicating the process died.

kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill
-l [sigspec]

> I have not done any analysis, and also the change request should not
> affect the paths that this test is failing on.
>
> Checking the logs for Linux did not throw any more light on the cause,
> although the brick logs are not updated(?) to reflect the volume
> create and start as per the TC in (1).
>
> Anyone know anything (more) about this?
>
> Shyam

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Spurious failures? (master)

2015-06-04 Thread Shyam

Just checking,

This review request: http://review.gluster.org/#/c/11073/

Failed in the following tests:

1) Linux
[20:20:16] ./tests/bugs/replicate/bug-880898.t ..
not ok 4
Failed 1/4 subtests
[20:20:16]

http://build.gluster.org/job/rackspace-regression-2GB-triggered/10088/consoleFull

2) NetBSD (Du seems to have faced the same)
[11:56:45] ./tests/basic/afr/sparse-file-self-heal.t ..
not ok 52 Got "" instead of "1"
not ok 53 Got "" instead of "1"
not ok 54
not ok 55 Got "2" instead of "0"
not ok 56 Got "d41d8cd98f00b204e9800998ecf8427e" instead of 
"b6d81b360a5672d80c27430f39153e2c"

not ok 60 Got "0" instead of "1"
Failed 6/64 subtests
[11:56:45]

http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/6233/consoleFull

I have not done any analysis, and also the change request should not 
affect the paths that this test is failing on.


Checking the logs for Linux did not throw any more light on the cause, 
although the brick logs are not updated(?) to reflect the volume create 
and start as per the TC in (1).


Anyone know anything (more) about this?

Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Stable releases continue, GlusterFS 3.5.4 is now available

2015-06-04 Thread Niels de Vos
In a few moments, http://planet.gluster.org will show the release notes
for GlusterFS 3.5.4.

You can wait for the blog to catch up, or you can read the original post
here:

http://blog.nixpanic.net/2015/06/stable-releases-continue-glusterfs-354.html

Including the contents for the lazy people who do not want to click a
link but are still interested in what this release brings. Note that the
included version does not have clickable links; for those you need to
visit one of the blog posts.

Thanks,
Niels


GlusterFS 3.5 is the oldest stable release that is still getting
updates. Yesterday GlusterFS 3.5.4 was released, and the volunteer
packagers have already provided RPM packages for different Fedora and
EPEL versions. If you are running the 3.5 version on Fedora 20 or 21,
you are encouraged to install the updates and provide karma.

Release Notes for GlusterFS 3.5.4

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2 and
3.5.3 contain a listing of all the new features that were added and bugs
fixed in the GlusterFS 3.5 stable release.

  Bugs Fixed:

 * 1092037: Issues reported by Cppcheck static analysis tool
 * 1101138: meta-data split-brain prevents entry/data self-heal of dir/file 
respectively
 * 1115197: Directory quota does not apply on it's sub-directories
 * 1159968: glusterfs.spec.in: deprecate *.logrotate files in dist-git in 
favor of the upstream logrotate files
 * 1160711: libgfapi: use versioned symbols in libgfapi.so for compatibility
 * 1161102: self heal info logs are filled up with messages reporting 
split-brain
 * 1162150: AFR gives EROFS when fop fails on all subvolumes when 
client-quorum is enabled
 * 1162226: bulk remove xattr should not fail if removexattr fails with 
ENOATTR/ENODATA
 * 1162230: quota xattrs are exposed in lookup and getxattr
 * 1162767: DHT: Rebalance- Rebalance process crash after remove-brick
 * 1166275: Directory fd leaks in index translator
 * 1168173: Regression tests fail in quota-anon-fs-nfs.t
 * 1173515: [HC] - mount.glusterfs fails to check return of mount command.
 * 1174250: Glusterfs outputs a lot of warnings and errors when quota is 
enabled
 * 1177339: entry self-heal in 3.5 and 3.6 are not compatible
 * 1177928: Directories not visible anymore after add-brick, new brick dirs 
not part of old bricks
 * 1184528: Some newly created folders have root ownership although created 
by unprivileged user
 * 1186121: tar on a gluster directory gives message "file changed as we 
read it" even though no updates to file in progress
 * 1190633: self-heal-algorithm with option "full" doesn't heal sparse 
files correctly
 * 1191006: Building argp-standalone breaks nightly builds on Fedora Rawhide
 * 1192832: log files get flooded when removexattr() can't find a specified 
key or value
 * 1200764: [AFR] Core dump and crash observed during disk replacement case
 * 1202675: Perf: readdirp in replicated volumes causes performance degrade
 * 1211841: glusterfs-api.pc versioning breaks QEMU
 * 1222150: readdirp return 64bits inodes even if enable-ino32 is set

  Known Issues:

 * The following configuration changes are necessary for 'qemu' and 'samba 
vfs plugin' integration with libgfapi to work seamlessly:
 1. gluster volume set <volname> server.allow-insecure on
 2. restarting the volume is necessary

 gluster volume stop <volname>
 gluster volume start <volname>

 3. Edit /etc/glusterfs/glusterd.vol to contain this line:

 option rpc-auth-allow-insecure on

 4. restarting glusterd is necessary

 service glusterd restart

   More details are also documented in the Gluster Wiki on the Libgfapi 
with qemu libvirt page.
 * For Block Device translator based volumes, the open-behind translator on 
the client side needs to be disabled.

 gluster volume set <volname> performance.open-behind disabled

 * libgfapi clients calling glfs_fini before a successful glfs_init will 
cause the client to hang as reported here. The workaround is NOT to call 
glfs_fini for error cases encountered before a successful glfs_init (see the 
short illustration after this list). This is being tracked in Bug 1134050 for 
glusterfs-3.5 and Bug 1093594 for mainline.
 * If the /var/run/gluster directory does not exist, enabling quota will 
likely fail (Bug 1117888).
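
As an illustration of that workaround, here is a minimal libgfapi sketch that
only calls glfs_fini after glfs_init has succeeded. The volume name and server
address are placeholders and error handling is kept to a bare minimum; this is
not code from the release.

#include <stdio.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        /* "testvol" and "localhost" are placeholders */
        glfs_t *fs = glfs_new ("testvol");
        if (!fs)
                return 1;

        glfs_set_volfile_server (fs, "tcp", "localhost", 24007);

        if (glfs_init (fs) != 0) {
                /* workaround for the known issue: do NOT call glfs_fini()
                 * when glfs_init() has not succeeded, otherwise the client
                 * may hang */
                fprintf (stderr, "glfs_init failed\n");
                return 1;
        }

        /* ... use the volume through the other glfs_* calls ... */

        glfs_fini (fs);
        return 0;
}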


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] The Manila RFEs and why so

2015-06-04 Thread Vijay Bellur

On 06/03/2015 07:29 PM, Csaba Henk wrote:
> FYI -- efforts so far and perspectives as of my
> understanding:
>
> As noted, the "Smart volume management" group is a
> singleton, but that single element is tricky. We
> have heard promises of a glusterd rewrite that would
> include the intelligence / structure for such a feature;
> also we toyed around with implementing a partial version of
> it with configuration management software (Ansible), but
> that was too experimental (the whole concept) to dedicate
> ourselves to it, so we discontinued that.
>
> OTOH, the directory level features are many but can
> possibly be addressed with a single well chosen volume
> variant (something like lv-s for all top level
> directories?) -- plus the UI would need to be tailored
> to them.
>
> The query features are not vital but have the advantage
> of being simpler (especially the size query), which would
> be a reason for putting them on the schedule.
>
> What we would like: prioritize between "Directory level
> operations" and "Smart volume management", make a plan
> for that and execute that plan.




Hi Csaba - Thanks for the detailed write up!

Would you be able to shed more light on the nature of the features/APIs 
planned to be exposed through Manila in Liberty? Having that information 
would play an important part in prioritizing and arriving at a decision.


Regards,
Vijay



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] HEAD of release-3.7 branch is broken

2015-06-04 Thread Shyam

On 06/04/2015 12:26 PM, Shyam wrote:
> http://review.gluster.org/#/c/10967/ request is the one that has these
> changes.

This is now merged and the compile issue should be resolved.

Patches affected by this would need to be rebased.
(list that I see that have already failed)
- http://review.gluster.org/#/c/11034/

Need to check the current run queue as well.

> Doing a final review and merging the same.
>
> Shyam
>
> On 06/04/2015 12:22 PM, Kaleb KEITHLEY wrote:
>>
>> Recent commits to xlators/cluster/dht/src/dht-common.c call functions
>> that are not defined.
>>
>> Was a file with dht_inode_ctx_get_mig_info() and
>> dht_mig_info_is_invalid() definitions omitted in a change set?
>>
>> Right now the release-3.7 branch does not compile in jenkins smoke tests
>> and on fedora 22.
>>
>> --
>>
>> Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] HEAD of release-3.7 branch is broken

2015-06-04 Thread Shyam
http://review.gluster.org/#/c/10967/ request is the one that has these 
changes.


Doing a final review and merging the same.

Shyam

On 06/04/2015 12:22 PM, Kaleb KEITHLEY wrote:
>
> Recent commits to xlators/cluster/dht/src/dht-common.c call functions
> that are not defined.
>
> Was a file with dht_inode_ctx_get_mig_info() and
> dht_mig_info_is_invalid() definitions omitted in a change set?
>
> Right now the release-3.7 branch does not compile in jenkins smoke tests
> and on fedora 22.
>
> --
>
> Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] HEAD of release-3.7 branch is broken

2015-06-04 Thread Kaleb KEITHLEY


Recent commits to xlators/cluster/dht/src/dht-common.c call functions 
that are not defined.


Was a file with dht_inode_ctx_get_mig_info() and 
dht_mig_info_is_invalid() definitions omitted in a change set?


Right now the release-3.7 branch does not compile in jenkins smoke tests 
and on fedora 22.


--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/bugs/glusterd/bug-948686.t gave a core

2015-06-04 Thread Krutika Dhananjay
This crash gives me a strange sense of deja vu. I remember seeing an invalid 
(IA_INVAL) inode on which opendir() was being wound by SHD before. 
I will look into this. 

-Krutika 

- Original Message -

> From: "Pranith Kumar Karampuri" 
> To: "Ravishankar N" , "Krutika Dhananjay"
> , "Anuradha Talur" 
> Cc: "Gluster Devel" 
> Sent: Thursday, June 4, 2015 6:38:33 PM
> Subject: tests/bugs/glusterd/bug-948686.t gave a core

> Glustershd is crashing because afr wound xattrop with null gfid in loc.
> Could one of you look into this failure?
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10095/consoleFull

> Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tests/bugs/glusterd/bug-948686.t gave a core

2015-06-04 Thread Pranith Kumar Karampuri
Glustershd is crashing because afr wound xattrop with null gfid in loc. 
Could one of you look into this failure? 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10095/consoleFull


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious failure with sparse-file-heal.t test

2015-06-04 Thread Pranith Kumar Karampuri
This seems to happen because of a race between STACK_RESET and stack 
statedump. Still thinking about how to fix it without taking locks around 
writing to the file.


Pranith
On 06/04/2015 02:13 PM, Pranith Kumar Karampuri wrote:
> I see that statedump is generating a core, which is why this test
> spuriously fails. I am looking into it.
>
> Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] RFC: functions and structure for generic refcounting

2015-06-04 Thread Niels de Vos
On Mon, Jun 01, 2015 at 06:28:18AM -0400, Krishnan Parthasarathi wrote:
> 
> > While looking into some of the races that seem possible in the new
> > auth-cache and netgroup/export features for Gluster/NFS, I noticed that
> > we do not really have a common way to do reference counting. I could
> > really use that, so I've put something together and would like to
> > request some comments on it.
> > 
> > Patch: http://review.gluster.org/11022
> 
> Much needed unification of ref-counting mechanisms in our code base!
> 
> We should avoid exporting gf_ref_{alloc, free}. It would make different
> life-times for the ref-counted object and the gf_ref_t object possible
> (without a seg-fault), i.e., the ref-counted object could be around even
> after the last put has been called on the corresponding gf_ref_t object
> (if gf_release_t wasn't defined at all). It puts an extra burden on the
> reader trying to reason about ref-counted objects' lifetimes during
> coredump analysis. If we made embedded gf_ref_t objects the only possible
> way to extend ref-counting to your favourite object, we would have the
> following neat property:
> 
> - gf_ref_t->ref is "out of bounds" if the corresponding container is
>   "out of bounds".
> 
> and its converse. Two theorems for free, by construction.

Thanks for the comments. I've removed the gf_ref_alloc() and
gf_ref_free() functions in the updated change. Instead of a "struct
gf_ref", I made it a typedef "gf_ref_t", which hopefully is less likely
to be used through a pointer and more likely to be embedded in a struct.

For this change, and eventual follow up changes that rewrite existing
reference counters to use it, I have opened bug 1228157:
  https://bugzilla.redhat.com/show_bug.cgi?id=1228157
  Provide and use a common way to do reference counting of (internal) structures

More feedback on http://review.gluster.org/11022 is much appreciated.
Comments with "yes, I like the idea" are also good, if you do not feel
comfortable with reviewing the code.
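
As an illustration of the embedded usage, here is a minimal sketch that
follows the calls used in the quoted example further down (assuming the
gf_ref_init (ref, release_fn, data), gf_ref_get() and gf_ref_put() signatures
from the proposed change); the cache_entry structure and the gf_mt_example
memory type are made up for this sketch and are not part of the patch:

#include "refcount.h"

/* hypothetical object embedding the counter */
struct cache_entry {
        gf_ref_t  ref;     /* embedded, shares the object's lifetime */
        char     *key;
};

/* release callback, run by the last gf_ref_put() */
static void
cache_entry_release (void *data)
{
        struct cache_entry *entry = data;

        GF_FREE (entry->key);
        GF_FREE (entry);
}

static struct cache_entry *
cache_entry_new (const char *key)
{
        struct cache_entry *entry = GF_CALLOC (1, sizeof (*entry),
                                               gf_mt_example);

        if (!entry)
                return NULL;

        entry->key = gf_strdup (key);
        /* the caller owns the initial reference */
        gf_ref_init (&entry->ref, cache_entry_release, entry);

        return entry;
}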

Thanks,
Niels

> P.S. This doesn't prevent consumers from defining their own helpers for
> alloc/free where they absolutely need them. Niels' NFS netgroup fix didn't
> need it.
> 
>   
> > It can be used to automatically recycle a twig when all the grapes
> > attached to the twig have been plucked. For the record, we keep a
> > reference to all grapes in our 'growing_grapes' list too. Let's assume
> > our record keeping is *so* good that we only need to go by the list to
> > know which grapes are ripe enough to pluck.
> > 
> > You might notice that I'm not a grape farmer ;-)
> 
> Grape collection is a more fruitful name for what we have known as
> garbage collection all these years. Sigh!
>  
> > 
> > 
> > #include "refcount.h"
> > 
> > /* we track all the grapes of the plant, but only pluck the ripe ones */
> > list_head growing_grapes;
> > /* ripe grapes are removed from growing_grapes, and put in the basket */
> > list_head grape_basket;
> > 
> > 
> > /* a twig with grapes */
> > struct twig {
> > struct gf_ref *ref;  /* for refcounting */
> > 
> > struct plant *plant; /* plant attached to this twig */
> > }
> > 
> > /* a grape that is/was attached to a twig */
> > struct grape {
> > struct twig *twig;   /* twig when still growing */
> > unsigned int tasty;  /* mark from 1-10 on tastiness */
> > };
> > 
> > 
> > /* called by gf_ref_put() when all grapes have been plucked */
> > void
> > recycle_twig (void *to_free)
> > {
> > struct twig *twig = (struct twig *) to_free;
> > 
> > cut_from_plant (twig->plant);
> > GF_FREE (twig);
> > }
> > 
> > /* you'll execute this whenever you pass the grapes plant */
> > void
> > check_grapes ()
> > {
> > struct grape *grape = NULL;
> > 
> > foreach_grape (grape, growing_grapes) {
> > if (is_ripe (grape)) {
> > list_del (grape, growing_grapes);
> > 
> > /* the grape has been removed from the twig */
> > twig = grape->twig;
> > grape->twig = NULL;
> > gf_ref_put (&twig->ref);
> > /* the last gf_ref_put() will call recycle_twig() */
> > 
> > /* put the grape in the basket */
> > list_add (grape_basket, grape);
> > }
> > }
> > }
> > 
> > void
> > grow_new_grape (struct twig *twig)
> > {
> > struct grape *grape = GF_MALLOC (sizeof (struct grape), gf_mt_grapes);
> > 
> > if (gf_ref_get (&twig->ref) == 0) {
> > /* oops, something went wrong! no locking implemented yet? */
> > GF_FREE (grape);
> > return;
> > }
> > 
> > grape->twig = twig;
> > /* the grape has just started to grow */
> > list_add (growing_grapes, grape);
> > }
> > 
> > /* there are many twigs with grapes on this plant */
> > struct twig *
> > grow_twig_with_grapes (struct plant *plant, int grape_count)
> > {
> > int i = 0;
> > struct twig *twig = GF_MALLOC (sizeof (struct twig), gf_mt_grapes);
> > 
> > gf_ref_init (&twig->ref, recycle_twig

Re: [Gluster-devel] spurious failure of tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t

2015-06-04 Thread Atin Mukherjee


On 06/04/2015 02:11 PM, Pranith Kumar Karampuri wrote:
> hi Atin,
>   Could you help with this please:
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10096/consoleFull
I see glusterd failed to start in test 6 with the following reason:

[2015-06-04 04:23:34.669874] E [MSGID: 100018]
[glusterfsd.c:1929:glusterfs_pidfile_update] 0-glusterfsd: pidfile
/d/backends/2/glusterd.pid lock failed [Resource temporarily unavailable]
This indicates that the lock had already been acquired, which means
cleanup_and_exit () was not invoked for some reason during the earlier
shutdown; I don't see glusterd logging "shutting down" for
kill_glusterd 2. I am trying to reproduce it and will keep you posted.
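
For context, a minimal sketch of the kind of non-blocking pidfile lock that
fails with EAGAIN ("Resource temporarily unavailable") when a previous
instance still holds it. This is only an illustration, not the actual
glusterfs_pidfile_update() code:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Try to take an exclusive, non-blocking lock on the pidfile.
 * If an earlier glusterd still holds the lock (e.g. because
 * cleanup_and_exit() never ran), lockf() fails with EAGAIN/EACCES. */
int
pidfile_trylock (const char *path)
{
        int fd = open (path, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
                return -1;

        if (lockf (fd, F_TLOCK, 0) < 0) {
                fprintf (stderr, "pidfile %s lock failed [%s]\n",
                         path, strerror (errno));
                close (fd);
                return -1;
        }

        /* keep fd open for the daemon's lifetime; the lock is
         * released automatically when the process exits */
        return fd;
}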

~Atin
> 
> 
> Pranith

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Failure in volume-snapshot.t

2015-06-04 Thread Vijay Bellur

On 06/04/2015 11:44 AM, Avra Sengupta wrote:
> Requesting again. Can I get access to one of the slave machines so that
> I can investigate the failure in volume-snapshot.t?



Hi Avra - I have reserved slave34 for your debugging. Please let us know 
once done.


Thanks,
Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] self-heald.t failures

2015-06-04 Thread Pranith Kumar Karampuri
Yeah, I am looking into it. Basically, "gluster volume heal <volname> info" 
must fail after the volume is stopped, but sometimes it doesn't seem to :-(. 
I will need some time to RC it and will update the list.


Pranith
On 06/04/2015 02:19 PM, Vijay Bellur wrote:
> On 06/03/2015 10:30 AM, Vijay Bellur wrote:
>> self-heald.t seems to fail intermittently.
>>
>> One such instance was seen recently [1]. Can somebody look into this
>> please?
>>
>> ./tests/basic/afr/self-heald.t (Wstat: 0 Tests: 83 Failed: 1) Failed
>> test: 78
>>
>> Thanks,
>> Vijay
>>
>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10029/consoleFull
>
> One more failure of self-heald.t:
>
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10092/consoleFull
>
> -Vijay



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] self-heald.t failures

2015-06-04 Thread Vijay Bellur

On 06/03/2015 10:30 AM, Vijay Bellur wrote:
> self-heald.t seems to fail intermittently.
>
> One such instance was seen recently [1]. Can somebody look into this
> please?
>
> ./tests/basic/afr/self-heald.t (Wstat: 0 Tests: 83 Failed: 1) Failed
> test: 78
>
> Thanks,
> Vijay
>
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10029/consoleFull



One more failure of self-heald.t:

http://build.gluster.org/job/rackspace-regression-2GB-triggered/10092/consoleFull

-Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] spurious failure with sparse-file-heal.t test

2015-06-04 Thread Pranith Kumar Karampuri
I see that statedump is generating a core, which is why this test 
spuriously fails. I am looking into it.


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] spurious failure of tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t

2015-06-04 Thread Pranith Kumar Karampuri

hi Atin,
  Could you help with this please: 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10096/consoleFull


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel