Ignore this, it's just a test for measuring a delay issue with mailman.
+ Justin
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi
On 23 Aug 2016, at 20:27, Justin Clift <jus...@postgresql.org> wrote:
> On 11 Aug 2016, at 21:23, Amye Scavarda wrote:
>> The Red Hat Gluster Storage documentation team and I had a conversation
>> about how we can make our upstream documentation more consistent and impr
On 11 Aug 2016, at 21:23, Amye Scavarda wrote:
> The Red Hat Gluster Storage documentation team and I had a conversation
> about how we can make our upstream documentation more consistent and improved
> for our users, and they're willing to work with us to find where the major
> gaps are in our
On 18 Jun 2015, at 16:57, Emmanuel Dreyfus m...@netbsd.org wrote:
Niels de Vos nde...@redhat.com wrote:
I'm not sure what limitation you mean. Did we reach the limit of slaves
that Jenkins can reasonably address?
No, I mean its inability to catch a new DNS record.
Priority wise, my
On 18 Jun 2015, at 09:19, Niels de Vos nde...@redhat.com wrote:
On Thu, Jun 18, 2015 at 12:57:05AM +0100, Justin Clift wrote:
On 17 Jun 2015, at 20:14, Niels de Vos nde...@redhat.com wrote:
On Wed, Jun 17, 2015 at 03:14:31PM +0200, Michael Scherer wrote:
On Wednesday 17 June 2015 at 11:58 +0100
On 17 Jun 2015, at 07:21, Kaushal M kshlms...@gmail.com wrote:
One more question. I keep hearing about QoS for volumes as a feature.
How will we guarantee service quality for all the bricks from a single
server? Even if we weren't doing QoS, we make sure that operations on a
brick don't DOS
and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
issue is likely related to the hardware
firewall in the iWeb infrastructure. It's probably acting up. :)
Regards and best wishes,
Justin Clift
That'd be Awesome. :)
+ Justin
On 15 Jun 2015, at 20:53, Richard Wareing rware...@fb.com wrote:
Hey Nithin,
We have IPv6 going as well (v3.4.x v3.6.x), so I might be able to help out
here and perhaps combine our efforts. We did something similar here, however
we also tackled the NFS
to how common IPv4 NAT does, but for gluster traffic
:)
Regards and best wishes,
Justin Clift
Potentially relevant to a GlusterD rewrite, since we've
mentioned Go as a possibility a few times:
https://vagabond.github.io/rants/2015/06/05/a-year-with-go/
https://news.ycombinator.com/item?id=9668302
+ Justin
was intending to turn them off today, but it sounds like they should
be left on for a while longer for people to investigate with.
Regards and best wishes,
Justin Clift
On 21 May 2015, at 14:22, Avra Sengupta aseng...@redhat.com wrote:
Hi,
Can I get access to a rackspace VM so that I can debug
. :)
Regards and best wishes,
Justin Clift
On 17 May 2015, at 13:36, Vijay Bellur vbel...@redhat.com wrote:
On 05/17/2015 02:32 PM, Vijay Bellur wrote:
[Adding gluster-devel]
On 05/16/2015 11:31 PM, Niels de Vos wrote:
On Sat, May 16, 2015 at 06:32:00PM +0200, Niels de Vos wrote:
It seems that many failures of the regression tests (at
that
blocked until 3.7.x so people on 3.6.3 aren't automatically
upgraded via package update.
Regards and best wishes,
Justin Clift
On 8 May 2015, at 13:16, Jeff Darcy jda...@redhat.com wrote:
snip
Perhaps the change that's needed
is to make the fixing of likely-spurious test failures a higher
priority than adding new features.
YES! A million times Yes.
We need to move this project to operating with _0 regression
On 8 May 2015, at 16:19, Jeff Darcy jda...@redhat.com wrote:
snip
Proposal 2:
Use ip address instead of host name, because it takes some good amount
of time to resolve from host name, and even some times causes spurious
failure.
If resolution is taking a long time, that's probably fixable
On 8 May 2015, at 04:15, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
2) If the same test fails on different patches more than 'x' number of times
we should do something drastic. Let us decide on 'x' and what the drastic
measure is.
Sure. That number is 0.
If it fails more than
On 8 May 2015, at 10:02, Mohammed Rafi K C rkavu...@redhat.com wrote:
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete.
So I guess this is the right time to think about enhancing our
Thanks Paul. That's for an ancient series of GlusterFS (3.4.x) we're
not really looking to release further updates for.
If that's the version you guys are running in your production
environment, have you looked into moving to a newer release series?
+ Justin
On 8 May 2015, at 10:55, Paul
On 8 May 2015, at 18:41, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
Break the regression tests into parts that can be run in parallel.
So, instead of the regression testing for a particular CR going from the
first test to the last in a serial sequence, we break it up into a
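A minimal sketch of that chunking idea, with a hypothetical test list (the
file names and the round-robin split are illustrative, not the actual
run-tests.sh logic):

```shell
# Partition a list of .t tests into N chunks, one per Jenkins slave,
# using a simple round-robin split. Test names are made up for the demo.
N=3
rm -f /tmp/chunk.*.list
i=0
for t in basic/mount.t basic/quota.t bugs/bug-1.t bugs/bug-2.t features/ssl.t; do
  echo "$t" >> "/tmp/chunk.$((i % N)).list"
  i=$((i + 1))
done
wc -l /tmp/chunk.*.list
```

Each chunk file could then be dispatched to a separate slave and run
serially there.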
This could be really useful for us:
https://github.com/blog/1995-github-jupyter-notebooks-3
GitHub now supports Jupyter notebooks directly. Similar to how
Markdown (.md) files are displayed in their rendered format,
Jupyter notebook (.ipynb) files are now too.
Should make for better docs for
Fuzzy testing has been added to SQLite's standard testing strategy.
Wonder if it'd be useful for us too... ?
+ Justin
Begin forwarded message:
From: Simon Slavin slav...@bigfraud.org
Subject: Re: [sqlite] SQLite 3.8.10 enters testing
Date: 4 May 2015 22:03:59 BST
To: General Discussion of
On 5 May 2015, at 03:40, Jeff Darcy jda...@redhat.com wrote:
Jeff's patch failed again with same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken tests or
race conditions showing up) in the recent past [1] and I am inclined to block
3.7.0 GA along with acceptance of patches until we fix *all*
I'm hoping this is mostly due to bugs in the older version of Gerrit +
GitHub plugin we're using.
We'll upgrade in a few weeks, and see how it goes then... ;)
+ Justin
On 1 May 2015, at 03:38, Gaurav Garg gg...@redhat.com wrote:
Hi,
I was also having the same problems many times; I fixed
On 1 May 2015, at 16:08, Emmanuel Dreyfus m...@netbsd.org wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
I was not able to re-create glupy failure. I see that netbsd
is not archiving logs like the linux regression. Do you mind adding that
one? I think kaushal and Vijay
On 29 Apr 2015, at 08:05, Niels de Vos nde...@redhat.com wrote:
On Wed, Apr 29, 2015 at 02:40:54AM -0400, Jeff Darcy wrote:
label.Label-Name.copyAllScoresOnTrivialRebase
If true, all scores for the label are copied forward when a new patch
set is uploaded that is a trivial rebase. A new
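For reference, label options like this live in a project's project.config
on the refs/meta/config branch; a sketch of what enabling it for the
Verified label might look like (the label name and score values here are
the common defaults, assumed rather than taken from our setup):

```
[label "Verified"]
    function = MaxWithBlock
    value = -1 Fails
    value = 0 No score
    value = +1 Verified
    copyAllScoresOnTrivialRebase = true
```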
, and be at
the meeting. :)
https://public.pad.fsfe.org/p/gluster-community-meetings
Regards and best wishes,
Justin Clift
On 29 Apr 2015, at 12:30, Justin Clift jus...@gluster.org wrote:
Reminder!!!
The weekly Gluster Community meeting is in 30 mins, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
Thanks for everyone for attending
Does this mean we're officially no longer supporting 32 bit architectures?
(or is that just on x86?)
+ Justin
On 28 Apr 2015, at 12:45, Kaushal M kshlms...@gmail.com wrote:
Found the problem. The NetBSD slaves are running a 32-bit kernel and
userspace.
```
nbslave7a# uname -p
i386
```
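A quick, hedged way to check a slave's userland word size (assumes the
POSIX getconf utility, which should be present on both Linux and NetBSD):

```shell
# LONG_BIT is 32 on an i386 userland like nbslave7a above, 64 on amd64.
bits=$(getconf LONG_BIT)
echo "${bits}-bit userland"
```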
This sounds like it might be useful for us:
https://gerrit-documentation.storage.googleapis.com/Documentation/2.9.4/config-labels.html#label_copyAllScoresOnTrivialRebase
Yes/no/?
+ Justin
and best wishes,
Justin Clift
On 23 Apr 2015, at 01:18, Jeff Darcy jda...@redhat.com wrote:
I just had to clean up a couple of these - 7327 and 7331. Fortunately,
they both seem to have gone on their merry way instead of dying. Both
were in the pre-mount stage of their setup, but did have mounts active
and gsyncd
On 23 Apr 2015, at 05:47, Joe Julian j...@julianfamily.org wrote:
I suggested it. Some other people in North America besides just myself
expressed an interest in being involved, but could not make early (or very
early) morning meetings. Since the globe has this cool spherical feature I
On 22 Apr 2015, at 11:59, Justin Clift jus...@gluster.org wrote:
Reminder!!!
The weekly Gluster Community meeting is in 30 mins, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
Thanks everyone who attended. Quite
,
Justin Clift
it should be is it?
Regards and best wishes,
Justin Clift
On 22 Apr 2015, at 15:39, Jeff Darcy jda...@redhat.com wrote:
As we know, we have a patch from Manu which re-triggers a given failed
test. The idea was to reduce the burden of re-triggering the regression,
but I've been noticing it is failing in 2nd attempt as well and I've
seen this happening
running the very latest release of Gerrit + the GitHub auth plugin, and
that allows anonymous read access.
So, we might be upgrading shortly. ;)
Regards and best wishes,
Justin Clift
, and be at
the meeting. :)
https://public.pad.fsfe.org/p/gluster-community-meetings
Regards and best wishes,
Justin Clift
On 22 Apr 2015, at 09:24, Anoop C S achir...@redhat.com wrote:
On 04/22/2015 12:46 PM, Justin Clift wrote:
On 22 Apr 2015, at 07:42, Justin Clift jus...@gluster.org wrote:
On 20 Apr 2015, at 04:43, Aravinda avish...@redhat.com wrote:
Is it not possible to view the patches if not logged in? I
On 20 Apr 2015, at 14:14, Jeff Darcy jda...@redhat.com wrote:
The same problems that affect mainline are affecting release-3.7 too. We
need to get over this soon.
I think it's time to start skipping (or even deleting) some tests. For
example, volume-snapshot-clone.t alone is responsible for
On 20 Apr 2015, at 08:11, Atin Mukherjee amukh...@redhat.com wrote:
On 04/20/2015 08:35 AM, Vijay Bellur wrote:
snip
The procedure for migration from an admin perspective is quite involved
and account migrations are better done in batches. Instead of mailing
any of us directly, can you please
On 20 Apr 2015, at 20:02, Vijay Bellur vbel...@redhat.com wrote:
On 04/21/2015 12:19 AM, Justin Clift wrote:
On 20 Apr 2015, at 18:53, Jeff Darcy jda...@redhat.com wrote:
I propose that we don't drop test units but provide an ack to patches
that have known regression failures.
IIRC
not sure why yet, but will start looking into it
shortly (after coffee). :)
+ Justin
~aravinda
On 04/20/2015 08:35 AM, Vijay Bellur wrote:
On 04/20/2015 04:25 AM, Justin Clift wrote:
The good news:
1) Gerrit is kind of :/ updated. The very very latest versions
(released Friday) don't
really
needing atm).
+ Justin
On 19 Apr 2015, at 11:38, Justin Clift jus...@gluster.org wrote:
Gerrit and Jenkins are going to be shutting off pretty soon.
So, any job running in Jenkins will be aborted. ;)
*Please don't* submit new CR's, or run any new Jenkins jobs
from now until
On 18 Apr 2015, at 07:49, Raghavendra Talur raghavendra.ta...@gmail.com wrote:
snip
Use the second search box, the one below Google search for Gluster.
Works for me on both Chrome and Firefox on Android and Fedora 21.
Please try again and let me know :)
Errr, which second search box? :)
On 18 Apr 2015, at 16:01, Niels de Vos nde...@redhat.com wrote:
On Sat, Apr 18, 2015 at 03:14:41PM +0100, Justin Clift wrote:
On 18 Apr 2015, at 07:49, Raghavendra Talur raghavendra.ta...@gmail.com
wrote:
snip
Use the second search box, the one below Google search for Gluster.
Works for me
for updating, until it's ready.
I wish there was a better way... but there doesn't seem
to be. :/
Sorry in advance, etc.
Regards and best wishes,
Justin Clift
need to merge our new GitHub account info with
our existing accounts.
After that though, we should be good.
Is anyone really against this idea?
Regards and best wishes,
Justin Clift
On 16 Apr 2015, at 05:28, Emmanuel Dreyfus m...@netbsd.org wrote:
Hi
We all know spurious regression failures are a problem. In order to
minimize their impact, the NetBSD regression restarts the whole test suite in
case of error, so that spurious failures do not cause an undeserved
verified=-1
On 15 Apr 2015, at 12:54, Ravishankar N ravishan...@redhat.com wrote:
On 04/15/2015 03:31 PM, Justin Clift wrote:
On 15 Apr 2015, at 08:09, Ravishankar N ravishan...@redhat.com wrote:
On 04/14/2015 11:57 PM, Vijay Bellur wrote:
From here on, we would need patches to be explicitly sent
let me know. (or send a pull request adding
it to the config file)
Hopefully that's workable for people, and helps us get a
bunch more contributors to the projects over time... :D
Regards and best wishes,
Justin Clift
On 16 Apr 2015, at 03:56, Jeff Darcy jda...@redhat.com wrote:
Noticing several of the recent regression tests are being marked as SUCCESS
in Jenkins (then Gerrit), when they're clearly failing.
eg:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/6968/console
Noticing several of the recent regression tests are being marked as SUCCESS
in Jenkins (then Gerrit), when they're clearly failing.
eg:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/6968/console
http://build.gluster.org/job/rackspace-regression-2GB-triggered/6969/console
On 15 Apr 2015, at 08:09, Ravishankar N ravishan...@redhat.com wrote:
On 04/14/2015 11:57 PM, Vijay Bellur wrote:
From here on, we would need patches to be explicitly sent on release-3.7 for
the content to be included in a 3.7.x release. Please ensure that you send a
backport for release-3.7
, and people not involved in spurious failure fixing
are still able to do dev work on master.
?
Regards and best wishes,
Justin Clift
:/)
Regards and best wishes,
Justin Clift
-failures
Regards and best wishes,
Justin Clift
On 8 Apr 2015, at 14:13, Pranith Kumar Karampuri pkara...@redhat.com wrote:
On 04/08/2015 06:20 PM, Justin Clift wrote:
snip
Hagarth mentioned in the weekly IRC meeting that you have an
idea what might be causing the regression tests to generate
cores?
Can you outline that quickly, as Jeff
Just an FYI. Shutting down Gerrit for a few minutes, to move around
some files on the Gerrit server (need to free up space urgently).
Shouldn't be too long. (fingers crossed) :)
+ Justin
On 7 Apr 2015, at 15:45, Justin Clift jus...@gluster.org wrote:
Just an FYI. Shutting down Gerrit for a few minutes, to move around
some files on the Gerrit server (need to free up space urgently).
Shouldn't be too long. (fingers crossed) :)
... and it hasn't returned from rebooting after
On 7 Apr 2015, at 16:31, Justin Clift jus...@gluster.org wrote:
On 7 Apr 2015, at 15:45, Justin Clift jus...@gluster.org wrote:
Just an FYI. Shutting down Gerrit for a few minutes, to move around
some files on the Gerrit server (need to free up space urgently).
Shouldn't be too long
this patch get submitted
and merged? :)
Regards and best wishes,
Justin Clift
On 31 Mar 2015, at 08:15, Niels de Vos nde...@redhat.com wrote:
On Tue, Mar 31, 2015 at 12:20:19PM +0530, Kaushal M wrote:
IMHO, hardening and security should be left to the individual
distributions and the package maintainers. Generally, each distribution has
its own policies with regards
On 2 Apr 2015, at 12:10, Vijay Bellur vbel...@redhat.com wrote:
On 04/02/2015 06:27 AM, Jeff Darcy wrote:
My recommendations:
(1) Apply the -Wno-error=cpp and -Wno-error=maybe-uninitialized
changes wherever they need to be applied so that they're
effective during
On 2 Apr 2015, at 01:57, Jeff Darcy jda...@redhat.com wrote:
As many of you have undoubtedly noticed, we're now in a situation where
*all* regression builds are now failing, with something like this:
-
cc1: warnings being treated as errors
On 2 Apr 2015, at 12:10, Vijay Bellur vbel...@redhat.com wrote:
On 04/02/2015 06:27 AM, Jeff Darcy wrote:
My recommendations:
(1) Apply the -Wno-error=cpp and -Wno-error=maybe-uninitialized
changes wherever they need to be applied so that they're
effective during
added last night.
Should we adjust it?
Regards and best wishes,
Justin Clift
On 2 Apr 2015, at 05:18, Emmanuel Dreyfus m...@netbsd.org wrote:
Hi
I am now convinced the solution to our multiple regression problem is to
introduce more Gluster Build System users: one for CentOS regression,
another one for NetBSD regression (and one for each smoke test, as
explained
On 1 Apr 2015, at 19:47, Jeff Darcy jda...@redhat.com wrote:
When doing an initial burn in test (regression run on master head
of GlusterFS git), it coredumped on the new slave23.cloud.gluster.org VM.
(yeah, I'm reusing VM names)
On 2 Apr 2015, at 14:42, Jeff Darcy jda...@redhat.com wrote:
Is it ok to put slave23.cloud.gluster.org into general rotation, so it
runs regression jobs along with the rest?
Sounds OK to me. Do we have a place to store the core tarball, just in
case we decide we need to go back to it some
On 2 Apr 2015, at 14:08, Niels de Vos nde...@redhat.com wrote:
On Thu, Apr 02, 2015 at 01:21:57PM +0100, Justin Clift wrote:
On 31 Mar 2015, at 08:15, Niels de Vos nde...@redhat.com wrote:
On Tue, Mar 31, 2015 at 12:20:19PM +0530, Kaushal M wrote:
IMHO, doing hardening and security should
On 1 Apr 2015, at 13:14, Tom Callaway tcall...@redhat.com wrote:
Hello Gluster Ant People!
Right now, if you go to gluster.org, you see our current slogan in giant text:
Write once, read everywhere
However, no one seems to be super-excited about that slogan. It doesn't
really help
On 1 Apr 2015, at 17:20, Marcelo Barbosa fireman...@fedoraproject.org wrote:
yep, we use the Gerrit Trigger plugin on Jenkins; it triggers on all
requests for all repositories and all branches, running the auto-QA tests
and voting with the Verified ACL, for example:
On 1 Apr 2015, at 10:57, Emmanuel Dreyfus m...@netbsd.org wrote:
Hi
crypt.t was recently broken in NetBSD regression. The glusterfs returns
a node with file type invalid to FUSE, and that breaks the test.
After running a git bisect, I found the offending commit after which
this behavior
On 1 Apr 2015, at 03:48, Justin Clift jus...@gluster.org wrote:
On 31 Mar 2015, at 14:18, Shyam srang...@redhat.com wrote:
snip
Also, most of the regression runs produced cores. Here are
the first two:
http://ded.ninja/gluster/blk0/
There are 4 cores here, 3 pointing to the (by now
://build.gluster.org/job/regression-test-burn-in/16/console
Does anyone have time to check the coredump, and see if this is
the bug we already know about?
Regards and best wishes,
Justin Clift
On 1 Apr 2015, at 19:51, Shyam srang...@redhat.com wrote:
On 04/01/2015 02:47 PM, Jeff Darcy wrote:
When doing an initial burn in test (regression run on master head
of GlusterFS git), it coredumped on the new slave23.cloud.gluster.org VM.
(yeah, I'm reusing VM names)
On 1 Apr 2015, at 17:38, Emmanuel Dreyfus m...@netbsd.org wrote:
Justin Clift jus...@gluster.org wrote:
We need some kind of solution.
What about adding another nb7build user in gerrit? That way results will
not conflict.
I'm not sure. However, Vijay's now added me as an admin in our
On 1 Apr 2015, at 20:09, Vijay Bellur vbel...@redhat.com wrote:
snip
My sanity run got blown due to this as I use -Wall -Werror during compilation.
Submitted http://review.gluster.org/10105 to correct this.
Should we add -Wall -Werror to the compile options for our CentOS 6.x
regression runs?
On 1 Apr 2015, at 20:22, Vijay Bellur vbel...@redhat.com wrote:
On 04/02/2015 12:46 AM, Justin Clift wrote:
On 1 Apr 2015, at 20:09, Vijay Bellur vbel...@redhat.com wrote:
snip
My sanity run got blown due to this as I use -Wall -Werror during
compilation.
Submitted http
/blk0/
http://ded.ninja/gluster/blk1/
Hoping someone has some time to check those quickly and see
if there's anything useful in them or not.
(the hosts are all still online atm, shortly to be nuked)
Regards and best wishes,
Justin Clift
On 1 Apr 2015, at 03:03, Emmanuel Dreyfus m...@netbsd.org wrote:
Jeff Darcy jda...@redhat.com wrote:
That's fine. I left a note for you in the script, regarding what I
think it needs to do at that point.
Here is the comment:
# We shouldn't be touching CR at all. For V, we should set
On 31 Mar 2015, at 14:18, Shyam srang...@redhat.com wrote:
snip
Also, most of the regression runs produced cores. Here are
the first two:
http://ded.ninja/gluster/blk0/
There are 4 cores here, 3 pointing to the (by now hopefully) famous bug
#1195415. One of the cores exhibits a
On 1 Apr 2015, at 04:07, Emmanuel Dreyfus m...@netbsd.org wrote:
Justin Clift jus...@gluster.org wrote:
It sounds like we need a solution to have both the NetBSD and CentOS
regressions run, and only give the +1 when both of them have successfully
finished. If either of them fail
On 1 Apr 2015, at 05:04, Emmanuel Dreyfus m...@netbsd.org wrote:
Justin Clift jus...@gluster.org wrote:
That, or perhaps we could have two verified fields?
Sure. Whichever works. :)
Personally, I'm not sure how to do either yet.
In http://build.gluster.org/gerrit-trigger/ you have
On 31 Mar 2015, at 17:43, Nithya Balachandran nbala...@redhat.com wrote:
snip
* 11 x tests/bugs/distribute/bug-1117851.t
Failed test: 15
55% fail rate
Is the test output for the bug-1117851.t failure available anywhere?
Not at the moment. It would be really easy to set up a
the first one. I'll leave the others for you, so you
embed the skill :)
Regards and best wishes,
Justin Clift
our general failure rate is improving. :) The hangs
are a bit worrying though. :(
Regards and best wishes,
Justin Clift
On 29 Mar 2015, at 21:31, Emmanuel Dreyfus m...@netbsd.org wrote:
Hi
We now have the patches to recover NetBSD regression. This needs to be
reviewed and merged (it would be nice if it could be before release-3.7
branching):
http://review.gluster.org/10030
http://review.gluster.org/10032
On 28 Mar 2015, at 13:12, Emmanuel Dreyfus m...@netbsd.org wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
By which time some more problems may creep in, it will be chicken and
egg problem. Force a -2. Everybody will work just on Netbsd for a while
but after that things should be
latest one in
the 3.5.x series is 3.5.3.
If it helps, our very latest release of GlusterFS is 3.6.2, and we're
working on version 3.7.0 presently as well.
What are you wanting to do with GlusterFS btw, if you're OK with
describing it? :)
Regards and best wishes,
Justin Clift
translators?
:)
Regards and best wishes,
Justin Clift
Note for everyone - This is likely to be the last ever GlusterFS 3.4.x release.
If you're using 3.4.x, you'll want to make sure this works. Any 3.4.x bugs
after this one... - please upgrade your GlusterFS. ;)
Regards and best wishes,
Justin Clift
On 27 Mar 2015, at 14:43, Kaleb S. KEITHLEY
FYI. Now we're past the mad rush of patch submissions for the
3.7 feature freeze, the number of Rackspace regression VM's is
being reduced. Trying to get us into the ballpark of our
budget... ;)
+ Justin
and as
effective.
Better suggestions welcome of course. :)
Regards and best wishes,
Justin Clift
On 26 Mar 2015, at 17:28, Vijay Bellur vbel...@redhat.com wrote:
On 03/26/2015 10:48 PM, Justin Clift wrote:
Hi us :),
Just added a page to the wiki, showing how to retrigger a
failed job in Jenkins:
http://www.gluster.org/community/documentation/index.php/Retrigger_jobs_in_Jenkins
On 23 Mar 2015, at 07:01, Shravan Chandrashekar schan...@redhat.com wrote:
Hi All,
The Gluster Filesystem documentation is not user friendly and fragmented
and this has been the feedback we have been receiving.
We got back to our drawing board and blueprints and realized that the