On Fri, Apr 21, 2017, at 09:17 AM, Atin Mukherjee wrote:
> As we don't run our .t files with brick mux enabled for every patch,
> can we ensure that there is a nightly regression trigger with the
> brick multiplexing feature enabled? The reason for this ask is
> very simple: we have no
> After the restart, our Jenkins server accidentally ended up with a fr_FR locale.
I know this was probably frustrating for you (and possibly others as well), but
I have to admit it gave me a good chuckle.
"Of course I'm French! Why else would I have this outrageous French error
message?"
Thank you, Nigel. Post-mortems like this can be uncomfortable, but they're how
we as a team learn and improve. The good example is appreciated, as is all
your hard work.
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org
> Fixed! Thanks to Matt and Karsten :)
...and yourself, of course. Thanks for the prompt response.
I saw a bunch of jobs get aborted about half an hour ago, due to the nodes they
were on going offline. I figured it was a power hit or something similar and
things would come back by themselves, so I went off to dinner. Checking now,
they're still offline and seeming inclined to remain so. I
It's hanging right after "*** Build GlusterFS ***". I marked it offline, a
retriggered job on another node got past that point just fine after two
failures on 75.
> Michael and I are happy to announce that the migration is now complete.
Thank you both for all of your hard work. :)
> > Would it be possible to have a workflow where verified +1 vote from the
> > developer indicates that the regression tests have passed in their local
> > setup?
>
> I *really* hope that is the case already!!
It doesn't seem that way. I'm not even trying to address the reasons or
whether they'
> > Our regression test suite allows running tests in a subdirectory, so
> > changes to the tests should not really be needed. In addition to the
> > simple smoke test, this might do:
> >
> > # ./run-tests.sh tests/basic/
>
> Perhaps we could do build + smoke + tests/basic pre-merge so we catch
> We already only merge after NetBSD-regression and CentOS-regression have
> voted back. All I'm changing is that you don't need to do the merge manually
> or give Verified +1 for regression to run. Zuul will run the tests after you
> get Code-Review +2 and merge it for you with patches ordered co
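As a rough sketch of the gating order being described (the function, its return
values, and the label handling here are hypothetical illustrations, not Zuul's
actual API — real Zuul pipelines are configured declaratively):

```python
def next_action(labels, regressions_passed):
    """Decide what a gating system should do next for a change.

    labels: dict of Gerrit label -> value, e.g. {"Code-Review": 2}
    regressions_passed: True once NetBSD and CentOS regressions have voted back.
    (Hypothetical sketch only; not Zuul's real interface.)
    """
    if labels.get("Code-Review", 0) < 2:
        return "wait-for-review"    # don't burn regression cycles yet
    if not regressions_passed:
        return "run-regressions"    # trigger NetBSD + CentOS regression
    return "merge"                  # all gates green: merge automatically

# A change with +2 but no regression results would get regressions triggered:
print(next_action({"Code-Review": 2}, False))  # run-regressions
```

The point of ordering it this way is that regressions (the expensive step) only
run on changes a human has already approved.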
I had to log back in (for reasons unknown) and the login keeps failing.
AFAICT, the sequence is like this:
(1) Click the "Sign-in with GitHub" link
(2) Get redirected to GitHub, possibly log in there
(3) GitHub seems happy
(4) Get redirected back to r.g.o/oauth?code=FOO&state=BAR%3D%2C%2Flogi
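For what it's worth, the redirect in step (4) carries the OAuth2 `code` and
`state` as query parameters; a quick way to inspect them (the values below are
placeholders in the same shape as the report, not the real, truncated ones):

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical redirect URL shaped like step (4); FOO and the state value
# are placeholders, not the actual tokens.
url = "https://review.gluster.org/oauth?code=FOO&state=BAR%3D%2C%2Flogin"

params = parse_qs(urlsplit(url).query)
code = params["code"][0]    # one-time authorization code issued by GitHub
state = params["state"][0]  # parse_qs percent-decodes this automatically
print(code, state)          # FOO BAR=,/login
```

Note that `parse_qs` percent-decodes values, so `%3D%2C%2F` comes back as
`=,/` — useful when checking whether the state token survived the round trip
intact.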
> I also remember that Jeff Darcy posted that "we need to change a value in
> lvm.conf" (LVM allocation settings, mail from 6th of April). Does anyone
> have more information about that? (Or can someone ping Jeff for me?)
Yes, I found that the default value for activation.reserved_mem
> So can a workable solution be pushed to git, because I plan to force the
> checkout to be like git, and it will break again (and this time, no
> workaround will be possible).
>
It has been pushed to git, but AFAICT pull requests for that repo go into
a black hole.
Often, when a regression test seems to be running much slower than
usual, I look in the logs and see this:
> Internal error: Reserved memory (14749696) not enough: used 19890176.
> Increase activation/reserved_memory?
It's pretty easy to fix this manually by changing a value in lvm.conf,
and I r
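Assuming the manual fix is bumping activation/reserved_memory, a sketch of the
edit (the config fragment, the chosen size, and the byte/KiB reading of the
error are assumptions here, not an official recommendation — the real file
lives at /etc/lvm/lvm.conf):

```python
import re

# Minimal illustrative lvm.conf fragment (not the real file).
lvm_conf = """activation {
    reserved_memory = 8192
}
"""

# lvm.conf's reserved_memory is in KiB, while the error above appears to
# report bytes (19890176 B = 19424 KiB), so pick a value with headroom.
new_kib = 32768  # illustrative choice, not a tuned recommendation

patched = re.sub(r"(reserved_memory\s*=\s*)\d+",
                 lambda m: m.group(1) + str(new_kib), lvm_conf)
print(patched)
```

The regex keeps the original key and whitespace and swaps only the number, so
the rest of the file is untouched.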
> Once this was done, I pushed to use the regular upstream change,
> something that was not done before since the local change broke the
> automation to deploy the test suite on FreeBSD.
It looks like we have two options here:
(1) Fix configure so that it accurately detects whether the
system can/should
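Option (1) is essentially what configure's header checks already do; as a
rough sketch of that kind of probe (the function name and compiler lookup are
illustrative, not what GlusterFS's configure actually runs):

```python
import os
import shutil
import subprocess
import tempfile

def have_header(header):
    """Autoconf-style probe: can a trivial program including `header`
    be compiled?  (A sketch of a configure-time check; assumes a C
    compiler named cc or gcc is on PATH.)"""
    cc = shutil.which("cc") or shutil.which("gcc")
    if cc is None:
        return False  # no compiler: report the feature as unavailable
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "probe.c")
        with open(src, "w") as f:
            f.write("#include <%s>\nint main(void) { return 0; }\n" % header)
        result = subprocess.run(
            [cc, "-c", src, "-o", os.path.join(tmp, "probe.o")],
            capture_output=True)
        return result.returncode == 0
```

A configure script that gates --enable-bd-xlator on a probe like this (for the
lvm2 development headers) would fail at configure time with a clear message
instead of failing mid-build.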
> Please make sure that this change also gets included in the repository:
>
> https://github.com/gluster/glusterfs-patch-acceptance-tests
Looks like we're getting a bit of a queue there. Who can merge some of
these?
> IIRC, this happens because the build job uses the "--enable-bd-xlator"
> option while running configure.
I came to the same conclusion, and set --enable-bd-xlator=no on the
slave. I also had to remove -Werror because that was also causing
failures. FreeBSD smoke is now succeeding.
I've seen a lot of patches blocked lately by this:
> BD xlator requested but required lvm2 development library not found.
It doesn't happen all the time, so there must be something about
certain patches that triggers it. Any thoughts?
- Original Message -
> There are a handful of centos regressions that have been running for over
> eight hours.
>
> I don't know if that's contributing to the short backlog of centos
> regressions waiting to run.
I'm going to kill these in a moment, but here's more specific info
in case
> I think we just need to come up with rules for considering a
> platform to have voting ability before merging the patch.
I totally agree, except for the "just" part. ;) IMO a platform is much
like a feature in terms of requiring commitment/accountability,
community agreement on cost/b
- Original Message -
> On Fri, Jan 08, 2016 at 05:11:22AM -0500, Jeff Darcy wrote:
> > [08:45:57] ./tests/basic/afr/arbiter-statfs.t ..
> > [08:43:03] ./tests/basic/afr/arbiter-statfs.t ..
> > [08:40:06] ./tests/basic/afr/arbiter-statfs.t ..
> > [08:08:51
> I am a bit disturbed by the fact that people raise the
> "NetBSD regression ruins my life" issue without doing the work of
> listing the actual issues encountered.
That's because it's not a simple list of persistent issues. As with
spurious regression-test failures on Linux, it's an ever changi
> > Good idea. Once per merge is still less than one per submission (what
> > we have today), and better than nightly/weekly when it comes to
> > identifying the source of a regression. Seems like a good compromise.
>
> Now what is the policy on post-merge regression failure? What happens
> if
> If you do not
> prevent integration of patches that break NetBSD regression, that will get
> in, and tests will break one by one over time.
On the other hand, if patch A starts blocking all merges because of
NetBSD fai
I’m seeing some regression failures that are unique to NetBSD 7 (surprise
surprise) but I can’t get the logs because apparently those machines aren’t
running HTTP servers and aren’t using the proper password. Could somebody
please either tell me how else I can get logs or send me (securely) the
At least one patch that could otherwise be merged is hung up on the lack of
these headers on at least one of our FreeBSD workers, causing freebsd-smoke
to fail repeatedly.
http://review.gluster.org/#/c/11771/
https://build.gluster.org/job/freebsd-smoke/10549/
Is this already on someone’s ra
> It is the easiest if your blog offers filtered RSS feeds. Some blogs
> have tags or categories that can be used in RSS feeds. If you have
> something like that, please fix the feed with a pull request:
>
> https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml
Pelican does per-ca
I just noticed that one of my non-Gluster-related blog posts from Platypus
Reloaded is being syndicated on Planet Gluster. That's probably not what we
want to have happen, especially when I start writing about more controversial
topics like gun control. ;) Back in the day, we had things set u
> And in the past, if not now, are contributing factors to small file
> performance issues.
I'm not quite seeing the connection here. Which macros are you
thinking of, and how does the fact that they're macros instead of
functions make them bad for small-file performance? AFAIK the
problem with
> Or perhaps we could just get everyone to stop using 'inline'
I agree that it would be a good thing to reduce/modify our use of
'inline' significantly. Any advantage gained from avoiding normal
function-call entry/exit has to be weighed against cache pollution from
having the same code repeated o
> I've discussed this idea with different people before. But the major concern
> was how do we identify that minimal set of basic tests that would do a good
> job of identifying most regressions. Considering that regression suite will
> keep growing and will take longer to complete in the future, n
> I should have made this clearer in the steps I listed.
> Under the 2nd step (I should have numbered as well), I've mentioned
> that Zuul will report back the status of smoke/pre-review tests. This
> is the Verified+1. Though I was thinking of using different flags, we
> can use Verified itself
> Current auto-triggering of regression runs is stupid and a waste of
> time and resources. Bring in a project gating system, Zuul, which can
> do a much more intelligent job of triggering, and use it to
> automatically trigger regression only for changes with Reviewed+2 and
> automatically merge one
> > I'm adding a new configuration to enable linking bug-ids in commit
> > messages (and other places) to their respective bugzilla page.
> >
> > This makes it easier for reviewers to quickly lookup the bug when
> > reviewing.
Excellent. I was just wishing for this yesterday. Thanks!
> trying to clone ssh://kkeit...@git.gluster.org/glusterfs.git from two
> different machines is giving
>
> remote: internal server error7
> fatal: early EOF
> fatal: index-pack failed
> [kkeithle@f21node1 gerrit]$ fatal: internal server error
Several regression tests failed in a similar way, but