ancient bug.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra
Nigel Babu <nig...@redhat.com> wrote:
> I'll definitely appreciate any feedback you can have in terms
> of code when it's ready for review.
No problem. But the regression infrastructure will catch any issue better
than I would, anyway. :-)
--
Emmanuel Dreyfus
http://hcpnet.fr
setting bugs when a job is manually
cancelled and retriggered (here is a point to fix!)
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
on netbsd-6 anyway.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
I am not sure netbsd7.cloud.gluster.org is used anymore.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
one machine using vipw, copy /etc/master.passwd and run
pwd_mkdb -p /etc/master.passwd everywhere to regenerate /etc/passwd
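The procedure above can be sketched as a small dry-run script (sync_passwd and the host names are illustrative, not from the actual setup; drop the echo prefixes to really execute):

```shell
# After editing the master password file on ONE host with vipw(8),
# push it to the others and rebuild the password databases there;
# pwd_mkdb -p also regenerates the legacy /etc/passwd.
sync_passwd() {
    for h in "$@"; do
        # echo makes this a dry run: it prints the commands
        # instead of executing them
        echo "scp /etc/master.passwd root@$h:/etc/master.passwd"
        echo "ssh root@$h pwd_mkdb -p /etc/master.passwd"
    done
}

sync_passwd nbslave71 nbslave72   # illustrative host names
```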
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On Fri, Jul 15, 2016 at 03:04:39PM +0530, Nigel Babu wrote:
> Would it be okay to write a cron to clean up anything older than 15 days in
> /build/install and /archives?
You have to clean up after some time. How is it handled on Linux boxen?
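The cron sweep Nigel proposes could be a single find(1) call; a minimal sketch assuming a 15-day threshold (sweep is a hypothetical helper name):

```shell
# sweep DIR: delete regular files under DIR not modified in the
# last 15 days (-mtime +15), printing what it removes.
sweep() {
    find "$1" -type f -mtime +15 -print -delete
}

# Intended cron usage on the slaves, with the paths from this thread:
#   sweep /build/install
#   sweep /archives
```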
--
Emmanuel Dreyfus
m...@netb
On Fri, Jul 15, 2016 at 07:19:17AM +, Emmanuel Dreyfus wrote:
> On Fri, Jul 15, 2016 at 10:59:04AM +0530, Nigel Babu wrote:
> > nbslave77.cloud.gluster.org
>
> That one has 1.6 GB of logs in /build/install/var/log/glusterfs
And if you look for free space, you can wipe /usr
Is there anywhere else I should be looking?
What are the offending machines?
Core files are configured to go in /var/crash, but /var/* should be a
good place to look at.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
if we start to manage them with ansible like
> we do for the Centos ones ?
I have no problem with it, but I must confess a complete lack of
experience with this tool.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
elated: I gave it a quick try,
and it is able to build and run tests.
--
Emmanuel Dreyfus
m...@netbsd.org
> you let me know a procedure to fix this issue if I encounter the same in
> future, I can do it myself.
The machine is stuck in a bad corner case from previous run, and cannot
clean up for the new run. ps -axl shows many umount processes in tstile
wchan. reboot -n is advised in such a situation.
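A hedged sketch of that diagnostic (it only reports; the actual reboot -n is left to the operator):

```shell
# Count processes whose wait channel is tstile -- on NetBSD these are
# threads blocked on a kernel lock, typically the stuck umount calls
# described above. The bracket trick keeps grep from matching itself.
stuck=$(ps -axl 2>/dev/null | grep -c '[t]stile' || true)
if [ "${stuck:-0}" -gt 0 ]; then
    echo "$stuck processes stuck in tstile: reboot -n is advised"
else
    echo "no tstile-blocked processes found"
fi
```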
--
ld be that I just forgot.
I just checked out master on nbslave74, built and ran tests; it seems
fine.
--
Emmanuel Dreyfus
m...@netbsd.org
testing, you could pick one.
I still do not know who the password guardian at Redhat is, though.
--
Emmanuel Dreyfus
m...@netbsd.org
*/
+        (void)sysctlbyname("proc.curproc.corename", NULL, NULL,
+            corename, strlen(corename) + 1);
+
/*
* At least on NetBSD, the chdir() above uncovers a
ILED,
"chdir() to \"%s\" failed",
_private->base_path);
goto out;
}
And the core goes in current directory by default. We could use
sysctl(3
, but I believe connection
refused is really when it gets a TCP RST.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
d regression at some time.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
cted early and reported. The second regression will
fail, but the idea is to get a better understanding of how that can
occur.
This fix is not deployed yet, I await the fixes from point 2 to be
merged
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@
Emmanuel Dreyfus <m...@netbsd.org> wrote:
> But I just realized the change is wrong, since running tests "new way"
> stops on first failed test. My change just retry the failed test and
> considers the regression run to be good on success, without running next
> tests
Emmanuel Dreyfus <m...@netbsd.org> wrote:
> While trying to reproduce the problem in
> ./tests/basic/afr/arbiter-statfs.t, I came to many failures here:
>
> [03:53:07] ./tests/basic/afr/split-brain-resolution.t
I was running tests from wrong directory :-/
This one
you have done so far in making sure
> glusterfs is stable on NetBSD.
Thanks! I must confess the idea of having the NetBSD port demoted is a bit
depressing given the amount of work I invested in it.
--
Emmanuel Dreyfus
m...@netbsd.org
On Fri, Jan 08, 2016 at 12:42:36PM +0530, Sachidananda URS wrote:
> I have a NetBSD 7.0 installation which I can share with you, to get
> started.
> Once manu@ gets back on a specific version, I can set that up too.
NetBSD 7.0 is fine and has everything required in GENERIC kernel.
--
Emmanuel Dreyfus <m...@netbsd.org> wrote:
> > With your support I think we can make things better. To avoid duplication of
> > work, did you take any tests that you are already investigating? If not that
> > is the first thing I will try to find out.
>
> I will
"" instead of "brick1"
It is not in the lists posted here. Is it only at mine?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
tigating right now because I have no idea where
I should look at. Your input will be very valuable.
--
Emmanuel Dreyfus
m...@netbsd.org
ers
regressions they did not cause. Being in the situation to fix the
regression test will be a rare event, and the whole thing will quickly rot.
--
Emmanuel Dreyfus
m...@netbsd.org
ice.
How are you going to make a serious issue a blocker? The serious issue
will be related to multiple patches, it will be impossible to tell which
one is the offender.
If we go that way, we need to run a regression for each merged patch,
which will be much less load than today.
--
Emm
egression.sh which might have
> caused it.
Yes, I am surely the culprit.
> Comparing the version at github and one at nbslave77.cloud.gluster.org I
> found quite a few differences. If someone is aware of recent changes need
> help in fixing it.
What difference do you ha
or if I prepare an
install script to freeze the setup once the VM is created.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Emmanuel Dreyfus m...@netbsd.org wrote:
Let me know if it is too wide and causes trouble.
It was, I had to remove the immutable flag on:
/usr/pkg/lib/python2.7/site-packages/gluster/
= we install glupy.py there
/etc/openssl
= ssl-authz.t creates key and cert there
And that lets a job pass
will be glad to fix the implementation.
--
Emmanuel Dreyfus
m...@netbsd.org
.
Yes, I can wipe them from regression.sh before running the tests, like
we do for tests/bugs (never ported), tests/basic/tier/tier.t and
tests/basic/ec (the two latter used to pass but started exhibiting too
many spurious failures).
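That wipe step might look like this in regression.sh (prune_tests is a hypothetical helper; the paths are the ones named above):

```shell
# Remove test cases known to fail spuriously on NetBSD before the run,
# so the harness cannot schedule them at all.
prune_tests() {
    srcdir=$1; shift
    for t in "$@"; do
        rm -rf "${srcdir:?}/$t"
    done
}

# Usage with the tests named in this mail:
#   prune_tests /path/to/glusterfs \
#       tests/bugs tests/basic/tier/tier.t tests/basic/ec
```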
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
seems very sick too.
--
Emmanuel Dreyfus
m...@netbsd.org
Vijay Bellur vbel...@redhat.com wrote:
This required a gerrit db update and I have done that. Can you please
check now?
Yes, it works. The test is to run as jenkins user:
ssh nb7bu...@review.gluster.org 'gerrit --help'
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
-rw-r--r-- 1 jenkins wheel   417 Dec 19  2014 id_rsa2048.pub
-rw-r--r-- 1 jenkins wheel 10508 Apr 14 13:47 known_hosts
The simplest fix is to append nbslave7h:/home/jenkins/.ssh/id_rsa.pub to
review.gluster.org:~nb7build/.ssh/authorized_keys, but I cannot do that.
--
Emmanuel Dreyfus
http://hcpnet.free.fr
Hi
build.gluster.org presented me with a self-signed certificate. I accepted
it, but could someone please confirm it is on purpose?
While we are at it, StartSSL offers free certs...
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
by the right one
copied from a machine where it worked.
--
Emmanuel Dreyfus
m...@netbsd.org
be restarted,
but nobody dares to try.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
an agent on e.g. nbslave75.
Perhaps it needs to be restarted?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On Wed, Jun 17, 2015 at 07:44:14AM -0400, Vijay Bellur wrote:
Do we still have the NFS crash that was causing tests to hang?
Do we still have it on rebased patchsets?
--
Emmanuel Dreyfus
m...@netbsd.org
this
is the reason why Jenkins misbehaves.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
.
Perhaps an /etc/hosts entry would do it: Jenkins launches the ssh command,
and ssh should use /etc/hosts before DNS.
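An /etc/hosts entry pinning one slave could look like this (192.0.2.10 is a documentation-range placeholder, not the slave's real address):

```
# resolve the slave locally so ssh no longer depends on flaky DNS
192.0.2.10   nbslave77.cloud.gluster.org nbslave77
```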
--
Emmanuel Dreyfus
m...@netbsd.org
On Wed, Jun 17, 2015 at 08:34:06PM +0530, Kaushal M wrote:
Would restarting jenkins once help? It might help it pick up the newly
added entries to the hosts file.
Won't it break all running jobs?
--
Emmanuel Dreyfus
m...@netbsd.org
.
--
Emmanuel Dreyfus
m...@netbsd.org
the failure was caused by the master breaking the connection, but I was not
able to understand why.
I was once able to recover a VM by fiddling with the Jenkins configuration
in the web UI, but experimenting is not easy, as a miss will drain the
whole queue into complete failures.
--
Emmanuel Dreyfus
m...@netbsd.org
again.
But there are other frustrating failures: for instance this one was
disconnected during a run, I still wonder why:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/5380
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
understanding is that this is the jenkins side
that is screwed.
Justin, we experienced similar troubles in the past, how did you fix
them?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Emmanuel Dreyfus m...@netbsd.org wrote:
I restored nbslave7[3ab] from image, but they keep failing. I guess the
fault is on Jenkins' side. Justin has been able to clear that kind of
mess in the past. Justin?
Jenkins just said this when launching the agent on nbslave7a:
===[JENKINS REMOTING