ffs (log, nodev, nosuid, local)
kernfs on /kern type kernfs (local)
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Nigel Babu wrote:
> Atin says we've noticed this in the past and somehow fixed it. Do you
> recall what we did to fix it?
Is it the same problem? The key test is to run ps -axl and observe the
WCHAN column for the stuck umount processes. If it is tstile, then this
is the ancient bug.
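A quick check could look like this (untested sketch):

  ps -axl | head -1         # header line, to locate the WCHAN column
  ps -axl | grep -w tstile  # any umount process listed here is stuck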
--
Emmanuel Dreyfus
Nigel Babu wrote:
> I'll definitely appreciate any feedback you can have in terms
> of code when it's ready for review.
No problem. But the regression infrastructure will catch any issue
better than I would, anyway. :-)
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
nt setting bugs when a job is manually
cancelled and retriggered (here is a point to fix!)
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Nigel Babu wrote:
> Oh, it's in the pool for netbsd-7 smoke, which we don't run anymore. Shall
> I kill the machine, then?
No problem for me.
> The smoke is perhaps just a build, which we do during regressions on netbsd7
> anyway.
And we do smoke on netbsd-6 anyway.
.gluster.org is used anymore.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
vipw, copy /etc/master.passwd and run
pwd_mkdb -p /etc/master.passwd everywhere to regenerate /etc/passwd
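For instance, distributing it could look like this (a sketch; the host
list is illustrative):

  for h in nbslave70 nbslave71 nbslave74; do
      scp /etc/master.passwd root@$h:/etc/master.passwd
      ssh root@$h pwd_mkdb -p /etc/master.passwd
  done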
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
it and deploy to other machines,
it would not hurt.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On Mon, Jul 18, 2016 at 09:37:19AM +0000, Emmanuel Dreyfus wrote:
> On Mon, Jul 18, 2016 at 10:35:45AM +0530, Nigel Babu wrote:
> > Would it be problematic if I added 20GB of block storage per machine for the
> > /build, /home/jenkins and /archives folder? That should easily sort o
have some spare space
beyond the / partition
--
Emmanuel Dreyfus
m...@netbsd.org
On Fri, Jul 15, 2016 at 03:04:39PM +0530, Nigel Babu wrote:
> Would it be okay to write a cron to clean up anything older than 15 days in
> /build/install and /archives?
You have to clean up after some time. How is it handled on Linux boxen?
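The proposed entry could be as simple as this (a sketch of the idea,
using the portable find -exec form):

  # root crontab: daily at 03:00, drop files untouched for 15 days
  0 3 * * * find /build/install /archives -type f -mtime +15 -exec rm -f {} +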
--
Emmanuel Dreyfus
m...@netbsd.org
On Fri, Jul 15, 2016 at 07:19:17AM +0000, Emmanuel Dreyfus wrote:
> On Fri, Jul 15, 2016 at 10:59:04AM +0530, Nigel Babu wrote:
> > nbslave77.cloud.gluster.org
>
> That one has 1.6 GB of logs in /build/install/var/log/glusterfs
And if you look for free space, you can wipe /usr
On Fri, Jul 15, 2016 at 10:59:04AM +0530, Nigel Babu wrote:
> nbslave77.cloud.gluster.org
That one has 1.6 GB of logs in /build/install/var/log/glusterfs
--
Emmanuel Dreyfus
m...@netbsd.org
e anywhere else I should be looking?
What are the offending machines?
Core files are configured to go in /var/crash, but /var/* should be a
good place to look.
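A sweep like this should spot stray cores (sketch; NetBSD names cores
%n.core by default):

  find /var \( -name '*.core' -o -name core \) -ls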
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
out.
I did this because of a glusterfs bug that overwrote random files with
logs.
I tend to use it this way to overwrite a file:
cat hosts | ssh root@host "chflags nouchg /etc/hosts; cat > /etc/hosts;
chflags uchg /etc/hosts"
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
ansible like
> we do for the Centos ones ?
I have no problem with it, but I must confess a complete lack of
experience with this tool.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
elated: I gave it a quick try,
and it is able to build and run tests.
--
Emmanuel Dreyfus
m...@netbsd.org
x this issue if I encounter the same in
> future, I can do it myself.
The machine is stuck in a bad corner case from a previous run, and
cannot clean up for the new run. ps -axl shows many umount processes in
tstile wchan. reboot -n is advised in such a situation.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
> Or it could be that I just forgot.
I just checked out master on nbslave74, built and ran tests; it seems
fine.
--
Emmanuel Dreyfus
m...@netbsd.org
d for testing, you could pick one.
I still do not know who the password guardian at Red Hat is, though.
--
Emmanuel Dreyfus
m...@netbsd.org
as the result.
--
Emmanuel Dreyfus
m...@netbsd.org
 */
+(void)sysctlbyname("proc.curproc.corename", NULL, NULL,
+                   corename, strlen(corename) + 1);
+
 /*
  * At least on NetBSD, the chdir() above uncovers a
On Thu, Jan 28, 2016 at 12:10:49PM +0530, Atin Mukherjee wrote:
> So does that mean we never analyzed any core reported by NetBSD
> regression failure? That's strange.
We got the cores from / but not from d/backends/*/ as I understand.
I am glad someone figured out the mystery.
-
ILED,
"chdir() to \"%s\" failed",
_private->base_path);
goto out;
}
And the core goes in the current directory by default. We could use
sysctl(3
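From the shell, the same knobs can be inspected with sysctl(8) (a
sketch; kern.defcorename is the system-wide default, while
proc.curproc.corename only affects one process):

  sysctl kern.defcorename                        # %n.core by default
  sysctl -w kern.defcorename=/var/crash/%n.core  # redirect future cores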
efused is really when it gets a TCP RST.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
t, nor on netbsd. Did it happen
> again?
I could imagine problems with exhausted system resources, but it would
not produce a "Connection refused".
--
Emmanuel Dreyfus
m...@netbsd.org
e time.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Vijay Bellur wrote:
> There is some problem with review.gluster.org now. git clone/pull fails
> for me consistently.
First check that DNS is working. I recall seeing the Rackspace DNS
failing to answer.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Hi all
I have the following changes awaiting code review/merge:
http://review.gluster.org/13204
http://review.gluster.org/13205
http://review.gluster.org/13245
http://review.gluster.org/13247
--
Emmanuel Dreyfus
m...@netbsd.org
Emmanuel Dreyfus wrote:
> But I just realized the change is wrong, since running tests "new way"
> stops on first failed test. My change just retry the failed test and
> considers the regression run to be good on success, without running next
> tests.
>
> I will p
tps://github.com/gluster/glusterfs-patch-acceptance-tests/ ? Or, if
> you dont use GitHub, send the patch by email and we'll take care of
> pushing it for you.
Sure, but let me settle on something that works first.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
s detected early and reported. The second regression will
fail, but the idea is to get a better understanding of how that can
occur.
This fix is not deployed yet; I am waiting for the fixes from point 2
to be merged.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Emmanuel Dreyfus wrote:
> While trying to reproduce the problem in
> ./tests/basic/afr/arbiter-statfs.t, I came to many failures here:
>
> [03:53:07] ./tests/basic/afr/split-brain-resolution.t
I was running tests from the wrong directory :-/
This one is fine with HEAD.
--
Emmanuel Dreyfus
runs that are scheduled simultaneously?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
on nbslave70, with reboot on panic disabled (it
will drop into kernel debugger instead). No result so far.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
rick0_alive"
cat: /mnt/glusterfs/0/data-split-brain.txt: Input/output error
not ok 27 Got "" instead of "brick1_alive"
getfattr: Removing leading '/' from absolute path names
not ok 30 Got "" instead of "brick0"
not ok 32 Got "" inst
Emmanuel Dreyfus wrote:
> > With your support I think we can make things better. To avoid duplication of
> > work, did you take any tests that you are already investigating? If not that
> > is the first thing I will try to find out.
>
> I will look at the ./tests/ba
roying? Could be a stupid question but still
> asking.
Well, the kernel tells us it is not in use. I am not sure what you mean.
--
Emmanuel Dreyfus
m...@netbsd.org
look for loopback devices whose backing store is in $B0
and unconfigure them.
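Something like this (untested sketch, assuming vnconfig -l prints one
"vndN: backing-file" line per configured device):

  vnconfig -l | grep "$B0" | while read dev rest; do
      vnconfig -u "${dev%:}"
  done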
--
Emmanuel Dreyfus
m...@netbsd.org
On Fri, Jan 08, 2016 at 10:56:22AM +0000, Emmanuel Dreyfus wrote:
> On Fri, Jan 08, 2016 at 03:18:02PM +0530, Pranith Kumar Karampuri wrote:
> > With your support I think we can make things better. To avoid duplication of
> > work, did you take any tests that you are already invest
ok at the ./tests/basic/afr/arbiter-statfs.t problem with
loopback device.
--
Emmanuel Dreyfus
m...@netbsd.org
at point, we could start the regression script by:
( sleep 7200 && /sbin/reboot -n ) &
And end it with:
kill %1
Does it seem reasonable? That way nothing can hang for more than 2 hours.
--
Emmanuel Dreyfus
m...@netbsd.org
tigating right now because I have no idea where
I should look. Your input will be very valuable.
--
Emmanuel Dreyfus
m...@netbsd.org
On Fri, Jan 08, 2016 at 12:42:36PM +0530, Sachidananda URS wrote:
> I have a NetBSD 7.0 installation which I can share with you, to get
> started.
> Once manu@ gets back on a specific version, I can set that up too.
NetBSD 7.0 is fine and has everything required in the GENERIC kernel.
--
u have done so far in making sure
> glusterfs is stable on NetBSD.
Thanks! I must confess the idea of having the NetBSD port demoted is a bit
depressing given the amount of work I invested in it.
--
Emmanuel Dreyfus
m...@netbsd.org
what
> caused the issue in the first place.
I meant: which test exhibited the spurious failure or hang? You can see
that from the regression test run. Previous experience makes me suspect
we will narrow the problem down to a few tests that can be disabled.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
y forward would be to identify what tests cause
frequent NetBSD spurious failures and disable them for NetBSD
regression. I am a bit disturbed by the fact that people raise the
"NetBSD regression ruins my life" issue without doing the work of
listing the actual issues encountered.
--
Emmanuel Dreyfus
ke a serious issue a blocker? The serious issue
will be related to multiple patches, it will be impossible to tell which
one is the offender.
If we go that way, we need to run a regression for each merged patch,
which will be much less load than today.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
ers
regressions they did not cause. Being in the situation of having to fix
the regression test will be a rare event, and the whole thing will
quickly rot.
--
Emmanuel Dreyfus
m...@netbsd.org
On Thu, Dec 31, 2015 at 03:57:15PM +0530, Raghavendra Talur wrote:
> You can log in. I think the HUP signal did not cause any
> change in process state. I still see it in I state.
> pid is 10967.
That one is perl running prove quota.t. I believe 15221 is the
stuck one.
--
Emmanuel Dreyfus
There is a jenkins job running on that machine. May I proceed? Where
is the relevant test suite?
A nice way of handing the bug over to someone else could be to run it
in the screen utility.
--
Emmanuel Dreyfus
m...@netbsd.org
ng to investigate but I lack time for now. I
issued a reboot. Please tell me if you can reproduce it.
--
Emmanuel Dreyfus
m...@netbsd.org
ot nbslave74?
--
Emmanuel Dreyfus
m...@netbsd.org
ckspace-netbsd7-regression-triggered/10473/console
> for instance.
I will be able to look at this in a few hours. In the meantime, check
that the filesystems of the test node are not full.
--
Emmanuel Dreyfus
m...@netbsd.org
egression.sh which might have
> caused it.
Yes, I am surely the culprit.
> Comparing the version at github and one at nbslave77.cloud.gluster.org I
> found quite a few differences. If someone is aware of recent changes, we need
> help in fixing it.
What difference do you ha
repare an
install script to "freeze" the setup once the VM is created.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Emmanuel Dreyfus wrote:
> Let me know if it is too wide and causes trouble.
It was, I had to remove the immutable flag on:
/usr/pkg/lib/python2.7/site-packages/gluster/
=> we install glupy.py there
/etc/openssl
=> ssl-authz.t creates its key and cert there
And that lets a job pass r
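For the record, removing the flag amounts to something like (sketch):

  chflags -R nouchg /usr/pkg/lib/python2.7/site-packages/gluster /etc/openssl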
Emmanuel Dreyfus wrote:
> Let me know if it is too wide and causes trouble. I
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
this because I am
not sure rackspace console access lets us use single user mode.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Avra Sengupta wrote:
> All NetBSD regression failures are again failing (more like refusing to
> build), with the following error.
Random files clobbered by G_LOG?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
e test in C showing the problem,
I will be glad to fix the implementation.
--
Emmanuel Dreyfus
m...@netbsd.org
well.
Yes, I can wipe them from regression.sh before running the tests, like
we do for tests/bugs (never ported), tests/basic/tier/tier.t and
tests/basic/ec (the latter two used to pass but started exhibiting too
many spurious failures).
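In regression.sh that looks like this (a sketch of the pattern; the
exact list may differ):

  rm -rf tests/bugs tests/basic/ec
  rm -f tests/basic/tier/tier.t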
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
On Mon, Aug 17, 2015 at 07:37:56AM +0000, Emmanuel Dreyfus wrote:
> > Michael/Manu, could you have a look at that?
> nbslave71 and nbslave79 seem very sick too.
I restored nbslave7[179]
--
Emmanuel Dreyfus
m...@netbsd.org
resting: almost all files in /root and /etc were corrupted with
glusterfs regression log messages appended to them.
--
Emmanuel Dreyfus
m...@netbsd.org
nbslave71 and nbslave79 seem very sick too.
--
Emmanuel Dreyfus
m...@netbsd.org
Vijay Bellur wrote:
> This required a gerrit db update and I have done that. Can you please
> check now?
Yes, it works. The test is to run, as the jenkins user:
ssh nb7bu...@review.gluster.org 'gerrit --help'
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
enkins wheel   417 Dec 19  2014 id_rsa2048.pub
-rw-r--r-- 1 jenkins wheel 10508 Apr 14 13:47 known_hosts
The simplest fix is to copy nbslave7h:/home/jenkins/.ssh/id_rsa.pub into
review.gluster.org:~nb7build/.ssh/authorized_keys, but I cannot do that.
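For whoever has the access, that would be something like (sketch):

  ssh nbslave7h 'cat /home/jenkins/.ssh/id_rsa.pub' | \
      ssh review.gluster.org 'cat >> ~nb7build/.ssh/authorized_keys'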
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Hi
build.gluster.org presented me with a self-signed certificate. I accepted
it, but could someone please confirm it is on purpose.
While we are at it, StartSSL offers free certs...
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
rsa on a few nodes. I removed
the new id_rsa and id_rsa.pub and replaced id_rsa with the right one,
copied from a machine where it worked.
--
Emmanuel Dreyfus
m...@netbsd.org
- Weak upstream DNS service: worked around by /etc/hosts (a secondary
DNS would be more automatic, but at least it works)
- Jenkins has a DNS cache and needs a restart
How did ongoing jobs behave on the Jenkins restart? Did you have to
restart them all, or did Jenkins take care of it?
--
Emmanuel Dreyfus
d,
but nobody dares to try.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Niels de Vos wrote:
> I'm not sure what limitation you mean. Did we reach the limit of slaves
> that Jenkins can reasonably address?
No, I mean its inability to pick up a new DNS record.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
re having problems with Jenkins'
ability to get more hosts?
--
Emmanuel Dreyfus
m...@netbsd.org
agent on e.g. nbslave75.
Perhaps it needs to be restarted?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On Wed, Jun 17, 2015 at 03:00:29PM +0000, Emmanuel Dreyfus wrote:
> Oh no, it did, but nuked them all almost instantly (see below). I
> disabled it again. Basically we have broken Jenkins setups, and DNS
> trouble prevents us from adding new VMs. What a mess.
I retriggered most of the job
On Wed, Jun 17, 2015 at 08:34:06PM +0530, Kaushal M wrote:
> Would restarting jenkins once help? It might help it pick up the newly
> added entries to the hosts file.
Won't it break all running jobs?
--
Emmanuel Dreyfus
m...@netbsd.org
On Wed, Jun 17, 2015 at 02:57:28PM +0000, Emmanuel Dreyfus wrote:
> I re-enabled it and it went online, but it does not seem to pick up a job.
Oh no, it did, but nuked them all almost instantly (see below). I
disabled it again. Basically we have broken Jenkins setups, and DNS
trouble prevents us from adding new VMs. What a mess.
re-enable it.
I re-enabled it and it went online, but it does not seem to pick up a job.
--
Emmanuel Dreyfus
m...@netbsd.org
;org shows it does not even try.
Perhaps there is a name cache in Jenkins and it needs to be restarted?
I am leaving the /etc/hosts file loaded with nbslave74, nbslave75 and
nbslave79.
--
Emmanuel Dreyfus
m...@netbsd.org
On Wed, Jun 17, 2015 at 07:44:14AM -0400, Vijay Bellur wrote:
> Do we still have the NFS crash that was causing tests to hang?
Do we still have it on rebased patchsets?
--
Emmanuel Dreyfus
m...@netbsd.org
kets, TCP is more resistant, and if it is an
overloaded DNS server, the problem only affects DNS.
--
Emmanuel Dreyfus
m...@netbsd.org
ct
> something else.
Perhaps an /etc/hosts entry would do it: jenkins launches the ssh
command, and ssh should consult /etc/hosts before the DNS.
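That is, entries like this (sketch; 192.0.2.74 is a documentation
placeholder, not the real address):

  192.0.2.74 nbslave74.cloud.gluster.org nbslave74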
--
Emmanuel Dreyfus
m...@netbsd.org
On Wed, Jun 17, 2015 at 11:59:22AM +0530, Kaushal M wrote:
> cloud.gluster.org is served by Rackspace Cloud DNS
Perhaps we can change that and set up a DNS server for the zone?
--
Emmanuel Dreyfus
m...@netbsd.org
s the reason why Jenkins misbehaves.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
only for rebased changes.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
ike review and verified.
--
Emmanuel Dreyfus
m...@netbsd.org
rastructure.
--
Emmanuel Dreyfus
m...@netbsd.org
is wrecked on build.gluster.org: I tried a tcpdump
to diagnose the problem and:
tcpdump: unknown host 'nbslave71.cloud.gluster.org'
Another attempt gives me the correct answer after more than 5 seconds.
I am almost convinced that a local named on build.gluster.org would
help a lot.
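The slowness is easy to measure (sketch, assuming host(1) is available):

  for i in 1 2 3 4 5; do time host nbslave71.cloud.gluster.org; done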
--
Emmanuel Dreyfus
On Thu, Jun 11, 2015 at 07:26:00AM +0000, Emmanuel Dreyfus wrote:
> In my opinion the fix to this problem is to start new VMs. I was busy
> on other fronts, hence I did not watch the situation, but it is still
> grim, with most NetBSD slaves being in a screwed state. We need to spin
> more.
vestigated
the failure was caused by the master breaking the connection, but I was
not able to understand why.
I was once able to recover a VM by fiddling with the jenkins
configuration in the web UI, but experimenting is not easy, as a miss
will drain the whole queue into complete failures.
--
Emmanuel Dreyfus
In my opinion the fix to this problem is to start new VMs. I was busy
on other fronts, hence I did not watch the situation, but it is still
grim, with most NetBSD slaves being in a screwed state. We need to spin
more.
--
Emmanuel Dreyfus
m...@netbsd.org
Vijay Bellur wrote:
> This certainly does explain the baffling behavior
I just had a look, nbslave7[1cde] are stuck.
nbslave71 does not accept SSH connections.
I rebooted nbslave7c.
nbslave7[de] are not in the DNS.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
n
order to have DNS set up.
--
Emmanuel Dreyfus
m...@netbsd.org
lave7g after nbslave7f :-)
When it does not reboot, a peek at the console may help. I guess this is
a fsck problem.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
incorrect responses, just a lack of response.
--
Emmanuel Dreyfus
m...@netbsd.org
rFS stuff and will
speed up all DNS requests and make them more reliable at the same time.
--
Emmanuel Dreyfus
m...@netbsd.org
Another connectivity failure:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/5420
The slave VM's uptime suggests it did not reboot during the build.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Vijay Bellur wrote:
> Around 9:25 UTC.
There is this one that looks like the old bug:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/5410/console
But the same machine (nbslave71) was at least able to run other jobs after this.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Vijay Bellur wrote:
> Manu - can you please verify and report back if the NetBSD slaves work
> better with the upgraded Jenkins master?
At what time did the new Jenkins start up?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
o run jobs again.
But there are other frustrating failures: for instance, this one was
disconnected during a run, and I still wonder why:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/5380
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org