Sure, the instances are still up at canonistack if you want to poke at
them:
I've used two charms: jenkins x1 and jenkins-slave x2, vanilla on
precise.
My environments.yaml looks like this:
environments:
canonistack:
type: ec2
ec2-uri: http://91.189.93.65:8773/services/Cloud
s3-uri:
Public bug reported:
I've been running a small juju cluster for a week now.
There were some connection issues in the data center I'm using and juju
had issues connecting to zookeeper. As a result
/var/log/juju/juju-machine-agent.log quickly grew to 200M
Overall on the four machines that make
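A runaway agent log like the one above could have been capped with a
logrotate stanza along these lines; the path is from the report, but the
schedule and retention values are illustrative assumptions, not anything
juju ships:

```
# /etc/logrotate.d/juju (sketch; weekly/rotate 4 are arbitrary choices)
/var/log/juju/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```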
Public bug reported:
I've deployed jenkins and jenkins-slave units in my juju cluster. On one
of the jenkins-slave instances the
/var/lib/juju/units/jenkins-slave-*/charm.log was over 700M in size
(uncompressed) before I spotted the issue. This instance is very lightly
used and it was running
Public bug reported:
My juju cluster had some connection issues to zookeeper. While I was
reading the charm.log of my jenkins-slave unit I noticed that juju had
logged many thousands of exceptions such as this one:
2012-11-09 06:51:07,514: twisted@ERROR: Traceback (most recent call last):
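When a charm.log is flooded like this, a quick way to size the problem is
to count the lines carrying the twisted@ERROR marker shown above. This
helper is a triage sketch of mine, not part of juju:

```python
# Count log lines flagged twisted@ERROR (the marker used in the
# tracebacks quoted in the bug report).
def count_twisted_errors(lines):
    """Return the number of log lines containing 'twisted@ERROR'."""
    return sum(1 for line in lines if "twisted@ERROR" in line)
```

Run against the charm.log it gives a rough duplicate count without
paging through hundreds of megabytes of tracebacks.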
Public bug reported:
I was analyzing why my juju cluster was down. The particular instance I
was analyzing had the jenkins-slave charm deployed.
Reading /var/lib/juju/units/jenkins-slave-*/charm.log I found the
following unhandled exception:
2012-11-13 09:04:23,261: twisted@ERROR: Unhandled
Public bug reported:
My juju cluster running on precise had severe connection issues to
zookeeper. Looking at zookeeper logs I can see that my juju nodes cannot
connect as they already have pending connections.
The zookeeper log is full of lines like the following (a quick count gives around 30,000)
2012-11-08
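Refusals for clients that "already have pending connections" look like
ZooKeeper's per-client connection cap. One mitigation sketch is raising
that cap in zoo.cfg; the setting is real, but the value below is purely
illustrative and nothing from the report:

```
# zoo.cfg sketch: maxClientCnxns limits concurrent connections from a
# single client IP (assumption: the refusals above come from this cap).
maxClientCnxns=60
```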
Public bug reported:
After default installation the following permissions are applied:
-rw-r--r-- 1 maas root 193 Oct 15 14:37
/etc/bind/maas/named.conf.rndc.maas
This makes the bind communication key readable to all users of the
system
ProblemType: Bug
DistroRelease: Ubuntu 12.10
Package:
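A minimal fix sketch for the world-readable key file: drop the mode to
640 so only the owner and group can read it. The real path is
/etc/bind/maas/named.conf.rndc.maas; the demonstration below uses a
scratch file, and the target group ownership is left as a packaging
decision:

```shell
# Demonstrate the tightened mode on a scratch file standing in for
# /etc/bind/maas/named.conf.rndc.maas.
f=$(mktemp)
chmod 644 "$f"        # the world-readable default reported in the bug
chmod 640 "$f"        # owner rw, group r, no access for others
stat -c '%a' "$f"     # prints 640
rm -f "$f"
```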
Public bug reported:
Running dpkg-reconfigure maas-dns breaks the current dns config
Observation seems to indicate that each reconfigure appends the
following line to /etc/bind/named.conf.local
include "/etc/bind/maas/named.conf.maas";
This causes bind to choke and output confusing messages
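The duplicate-append behaviour could be avoided with an idempotent guard
in the maintainer script; the guard below is my assumption of a fix, not
what maas-dns actually does, and it is demonstrated on a scratch file
standing in for /etc/bind/named.conf.local:

```shell
# Append the include line only if it is not already present.
conf=$(mktemp)    # stand-in for /etc/bind/named.conf.local
line='include "/etc/bind/maas/named.conf.maas";'
append_once() {
    grep -qxF "$line" "$conf" || echo "$line" >> "$conf"
}
append_once    # first reconfigure adds the line
append_once    # second reconfigure is now a no-op
grep -cxF "$line" "$conf"    # prints 1
rm -f "$conf"
```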
Public bug reported:
The default install of maas-dns creates the following file
/etc/bind/maas/rndc.conf.maas
Among its contents are the following two bind9 configuration sections:
key rndc-maas-key {
    algorithm hmac-md5;
    secret (edited away);
};
options {
    default-key
https://bugs.launchpad.net/bugs/955110
Title:
juju should tell me that I'm not in libvirtd group when running juju
bootstrap
Public bug reported:
This causes spurious errors such as:
2012-03-14 15:37:06,981 ERROR Command '['virsh', 'net-start',
'default']' returned non-zero exit status 1
ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: juju 0.5+bzr457-0ubuntu1
ProcVersionSignature: Ubuntu
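The virsh failure above is what you hit when the user is missing from
the libvirtd group, which is exactly the check the bug title asks juju
bootstrap to perform. A minimal sketch of such a check; the helper name
and wording are assumptions of mine, not juju code:

```shell
# Does the current user belong to the named group?
user_in_group() {
    id -nG | grep -qw "$1"
}

if ! user_in_group libvirtd; then
    echo "error: current user is not in the libvirtd group" >&2
fi
```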
This occurred while configuring the same package using linaro headless
20110208 with identical error messages:
qemu: Unsupported syscall: 250
plymouth: ply-event-loop.c:466: ply_event_loop_new: Assertion `loop->epoll_fd
>= 0' failed.
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted
I built qemu-kvm with DEB_BUILD_OPTIONS=noopt, which replaces -O2 with -O0,
and (after a _long_ session) ran into this crasher:
Program received signal SIGSEGV, Segmentation fault.
0xcde9719c in ?? ()
(gdb) bt
#0 0xcde9719c in ?? ()
#1 0x7fffe090 in ?? ()
#2
Hi
I can reproduce this each time by running netboot installer using the
versatile kernel:
#!/bin/sh
qemu-img create -f qcow2 sda.qcow2 16G
gdb --args qemu-system-arm -M versatilepb -m 256 -cpu cortex-a8 \
    -kernel vmlinuz -initrd initrd.gz -hda sda.qcow2 -append "mem=256M"
Here is the backtrace: