Hi,

I tried with the ubuntu charm and the issue is still reproducible. I tried creating a container first and then a KVM, but the behavior seems to be the same.
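For reference, the CLI equivalent of what I did through the GUI would be roughly the following (I drove it from the GUI, so the exact invocations may have differed):

$ juju deploy ubuntu ubuntu-a --to lxc:26    # LXC container first
$ juju deploy ubuntu --to kvm:26             # then the KVM guest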
The resulting juju status:

[Units]
ID          WORKLOAD-STATE  AGENT-STATE  VERSION  MACHINE   PORTS           PUBLIC-ADDRESS      MESSAGE
juju-gui/0  unknown         idle         1.25.3   0         80/tcp,443/tcp  cm-mainserver-juju
mysql/1     unknown         idle         1.25.3   5         3306/tcp        10.214.2.125
oai-enb/10  blocked         idle         1.25.3   21        2152/udp        10.214.2.135        Waiting for EPC relation
ubuntu-a/1  unknown         allocating            26/lxc/2                  10.0.3.87           Waiting for agent initialization to finish
ubuntu/2    unknown         allocating            26/kvm/6                  192.168.122.22      Waiting for agent initialization to finish

[Machines]
ID  STATE    VERSION  DNS                 INS-ID               SERIES  HARDWARE
0   started  1.25.3   cm-mainserver-juju  manual:              trusty  arch=amd64 cpu-cores=1 mem=3942M
5   started  1.25.3   10.214.2.125        manual:10.214.2.125  trusty  arch=amd64 cpu-cores=1 mem=3942M
21  started  1.25.3   10.214.2.135        manual:10.214.2.135  trusty  arch=amd64 cpu-cores=4 mem=15677M
26  started  1.25.3   10.214.2.127        manual:10.214.2.127  trusty  arch=amd64 cpu-cores=1 mem=3942M

How is the public address generated? I don't see those IP addresses when I execute "ip addr show" on machine 26.

Also, I observed the following error when I checked /var/log/juju/machine-26-lxc-2.log:

2016-04-04 01:44:42 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING;unit=DEBUG"
2016-04-04 01:44:42 WARNING juju.cmd.jujud machine.go:948 determining kvm support: INFO: /dev/kvm does not exist
HINT:   sudo modprobe kvm_intel
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.19.0-031900-lowlatency/modules.dep.bin'
: exit status 1
no kvm containers possible

However, both /dev/kvm and /lib/modules/3.19.0-031900-lowlatency/modules.dep.bin are present, and I regenerated modules.dep.bin with sudo depmod as well, but I am still observing the same behavior. Further, I *didn't see any* /var/log/juju/machine-26-kvm*.log files when I created a KVM.
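For completeness, the first three commands below are what I ran on machine 26; the last is a guess at where to look next, since the failing log comes from inside the container (the lxc-attach target assumes Juju's usual juju-machine-<m>-lxc-<n> naming, so adjust if yours differs):

$ ls -l /dev/kvm /lib/modules/3.19.0-031900-lowlatency/modules.dep.bin
$ sudo depmod -a 3.19.0-031900-lowlatency    # regenerate modules.dep(.bin)
$ sudo modprobe kvm_intel
$ sudo lxc-attach -n juju-machine-26-lxc-2 -- ls -l /dev/kvm    # same check, inside the container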
Any inputs will be greatly helpful.

Regards,
Phani

On Sun, Apr 3, 2016 at 12:22 AM, John Meinel <j...@arbash-meinel.com> wrote:

> To isolate issues you can always just try deploying the "ubuntu" charm
> and see if it comes up correctly. For further debugging I would ssh onto
> the outer machine (juju ssh 26 in this case) and see how things are
> configured. You can look at /var/log/juju/machine*.log and get the output
> of things like "ip addr show".
>
> John
> =:->
>
> On Apr 3, 2016 11:19 AM, "John Meinel" <j...@arbash-meinel.com> wrote:
>
>> My initial thought is that we aren't getting KVM onto a bridge to the
>> outer network properly. One thing you could try is to first deploy an
>> LXC/LXD container onto machine 26 and then do the KVM. This is simply
>> because I know we've spent a lot more time getting container networking
>> right, and we could have a bug where we are assuming a bridge that we
>> only set up for containers.
>>
>> John
>> =:->
>>
>> On Apr 2, 2016 11:57 PM, "phani shankar" <phanishanka...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We are trying to use Juju charms to deploy a network. We require each
>>> charm to run in its own KVM. We are using the Juju GUI (running on another
>>> machine) to create a KVM container on the node and start the charm.
>>> However, we observe that the charm is stuck at agent initialization. We
>>> are able to bring up the charm in the root container. Can you please guide
>>> us on how we can debug this further?
>>>
>>> juju debug-log indicates that there was an error fetching the public
>>> address and there was a broken pipe. The Juju GUI is running on
>>> 10.214.2.61 and the node on which we are installing the charm is
>>> 10.214.2.127.
>>>
>>> juju debug-log
>>> ==========
>>> machine-0: message repeated 3 times: [2016-04-02 19:29:05 ERROR juju.rpc server.go:573 error writing response: write tcp 10.214.2.127:53794: broken pipe]
>>> machine-0: 2016-04-02 19:38:50 WARNING juju.state allwatcher.go:351 getting a public address for unit "oai-hss/7" failed: "public no address"
>>> machine-0: 2016-04-02 19:38:50 WARNING juju.state allwatcher.go:355 getting a private address for unit "oai-hss/7" failed: "private no address"
>>> machine-0: 2016-04-02 19:39:37 WARNING juju.apiserver.client status.go:465 error fetching public address: "public no address"
>>> machine-0: 2016-04-02 19:39:37 WARNING juju.apiserver.client status.go:677 error fetching public address: public no address
>>>
>>> Outcome of juju status:
>>> =================
>>>
>>> crossmobile@cm-mainserver-juju:~$ juju status --format=tabular
>>> [Services]
>>> NAME      STATUS   EXPOSED  CHARM
>>> juju-gui  unknown  true     cs:trusty/juju-gui-52
>>> mysql     unknown  false    cs:trusty/mysql-36
>>> oai-enb   blocked  false    local:trusty/oai-enb-23
>>> oai-hss   unknown  false    local:trusty/oai-hss-5
>>>
>>> [Units]
>>> ID          WORKLOAD-STATE  AGENT-STATE  VERSION  MACHINE   PORTS           PUBLIC-ADDRESS      MESSAGE
>>> juju-gui/0  unknown         idle         1.25.3   0         80/tcp,443/tcp  cm-mainserver-juju
>>> mysql/1    unknown         idle         1.25.3   5         3306/tcp        10.214.2.125
>>> oai-enb/10  blocked         idle         1.25.3   21        2152/udp        10.214.2.135        Waiting for EPC relation
>>> oai-hss/7   unknown         allocating            26/kvm/1                                      Waiting for agent initialization to finish
>>>
>>> [Machines]
>>> ID  STATE    VERSION  DNS                 INS-ID               SERIES  HARDWARE
>>> 0   started  1.25.3   cm-mainserver-juju  manual:              trusty  arch=amd64 cpu-cores=1 mem=3942M
>>> 5   started  1.25.3   10.214.2.125        manual:10.214.2.125  trusty  arch=amd64 cpu-cores=1 mem=3942M
>>> 21  started  1.25.3   10.214.2.135        manual:10.214.2.135  trusty  arch=amd64 cpu-cores=4 mem=15677M
>>> 26  started  1.25.3   10.214.2.127        manual:10.214.2.127  trusty  arch=amd64 cpu-cores=1 mem=3942M
>>>
>>> Please let me know your thoughts.
>>>
>>> PHANI SHANKAR
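Following John's suggestion, the "ip addr show" output from machine 26 is below, and the container log is attached. I also plan to check the following on machine 26 (assuming the stock trusty LXC/libvirt tooling):

$ sudo lxc-ls --fancy      # LXC containers and their assigned IPs
$ sudo virsh list --all    # whether the KVM guest for 26/kvm/6 was actually created
$ ls /var/log/juju/        # confirms there is no machine-26-kvm-*.log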
ubuntu@cm-mainserver-juju-EPC:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 82:22:fc:d8:15:29 brd ff:ff:ff:ff:ff:ff
    inet 10.214.2.127/16 brd 10.214.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8022:fcff:fed8:1529/64 scope link
       valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether fe:96:23:96:b9:dc brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::445c:29ff:fe43:f38f/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether fe:54:00:02:d6:38 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 500
    link/ether fe:54:00:2a:32:ee brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe2a:32ee/64 scope link
       valid_lft forever preferred_lft forever
17: vethOF1FM7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP group default qlen 1000
    link/ether fe:96:23:96:b9:dc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc96:23ff:fe96:b9dc/64 scope link
       valid_lft forever preferred_lft forever
19: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 500
    link/ether fe:54:00:02:d6:38 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe02:d638/64 scope link
       valid_lft forever preferred_lft forever
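For what it's worth, the public addresses Juju reports line up with the two bridges above: 10.0.3.87 is in lxcbr0's 10.0.3.0/24 subnet and 192.168.122.22 is in virbr0's 192.168.122.0/24 subnet. So they appear to be DHCP leases handed out by the host-local dnsmasq on each bridge, which would also explain why "ip addr show" doesn't list them: they belong to the guests, not to the host. Assuming the default lease-file locations on trusty, they should show up with:

$ cat /var/lib/misc/dnsmasq.lxcbr0.leases            # LXC container leases
$ sudo cat /var/lib/libvirt/dnsmasq/default.leases   # libvirt/KVM guest leases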
--
PHANI SHANKAR

[Attachment: machine-26-lxc-2.log (binary data)]
--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev