Re: Trying to remove and redeploy quantum-gateway on a different server

2015-09-22 Thread Jeff McLamb
Mark, Mike -

Thanks for all of this. Indeed, it was pretty cool that I could just tear
things down and point it elsewhere. Mike's been very helpful on the
ubuntu-openstack-installer list with the Autopilot, as well as many other
issues. I've decided to go with the manual Juju approach for now because
I'm trying to understand things at a lower level, but once that's done and
I'm comfortable with all the pieces and how they fit together, I'll revisit
the Autopilot.

Thanks,

Jeff

On Sun, Sep 20, 2015 at 6:59 AM, Mark Shuttleworth  wrote:

> Hi Jeff
>
> Glad you found the issue! It's pretty cool that you could just reconnect
> the bits with a different mapping to VMs and voilà it works :)
>
> Generally, Juju will provide you with the fastest way to "do what you
> want", and to the extent the charms have seen your situation before, a best
> practice result. In this case we can enhance that charm to error helpfully
> (i.e. with a message) saying "I need two network interfaces, please put me
> on an appropriate VM or real machine".
>
> For those people who really want to understand every aspect of OpenStack,
> this is a great way to do what you want, and experiment quickly, to get the
> large scale cloud you want. We just added detailed support for Contrail 2.2
> for example, which is fun to test as a real SDN at scale.
>
> But we think most people do not need or want to be OpenStack experts; they
> want OpenStack to "Just Work" so they can focus on their apps and workloads.
>
> That's why we also built the Autopilot, which is free for up to 10 nodes
> (thereafter free with our standard support offering) and it will:
>
>  * study the hardware you provide it
>  * offer you a choice of SDNs and storage and other components
>  * create the appropriate reference architecture of containers, metal and
> VMs
>  * monitor the resulting cloud
>
> Once you've deployed, you can feed it more machines to grow, and it will
> evolve the reference architecture accordingly.
>
> If you're going to be building lots of these clouds, or need to focus at a
> higher level, please consider the autopilot.
>
> Mark
>
>
> On 19/09/15 16:48, Jeff McLamb wrote:
>
> Solved! TL;DR - You must deploy quantum-gateway or neutron-gateway to bare
> metal. It will not work in an LXC container.
>
> It appears you cannot deploy the {quantum,neutron}-gateway charm to an LXC
> container. I eventually tracked the stuck config-changed state down to the
> openvswitch vswitchd daemon failing to start because it cannot load the
> kernel module from inside the container; that leaves the hook in a loop,
> repeatedly attempting to add flows to the br-int OVS bridge and failing
> because the daemon is not running.
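>
> A rough way to confirm this from inside the container, assuming the usual
> trusty package and service names:
>
>   sudo service openvswitch-switch status    # vswitchd never comes up
>   sudo modprobe openvswitch                  # typically fails inside a container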
>
> I removed the service from the LXC and deployed it to a bare metal machine
> (co-hosted with Ceph services that don’t seem to do anything fancy with
> bridges or OVS) and everything worked just fine.
>
> Thanks,
>
> Jeff
>
> On Fri, Sep 18, 2015 at 4:29 PM, Jeff McLamb wrote:
>
>
> Thanks for the info, Mike! I will provide a bit more detail as to what’s
> going on in case any Juju experts have seen something similar.
>
> Where I last left off, I had deleted all units of anything
> {neutron,quantum}-gateway related from juju’s perspective. However, the
> original bare metal node was still providing the neutron gateway (i.e.
> network node) services, so I manually issued a stop on all of those (e.g.
> stop neutron-dhcp, stop neutron-metadata-server, stop neutron-metering,
> etc.). The only thing I left running on that server was openvswitch because
> it is also a compute node, so it needs to have the openvswitch plugin
> running, as I understand it.
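>
> Roughly, that amounted to something like the following (the exact upstart
> job names vary by release):
>
>   initctl list | grep neutron    # see which neutron jobs are still running
>   sudo stop <job-name>           # repeat for each gateway-related job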
>
> I then issued a `juju deploy neutron-gateway --to lxc:3` in order to
> install the gateway in an lxc container.
>
> The lxc container is provisioned and everything seems to be going OK until
> it gets stuck in agent status -> message: running config-changed hook
>
> If I ssh into the container itself and look at processes, I can see it is
> in a seemingly infinite loop where the neutron rootwrapper is constantly
> issuing ovs-vsctl add-br br-int calls, along with adding and removing
> flows. It’s as though it keeps trying to add and remove the br-int ovs, and
> just sits there doing that forever.
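>
> A quick way to see the same symptom, assuming the standard OVS tools are
> installed:
>
>   pgrep ovs-vswitchd        # no output when the daemon failed to start
>   sudo ovs-appctl version   # errors out if it cannot reach ovs-vswitchd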
>
> I manually killed the config-changed hook and then issued a `juju
> debug-hooks neutron-gateway/0 config-changed` followed by `juju resolved
> neutron-gateway/0 --retry`, which has dumped me into a config-changed tmux
> session where I manually issued ./hooks/config-changed.
>
> There is not much other than the following output, after which it hangs:
>
> root@juju-machine-3-lxc-11:/var/lib/juju/agents/unit-neutron-gateway-0/charm#
> ./hooks/config-changed
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Reading package lists... Done
> Building dependency tree
> Reading state inform

Re: What is the default username/password of LXC instances (Ubuntu) deployed by Juju via MAAS?

2015-09-22 Thread Jeff McLamb
Hey Andrew -

Thanks for the info! Turns out editing the rootfs path worked, as you
suggested, as did lxc-attach (good to know).

I am back to normal operation now... I was just in a strange state this
past weekend where I had lost power to my MAAS and Juju
deployment/bootstrap nodes, so I couldn't do what I normally do, which is
'juju ssh ...'

Thanks,

Jeff

On Sat, Sep 19, 2015 at 9:47 PM, Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Sun, Sep 20, 2015 at 12:17 AM Jeff McLamb  wrote:
>
>> Hi all -
>>
>> I am currently in a situation where my juju deployment host and MAAS host
>> (thus DHCP/DNS) are down and I won’t be able to power them back on for a
>> few days.
>>
>> I have manually added entries to /etc/hosts on all of my bare metal
>> machines (so that OpenStack services can resolve names absent the DNS
>> server) but I cannot login to my LXC instances in order to modify
>> /etc/hosts.
>>
>> For whatever reason the keys aren’t in place to directly ssh into each
>> LXC instance from my bare-metal machines. The only option I have is to
>> lxc-console directly into each instance, where I am presented with a login
>> prompt.
>>
>
> You can use the "lxc-attach" command to run arbitrary processes within the
> container, without the need for a login.
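>
> For example, something like this (container names as shown by lxc-ls on
> the host):
>
>   sudo lxc-attach -n <container-name> -- vi /etc/hosts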
>
> Is there a default user/pass for these Ubuntu LXC instances deployed by
>> juju (via MAAS?) or some way to inject a file (e.g. replace/mod /etc/hosts)
>> into the container?
>>
>> It appears as though I could just modify
>> /var/lib/lxc/<container>/rootfs/etc/hosts directly, but that seems like it
>> might cause consistency issues? Or maybe doing that followed by an
>> `lxc-stop --name <container> -r` will reboot the container and it will Just
>> Work?
>>
>
> AFAIK editing the file via the rootfs path is fine, and you only need to
> restart the container if some application is caching the host resolution.
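>
> So, roughly, something like this (the IP, hostname and container name below
> are just placeholders):
>
>   # placeholder entry, placeholder container name:
>   sudo sh -c 'echo "10.0.0.1 maas-server" >> /var/lib/lxc/<container>/rootfs/etc/hosts'
>   sudo lxc-stop --name <container> -r   # only if something cached the old names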
>
> FYI the LXC container, if provisioned by Juju, should have the public SSH
> keys of the user who bootstrapped. So you should be able to ssh to the LXC
> container from your client machine if you can ssh to the MAAS node.
>
> I'm not sure what versions of the juju CLI this is true for (definitely
> the most recent), but for a while now "juju ssh" will accept an IP address
> or hostname, and attempt to ssh to it using a Juju server as a proxy.
> i.e.:
> juju ssh 10.x.y.z
> will do something like
> ssh -o ProxyCommand="juju ssh 0" 10.x.y.z
>
> HTH,
> Andrew
>
> Thanks,
>>
>> Jeff
>


Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Andreas Hasenack
Hi,

given an existing juju environment, is there a way to tell which
JUJU_DEV_FEATURE_FLAGs were used to bootstrap it?

I'm using 1.24.6


Re: Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Nate Finch
The environment variables are transferred to the server, so getting them
from /proc/<pid>/environ on the server should be doable (someone better at
bash might be able to give you a one-liner).
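
A rough sketch of such a one-liner, assuming root access on the state server
and that the machine agent's command line contains "jujud machine":

  sudo cat /proc/$(pgrep -f 'jujud machine' | head -n1)/environ \
    | tr '\0' '\n' | grep JUJU_DEV_FEATURE_FLAGS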

On Tue, Sep 22, 2015 at 1:19 PM Andreas Hasenack 
wrote:

> Hi,
>
> given an existing juju environment, is there a way to tell which
> JUJU_DEV_FEATURE_FLAGs were used to bootstrap it?
>
> I'm using 1.24.6
>


Re: Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Andreas Hasenack
On Tue, Sep 22, 2015 at 3:00 PM, Nate Finch 
wrote:

> The environment variables are transferred to the server, so getting them
> from /proc/<pid>/environ on the server should be doable (someone better at
> bash might be able to give you a one-liner).
>
>

Thanks. For completeness, the "e" flag for ps shows it:

 4194 ?        Ssl    0:10 /var/lib/juju/tools/machine-0/jujud machine
--data-dir /var/lib/juju --machine-id 0 --debug UPSTART_INSTANCE=
JUJU_DEV_FEATURE_FLAGS=address-allocation UPSTART_JOB=jujud-machine-0
TERM=linux
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin PWD=/


Re: Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Tim Penhey
On 23/09/15 05:18, Andreas Hasenack wrote:
> Hi,
> 
> given an existing juju environment, is there a way to tell which
> JUJU_DEV_FEATURE_FLAGs were used to bootstrap it?
> 
> I'm using 1.24.6

The second line of logging in every agent lists the feature flags that
the agent is using.

As the agent starts, it logs first the version of Juju, then the feature
flags.
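
For example, on the bootstrap node (assuming the default 1.x agent log
location):

  grep 'feature flags' /var/log/juju/machine-0.log | head -n1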

Tim




Re: Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Andreas Hasenack
On Tue, Sep 22, 2015 at 4:57 PM, Tim Penhey 
wrote:

> On 23/09/15 05:18, Andreas Hasenack wrote:
> > Hi,
> >
> > given an existing juju environment, is there a way to tell which
> > JUJU_DEV_FEATURE_FLAGs were used to bootstrap it?
> >
> > I'm using 1.24.6
>
> The second line of logging in every agent lists the feature flags that
> the agent is using.
>
> As the agent starts it logs first the version of juju, then the feature
> flags.
>
>

Got it there too, thanks:
machine-0: 2015-09-22 19:36:19 INFO juju.cmd.jujud machine.go:419 machine
agent machine-0 start (1.24.5-trusty-amd64 [gc])
machine-0: 2015-09-22 19:36:19 WARNING juju.cmd.jujud machine.go:421
developer feature flags enabled: "address-allocation"


Re: Which JUJU_DEV_FEATURE_FLAG were used?

2015-09-22 Thread Marco Ceppi
This is an interesting one. I hacked together a one-liner from both Nate's
reply and yours:

juju ssh 0 "ps -ae -o command= | grep [j]ujud | grep JUJU_DEV_FEATURE_FLAGS
| awk -F'\"' '{ print $2 }'"

and added[0] it to the juju plugins repo as `juju flags`:
https://github.com/juju/plugins

$ juju flags
storage jes

Marco

[0]: https://github.com/juju/plugins/pull/61


On Tue, Sep 22, 2015 at 3:59 PM Andreas Hasenack 
wrote:

> On Tue, Sep 22, 2015 at 4:57 PM, Tim Penhey 
> wrote:
>
>> On 23/09/15 05:18, Andreas Hasenack wrote:
>> > Hi,
>> >
>> > given an existing juju environment, is there a way to tell which
>> > JUJU_DEV_FEATURE_FLAGs were used to bootstrap it?
>> >
>> > I'm using 1.24.6
>>
>> The second line of logging in every agent lists the feature flags that
>> the agent is using.
>>
>> As the agent starts it logs first the version of juju, then the feature
>> flags.
>>
>>
>
> Got it there too, thanks:
> machine-0: 2015-09-22 19:36:19 INFO juju.cmd.jujud machine.go:419 machine
> agent machine-0 start (1.24.5-trusty-amd64 [gc])
> machine-0: 2015-09-22 19:36:19 WARNING juju.cmd.jujud machine.go:421
> developer feature flags enabled: "address-allocation"
>