Is there a way to log in to the LXD containers of an existing OpenStack?

2017-06-05 Thread Sathyashankara bhat
Hi,
I deployed OpenStack using Autopilot and it was all fine. However, last
week the server running the Juju controller for the OpenStack setup got
bricked, so I lost the ability to `juju ssh` to any of the OpenStack
hypervisors. Is there a way to ssh to the OpenStack machines in the
absence of the old Juju controller?
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
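
A minimal sketch of one possible recovery path, assuming you still have
shell access to the physical hosts that run the LXD containers; the
container name below is illustrative, not from the thread:

# On a hypervisor host, LXD's own client works without Juju
# (requires root or membership in the 'lxd' group):
$ lxc list                                # enumerate containers on this host
$ lxc exec juju-machine-1-lxd-0 -- bash   # open a shell inside a container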


Re: OS X VMS on JAAS

2017-06-05 Thread John Meinel
...


>
>> Which is why using something like the "lxd provider" would be a more
>> natural use case, but according to James the sticking point is having to
>> set up a controller in the first place. So "--to lxd:0" is easier for them
>> to think about than setting up a provider and letting it decide how to
>> allocate machines.
>>
>> Note, it probably wouldn't be possible to use JAAS to drive an LXD
>> provider, because *that* would have JAAS be trying to make a direct
>> connection to your LXD agent in order to provision the next machine.
>> However "--to lxd:0" has the local juju agent (running for 'machine 0')
>> talking to the local LXD agent in order to create a container.
>>
> If this is a useful case, could we define it as a mode of operation and
> have juju just work in such a scenario? It's an interesting mix of allowing
> the benefits of jaas for manually provisioned machines and environments.
> Just eliminating the weird behaviors and having to pretend it's a known
> cloud / provider could be useful. An assume nothing mode if you will.


It's essentially the 'manual provider', which is explicitly about you
having to tell Juju about the machines you might want to use. I wouldn't
think it would be hard to create a model that used manual provisioning,
but it's probably not something we've been driving as a great use case.
Things like needing the right routing, etc., mean it's easy for people to
add machines that won't actually work, so it isn't something we've pushed.

John
=:->
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
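
To make the placement directive discussed above concrete, a brief sketch
(the charm name is only an example):

# Deploy into a fresh LXD container on existing machine 0, instead of
# letting the provider allocate a new instance:
$ juju deploy mysql --to lxd:0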


develop has been branched in preparation of 2.2-rc1 release

2017-06-05 Thread Tim Penhey
Hi all,

develop is now the future 2.3 branch. A patch will be landing soon to
bump the version.

There is now a 2.2 branch. This branch will be used for the 2.2 release
candidate. Landings onto this branch are currently restricted. The
restriction will be lifted when 2.2.0 is released.

Thanks,
Tim

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: OS X VMS on JAAS

2017-06-05 Thread Rick Harding
It always comes back to Juju being a tool pushing for best practices in
operations. It's hard for a hosted service to make any service promises
when things are running on personal laptops and such. It's all doable, but
there's a question of what the best-practice thing to do is. The
controller affinity is something akin to that. Controllers can be dealing
with a lot of communication at scale.

What's interesting here is exploring some idea of the development story
with Juju. I do find it interesting that you've got a sort of pre-seed
workspace you can create and set up.

On Mon, Jun 5, 2017 at 4:03 PM James Beedy  wrote:

> This raises the question: why do we need a provider -> controller affinity
> at all?
>
> On Mon, Jun 5, 2017 at 12:23 PM, Nicholas Skaggs <
> nicholas.ska...@canonical.com> wrote:
>
>> On 06/03/2017 02:56 AM, John Meinel wrote:
>>
>>> You can add a manually provisioned machine to any model, as long as
>>> there is connectivity from the machine to the controller. Now, I would have
>>> thought initial setup was initiated by the Controller, but it's possible
>>> that initial setup is actually initiated from the client.
>>>
>>> Once initial setup is complete, then it is definitely true that all
>>> connections are initiated from the agent running on the controlled machine
>>> to the controller. The controller no longer tries to socket.connect to the
>>> machine. (In 1.X 'actions' were initiated via ssh from the controller, but
>>> in 2.X the agents listen to see if there are any actions to run like they
>>> do for all other changes.)
>>>
>>> Now, given that he added a model into "us-east-1" if he ever did just a
>>> plain "juju add-machine" or "juju deploy" (without --to) it would
>>> definitely create a new instance in AWS and start configuring it, rather
>>> than from your VM.
>>>
>> Is it possible for us to convey the model's proper location, even when
>> using jaas? He's in effect lying to the controller which does have the
>> knock-on effect of weird behavior.
>>
>>>
>>> Which is why using something like the "lxd provider" would be a more
>>> natural use case, but according to James the sticking point is having to
>>> set up a controller in the first place. So "--to lxd:0" is easier for them
>>> to think about than setting up a provider and letting it decide how to
>>> allocate machines.
>>>
>>> Note, it probably wouldn't be possible to use JAAS to drive an LXD
>>> provider, because *that* would have JAAS be trying to make a direct
>>> connection to your LXD agent in order to provision the next machine.
>>> However "--to lxd:0" has the local juju agent (running for 'machine 0')
>>> talking to the local LXD agent in order to create a container.
>>>
>> If this is a useful case, could we define it as a mode of operation and
>> have juju just work in such a scenario? It's an interesting mix of allowing
>> the benefits of jaas for manually provisioned machines and environments.
>> Just eliminating the weird behaviors and having to pretend it's a known
>> cloud / provider could be useful. An assume nothing mode if you will.
>>
>> Nicholas
>>
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
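
A hedged illustration of the behavior John describes above, for a model
backed by "us-east-1" (charm and machine number are examples):

# Without a placement directive, Juju asks the cloud for a new instance:
$ juju deploy ubuntu

# With --to, the unit lands on an existing machine, e.g. the manually
# added VM registered as machine 0:
$ juju deploy ubuntu --to 0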


Re: OS X VMS on JAAS

2017-06-05 Thread James Beedy
This raises the question: why do we need a provider -> controller affinity
at all?

On Mon, Jun 5, 2017 at 12:23 PM, Nicholas Skaggs <
nicholas.ska...@canonical.com> wrote:

> On 06/03/2017 02:56 AM, John Meinel wrote:
>
>> You can add a manually provisioned machine to any model, as long as there
>> is connectivity from the machine to the controller. Now, I would have
>> thought initial setup was initiated by the Controller, but it's possible
>> that initial setup is actually initiated from the client.
>>
>> Once initial setup is complete, then it is definitely true that all
>> connections are initiated from the agent running on the controlled machine
>> to the controller. The controller no longer tries to socket.connect to the
>> machine. (In 1.X 'actions' were initiated via ssh from the controller, but
>> in 2.X the agents listen to see if there are any actions to run like they
>> do for all other changes.)
>>
>> Now, given that he added a model into "us-east-1" if he ever did just a
>> plain "juju add-machine" or "juju deploy" (without --to) it would
>> definitely create a new instance in AWS and start configuring it, rather
>> than from your VM.
>>
> Is it possible for us to convey the model's proper location, even when
> using jaas? He's in effect lying to the controller which does have the
> knock-on effect of weird behavior.
>
>>
>> Which is why using something like the "lxd provider" would be a more
>> natural use case, but according to James the sticking point is having to
>> set up a controller in the first place. So "--to lxd:0" is easier for them
>> to think about than setting up a provider and letting it decide how to
>> allocate machines.
>>
>> Note, it probably wouldn't be possible to use JAAS to drive an LXD
>> provider, because *that* would have JAAS be trying to make a direct
>> connection to your LXD agent in order to provision the next machine.
>> However "--to lxd:0" has the local juju agent (running for 'machine 0')
>> talking to the local LXD agent in order to create a container.
>>
> If this is a useful case, could we define it as a mode of operation and
> have juju just work in such a scenario? It's an interesting mix of allowing
> the benefits of jaas for manually provisioned machines and environments.
> Just eliminating the weird behaviors and having to pretend it's a known
> cloud / provider could be useful. An assume nothing mode if you will.
>
> Nicholas
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: OS X VMS on JAAS

2017-06-05 Thread Nicholas Skaggs

On 06/03/2017 02:56 AM, John Meinel wrote:
> You can add a manually provisioned machine to any model, as long as
> there is connectivity from the machine to the controller. Now, I would
> have thought initial setup was initiated by the Controller, but it's
> possible that initial setup is actually initiated from the client.
>
> Once initial setup is complete, then it is definitely true that all
> connections are initiated from the agent running on the controlled
> machine to the controller. The controller no longer tries to
> socket.connect to the machine. (In 1.X 'actions' were initiated via
> ssh from the controller, but in 2.X the agents listen to see if there
> are any actions to run like they do for all other changes.)
>
> Now, given that he added a model into "us-east-1", if he ever did just
> a plain "juju add-machine" or "juju deploy" (without --to) it would
> definitely create a new instance in AWS and start configuring it,
> rather than from your VM.

Is it possible for us to convey the model's proper location, even when
using jaas? He's in effect lying to the controller, which does have the
knock-on effect of weird behavior.

> Which is why using something like the "lxd provider" would be a more
> natural use case, but according to James the sticking point is having
> to set up a controller in the first place. So "--to lxd:0" is easier
> for them to think about than setting up a provider and letting it
> decide how to allocate machines.
>
> Note, it probably wouldn't be possible to use JAAS to drive an LXD
> provider, because *that* would have JAAS be trying to make a direct
> connection to your LXD agent in order to provision the next machine.
> However "--to lxd:0" has the local juju agent (running for 'machine
> 0') talking to the local LXD agent in order to create a container.

If this is a useful case, could we define it as a mode of operation and
have juju just work in such a scenario? It's an interesting mix of
allowing the benefits of jaas for manually provisioned machines and
environments. Just eliminating the weird behaviors and having to pretend
it's a known cloud / provider could be useful. An assume nothing mode if
you will.

Nicholas

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: OS X VMS on JAAS

2017-06-05 Thread James Beedy
One big reason this has been such a gem for me is that once a user adds
their VM to a model, I can deploy/manage/administer the application for
them remotely on their local VM. This is huge when on-boarding new users,
because it helps negate all the things someone foreign to Juju might
encounter when deploying my custom bundles, which need proxy actions and
the like run to get them initialized. Also, I no longer have to screen
share and deal with all the remote-assistance mumbo jumbo of deploying and
configuring their local LXD environments on a per-user basis just to get
them going (this has been a huge time sink for me previously).

~James

On Sun, Jun 4, 2017 at 8:31 AM, James Beedy  wrote:

> @john, @andrew thanks for the details here
>
> On Sat, Jun 3, 2017 at 10:21 PM, Andrew Wilkins <
> andrew.wilk...@canonical.com> wrote:
>
>> On Sat, Jun 3, 2017 at 2:56 PM John Meinel 
>> wrote:
>>
>>> You can add a manually provisioned machine to any model, as long as
>>> there is connectivity from the machine to the controller. Now, I would have
>>> thought initial setup was initiated by the Controller, but its possible
>>> that initial setup is actually initiated from the client.
>>>
>>
>> Given the command:
>>
>> $ juju add-machine ssh:
>>
>> it goes something like this:
>>
>> 1. client connects to  via SSH, and performs basic hardware/OS
>> discovery
>> 2. client asks controller to add a machine entry, and controller returns
>> a script to be executed on the target machine, using the discovered
>> details, in order to fetch and install jujud
>> 3. client executes that script over the SSH connection
>>
>> Once initial setup is complete, then it is definitely true that all
>>> connections are initiated from the agent running on the controlled machine
>>> to the controller. The controller no longer tries to socket.connect to the
>>> machine. (In 1.X 'actions' were initiated via ssh from the controller, but
>>> in 2.X the agents listen to see if there are any actions to run like they
>>> do for all other changes.)
>>>
>>> Now, given that he added a model into "us-east-1" if he ever did just a
>>> plain "juju add-machine" or "juju deploy" (without --to) it would
>>> definitely create a new instance in AWS and start configuring it, rather
>>> than from your VM.
>>>
>>> Which is why using something like the "lxd provider" would be a more
>>> natural use case, but according to James the sticking point is having to
>>> set up a controller in the first place. So "--to lxd:0" is easier for them
>>> to think about than setting up a provider and letting it decide how to
>>> allocate machines.
>>>
>>> Note, it probably wouldn't be possible to use JAAS to drive an LXD
>>> provider, because *that* would have JAAS be trying to make a direct
>>> connection to your LXD agent in order to provision the next machine.
>>> However "--to lxd:0" has the local juju agent (running for 'machine 0')
>>> talking to the local LXD agent in order to create a container.
>>>
>>> John
>>> =:->
>>>
>>>
>>> On Fri, Jun 2, 2017 at 6:28 PM, Jay Wren  wrote:
>>>
 I do not understand how this works. Could someone with knowledge of how
 jujud on a controller communicates with jujud agents on units describe how
 that is done?

 My limited understanding must be wrong given that James has this working.

 This is what I thought:

 On most cloud providers: add-machine instructs the cloud provider to
 start a new instance and the cloud-config passed to cloud-init includes how
 to download jujud agent and run it and configure it with public key trust
 of the juju controller.

 On a manually added machine: the same thing, only instead of cloud-init and
 cloud-config, an ssh connection is used to perform the same commands.

 I had thought the juju controller was initiating the ssh connection to
 the address given in the add-machine command, and that a
 non-internet-routable address would simply not work, as the controller
 cannot open any TCP connection to it. This is where my understanding stops.

 Please, anyone, describe how this works?
 --
 Jay


 On Fri, Jun 2, 2017 at 9:42 AM, James Beedy 
 wrote:

> I think the primary advantage is less clutter for the end user: the
> difference between the end user having to bootstrap and control things
> from inside the vm vs from their host. For some reason this small change
> made some of my users who were previously not really catching on far more
> apt to jump in. I personally like it because these little vms go further
> when they don't have the controller on them as well. @jameinel totally,
> possibly I'll add the bridge bits in place of the lxd-proxy in that write
> up, or possibly in another.
>
> ~James
>
> On Jun 2, 2017, at 12:56 AM, John Meinel 
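
A short sketch of the manual-provisioning flow Andrew outlines above;
the user, address, and machine number are hypothetical:

# The client SSHes to the target, does hardware/OS discovery, asks the
# controller for a machine entry, then runs the returned script over the
# same SSH connection to install jujud:
$ juju add-machine ssh:ubuntu@10.0.0.42

# Once the agent connects back, the machine can host units like any other:
$ juju deploy haproxy --to 1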


Re: Exiting an unconditional juju debug-hooks session

2017-06-05 Thread Dmitrii Shcherbakov
John,

Any will work:

- ./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
- ./hooks/$JUJU_HOOK_NAME, C-a d
- C-a 0, exit, ./hooks/$JUJU_HOOK_NAME, C-a exit

This is because we have an `exec` on new session creation:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L103
and on attachment to an existing session:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L61

We have screen keys enabled for tmux, so the prefix is C-a.

However, it is interesting that on session detach I am still getting a
message (defined in a function associated with the trap on EXIT) that
says "Cleaning up the debug session" - it shouldn't be there after the
exec.
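
As a simplified illustration of that pattern (a sketch, not the actual
payload from client.go), a successful exec replaces the shell's process
image, so the EXIT trap in that process should never fire:

# hypothetical sketch of the debug-session wrapper
cleanup() { echo "Cleaning up the debug session"; }
trap cleanup EXIT
# On success, tmux replaces this bash, so this process's EXIT trap never
# runs; a forked copy of the script that still exits normally would
# explain seeing the message on detach.
exec tmux attach-session -t "$JUJU_UNIT_NAME"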

Looking at the process tree, I can see 4 bash processes under an sshd
process with the same script in base64:
http://paste.ubuntu.com/24781121/

It seems like bash (pid 3391) was forked 4 times consecutively with
the last process (3404) `exec`-ing `tmux attach-session -t
{unit_name}`

I am going to need to check why there are 4 of them, but detaching like
that is fine by me.
Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh


On Sun, Jun 4, 2017 at 6:56 PM, John Meinel  wrote:
> Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
> all sessions? (http://www.dayid.org/comp/tm.html says it would be ^B d). Or
> switching to session 0 and exiting that one first?
>
> I thought we had a quick way to disconnect, but it's possible you have to
> exit 2x and that fast-firing hooks always catch a new window before you can
> exit a second time.
>
> John
> =:->
>
>
> On Sun, Jun 4, 2017 at 5:56 PM, Dmitrii Shcherbakov
>  wrote:
>>
>> Hi everybody,
>>
>> Currently if you do
>>
>> juju debug-hooks  # no event (hook) in particular
>>
>> each time there is a new event you will get a new tmux window open and
>> this will be done serially as there is no parallelism in hook
>> execution on a given logical machine. This is all good and intentional
>> but when you've observed the charm behavior and want to let it work
>> without your interference again, you need to end your tmux session.
>> This can be hard via `exit [status]` shell builtin when you get a lot
>> of events (think of an OpenStack HA deployment) - each time you do
>>
>> ./hooks/$JUJU_HOOK_NAME && exit
>>
>> you are dropped into a session '0' and a new session is created for a
>> queued event for which you have to manually execute a hook and exit
>> again until you process the backlog.
>>
>> tmux list-windows
>> 0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1 # <---
>> dropping here after `exit`
>> 1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3
>> (active)
>>
>>
>> https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
>> "Note: To allow Juju to continue processing events normally, you must
>> exit the hook execution with a zero return code (using the exit
>> command), otherwise all further events on that unit may be blocked
>> indefinitely."
>>
>> My initial thought was something like this - send SIGTERM to a child
>> of sshd which will terminate your ssh session:
>> unset n ; p=`pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME` ; while
>> [ "$n" != "sshd" ] ; do pc=$p ; p=$(ps -o ppid= $p | tr -d ' ') ; echo
>> $p ; n=`basename $(readlink /proc/$p/exe || echo -n none)` ; done &&
>> kill $pc
>>
>> as an agent waits for an SSH client to exit:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
>>
>> After thinking about it some more, I thought it would be cleaner to
>> just kill a specific tmux session:
>>
>> tmux list-sessions
>> gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62]
>> (attached)
>>
>> ./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
>> [exited]
>> Cleaning up the debug session
>> no server running on /tmp/tmux-0/default
>> Connection to 10.10.101.77 closed.
>>
>> The cleanup message comes from debugHooksClientScript that simply sets
>> up a bash trap on EXIT:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51
>>
>> Judging by the code, it should be pretty safe to do so - unless there
>> is a debug session in a debug context for a particular unit, other
>> hooks will be executed regularly by an agent instead of creating a new
>> tmux window:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
>> debugctx := debug.NewHooksContext(runner.context.UnitName())
>> if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
>>     logger.Infof("executing %s via debug-hooks", hookName)
>>     err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
>> } else {
>>     err = runner.runCharmHook(hookName, env, charmLocation)
>> }
>> return runner.context.Flush(hookName, err)
>>
>> There are two scripts:
>>
>> - a client script