Re: Exiting an unconditional juju debug-hooks session

2017-06-04 Thread John Meinel
Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
all sessions? (http://www.dayid.org/comp/tm.html says it would be ^B d). Or
switching to session 0 and exiting that one first?
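
For reference, the keyboard detach (prefix + d, so ^B d with tmux
defaults) should be equivalent to running the following from inside the
session; it detaches your client but leaves the session and any queued
hook windows running:

tmux detach-client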

I thought we had a quick way to disconnect, but it's possible you have to
exit 2x and that fast-firing hooks always catch a new window before you can
exit a second time.

John
=:->


Exiting an unconditional juju debug-hooks session

2017-06-04 Thread Dmitrii Shcherbakov
Hi everybody,

Currently if you do

juju debug-hooks <unit>  # no event (hook) in particular

each time there is a new event you will get a new tmux window opened,
and this happens serially, as there is no parallelism in hook
execution on a given logical machine. This is all good and intentional,
but once you've observed the charm behavior and want to let it work
without your intervention again, you need to end your tmux session.
This can be hard to do via the `exit [status]` shell builtin when you
get a lot of events (think of an OpenStack HA deployment) - each time
you do

./hooks/$JUJU_HOOK_NAME && exit

you are dropped into window '0' and a new window is created for a
queued event, for which you have to manually execute a hook and exit
again, until you have processed the backlog.

tmux list-windows
0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1   # <--- dropping here after `exit`
1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3 (active)

https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
"Note: To allow Juju to continue processing events normally, you must
exit the hook execution with a zero return code (using the exit
command), otherwise all further events on that unit may be blocked
indefinitely."

My initial thought was something like this - send SIGTERM to the child
of sshd, which will terminate your ssh session:

# walk up the process tree from the tmux client until the parent is sshd,
# then kill sshd's direct child to terminate the ssh session
unset n ; p=$(pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME)
while [ "$n" != "sshd" ] ; do
  pc=$p ; p=$(ps -o ppid= $p | tr -d ' ') ; echo $p
  n=$(basename "$(readlink /proc/$p/exe || echo -n none)")
done && kill $pc

as an agent waits for an SSH client to exit:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53

After thinking about it some more, I thought it would be cleaner to
just kill a specific tmux session:

tmux list-sessions
gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62] (attached)

./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
[exited]
Cleaning up the debug session
no server running on /tmp/tmux-0/default
Connection to 10.10.101.77 closed.
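
If this ends up being the recommended way, a tiny shell helper along
these lines (hypothetical name, just combining the two commands above)
saves some typing inside the debug window:

# hypothetical helper for the debug-hooks shell: run the current hook,
# then tear down the whole debug session in one go
finish() {
    ./hooks/"$JUJU_HOOK_NAME" && tmux kill-session -t "$JUJU_UNIT_NAME"
}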

The cleanup message comes from debugHooksClientScript, which simply
sets up a bash trap on EXIT:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51
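
A minimal sketch of that pattern (not the actual contents of client.go,
just an illustration of a bash EXIT trap; the state file name is made
up):

cleanup() {
    # runs on any exit from the client script, which is where the
    # "Cleaning up the debug session" message comes from
    echo "Cleaning up the debug session"
    rm -f "$DEBUG_SESSION_STATE"   # hypothetical placeholder
}
trap cleanup EXIT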

Judging by the code, it should be pretty safe to do so - unless there
is a debug session in a debug context for a particular unit, hooks
will be executed regularly by the agent instead of creating a new
tmux window:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
debugctx := debug.NewHooksContext(runner.context.UnitName())
if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
    logger.Infof("executing %s via debug-hooks", hookName)
    err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
} else {
    err = runner.runCharmHook(hookName, env, charmLocation)
}
return runner.context.Flush(hookName, err)

There are two scripts:

- a client script executed via an ssh client when you run juju debug-hooks
- a server script which is executed in the `RunHook` function by an
agent and creates a new window for an existing tmux session.

client side:
https://github.com/juju/juju/blob/develop/cmd/juju/commands/debughooks.go#L137
script := base64.StdEncoding.EncodeToString([]byte(unitdebug.ClientScript(debugctx, c.hooks)))
innercmd := fmt.Sprintf(`F=$(mktemp); echo %s | base64 -d > $F; . $F`, script)
args := []string{fmt.Sprintf("sudo /bin/bash -c '%s'", innercmd)}
c.Args = args
return c.sshCommand.Run(ctx)

Server script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L90
Client script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L49

A worker waits until the client exits by monitoring a file lock at
ClientExitFileLock:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L34
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
The path of the lock itself for a particular session:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/common.go#L24
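
The general idea is a flock-based handshake: the client-side script
holds an exclusive lock on that file for the lifetime of the session,
and the agent blocks trying to take the lock, so it wakes up as soon as
the client goes away. A rough illustration (hypothetical variable
names, not the real scripts):

# client side: hold the lock while the tmux client is attached
(
  flock -x 9
  tmux attach-session -t "$JUJU_UNIT_NAME"
) 9>"$CLIENT_EXIT_LOCK"

# agent side: blocks until the client has released the lock
flock -x "$CLIENT_EXIT_LOCK" -c true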

---

If this approach of killing a tmux session is fine, then I could
create a PR for the doc repo and for the description in
debugHooksServerScript to mention it explicitly.

I doubt it deserves a dedicated helper command - a more verbose
explanation should be enough.

Has anybody else encountered the need to do the same?

Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh


debug hooks

2014-10-23 Thread Vasiliy Tolstov
Hi =)!

I have successfully deployed wordpress on precreated lxc containers.
After deploying to a VPS I have all services in an error state with a
message about the install hook failing.

How can I debug this?
In the logs I don't see any useful info:
2014-10-23 11:51:54 INFO juju.cmd supercommand.go:37 running jujud
[1.20.10-trusty-amd64 gc]
2014-10-23 11:51:54 DEBUG juju.agent agent.go:377 read agent config,
format 1.18
2014-10-23 11:51:54 INFO juju.jujud unit.go:78 unit agent
unit-wordpress-0 start (1.20.10-trusty-amd64 [gc])
2014-10-23 11:51:54 INFO juju.worker runner.go:260 start api
2014-10-23 11:51:54 INFO juju.state.api apiclient.go:242 dialing
wss://10.0.3.219:17070/
2014-10-23 11:51:54 INFO juju.state.api apiclient.go:250 error dialing
wss://10.0.3.219:17070/: websocket.Dial wss://10.0.3.219:17070/:
dial tcp 10.0.3.219:17070: connection refused
2014-10-23 11:51:54 ERROR juju.worker runner.go:218 exited api:
unable to connect to wss://10.0.3.219:17070/
2014-10-23 11:51:54 INFO juju.worker runner.go:252 restarting api in 3s
2014-10-23 11:51:57 INFO juju.worker runner.go:260 start api
2014-10-23 11:51:57 INFO juju.state.api apiclient.go:242 dialing
wss://10.0.3.219:17070/
2014-10-23 11:51:58 INFO juju.state.api apiclient.go:176 connection
established to wss://10.0.3.219:17070/
2014-10-23 11:51:35 INFO juju.worker runner.go:260 start upgrader
2014-10-23 11:51:35 INFO juju.worker runner.go:260 start logger
2014-10-23 11:51:35 DEBUG juju.worker.logger logger.go:35 initial log
config: root=DEBUG
2014-10-23 11:51:35 INFO juju.worker runner.go:260 start uniter
2014-10-23 11:51:35 DEBUG juju.worker.logger logger.go:60 logger setup
2014-10-23 11:51:35 INFO juju.worker runner.go:260 start apiaddressupdater
2014-10-23 11:51:35 INFO juju.worker runner.go:260 start rsyslog
2014-10-23 11:51:35 DEBUG juju.worker.rsyslog worker.go:75 starting
rsyslog worker mode 1 for unit-wordpress-0 
2014-10-23 11:51:35 DEBUG juju.worker.logger logger.go:45
reconfiguring logging from root=DEBUG to root=DEBUG;unit=DEBUG
2014-10-23 11:51:35 INFO juju.worker.upgrader upgrader.go:116 desired
tool version: 1.20.10
2014-10-23 11:51:35 DEBUG juju.worker.rsyslog worker.go:164 Reloading
rsyslog configuration
2014-10-23 11:51:35 INFO juju.worker.apiaddressupdater
apiaddressupdater.go:61 API addresses updated to [[public:10.0.3.219
local-machine:127.0.0.1 local-machine:::1
fe80::216:3eff:fe28:3970]]
2014-10-23 11:51:35 DEBUG juju.worker.uniter uniter.go:217 starting
juju-run listener on
unix:/var/lib/juju/agents/unit-wordpress-0/run.socket
2014-10-23 11:51:35 INFO juju.worker.uniter uniter.go:115 unit
wordpress/0 started
2014-10-23 11:51:35 DEBUG juju.worker.uniter runlistener.go:84
juju-run listener running
2014-10-23 11:51:35 INFO juju.worker.uniter modes.go:396 ModeContinue starting
2014-10-23 11:51:35 INFO juju.worker.uniter modes.go:31 loading uniter state
2014-10-23 11:51:35 INFO juju.worker.uniter modes.go:78 awaiting error
resolution for install hook
2014-10-23 11:51:35 DEBUG juju.worker.uniter modes.go:398 ModeContinue exiting
2014-10-23 11:51:35 INFO juju.worker.uniter modes.go:396 ModeHookError starting
2014-10-23 11:51:35 DEBUG juju.worker.uniter uniter.go:757 new
environment change
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:507
charm check skipped, not yet installed.
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:333 got
config change
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:337
preparing new config event
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:317 got
unit change
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:341 got
relations change
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:325 got
service change
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:521 no
new charm event
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:421 want
resolved event
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:415 want
forced upgrade true
2014-10-23 11:51:35 DEBUG juju.worker.uniter.filter filter.go:521 no
new charm event
2014-10-23 11:51:59 INFO juju.worker.upgrader upgrader.go:116 desired
tool version: 1.20.10


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


Re: debug hooks

2014-10-23 Thread David Britton
On Thu, Oct 23, 2014 at 04:09:31PM +0400, Vasiliy Tolstov wrote:
> Hi =)!
>
> I have successfully deployed wordpress on precreated lxc containers.
> After deploying to a VPS I have all services in an error state with a
> message about the install hook failing.


First, look at the logs on the unit in /var/log/juju:

  juju ssh unit
  ls -l /var/log/juju/unit-*

The unit-* logs are the interesting ones if you have gotten as far as
running hooks.
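
For example, with the unit from your log that would be something like:

  juju ssh wordpress/0
  sudo tail -n 100 /var/log/juju/unit-wordpress-0.log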

If that is not revealing, you can step through hook execution; see the
following link (though this is typically not necessary just to see how
something failed - it's more for when you are developing a charm):

  https://juju.ubuntu.com/docs/authors-hook-debug.html
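
Since it is the install hook that failed, the debug session flow would
be roughly (unit name taken from your log; `juju resolved --retry` asks
the agent to re-run the failed hook so it lands in the debug session):

  juju debug-hooks wordpress/0 install
  # in a second terminal, once the session is attached:
  juju resolved --retry wordpress/0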

HTH!

-- 
David Britton david.brit...@canonical.com
