Re: JUJU_UNIT_NAME no longer set in env

2017-05-22 Thread Ian Booth
FWIW, Juju itself still sets JUJU_UNIT_NAME

https://github.com/juju/juju/blob/develop/worker/uniter/runner/context/context.go#L582

On 23/05/17 05:59, James Beedy wrote:
> Juju 2.1.2
> 
I'm getting this "JUJU_UNIT_NAME not in env" error on a legacy (non-reactive)
xenial charm when using service_name() from hookenv.
> 
> http://paste.ubuntu.com/24626263/
> 
> Did we remove this?
> 
> ~James
> 
> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 23/05/17 06:39, Stuart Bishop wrote:
> On 22 May 2017 at 20:02, roger peppe  wrote:
> 
>> not to show in the status history.  Given that the motivation behind
>> the proposal is to reduce load on the database and on controllers, I
> 
> One of the motivations was to reduce load. Another motivation, that
> I'm more interested in, was to make the status log history readable.
> Currently it is page after page of noise about update-status running
> with occasional bits of information.
> 
> (I've leave it to others to argue if it is better to fix this when
> generating the report or by not logging the noise in the first place)
> 

Since Juju 2.1.1, the juju show-status-log command no longer shows
update-status entries by default. There's a new --include-status-updates flag
which can be used if those entries are required in the output.

There's also squashing of repeated log entries. These enhancements were meant to
address the "I don't want to see it" problem.

The idea to not record it was meant to address the load issue (both retrieval
and recording). As part of the ongoing performance tuning and scaling efforts,
some hard numbers are being gathered to measure the impact of keeping
update-status in the database.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


GUI Props

2017-05-22 Thread James Beedy
The latest Juju GUI release is a huge step in the right direction! Amongst
the great improvements, the account page is really the start to something
great! Massive props and many thanks to all who have put in hard work and
long hours to bring the GUI to where it is today.

Looking forward, I'm wondering if the account page would be a good place to
manage ssh keys too?

Either way, nice work here!
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Scale testing analysis

2017-05-22 Thread John Meinel
>
> ...
>


> Most of the responsive nature of Juju is driven off the watchers.
> These watchers watch the mongo oplog for document changes. What happened
> was that there were so many mongo operations that the capped collection of the
> oplog was completely replaced between watcher polls. The
> watchers then errored out in a new, unexpected way.
>
> Effectively the watcher infrastructure needs an internal reset button that
> it can hit when this happens that invalidates all the watchers. This should
> cause all the workers to be torn down and restarted from a known good state.
>

Tim and I discussed this a bit. It probably wasn't the 'oplog' that
overflowed, but actually the 'txns.log' collection, which is also a capped
collection, 10MB in size.
The issue is likely that the 'txnsLogWorker' automatically restarted on an
error, but the error actually meant that we're missing events, which means
that all the watchers/workers that are relying on the event stream should
> be restarted. (We obviously can't know which events we're missing, because
> they're missing.)

So one argument is that txnsLogWorker should *not* be automatically
restarted. Instead, failures of that worker should be treated as critical
and cause the whole process to restart.
The alternative is that we introduce a mechanism to cause all workers to
restart (since they need to start fresh anyway), but restarting the agent
has a similar effect.
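The failure mode can be sketched abstractly: a tailer remembers the last entry it saw in the capped log, and if the oldest entry still present is newer than the next one expected, events were overwritten and silently resuming would hide the loss. A minimal illustrative sketch in Python (not Juju's actual code; the names and the sequence-id scheme are assumptions):

```python
class MissedEventsError(Exception):
    """The capped log wrapped past our last-seen entry: events were lost."""

def poll(capped_log, last_seen):
    """Return (new_last_seen, new_events) since last_seen.

    capped_log is a list of (seq, event) pairs, oldest first, standing in
    for a capped collection such as txns.log.
    """
    if capped_log and capped_log[0][0] > last_seen + 1:
        # Entries between last_seen and the oldest surviving entry were
        # overwritten; we cannot know what we missed, so resuming the
        # tailer alone would silently drop events.
        raise MissedEventsError()
    new = [(s, e) for s, e in capped_log if s > last_seen]
    return (new[-1][0] if new else last_seen), new
```

On MissedEventsError the supervisor would invalidate every dependent watcher and worker (or restart the whole agent), rather than restarting just the tailer with its stale position.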

It is possible that we could whitelist some known errors that don't
indicate we need a full restart, but those should really be an explicit whitelist.

John
=:->


>
> There was a model that got stuck being destroyed; this was tracked back to
> the worker responsible for the destruction not noticing.
>
> All the CPU usage can be tracked back to the 139 models in the apiserver
> state pools each still running leadership and base watcher workers. The
> state pool should have removed all these instances, but it didn't notice
> they were gone.
>
> There are some other bugs around logging things as errors that really
> aren't errors that contributed to log noise, but the fundamental error here
> is not being robust in the face of too much change at once.
>
> This needs to be fixed for the 2.2 release candidate, so it may well push
> that out past the end of this week.
>
> Tim
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Scale testing analysis

2017-05-22 Thread Tim Penhey

Hi folks,

We had another scale test today to analyse why the controller CPU usage 
didn't fall away as expected when the models were removed.


I'll be filing a bunch of bugs from the analysis process, but there is 
one bug that is, I believe, the culprit for the high CPU usage.


Interestingly enough, Juju developers were not able to reproduce the 
problem with smaller deployments. The scale that we were testing was 140 
models each with 10 machines and about 20 total units.


During the teardown process of the testing, all models were destroyed at 
once.


Most of the responsive nature of Juju is driven off the
watchers. These watchers watch the mongo oplog for document changes.
What happened was that there were so many mongo operations that the capped
collection of the oplog was completely replaced between watcher polls.
The watchers then errored out in a new, unexpected way.


Effectively the watcher infrastructure needs an internal reset button 
that it can hit when this happens that invalidates all the watchers. 
This should cause all the workers to be torn down and restarted from a 
known good state.


There was a model that got stuck being destroyed; this was tracked back to
the worker responsible for the destruction not noticing.


All the CPU usage can be tracked back to the 139 models in the apiserver 
state pools each still running leadership and base watcher workers. The 
state pool should have removed all these instances, but it didn't notice 
they were gone.


There are some other bugs where non-errors are logged as errors, which
contributed to log noise, but the fundamental problem here is not being
robust in the face of too much change at once.


This needs to be fixed for the 2.2 release candidate, so it may well 
push that out past the end of this week.


Tim

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


JUJU_UNIT_NAME no longer set in env

2017-05-22 Thread James Beedy
Juju 2.1.2

I'm getting this "JUJU_UNIT_NAME not in env" error on a legacy (non-reactive)
xenial charm when using service_name() from hookenv.

http://paste.ubuntu.com/24626263/

Did we remove this?

~James
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread roger peppe
Another optimisation possibility might be to send fewer status updates.
For example, the uniter could wait until the hook
has been executing for some period of time (e.g. 100ms) before
sending the "executing" status. If the hook finishes within that
period, it would send a single bulk update containing both entries (setting the
status to "executing" and back). The API server could see that the second update
doesn't affect the current status of the unit and just append
the two items to the history without needing to update the current unit status,
thus (usually, assuming fast hook execution) saving one transaction
per update and halving the number of API calls.
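As a rough illustration of that debouncing idea (a sketch only; the uniter's real code is Go and its APIs differ, and the threshold, function names, and status tuples here are invented):

```python
import threading

def run_hook_with_debounce(run_hook, send_status, threshold=0.1):
    """Delay reporting 'executing' for up to `threshold` seconds.

    Fast hooks result in one bulk call carrying both history entries;
    slow hooks get the 'executing' status sent mid-run as usual.
    """
    sent_executing = threading.Event()

    def report():
        sent_executing.set()
        send_status([("executing",)])  # hook is taking a while: report now

    timer = threading.Timer(threshold, report)
    timer.start()
    try:
        result = run_hook()
    finally:
        timer.cancel()
    if sent_executing.is_set():
        send_status([("idle",)])  # the usual second call
    else:
        # Hook beat the timer: a single bulk call with both entries. The
        # server can see the net status is unchanged and just append the
        # pair to history without rewriting the current unit status.
        send_status([("executing",), ("idle",)])
    return result
```

The saving comes from the fast path being the common case: most update-status hooks finish quickly, so most hook runs would cost one API call and no current-status rewrite.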

As an aside, I wonder if it's right that the status as set by set-status
is the same status that's set when executing a hook - ISTM that
it might be better if they were separate items of information
so that the status set by the hook code is always available regardless
of what hooks are executing.



On 22 May 2017 at 14:02, roger peppe  wrote:
> On 22 May 2017 at 09:55, Ian Booth  wrote:
>> On 22/05/17 18:23, roger peppe wrote:
>>> I think it's slightly unfortunate that update-status exists at all -
>>> it doesn't really need to,
>>> AFAICS, as a charm can always do the polling itself if needed; for example:
>>>
>>> while :; do
>>>  sleep 30
>>>  juju-run $UNIT 'status-set current-status "this is what is happening"'
>>> done &
>>>
>>> Or (better) use juju-run to set the status when the workload
>>> executable starts and exits, avoiding the need for polling at all.
>>
>> It's not sufficient to just set the status when the workload starts and 
>> exits.
>> One example is a database which periodically goes offline for a short time 
>> for
>> maintenance. The workload executable itself should not have to know how to
>> communicate this to Juju. By the agent running update-status hook 
>> periodically,
>> it allows the charm itself to establish whether the database status should be
>> marked as "maintenance", for example.
>
> Of course there are many ways that a workload can become unavailable,
> and sometimes polling may be the only decent answer. My thought is simply
> that rather than baking in polling fundamentally, the charm can be the
> thing with that knowledge. Even without update-status, a charm could
> poll and update its status appropriately if it wants.
>
>> Using a hook provides a standard way all
>> charms can rely on to communicate workload status in a consistent way.
>
> Isn't that the "status-set" command? :)
>
> More generally, on the original proposal, I think I tend to agree with
> Alex Kavanagh - it would be unexpected for a hook to run but
> not to show in the status history.  Given that the motivation behind
> the proposal is to reduce load on the database and on controllers, I
> wonder if it might be possible to make the status recording mechanism
> more efficient.  For example, the document used to store status updates
> currently takes about 178 bytes. This could be reduced to about 75 bytes
> with some simple changes. It should be possible to reduce the number of
> database operations involved in each of the status update calls too - it
> might be possible to reduce a status update RPC call to a single insert
> operation.
>
> Making the mechanism more efficient would also help other cases too -
> for example where an application is continually running some hook other
> than update-status.
>
>   rog.
>
> PS I suspect that almost no-one knows that "juju-run" can be used to run
> non-hook-initiated charm actions - it's hard to search for, its documentation
> isn't available on the client, and it's easily confused with the
> client-side "juju run"
> command. So my reply was also about raising awareness of this.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread roger peppe
On 22 May 2017 at 09:55, Ian Booth  wrote:
> On 22/05/17 18:23, roger peppe wrote:
>> I think it's slightly unfortunate that update-status exists at all -
>> it doesn't really need to,
>> AFAICS, as a charm can always do the polling itself if needed; for example:
>>
>> while :; do
>>  sleep 30
>>  juju-run $UNIT 'status-set current-status "this is what is happening"'
>> done &
>>
>> Or (better) use juju-run to set the status when the workload
>> executable starts and exits, avoiding the need for polling at all.
>
> It's not sufficient to just set the status when the workload starts and exits.
> One example is a database which periodically goes offline for a short time for
> maintenance. The workload executable itself should not have to know how to
> communicate this to Juju. By the agent running update-status hook 
> periodically,
> it allows the charm itself to establish whether the database status should be
> marked as "maintenance", for example.

Of course there are many ways that a workload can become unavailable,
and sometimes polling may be the only decent answer. My thought is simply
that rather than baking in polling fundamentally, the charm can be the
thing with that knowledge. Even without update-status, a charm could
poll and update its status appropriately if it wants.

> Using a hook provides a standard way all
> charms can rely on to communicate workload status in a consistent way.

Isn't that the "status-set" command? :)

More generally, on the original proposal, I think I tend to agree with
Alex Kavanagh - it would be unexpected for a hook to run but
not to show in the status history.  Given that the motivation behind
the proposal is to reduce load on the database and on controllers, I
wonder if it might be possible to make the status recording mechanism
more efficient.  For example, the document used to store status updates
currently takes about 178 bytes. This could be reduced to about 75 bytes
with some simple changes. It should be possible to reduce the number of
database operations involved in each of the status update calls too - it
might be possible to reduce a status update RPC call to a single insert
operation.
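To illustrate the kind of saving described, here is a toy comparison (JSON stands in for BSON, and the field names on both sides are invented for illustration rather than taken from Juju's actual schema): shortening field names, encoding the status as a small integer, and dropping sub-second timestamp precision shrinks each status-history document substantially.

```python
import json

# A verbose document, loosely shaped like a status-history entry.
verbose = {
    "model-uuid": "0db4f7b6-ab03-4bb4-a4b4-27e0cdbbffa3",
    "globalkey": "u#wordpress/0#charm",
    "status": "executing",
    "statusinfo": "running update-status hook",
    "statusdata": {},
    "updated": 1495450000000000000,  # nanoseconds
}

# The same information with short field names, an integer status code,
# and second-resolution timestamps; model scoping is assumed to come
# from the collection the document lives in.
compact = {
    "k": "u#wordpress/0#charm",
    "s": 3,  # e.g. 3 == "executing" in a small status enum
    "i": "running update-status hook",
    "t": 1495450000,  # seconds
}

print(len(json.dumps(verbose)), "->", len(json.dumps(compact)))
```

The exact byte counts depend on the encoding, but the ratio is roughly the 178-to-75-byte reduction mentioned above: most of a small document's size is its field names and repeated string values.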

Making the mechanism more efficient would also help other cases too -
for example where an application is continually running some hook other
than update-status.

  rog.

PS I suspect that almost no-one knows that "juju-run" can be used to run
non-hook-initiated charm actions - it's hard to search for, its documentation
isn't available on the client, and it's easily confused with the
client-side "juju run"
command. So my reply was also about raising awareness of this.

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Python-Django

2017-05-22 Thread James Beedy
The python-Django charm is largely outdated. I have been 
meaning to do a deep dive on it and turn it into something usable (there have 
been a few attempts at this over the years we can also look at). I can apply 
some of the goodies I've developed for my rails-layer in the Django charm too, 
see https://github.com/jamesbeedy/layer-rails-base.

@Lonroth I'll create a new layer for Django and share it when I get something 
up there. Should be sometime this week.

~James
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Alex Kavanagh
From a 'principle of least surprise', I think that this proposal is
probably a bad idea.  The main driver is to (I guess) reduce the volume of
'chat' between the controller and 100's or 1000's of charms, and the effect
this has on the status log.  However, my 'guess' is also probably wrong as
it would continue to live in the debug log, which implies it is still being
sent/received.

I think it would be 'odd' for a hook to be running, but not show up in
the status or status log - this would be unexpected.

My counter proposal is to have a toggle to disable 'update status' on a
model and/or application level, and/or have the ability to reduce the
frequency of the updates.

However, the downside of reducing/eliminating update-status is that (I
suspect) several charms rely on it to recover from errors (particularly)
racy ones.  Perhaps this can be offset (during install) by using 'juju
run ...' and by fixing the charms with bugs?

Another 'worry' is that if update-status is not reflected in 'juju status',
will the workload status still be updated (i.e. showing problems, fixing
problems, etc.)?

Alex.

On Mon, May 22, 2017 at 9:56 AM, Stuart Bishop 
wrote:

> On 22 May 2017 at 14:36, Tim Penhey  wrote:
> > On 20/05/17 19:48, Merlijn Sebrechts wrote:
> >>
> >> On May 20, 2017 09:05, "John Meinel" wrote:
> >>
> >> I would actually prefer if it shows up in 'juju status' but that we
> >> suppress it from 'juju status-log' by default.
> >>
> >>
> >> This is still very strange behavior. Why should this be default? Just
> pipe
> >> the output of juju status through grep and exclude update-status if
> that is
> >> really what you want.
> >>
> >> However, I would even argue that this isn't what you want in most
> >> use-cases.  "update-status" isn't seen as a special hook in
> charms.reactive.
> >> Anything can happen in that hook if the conditions are right. Ignoring
> >> update-status will have unforeseen consequences...
> >
> >
> > Hmm... there are (at least) two problems here.
> >
> > Firstly, update-status *should* be a special case hook, and it shouldn't
> > take long.
> >
> > The purpose of the update-status hook was to provide a regular beat for
> the
> > charm to report on the workload status. Really it shouldn't be doing
> other
> > things.
> >
> > The fact that it is a periodic execution rather than being executed in
> > response to model changes is the reason it isn't fitting so well into the
> > regular status and status history updates.
> >
> > The changes to the workload status would still be shown in the history of
> > the workload status, and the workload status is shown in the status
> output.
> >
> > One way to limit the execution of the update-status hook call would be to
> > put a hard timeout on it enforced by the agent.
> >
> > Thoughts?
>
> Unfortunately update-status got wired into charms.reactive like all
> the other standard hooks, and just means 'do whatever still needs to
> be done'. I think its too late to add timeouts or restrictions. But I
> do think special casing it in the status history is needed. Anything
> important will still end up in there due to workload status changes.
>
> --
> Stuart Bishop 
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
>



-- 
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Stuart Bishop
On 22 May 2017 at 14:36, Tim Penhey  wrote:
> On 20/05/17 19:48, Merlijn Sebrechts wrote:
>>
>> On May 20, 2017 09:05, "John Meinel" wrote:
>>
>> I would actually prefer if it shows up in 'juju status' but that we
>> suppress it from 'juju status-log' by default.
>>
>>
>> This is still very strange behavior. Why should this be default? Just pipe
>> the output of juju status through grep and exclude update-status if that is
>> really what you want.
>>
>> However, I would even argue that this isn't what you want in most
>> use-cases.  "update-status" isn't seen as a special hook in charms.reactive.
>> Anything can happen in that hook if the conditions are right. Ignoring
>> update-status will have unforeseen consequences...
>
>
> Hmm... there are (at least) two problems here.
>
> Firstly, update-status *should* be a special case hook, and it shouldn't
> take long.
>
> The purpose of the update-status hook was to provide a regular beat for the
> charm to report on the workload status. Really it shouldn't be doing other
> things.
>
> The fact that it is a periodic execution rather than being executed in
> response to model changes is the reason it isn't fitting so well into the
> regular status and status history updates.
>
> The changes to the workload status would still be shown in the history of
> the workload status, and the workload status is shown in the status output.
>
> One way to limit the execution of the update-status hook call would be to
> put a hard timeout on it enforced by the agent.
>
> Thoughts?

Unfortunately update-status got wired into charms.reactive like all
the other standard hooks, and just means 'do whatever still needs to
be done'. I think its too late to add timeouts or restrictions. But I
do think special casing it in the status history is needed. Anything
important will still end up in there due to workload status changes.

-- 
Stuart Bishop 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Ian Booth


On 22/05/17 18:23, roger peppe wrote:
> I think it's slightly unfortunate that update-status exists at all -
> it doesn't really need to,
> AFAICS, as a charm can always do the polling itself if needed; for example:
> 
> while :; do
>  sleep 30
>  juju-run $UNIT 'status-set current-status "this is what is happening"'
> done &
> 
> Or (better) use juju-run to set the status when the workload
> executable starts and exits, avoiding the need for polling at all.
>

It's not sufficient to just set the status when the workload starts and exits.
One example is a database which periodically goes offline for a short time for
maintenance. The workload executable itself should not have to know how to
communicate this to Juju. By the agent running update-status hook periodically,
it allows the charm itself to establish whether the database status should be
marked as "maintenance", for example. Using a hook provides a standard way all
charms can rely on to communicate workload status in a consistent way.


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread roger peppe
I think it's slightly unfortunate that update-status exists at all -
it doesn't really need to,
AFAICS, as a charm can always do the polling itself if needed; for example:

while :; do
 sleep 30
 juju-run $UNIT 'status-set current-status "this is what is happening"'
done &

Or (better) use juju-run to set the status when the workload
executable starts and exits, avoiding the need for polling at all.

  cheers,
rog.

On 22 May 2017 at 08:36, Tim Penhey  wrote:
> On 20/05/17 19:48, Merlijn Sebrechts wrote:
>>
>> On May 20, 2017 09:05, "John Meinel" wrote:
>> I would actually prefer if it shows up in 'juju status' but that we
>> suppress it from 'juju status-log' by default.
>>
>>
>> This is still very strange behavior. Why should this be default? Just pipe
>> the output of juju status through grep and exclude update-status if that is
>> really what you want.
>>
>> However, I would even argue that this isn't what you want in most
>> use-cases.  "update-status" isn't seen as a special hook in charms.reactive.
>> Anything can happen in that hook if the conditions are right. Ignoring
>> update-status will have unforeseen consequences...
>
>
> Hmm... there are (at least) two problems here.
>
> Firstly, update-status *should* be a special case hook, and it shouldn't
> take long.
>
> The purpose of the update-status hook was to provide a regular beat for the
> charm to report on the workload status. Really it shouldn't be doing other
> things.
>
> The fact that it is a periodic execution rather than being executed in
> response to model changes is the reason it isn't fitting so well into the
> regular status and status history updates.
>
> The changes to the workload status would still be shown in the history of
> the workload status, and the workload status is shown in the status output.
>
> One way to limit the execution of the update-status hook call would be to
> put a hard timeout on it enforced by the agent.
>
> Thoughts?
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread roger peppe
I think it's slightly unfortunate that update-status exists at all -
it doesn't really need to,
AFAICS, as a charm can always do the polling itself if needed; for example:

while :; do
 sleep 30
 juju-run $UNIT 'status-set current-status "this is what is happening"'
done &

Or (better) use juju-run to set the status when the workload
executable starts and exits, avoiding the need for polling at all.

  cheers,
rog.

On 22 May 2017 at 08:36, Tim Penhey  wrote:
> On 20/05/17 19:48, Merlijn Sebrechts wrote:
>>
>> On May 20, 2017 09:05, "John Meinel" wrote:
>>
>> I would actually prefer if it shows up in 'juju status' but that we
>> suppress it from 'juju status-log' by default.
>>
>>
>> This is still very strange behavior. Why should this be default? Just pipe
>> the output of juju status through grep and exclude update-status if that is
>> really what you want.
>>
>> However, I would even argue that this isn't what you want in most
>> use-cases.  "update-status" isn't seen as a special hook in charms.reactive.
>> Anything can happen in that hook if the conditions are right. Ignoring
>> update-status will have unforeseen consequences...
>
>
> Hmm... there are (at least) two problems here.
>
> Firstly, update-status *should* be a special case hook, and it shouldn't
> take long.
>
> The purpose of the update-status hook was to provide a regular beat for the
> charm to report on the workload status. Really it shouldn't be doing other
> things.
>
> The fact that it is a periodic execution rather than being executed in
> response to model changes is the reason it isn't fitting so well into the
> regular status and status history updates.
>
> The changes to the workload status would still be shown in the history of
> the workload status, and the workload status is shown in the status output.
>
> One way to limit the execution of the update-status hook call would be to
> put a hard timeout on it enforced by the agent.
>
> Thoughts?
>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev



Re: django charm broken or am I just doing it wrong?

2017-05-22 Thread Erik Lönroth
Hello Tim!

Thanx for interacting on the subject.

Would you be able to assist me in deploying my website/django with
juju and/or improving the docs for the charm?

I am not an expert but love to help.

/Erik

On 22 May 2017 at 9:15 AM, "Tim Penhey" wrote:

> Hmm.. I kinda do, but haven't touched it for a long time.
>
> I also use the subordinate charm method of delivering the webapp to the
> charm.
>
> I found that to use the charm properly you had to have the settings for
> the django project be a directory rather than a simple .py file.
>
> Tim
>
> On 22/05/17 04:15, John Meinel wrote:
>
>> I believe Tim Penhey makes active use of the python-django charm, but
>> it's possible he uses it in a different fashion.
>>
>> John
>> =:->
>>
>> On May 21, 2017 14:25, "Erik Lönroth" <erik.lonr...@gmail.com> wrote:
>>
>> Hello!
>>
>> I'm trying out the django charm to deploy a django website I was
>> going to try to create with juju.
>>
>> * I followed the instructions here:
>> https://jujucharms.com/python-django/
>> 
>>
>> * My website repo is here: https://github.com/erik78se/dsite
>> 
>>
>> It has installed the website from github but there is an error:
>>
>> unit-dsite-1: 12:22:13 INFO unit.dsite/1.pgsql-relation-changed 0
>> upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
>> unit-dsite-1: 12:22:14 DEBUG unit.dsite/1.juju-log pgsql:2: found
>> django admin: /usr/bin/django-admin
>> unit-dsite-1: 12:22:14 DEBUG unit.dsite/1.juju-log pgsql:2:
>> PYTHONPATH=/srv/dsite/../ /usr/bin/django-admin syncdb --noinput
>> --settings=dsite.settings
>> unit-dsite-1: 12:22:14 ERROR unit.dsite/1.juju-log pgsql:2:
>> status=1, output=Traceback (most recent call last):
>>File "/usr/bin/django-admin", line 5, in <module>
>>  management.execute_from_command_line()
>>File
>> "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
>> line 399, in execute_from_command_line
>>  utility.execute()
>>File
>> "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
>> line 392, in execute
>>  self.fetch_command(subcommand).run_from_argv(self.argv)
>>File
>> "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
>> line 261, in fetch_command
>>  commands = get_commands()
>>File
>> "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
>> line 107, in get_commands
>>  apps = settings.INSTALLED_APPS
>>File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
>> line 54, in __getattr__
>>  self._setup(name)
>>File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
>> line 49, in _setup
>>  self._wrapped = Settings(settings_module)
>>File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
>> line 132, in __init__
>>  % (self.SETTINGS_MODULE, e)
>> ImportError: Could not import settings 'dsite.settings' (Is it on
>> sys.path? Is there an import error in the settings file?): No module
>> named dsite.settings
>>
>> unit-dsite-1: 12:22:14 ERROR juju.worker.uniter.operation hook
>> "pgsql-relation-changed" failed: exit status 1
>> unit-dsite-1: 12:22:14 INFO juju.worker.uniter awaiting error
>> resolution for "relation-changed" hook
>>
>> As I'm not sure if I'm doing anything wrong or the charm is broken,
>> I wonder what your suggestion is in how to resolve this issue or if
>> I'm just doing it wrong.
>>
>> /Erik Lönroth
>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com 
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>> 
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Tim Penhey

On 20/05/17 19:48, Merlijn Sebrechts wrote:
On May 20, 2017 09:05, "John Meinel" wrote:


I would actually prefer if it shows up in 'juju status' but that we
suppress it from 'juju status-log' by default.


This is still very strange behavior. Why should this be default? Just 
pipe the output of juju status through grep and exclude update-status if 
that is really what you want.


However, I would even argue that this isn't what you want in most 
use-cases.  "update-status" isn't seen as a special hook in 
charms.reactive. Anything can happen in that hook if the conditions are 
right. Ignoring update-status will have unforeseen consequences...


Hmm... there are (at least) two problems here.

Firstly, update-status *should* be a special case hook, and it shouldn't 
take long.


The purpose of the update-status hook was to provide a regular beat for 
the charm to report on the workload status. Really it shouldn't be doing 
other things.


The fact that it is a periodic execution rather than being executed in 
response to model changes is the reason it isn't fitting so well into 
the regular status and status history updates.


The changes to the workload status would still be shown in the history 
of the workload status, and the workload status is shown in the status 
output.


One way to limit the execution of the update-status hook call would be 
to put a hard timeout on it enforced by the agent.


Thoughts?
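A minimal sketch of what such a cap could look like, using coreutils
timeout(1) from the shell rather than the agent's Go internals; the 30s
budget and the hook path are made-up illustration values, not part of the
proposal:

```shell
# Hypothetical hard cap on a hook run; 124 is timeout(1)'s exit status
# for "command ran too long".
run_update_status() {
    budget=$1; shift                  # seconds allowed for the hook
    timeout --kill-after=5 "$budget" "$@"
    rc=$?
    if [ "$rc" -eq 124 ]; then
        echo "update-status exceeded its ${budget}s budget; killed" >&2
    fi
    return "$rc"
}

run_update_status 30 true             # "true" stands in for hooks/update-status
```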




Re: PROPOSAL: stop recording 'executing update-status hook'

2017-05-22 Thread Tim Penhey

On 19/05/17 19:21, roger peppe wrote:

On 19 May 2017 at 03:13, Tim Penhey  wrote:

Hi folks,

Currently juju will update the status of any hook execution for any unit to
show that it is busy doing things. This was all well and good until we do
things based on time.

Every five minutes (or so) each unit will have the update-status hook
executed to allow the unit to set or update the workload status based on
what is currently going on with that unit.

Since all hook executions are stored, this means that the show-status-log
will show the unit jumping from executing update-status to ready and back
every five minutes.

The proposal is to special case the update-status hook and show in status
(or the status-log) that the hook is being executed. debug-log will continue
to show the hook executing if you are looking.


Presumably you mean *not* show in status here?


Yes. Sorry.




Re: django charm broken or am I just doing it wrong?

2017-05-22 Thread Tim Penhey

Hmm.. I kinda do, but haven't touched it for a long time.

I also use the subordinate charm method of delivering the webapp to the 
charm.


I found that to use the charm properly you had to have the settings for 
the django project be a directory rather than a simple .py file.
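For what it's worth, a conventional shape for that is below; the file names
are hypothetical, the point being that `--settings=dsite.settings` must
resolve to an importable package on PYTHONPATH:

```
/srv/dsite/
    __init__.py           # makes "dsite" importable as a package
    settings/
        __init__.py       # e.g. "from .base import *"
        base.py           # contents of the old dsite/settings.py
```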


Tim

On 22/05/17 04:15, John Meinel wrote:
I believe Tim Penhey makes active use of the python-django charm, but 
it's possible he uses it in a different fashion.


John
=:->

On May 21, 2017 14:25, "Erik Lönroth" wrote:


Hello!

I'm trying out the django charm to deploy a django website I was
going to try to create with juju.

* I followed the instructions here:
https://jujucharms.com/python-django/


* My website repo is here: https://github.com/erik78se/dsite


It has installed the website from github but there is an error:

unit-dsite-1: 12:22:13 INFO unit.dsite/1.pgsql-relation-changed 0
upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
unit-dsite-1: 12:22:14 DEBUG unit.dsite/1.juju-log pgsql:2: found
django admin: /usr/bin/django-admin
unit-dsite-1: 12:22:14 DEBUG unit.dsite/1.juju-log pgsql:2:
PYTHONPATH=/srv/dsite/../ /usr/bin/django-admin syncdb --noinput
--settings=dsite.settings
unit-dsite-1: 12:22:14 ERROR unit.dsite/1.juju-log pgsql:2:
status=1, output=Traceback (most recent call last):
   File "/usr/bin/django-admin", line 5, in <module>
 management.execute_from_command_line()
   File
"/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 399, in execute_from_command_line
 utility.execute()
   File
"/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 392, in execute
 self.fetch_command(subcommand).run_from_argv(self.argv)
   File
"/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 261, in fetch_command
 commands = get_commands()
   File
"/usr/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 107, in get_commands
 apps = settings.INSTALLED_APPS
   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
line 54, in __getattr__
 self._setup(name)
   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
line 49, in _setup
 self._wrapped = Settings(settings_module)
   File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py",
line 132, in __init__
 % (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'dsite.settings' (Is it on
sys.path? Is there an import error in the settings file?): No module
named dsite.settings

unit-dsite-1: 12:22:14 ERROR juju.worker.uniter.operation hook
"pgsql-relation-changed" failed: exit status 1
unit-dsite-1: 12:22:14 INFO juju.worker.uniter awaiting error
resolution for "relation-changed" hook

As I'm not sure if I'm doing anything wrong or the charm is broken,
I wonder what your suggestion is in how to resolve this issue or if
I'm just doing it wrong.

/Erik Lönroth

--
Juju mailing list
Juju@lists.ubuntu.com 
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/juju





