On Wed, Dec 17, 2014 at 9:02 PM, Marco Ceppi <ma...@ondina.co> wrote:
>
> I'm curious, what version of Juju are you currently using?
>

LTS, so Trusty :-)


> We're adding code that will reap empty machines after a short period of
> time. This will help save you with your case and others who are running in
> the cloud and don't want to spend money on cloud providers for machines
> doing nothing!
>

That's a nice feature, autopruning++


> This is something I'm actually working on addressing by adding `juju local
> suspend` and `juju local resume` commands via a `juju-local` plugin:
> https://github.com/juju-solutions/juju-local I hope to have this out for
> the new year. I'll also be cramming more functionality to make using the
> local provider much more reliable and easy.
>

I basically use my laptop for all my work, so I run into this a lot (and
have learned to ignore the issue). I'd be happy to be your guinea pig
once you need one.


> So, shell charms are fine, and we have a quite a few that are written
> well. We can discourage people from using them, but juju and charms is
> about choice and freedom. If an author wants to write charms in bash that's
> fine - we will just hold them to the same standard as all other charms.
> Something we've been diligently working on is charm testing. We're nearing
> the conclusion of the effort to add some semblance of testing to each charm
> and run those charms against all substrates and architectures we support.
> In doing so we can find poorly written charms and charms written well
> (regardless of language of charm).
>

Is this testing infrastructure already available to use and being enforced?

I saw two of my charms (one a new subordinate charm for OpenID support
in Apache, the other a new API relation to Jenkins) failing because of
automatic tests, and I thought "hmm, OK, that wasn't here before",
especially because the failures didn't seem related to the actual code I
was introducing.


> There is no guarantee on the number of times a hook will execute.
>

Roger that (now)!
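For anyone else reading along: in practice this means hook bodies have
to be idempotent. A minimal sketch of the pattern for a bash charm
(`ensure_config` and the file contents are my own illustrative names,
not Juju tools):

```shell
# A hook may run any number of times, so make its work repeat-safe.
# This helper only rewrites a config file when the content actually
# differs, so the caller can restart the service only on real change.
ensure_config() {
    local conf="$1" content="$2"
    if [ ! -f "$conf" ] || [ "$(cat "$conf")" != "$content" ]; then
        printf '%s\n' "$content" > "$conf"
        return 0   # changed: caller may restart the service
    fi
    return 1       # unchanged: nothing to do
}
```

Running the hook twice in a row then changes nothing the second time.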


> 5. Juju should queue multiple deployment in order not to hurt performance,
>> both of disk and network IO. More than 3 deployments in parallel on my
>> machine makes it all really slow. I just leave Juju for a while and go get
>> some coffee because the system goes crazy. Or I have to break up manually
>> the deployments, while Juju could have just queued it all and the CLI could
>> simply display it as "queued" instead. I know it would need to analyse the
>> machine's hardware to guess a number different from 3 but think about it if
>> your deployments have about 10 different services... things that take 20
>> minutes can easily take over 1 hour.
>>
>
> This does severely affect performance on the local provider, but juju is
> designed to run events asynchronously in an environment. File a bug/feature
> request for this at http://bugs.launchpad.net/juju-core/+filebug to
> request that LXC deployments be done serially.
>

In a rush now so please excuse the brevity in the description:
https://bugs.launchpad.net/juju-core/+bug/1403674


> There is a way to query if relations are ready outside of that relation's
> hook. You will need to run a few commands to get to that point though.
>

Thank you very much for an example of how to query the relations!
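For the archives, here is a sketch of one way to do it with the Juju
hook tools (`relation-ids`, `relation-list`, `relation-get`); the
relation name "db", the key "host", and the `ready_relations` helper
are my own illustrative choices:

```shell
# Print relation data for every remote unit on a named relation,
# from outside that relation's own hook context.
ready_relations() {
    local relname="$1" key="$2" rid unit value
    for rid in $(relation-ids "$relname"); do
        for unit in $(relation-list -r "$rid"); do
            value=$(relation-get -r "$rid" "$key" "$unit")
            [ -n "$value" ] && echo "$rid $unit $value"
        done
    done
}

# Example: ready_relations db host
```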

> 7. When a hook fails (most usually during relations being set) I have to
>> manually run resolved unit/0 multiple times. It's not enough to call it
>> once and wait for Juju to get it straight. I have to babysit the unit and
>> keep running resolved unit/0, while I imagined this should be automatic
>> because I wanted it resolved for real anyway. If the failed hook was the
>> first in a chain, you'll have to re-run this for every other hook in the
>> sequence. Once for a relation, another for config-changed, then perhaps
>> another for the stop hook and another one for start hook, depending on your
>> setup.
>>
>
> What charm is causing this issue? This shouldn't happen, but presumably
> the failure is due to data or something else not being ready, which is why
> it's erroring. It sounds like the charm doesn't properly guard against data
> not being ready, which I'll cover, again below.
>

IIRC, I saw that with the Apache and Jenkins charms (and also with my
own charm for an application).


> Instead, check for variables you need from the relation, and if they don't
> exist yet simply `exit 0`. Juju will re-queue the hook to execute when data
> on the wire is changed. IE: the remote unit finally runs the appropriate
> `relation-set` line.
>

Does it wait for that data to become available, or does it keep
executing the rest of the charm's hook chain and then come back to the
first hook that exited 0 and still needs data?
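For reference, the guard Marco describes might look like this in a bash
`*-relation-changed` hook (`relation-get` is the real Juju hook tool;
the `relation_ready` helper and the key names are my own illustrative
choices):

```shell
# Return 0 only if every named relation key already has a value.
relation_ready() {
    local key value
    for key in "$@"; do
        value=$(relation-get "$key")
        [ -n "$value" ] || return 1
    done
    return 0
}

# In the hook body:
#   relation_ready host database || exit 0
# Not ready yet: exit 0 and Juju re-queues the hook when the remote
# unit finally runs relation-set and the data on the wire changes.
```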


> 9. If you want to cancel a deployment that just started you need to keep
>> running remove-service forever. Juju will simply ignore you if it's still
>> running some special bits of the charm or if you have previously asked it
>> to cancel the deployment during its setting up. No errors, no other
>> messages are printed. You need to actually open its log to see that it's
>> still stuck in a long apt-get installation and you have to wait until the
>> right moment to remove-service again. And if your connection is slow, that
>> takes time, you'll have to babysit Juju here because it doesn't really
>> control its services as I imagined. Somehow apt-get gets what it wants :-)
>>
>
> You can now force-kill a machine. So you can run `juju destroy-service
> $service` then `juju terminate-machine --force #machine_number`. Just make
> sure that nothing else exists on that machine! I'll raise an issue for
> having a way to add a --force flag to destroying a service so you can just
> say "kill this with fire, now plz"
>

I understand that, but I discovered it's faster and less typing to
simply destroy-environment and bootstrap again. If you need to
force-kill something every time you kill it, then perhaps something is
wrong?
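That said, until destroy-service grows a --force flag, a tiny wrapper
around the two commands Marco mentioned saves some typing (a sketch;
`kill_service` is my own name, and it assumes, as noted above, that
nothing else lives on that machine):

```shell
# Destroy a service and force-kill the machine it was on.
kill_service() {
    local service="$1" machine="$2"
    juju destroy-service "$service"
    juju terminate-machine --force "$machine"
}

# Example: kill_service myapp 3
```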


> 10. I think there's something weird about relation-set and relation-get
>> between services when you add and remove relations multiple times. For
>> example, the first time I set a relation to a Postgres charm I get a
>> database back and my desired roles configured, but if I remove the relation
>> and then add it back I only get the database settings. The roles parameter
>> is missing setup, so I don't have the right permissions in the DB the
>> second time I set the relation. Anyone has seen this too with other charms?
>>
>
> This is a bug in the PostgreSQL charm. I'd file a bug so the author is
> aware of this. https://bugs.launchpad.net/charms/+source/postgresql
>

Thanks for confirming my suspicion! Done:
https://bugs.launchpad.net/charms/+source/postgresql/+bug/1403675


> Thank you so much for your usage and feedback of Juju thus far. We really
> want to make a tool that works best for you and everyone else. You've
> raised some good points, some things we're aware of and working on, some
> things we can improve upon. Please continue to pass feedback as you
> continue to use Juju and let us know anywhere else we can help improve!
>

Will do!
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
