Re: Hooking into `charm build` to download Puppet dependencies

2016-11-24 Thread Marco Ceppi
`charm build` uses tactics during compilation to process files and tasks.
These tactics are pluggable, which allows you to create custom tactics in
your layer for things like you've described. We have an example of this in
the Kubernetes charms, where a custom layer tactic is used to seed static
template files at charm build time:

Here's the layer.yaml:
https://github.com/juju-solutions/kubernetes/pull/84/files#diff-b8894e717eb49b702f8d267d084635c0
And here's the tactic:
https://github.com/juju-solutions/kubernetes/pull/84/files#diff-7bface8b28f9d781a51d0e302cef9245R74

This one is a little more complicated, since it can also be used as a
standalone script, which is why there's a bunch of additional code for
handling command-line parsing; the "UpdateAddonsTactic" class is the meat of
what you're looking for.
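
For the Puppet case, the same pattern could look roughly like the sketch
below: a layer-level tactic that fires on the layer's Puppetfile and shells
out to librarian-puppet so the resolved modules ship inside the built charm.
Treat it as a minimal sketch only; the Tactic base class lives in
charmtools.build.tactics, but the exact hook names, attribute names, and the
layer.yaml registration should be checked against the UpdateAddonsTactic
linked above rather than taken from here.

    # Hypothetical tactics module inside the layer -- a sketch, not the
    # charm-tools API verbatim. Hook and attribute names below are assumptions.
    import subprocess

    from charmtools.build.tactics import Tactic


    class LibrarianPuppetTactic(Tactic):
        """Run librarian-puppet at `charm build` time so the resolved
        Puppet modules ship inside the built charm."""

        @classmethod
        def trigger(cls, entity, target=None, layer=None, next_config=None):
            # Only claim the layer's Puppetfile (assumed hook signature).
            return entity.basename() == 'Puppetfile'

        def __call__(self):
            # Assumed attribute: the directory the charm is being built into.
            build_dir = str(self.target.directory)
            # librarian-puppet reads the Puppetfile and populates modules/.
            subprocess.check_call(['librarian-puppet', 'install'], cwd=build_dir)

The tactic would then be listed in the layer's layer.yaml (see the linked
layer.yaml for the exact form the Kubernetes layer uses).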

Marco

On Thu, Nov 24, 2016 at 12:02 PM Merlijn Sebrechts <
merlijn.sebrec...@gmail.com> wrote:

> Hi all
>
>
> Is it possible to hook a tool like librarian-puppet into the `charm build`
> process so I can download Puppet dependencies at build time and ship them
> with a Charm?
>
>
>
> Kind regards
> Merlijn


Re: Juju & Mesos

2016-11-24 Thread Merlijn Sebrechts
OMGOMGOMG

2016-11-24 16:47 GMT+01:00 Tom Barber :

> Okay, it transpires some of my Java guys also know C. Who knew.
>
> Anyway, they have been tasked with adding LXC/LXD support to Apache Mesos,
> which we'll push upstream assuming they want it. My plan is to then extend
> Marathon to support LXD deployments, and from that we'll then build a Juju
> provider for Juju 2 to do `juju deploy` to Mesos. Who knows what pitfalls
> lie ahead, but work has begun!
>
> Tom
>
> On Fri, Nov 18, 2016 at 3:31 PM, Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> 2016-11-18 15:43 GMT+01:00 Tom Barber :
>>
>>> You mention stateless, that's fine, but for example, if you have sessions
>>> in a web app, you'd need to share the sessions etc., so autoscaling isn't
>>> really any different to juju add-unit except you've got some stuff to
>>> monitor load and do it without user intervention. Also, you'll find the flip
>>> side to the autoscaling argument, where nodes could shut down mid-flow or
>>> ditch sessions etc., so whilst I'm sure in a lot of places it works great,
>>> you still have to create containers that work properly in a scaling
>>> environment, which is exactly the same as when you design charms :)
>>>
>>>
>> Yes! Kubernetes autoscaling only works for stateless services. These
>> services should connect to an external datastore if they want stuff like
>> sessions.
>>
>> Applications that aren't "cloud-native" can't be autoscaled by Kubernetes
>> because they require more than "spin up another service and connect it to
>> the load balancer and the datastore". They need more complex configuration;
>> configuration that Juju is great at! Kubernetes is great at scheduling,
>> Juju is great at orchestration. Hence Juju + K8s = goodness.
>>
>>
>>> I agree with the image stuff to some extent; certainly I've used that as
>>> a selling point. The flip side, of course, is: do you run apt-get update when
>>> you start a container? Maybe, but most people won't. What about the latest
>>> security flaws in applications that will go unpatched? It also makes for
>>> complacency.
>>>
>>>
>> The sane way to use Docker is to build the image as part of a CI/CD
>> pipeline. Dev triggers a rebuild when code changes, ops triggers a rebuild
>> to fix security issues. After a rebuild, the CI system tests the image
>> heavily. If all tests succeed, the image gets deployed to production and
>> you are 100% sure that all prod images will be exactly what you just tested.
>> Reproducibility, so that what runs in production is exactly what was tested.
>>
>> Although many people think that Docker means "what the dev runs on his
>> machine is what runs in production" which is obviously a bad idea and no
>> technology will fix that.
>>
>>> Of course I agree, plenty of large businesses do run their stuff in
>>> Docker, I use Docker in production, I'm not saying don't use Docker :) I'm
>>> just saying that in reality it's not the panacea a lot of people who want to
>>> do high-volume scale-out apps think it is, not without writing a lot of
>>> code around it for your own solution :)
>>>
>>> On Fri, Nov 18, 2016 at 2:34 PM, Merlijn Sebrechts <
>>> merlijn.sebrec...@gmail.com> wrote:
>>>
 I'm mostly working with researchers and people developing early
 prototypes. I can't blame them for using technologies that aren't
 production ready. That said, I attended pragmatic docker days a while back
 and there were some companies, like Yelp, who found a good way to run
 Docker in production so it is possible, given you have a boatload of good
 ops people.

 Big Data Europe seems to be going
 towards Docker containers for scalable Hadoop setups.
 Not that it's a production-ready setup, but with a name like that and H2020
 funding, we (big data researchers) can't really ignore them...

 Juju is awesome for us (big data researchers) because we have a bunch
 of short-lived projects that use Hadoop etc. in a bunch of different
 setups, and

1. we don't want to be writing a new wrapper around the Hadoop chef
cookbook for every project;
2. we don't want to be creating a new "Hadoop + X" Docker container
for every setup.

 However, we can't ignore the advantages of Docker vs Juju:

1. image-based so the same setup is 100% reproducible if you have
the image;
2. auto scaling and failure recovery.

 So we want the stateless, auto-scalable, auto-recoverable images from
 Docker and we want Juju's relations and automatic configuration. So how do
 we get Docker containers that can be configured at run-time by Juju? Ben is
 working on something to configure containers, but afaik, no integration
 with Juju is planned.

 PS: We're interested in Mesos but, as always, our time-to-put-into-it is
 limited...

Hooking into `charm build` to download Puppet dependencies

2016-11-24 Thread Merlijn Sebrechts
Hi all


Is it possible to hook a tool like librarian-puppet into the `charm build`
process so I can download Puppet dependencies at build time and ship them
with a Charm?



Kind regards
Merlijn


Re: Juju & Mesos

2016-11-24 Thread Tom Barber
Okay, it transpires some of my Java guys also know C. Who knew.

Anyway, they have been tasked with adding LXC/LXD support to Apache Mesos,
which we'll push upstream assuming they want it. My plan is to then extend
Marathon to support LXD deployments, and from that we'll then build a Juju
provider for Juju 2 to do `juju deploy` to Mesos. Who knows what pitfalls
lie ahead, but work has begun!

Tom

On Fri, Nov 18, 2016 at 3:31 PM, Merlijn Sebrechts <
merlijn.sebrec...@gmail.com> wrote:

> 2016-11-18 15:43 GMT+01:00 Tom Barber :
>
>> You mention stateless, that's fine, but for example, if you have sessions
>> in a web app, you'd need to share the sessions etc., so autoscaling isn't
>> really any different to juju add-unit except you've got some stuff to
>> monitor load and do it without user intervention. Also, you'll find the flip
>> side to the autoscaling argument, where nodes could shut down mid-flow or
>> ditch sessions etc., so whilst I'm sure in a lot of places it works great,
>> you still have to create containers that work properly in a scaling
>> environment, which is exactly the same as when you design charms :)
>>
>>
> Yes! Kubernetes autoscaling only works for stateless services. These
> services should connect to an external datastore if they want stuff like
> sessions.
>
> Applications that aren't "cloud-native" can't be autoscaled by Kubernetes
> because they require more than "spin up another service and connect it to
> the load balancer and the datastore". They need more complex configuration;
> configuration that Juju is great at! Kubernetes is great at scheduling,
> Juju is great at orchestration. Hence Juju + K8s = goodness.
>
>
>> I agree with the image stuff to some extent; certainly I've used that as
>> a selling point. The flip side, of course, is: do you run apt-get update when
>> you start a container? Maybe, but most people won't. What about the latest
>> security flaws in applications that will go unpatched? It also makes for
>> complacency.
>>
>>
> The sane way to use Docker is to build the image as part of a CI/CD
> pipeline. Dev triggers a rebuild when code changes, ops triggers a rebuild
> to fix security issues. After a rebuild, the CI system tests the image
> heavily. If all tests succeed, the image gets deployed to production and
> you are 100% sure that all prod images will be exactly what you just tested.
> Reproducibility, so that what runs in production is exactly what was tested.
>
> Although many people think that Docker means "what the dev runs on his
> machine is what runs in production" which is obviously a bad idea and no
> technology will fix that.
>
>> Of course I agree, plenty of large businesses do run their stuff in
>> Docker, I use Docker in production, I'm not saying don't use Docker :) I'm
>> just saying that in reality it's not the panacea a lot of people who want to
>> do high-volume scale-out apps think it is, not without writing a lot of
>> code around it for your own solution :)
>>
>> On Fri, Nov 18, 2016 at 2:34 PM, Merlijn Sebrechts <
>> merlijn.sebrec...@gmail.com> wrote:
>>
>>> I'm mostly working with researchers and people developing early
>>> prototypes. I can't blame them for using technologies that aren't
>>> production ready. That said, I attended pragmatic docker days a while back
>>> and there were some companies, like Yelp, who found a good way to run
>>> Docker in production so it is possible, given you have a boatload of good
>>> ops people.
>>>
>>> Big Data Europe seems to be going
>>> towards Docker containers for scalable Hadoop setups.
>>> Not that it's a production-ready setup, but with a name like that and H2020
>>> funding, we (big data researchers) can't really ignore them...
>>>
>>> Juju is awesome for us (big data researchers) because we have a bunch of
>>> short-lived projects that use Hadoop etc. in a bunch of different setups,
>>> and
>>>
>>>1. we don't want to be writing a new wrapper around the Hadoop chef
>>>cookbook for every project;
>>>2. we don't want to be creating a new "Hadoop + X" Docker container
>>>for every setup.
>>>
>>> However, we can't ignore the advantages of Docker vs Juju:
>>>
>>>1. image-based so the same setup is 100% reproducible if you have
>>>the image;
>>>2. auto scaling and failure recovery.
>>>
>>> So we want the stateless, auto-scalable, auto-recoverable images from
>>> Docker and we want Juju's relations and automatic configuration. So how do
>>> we get Docker containers that can be configured at run-time by Juju? Ben is
>>> working on something to configure containers, but afaik, no integration with
>>> Juju is planned.
>>>
>>> PS: We're interested in Mesos but, as always, our time-to-put-into-it is
>>> limited..
>>>
>>>
>>>
>>> 2016-11-18 12:27 GMT+01:00 Tom Barber :
>>>
 I'll fork this so we're not hijacking another thread.

Re: problem when building a charm for giraph

2016-11-24 Thread Panagiotis Liakos
Thanks again Konstantinos for all your help.

I made some changes and am now able to pass the 01-giraph-test.py:
https://pastebin.ubuntu.com/23526882/

I have some more questions though:

1. I am using two environment variables ($HADOOP_CLASSPATH and
$LIBJARS) that I set in my smoke test. However, I believe it would be
more appropriate if these variables were set in .bashrc, as they are
necessary for submitting Giraph jobs to Hadoop. Is this possible? (One
possible approach is sketched after this list.)
2. I am quite sure that the first time I deployed the giraph charm I am
building, there was more content in the /usr/share/doc/giraph/
directory. As the installation of Giraph is pretty much automated
through Bigtop (if I am not missing something), my question is: where
should I look to find out what gets installed and where? Any ideas?
3. Is the documentation on the charm proof tool up-to-date? (
https://jujucharms.com/docs/2.0/authors-charm-writing#run-the-charm-proof-tool
)
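
(For question 1, one possible shape is sketched below, purely as an
illustration: instead of .bashrc, a drop-in under /etc/profile.d makes the
variables available to every login shell on the unit. The paths, jar name,
and helper name are assumptions, not what Bigtop actually installs.)

    # Hypothetical helper a charm hook or layer could call to persist the
    # Giraph variables system-wide instead of exporting them per smoke test.
    import os

    PROFILE_SNIPPET = '/etc/profile.d/giraph.sh'

    def persist_giraph_env(giraph_home='/usr/lib/giraph'):
        jars = os.path.join(giraph_home, 'giraph-examples.jar')  # assumed jar
        lines = [
            'export HADOOP_CLASSPATH="{}:${{HADOOP_CLASSPATH}}"'.format(jars),
            'export LIBJARS="{}"'.format(jars),
        ]
        with open(PROFILE_SNIPPET, 'w') as fh:
            fh.write('\n'.join(lines) + '\n')
        os.chmod(PROFILE_SNIPPET, 0o644)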

Thank you,
Panagiotis

2016-11-24 9:12 GMT+02:00 Konstantinos Tsakalozos
:
> As it turned out, this error was due to a version misalignment between
> juju-deployer and amulet. It's been fixed now. Please let us know if you
> face any more obstacles.
>
> Thanks,
> Konstantinos
>
> On Wed, Nov 23, 2016 at 4:00 PM, Panagiotis Liakos 
> wrote:
>>
>> Thanks a lot for the clarifications Konstantinos!
>>
>> I opted to use the mahout interface and managed to overcome the previous
>> issue.
>>
>> Then I bumped into this bug:
>> https://bugs.launchpad.net/juju-deployer/+bug/1575863
>>
>> I followed the suggested workaround (mkdir ~/.juju) and now I get an
>> "error getting env api endpoints". Any ideas?
>>
>> Panagiotis
>>
>> 2016-11-23 15:54:36 Starting deployment of lxd:admin/default
>> 2016-11-23 15:54:36 Error getting env api endpoints, env bootstrapped?
>> 2016-11-23 15:54:36 Command (juju api-endpoints -e lxd:admin/default)
>> Output:
>>
>>
>> 2016-11-23 15:54:36 Deployment stopped. run time: 0.15
>> E
>> ==
>> ERROR: setUpClass (__main__.TestDeploy)
>> --
>> Traceback (most recent call last):
>>   File "./tests/01-giraph-test.py", line 44, in setUpClass
>> cls.d.setup(timeout=3600)
>>   File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 704, in
>> setup
>> subprocess.check_call(shlex.split(cmd))
>>   File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
>> raise CalledProcessError(retcode, cmd)
>> subprocess.CalledProcessError: Command '['juju-deployer', '-W', '-c',
>> '/tmp/amulet-juju-deployer-o1zyf3xd/deployer-schema.json', '-e',
>> 'lxd:admin/default', '-t', '3700', 'lxd:admin/default']' returned
>> non-zero exit status 1
>>
>> --
>> Ran 0 tests in 4.738s
>>
>> FAILED (errors=1)
>>
>>
>>
>> 2016-11-22 16:31 GMT+02:00 Konstantinos Tsakalozos
>> :
>> > Cool! The error that you are getting now says that the giraph charm has
>> > no giraph relation [0].
>> >
>> > Looking at the metadata.yaml, you are using the mahout interface to
>> > relate to the hadoop client. So the relation call should look like:
>> > cls.d.relate('giraph:mahout', 'client:mahout'). This will work for now
>> > since both Mahout and Giraph just want to add jars on the hadoop-client
>> > charm. However, in the long run we should refactor charms such as
>> > hadoop-client, spark, pig, etc., so that they have an interface exactly
>> > for that purpose (adding jars to the file system). For now you have two
>> > options: either keep using the mahout interface or use the "juju-info" [1]
>> > interface that is present in all charms. Note, however, that if you do
>> > use "juju-info" your charm would become a subordinate of any other charm
>> > even if that does not make sense.
>> >
>> > [0]
>> >
>> > http://pythonhosted.org/amulet/amulet.html?highlight=relate#amulet.deployer.Deployment.relate
>> > [1] https://jujucharms.com/docs/2.0/authors-implicit-relations
>> >
>> > On Tue, Nov 22, 2016 at 4:02 PM, Panagiotis Liakos 
>> > wrote:
>> >>
>> >> Thanks Konstantinos, you were right. Now the script progresses a little
>> >> further:
>> >>
>> >> $ python3 ./tests/01-giraph-test.py
>> >> E
>> >> ==
>> >> ERROR: setUpClass (__main__.TestDeploy)
>> >> --
>> >> Traceback (most recent call last):
>> >>   File "./tests/01-giraph-test.py", line 42, in setUpClass
>> >> cls.d.relate('giraph:giraph', 'client:mahout')
>> >>   File "/usr/lib/python3/dist-packages/amulet/deployer.py", line 431,
>> >> in
>> >> relate
>> >> raise ValueError('%s does not exist for %s' % (rel, srv))
>> >> ValueError: giraph does not exist for giraph
>> >>
>> >> --
>> >> Ran 0 tests 
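
For reference, a minimal amulet setUpClass along the lines Konstantinos
suggests above might look like the sketch below. The charm/interface names
and the 3600-second timeout come from the snippets in this thread; the
series and charm sources are assumptions, and a bootstrapped controller is
taken as given.

    # Minimal sketch of the relation fix discussed above; not the actual test.
    import unittest

    import amulet


    class TestDeploy(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            cls.d = amulet.Deployment(series='xenial')   # series is an assumption
            cls.d.add('giraph', charm='giraph')          # charm under test
            cls.d.add('client', charm='hadoop-client')   # exposes the mahout interface
            # Relate over the mahout interface on both ends, as suggested,
            # instead of the non-existent 'giraph:giraph' endpoint.
            cls.d.relate('giraph:mahout', 'client:mahout')
            cls.d.setup(timeout=3600)


    if __name__ == '__main__':
        unittest.main()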

Re: Controllers running out of disk space

2016-11-24 Thread roger peppe
Log size limits are great, and a necessary thing, but for me the
crucial thing is to have some final fail-safe if disk space does
end up getting critically low for *any* reason. Almost all the really
borked Juju installations I've seen have been due to running out of disk space.

Shutting down the MongoDB instance might be sufficient, as it's the
corrupted database that is the real problem.

We should garbage-collect resources too if that doesn't already happen.
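
Purely to illustrate the kind of fail-safe meant here (not anything Juju
ships today), a watchdog could poll free space on the MongoDB volume and
stop the database before a full disk can corrupt it. The data path, the
juju-db service name, and the 512 MB threshold below are all assumptions.

    # Illustrative fail-safe sketch only -- not part of Juju.
    import shutil
    import subprocess
    import time

    DB_PATH = '/var/lib/juju/db'        # assumed MongoDB data directory
    DB_SERVICE = 'juju-db'              # assumed service name on a controller
    MIN_FREE_BYTES = 512 * 1024 * 1024  # assumed threshold

    def watch_disk(interval=60):
        """Stop the database service once free space drops below the threshold."""
        while True:
            if shutil.disk_usage(DB_PATH).free < MIN_FREE_BYTES:
                subprocess.check_call(['systemctl', 'stop', DB_SERVICE])
                break
            time.sleep(interval)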

  cheers,
rog.


On 22 November 2016 at 10:38, John Meinel  wrote:
> Juju records a longer history in the database, but for ease of debugging we
> also save to a plain-text file, which gets rotated. It is set to rotate at
> 300MB and keep at most 2 backups, which means it should grow to roughly
> ~1GB but not beyond that. We could compress on rotation, which we don't
> currently do, but the amount we save on disk is capped.
>
> John
> =:->
>
> On Tue, Nov 22, 2016 at 11:50 AM, Jacek Nykis 
> wrote:
>>
>> On 21/11/16 23:26, Menno Smits wrote:
>> > On 18 November 2016 at 05:07, Nate Finch 
>> > wrote:
>> >
>> >> Resources are also stored in mongo and can be unlimited in size (not
>> >> much
>> >> different than fat charms, except that at least they're only pulled
>> >> down on
>> >> demand).
>> >>
>> >> We should let admins configure their max log size... our defaults may
>> >> not
>> >> be what they like, but I bet that's not really the issue, since we cap
>> >> them
>> >> smallish (the logs stored in mongo are capped, right?)
>> >>
>> >> Do we still store all logs from all machines on the controller?  Are
>> >> they
>> >> capped?  That has been a problem in the past with large models.
>> >>
>> >
>> > All logs for all agents for all models are stored in MongoDB on the
>> > controller. They are pruned every 5 minutes to limit the space used by
>> > logs. Logs older than 3 days are always removed. The logs stored for all
>> > models are limited to 4GB, with logs being removed fairly so that logs for
>> > a busy model don't cause logs for a quiet model to be prematurely removed.
>> >
>> > The 3-day and 4GB limits are currently fixed but have been implemented
>> > in such a way as to allow them to become configuration options without
>> > too much fuss.
>> >
>> > - Menno
>>
>> Hi Menno,
>>
>> Do you know if log storage in mongodb needs to be enabled in any way?
>>
>> I can see that my juju 2.0.1 controller still stores logs on disk
>> without compression:
>> /var/log/juju$ ls -lh logsink*
>> -rw--- 1 syslog syslog 300M Oct 11 07:15
>> logsink-2016-10-11T07-15-12.443.log
>> -rw--- 1 syslog syslog 300M Nov 9 08:24
>> logsink-2016-11-09T08-24-35.019.log
>> -rw--- 1 syslog syslog 287M Nov 22 09:36 logsink.log
>>
>> I think it's https://pad.lv/1494661
>>
>> --
>> Regards,
>> Jacek
>>
>>
>>
