Re: Promulgate request: prometheus-alertmanager

2017-09-18 Thread Jacek Nykis
On 18/09/17 17:27, Jacek Nykis wrote:
> Hi Folks,
> 
> Could we get cs:~prometheus-charmers/prometheus-alertmanager promulgated?
> 
> The code for the charm is
> https://git.launchpad.net/prometheus-alertmanager-charm and there's a
> team that maintains it https://launchpad.net/~prometheus-charmers. That
> team already maintains a number of other promulgated charms (mtail,
> prometheus, prometheus-pushgateway, grafana).
> 
> If anyone is interested in joining the team to help maintain
> prometheus-alertmanager and those other charms please let us know.
> 
> Thanks,
> Jacek

Sorry about the noise, it had already been done. Please ignore this request.

Jacek



signature.asc
Description: OpenPGP digital signature
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Promulgate request: prometheus-alertmanager

2017-09-18 Thread Jacek Nykis
Hi Folks,

Could we get cs:~prometheus-charmers/prometheus-alertmanager promulgated?

The code for the charm is
https://git.launchpad.net/prometheus-alertmanager-charm and there's a
team that maintains it https://launchpad.net/~prometheus-charmers. That
team already maintains a number of other promulgated charms (mtail,
prometheus, prometheus-pushgateway, grafana).

If anyone is interested in joining the team to help maintain
prometheus-alertmanager and those other charms please let us know.

Thanks,
Jacek





Re: monitoring your production juju controllers

2017-03-21 Thread Jacek Nykis
On 20/03/17 18:04, Rick Harding wrote:
> During The Juju Show #8 [1] last week we talked about how to setup your
> controllers to be provide metrics through Prometheus to Grafana to keep
> an eye on those controllers in production. 
> 
> I put together a blog post [2] about how to set it up, configure Juju,
> and links to the sample json for an out of the box dashboard to use.
> Give it a try and let me know what you would add to keep an eye on your
> own production Juju. 
> 
> Rick
> 
> 1: https://www.youtube.com/watch?v=tjp_JHSZCyA
> 2: http://mitechie.com/blog/2017/3/20/operations-in-production

Hi,

We also added the mongodb exporter [0] for more in-depth mongodb
metrics. It currently requires manual set-up because Juju's mongo does
not support relations, but once that is done it works great!

I have one question about your article - is there any specific reason
you configure the grafana datasource by hand as opposed to using the
"grafana-source" relation? This should work just fine:

juju add-relation prometheus:grafana-source grafana:grafana-source

Jacek

[0] https://github.com/dcu/mongodb_exporter
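The manual step is essentially pointing prometheus at the exporter
yourself, e.g. with a scrape job along these lines (a sketch only; the
target host and port are placeholders for wherever the exporter
listens):

```yaml
scrape_configs:
  - job_name: mongodb
    static_configs:
      # placeholder target; use the address and port the
      # mongodb_exporter was started with
      - targets: ['juju-controller-host:9001']
```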





Re: New in 2.1-beta5: Prometheus monitoring

2017-02-07 Thread Jacek Nykis
On 07/02/17 02:25, Andrew Wilkins wrote:
> Hi folks,
> 
> In the release notes there was an innocuous line about introspection
> endpoints added to the controller. What this really means is that you can
> now monitor Juju controllers with Prometheus. Juju controllers export
> metrics, including:
>  - API requests (total number and latencies by facade/method, grouped by
> error code)
>  - numbers of entities (models, users, machines, ...)
>  - mgo/txn op counts
> 
> We're working on getting the online docs updated. In the mean time, please
> refer to https://github.com/juju/docs/issues/1624 for instructions on how
> to set up Prometheus to scrape Juju. It would be great to get some early
> feedback.

Hi Andrew,

Thanks! Those metrics will be super useful, I will try to find some time
to look into them properly.

Some early feedback:
1. Your docs say the metrics endpoint requires authentication. I think
this can be problematic for people who run multiple controllers or
recycle them often. Setting up the secrets requires manual steps, and
they need to be distributed to the prometheus server. It would be very
useful to allow unauthenticated access and rely on firewalls to restrict
access (the approach followed by most prometheus exporters I have
looked at).
2. You don't offer an option to downgrade to HTTP, which is problematic
as well IMO. Similar to the above, it's an obstacle users have to go
through before they can scrape targets: manual steps are required, and
CA certs need to be shipped around. It would be very convenient if users
could explicitly fall back to HTTP and let other layers provide security.
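To make the obstacle concrete, a scrape job for a controller currently
needs something like the sketch below, with the CA cert and credentials
distributed to the prometheus host first (the endpoint path, port and
user name here are my assumptions based on the docs issue linked above,
not verified):

```yaml
scrape_configs:
  - job_name: juju-controller
    metrics_path: /introspection/metrics   # assumed endpoint path
    scheme: https
    basic_auth:
      username: user-prometheus            # assumed scrape user
      password: some-secret                # must be created and shared manually
    tls_config:
      # the controller CA cert has to be copied here by hand
      ca_file: /etc/prometheus/juju-ca.crt
    static_configs:
      - targets: ['controller.example.com:17070']
```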

Basically, I think letting users enable an unauthenticated HTTP endpoint
for prometheus metrics would be a big usability win.

Thanks,
Jacek





Re: Controllers running out of disk space

2016-11-22 Thread Jacek Nykis
On 21/11/16 23:26, Menno Smits wrote:
> On 18 November 2016 at 05:07, Nate Finch  wrote:
> 
>> Resources are also stored in mongo and can be unlimited in size (not much
>> different than fat charms, except that at least they're only pulled down on
>> demand).
>>
>> We should let admins configure their max log size... our defaults may not
>> be what they like, but I bet that's not really the issue, since we cap them
>> smallish (the logs stored in mongo are capped, right?)
>>
>> Do we still store all logs from all machines on the controller?  Are they
>> capped?  That has been a problem in the past with large models.
>>
> 
> ​All logs for all agents for all models are stored in mongodb on the
> controller. They are pruned every 5 minutes to limit the space used by
> logs. Logs older than 3 days are always removed. The logs stored for all
> models is limited to 4GB with logs being removed fairly so that logs for a
> busy model don't cause logs for a quiet model to be prematurely removed.
> 
> The 3 day and 4GB limits are currently fixed but have been implemented in
> such a way to allow them to become configuration options without too much
> fuss.
> 
> - Menno

Hi Menno,

Do you know if log storage in mongodb needs to be enabled in any way?

I can see that my juju 2.0.1 controller still stores logs on disk
without compression:
/var/log/juju$ ls -lh logsink*
-rw------- 1 syslog syslog 300M Oct 11 07:15 logsink-2016-10-11T07-15-12.443.log
-rw------- 1 syslog syslog 300M Nov  9 08:24 logsink-2016-11-09T08-24-35.019.log
-rw------- 1 syslog syslog 287M Nov 22 09:36 logsink.log

I think it's https://pad.lv/1494661
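As an aside, the fair pruning Menno describes above could be sketched
roughly as below. This is pure illustration (the names and the simple
size accounting are mine, not Juju's actual implementation):

```python
from collections import defaultdict

def prune_logs(records, max_total_bytes):
    """Fairly prune log records across models.

    records: iterable of (model, timestamp, size_bytes) tuples,
    oldest first within each model.  Repeatedly drops the oldest
    record of whichever model currently uses the most space, until
    the total fits under max_total_bytes.  Returns the dropped records.
    """
    by_model = defaultdict(list)
    for rec in records:
        by_model[rec[0]].append(rec)

    usage = {m: sum(r[2] for r in recs) for m, recs in by_model.items()}
    total = sum(usage.values())

    removed = []
    while total > max_total_bytes:
        # Evict from the model consuming the most space, not the
        # globally oldest record -- this is what keeps a busy model
        # from pushing a quiet model's logs out.
        biggest = max(usage, key=usage.get)
        victim = by_model[biggest].pop(0)   # oldest record of that model
        usage[biggest] -= victim[2]
        total -= victim[2]
        removed.append(victim)
    return removed
```

The point is that the eviction target is the model using the most space,
so a quiet model's logs survive a busy neighbour.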

-- 
Regards,
Jacek






Re: Controllers running out of disk space

2016-11-17 Thread Jacek Nykis
On 17/11/16 08:20, Uros Jovanovic wrote:
> Hi all,
> 
> I'd like to start a discussion on how to handle storage issues on
> controller machines, especially what we can do when storage is getting 95%
> or even 98% full. There are many processes that are storing data, we have
> at least:
> - charms and resources in the blobstore
> - logs in mongo
> - logs on disk
> - audit.log
> 
> Even if we put all models in the controller into read-only mode to prevent
> upload of new charms or resources to blobstore, we still have to deal with
> logs which can fill the space quickly.
> 
> While discussing about this with Rick, given the work on model migrations,
> maybe the path forward would be to allow admin to put the whole controller
> in "maintenance" mode and put all agents on "hold".
> 
> How to deal with storage issue after that is open to discussion, maybe we
> need tools to "clear" blobstore, or export/compress/truncate logs, etc.
> 
> Thoughts?

We have this bug for mongodb disk space:
https://bugs.launchpad.net/juju/+bug/1492237

There is a workaround for juju 1.25 but I don't know if it will work for
juju 2:
https://bugs.launchpad.net/juju/+bug/1492237/comments/6

Some time ago I also filed this one to fix log handling in juju:
https://bugs.launchpad.net/juju/+bug/1494661

Feel free to me too!

Jacek





Re: What are the best practices for stop hook handling?

2016-10-20 Thread Jacek Nykis
On 19/10/16 16:15, Marco Ceppi wrote:
>> 2. Don't colocate units if at all possible.  In separate containers on the
>> same machine, sure.  But there's absolutely no guarantee that colocated
>> units won't conflict with each other. What you're asking about is the very
>> problem colocation causes. If both units try to take over the same port, or
>> a common service, or write to the same file on disk, etc... the results
>> will very likely be bad.  Stop hooks should clean up everything they
>> started.  Yes, this may break other units that are colocated, but the
>> alternative is leaving machines in a bad state when they're not colocated.
>>
> 
> Colocation is a rare scenario, a more common one is manual provider.

Subordinate charms only make sense when colocated. And I would argue
that subordinates are extremely common, at least in production environments.

In any production deployment I expect some form of monitoring (nrpe,
telegraf). Many deployments will also use logstash-forwarder,
landscape-client, ntp, container-log-archive and other subordinate charms.

So you are looking at 3-4 or more services on each juju machine,
including LXC/LXD guests and manually provisioned systems.

In this context clean-up is very important, because it's not unusual for
operators to switch technologies, for example replacing telegraf with
node-exporter or collectd.

Regards,
Jacek





Re: Planning for Juju 2.2 (16.10 timeframe)

2016-04-01 Thread Jacek Nykis
On 01/04/16 14:34, Mark Shuttleworth wrote:
> 
>> * cli for storage is not as nice as other juju commands. For example we
>> have the in the docs:
>>
>> juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
>>
>> I suspect most charms will use single storage device so it may be
>> possible to optimize for that use case.
> 
> That, however, means you still have to know IF there's only one store.
> Or you have to know what the default store is. Better to just be explicit.

I think it's possible to handle all scenarios nicely.

For charms with just one store, only require "--storage-size" and DTRT.

For charms with multiple stores, require a "--store" parameter on top of
that. If it is not given, error with "This charm supports more than one
store, please specify".

For charms without storage support, if users provide one of the storage
options, error with "Storage not supported".

And for charms that do support storage but users don't ask for it, print
something like "This charm supports storage, you can try it with
--storage-size 10G".
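The dispatch logic above could be sketched like this (the function name,
flag names and messages are illustrative only, not an actual juju
interface):

```python
def storage_args_check(stores, size=None, store=None):
    """Validate simplified storage CLI flags against a charm's stores.

    stores: the set of storage names the charm declares.
    Returns the store name to deploy with, or None if no storage is
    requested; raises ValueError with a user-facing message otherwise.
    """
    if not stores:
        # Charm has no storage support at all.
        if size or store:
            raise ValueError("Storage not supported")
        return None
    if size is None:
        # Charm supports storage but the user didn't ask for any: just hint.
        print("This charm supports storage, "
              "you can try it with --storage-size 10G")
        return None
    if len(stores) == 1:
        return next(iter(stores))   # only one possible store: DTRT
    if store is None:
        raise ValueError(
            "This charm supports more than one store, please specify --store")
    return store
```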

>>  For example we could have:
>>
>> juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
>>
>> If we come up with sensible defaults for different providers we could
>> make end users' experience even better by making --storage-type optional
> 
> What would sensible defaults look like for storage? The default we have
> is quite sensible, you get the root filesystem :)

I was thinking about defaults for block-device-backed storage. We could
allow users to skip "ebs-ssd" and pick the most sensible store type for
every supported cloud. And for clouds which support just one block
storage type, use that automatically without the need to specify anything.

>> * it would be good to have ability to use single storage stanza in
>> metadata.yaml that supports all types of storage. They way it is done
>> now [0] means I can't test block storage hooks in my local dev
>> environment. It also forces end users to look for storage labels that
>> are supported
>>
>> [0] http://paste.ubuntu.com./15414289/
> 
> I'm not sure what the issue is with this one.
> 
> If we have filesystem storage it's always at the same place.
> 
> If we have a single mounted block store, it's always at the same place.
> 
> If we can attach multiple block devices, THEN you need to handle them as
> they are attached.
> 
> Can you explain the problem more clearly? We do have an issue with the
> LXD provider and block devices, which we think will be resolved thanks
> to some good kernel work on a range of fronts, but that can't surely be
> what's driving your concerns.

It's my bad, I misunderstood how things worked, so you can ignore this
point. Andrew Wilkins helpfully explained things to me earlier in this
thread (thanks Andrew).

>> * the way things are now hooks are responsible for creating filesystem
>> on block devices. I feel that as a charmer I shouldn't need to know that
>> much about storage internals. I would like to ask juju and get
>> preconfigured path back. Whether it's formatted and mounted block
>> device, GlusterFS or local filesystem it should not matter
> 
> Well, yes, that's the idea, but these things are quite subtle.
> 
> In some cases you very explicitly want the raw block. So we have to
> allow that. In other cases you just want a filesystem there, and IIRC
> that's the default behaviour in the common case. Finally, we have to
> deal with actual network filesystems (as opposed to block devices) and I
> don't think we have implemented that yet.

Sorry, this was also me misunderstanding things; Andrew already
clarified them for me (thanks again).

Jacek



Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-29 Thread Jacek Nykis
On 19/03/16 03:20, Andrew Wilkins wrote:
> It seems like the issues you've noted below are all documentation issues,
> rather than limitations in the implementation. Please correct me if I'm
> wrong.
> 
> 
>> If we come up with sensible defaults for different providers we could
>> make end users' experience even better by making --storage-type optional
>>
> 
> Storage type is already optional. If you omit it, you'll get the provider
> default. e.g. for AWS, that's EBS magnetic disks.

Good to hear, it's a simple documentation fix then.

>> * it would be good to have ability to use single storage stanza in
>> metadata.yaml that supports all types of storage. They way it is done
>> now [0] means I can't test block storage hooks in my local dev
>> environment. It also forces end users to look for storage labels that
>> are supported
>>
>> [0] http://paste.ubuntu.com./15414289/
> 
> 
> Not quite sure what you mean here. If you have a "filesystem" type, you can
> use any storage provider that supports natively creating filesystems (e.g.
> "tmpfs") or block devices (e.g. "ebs"). If you specify the latter, Juju
> will manage the filesystem on the block device.

OK, this makes sense. The documentation is really confusing on this
subject. I assumed that "location" was a pre-existing local path for
juju to use.

If juju will manage the filesystem, what's the point of having a
"location" option? Paths could easily be autogenerated, which would
remove the need to hardcode paths in metadata.yaml.
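For reference, the kind of stanza under discussion looks roughly like
this (the store name and path are illustrative):

```yaml
storage:
  data:
    type: filesystem
    # the hardcoded path in question; juju mounts the filesystem here
    location: /srv/data
```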

> * the way things are now hooks are responsible for creating filesystem
>> on block devices. I feel that as a charmer I shouldn't need to know that
>> much about storage internals. I would like to ask juju and get
>> preconfigured path back. Whether it's formatted and mounted block
>> device, GlusterFS or local filesystem it should not matter
> 
> 
> That is exactly what it does, so again, I think this is an issue of
> documentation clarity. If you're using the "filesystem" type, Juju will
> create the filesystem; if you use "block", it won't.

I am glad to hear it's just docs. I'll be happy to review them when
fixed, just let me know when it's done.




Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Jacek Nykis
On 08/03/16 23:51, Mark Shuttleworth wrote:
> *Storage*
> 
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIS)
> 
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but
> reuse block devices previously bound to an old database  instance).
> Also, I think the intersection of storage modelling and MAAS hasn't
> really been explored, and since we see a lot of interest in the use of
> charms to deploy software-defined storage solutions, this probably will
> need thinking and work.

Hi Mark,

I took juju storage for a spin a few weeks ago. It is a great idea and
I'm sure it will simplify our models (no more need for
block-storage-broker and storage charms). It will also improve security,
because block-storage-broker needs nova credentials to work.

I only played with storage briefly, but I hope my feedback and ideas
will be useful.

* IMO it would be incredibly useful to have storage lifecycle
management. Deploying a new database using a pre-existing block device,
as you mentioned, would certainly be nice. Another scenario could be
users who deploy to local disk and decide to migrate to block storage
later, without redeploying or manual data migration.

One day we may even be able to connect storage with actions. I'm
thinking of a "storage snapshot" action followed by a juju deploy, to
create an up-to-date database clone for testing/staging/dev.

* I found documentation confusing. It's difficult for me to say exactly
what is wrong but I had to read it a few times before things became
clear. I raised some specific points on github:
https://github.com/juju/docs/issues/889

* The cli for storage is not as nice as other juju commands. For example
we have this in the docs:

juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd

I suspect most charms will use a single storage device, so it may be
possible to optimize for that use case. For example we could have:

juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G

If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional

* it would be good to have the ability to use a single storage stanza
in metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look up which storage labels
are supported.

[0] http://paste.ubuntu.com./15414289/

* the way things are now, hooks are responsible for creating
filesystems on block devices. I feel that as a charmer I shouldn't need
to know that much about storage internals. I would like to ask juju and
get a preconfigured path back. Whether it's a formatted and mounted
block device, GlusterFS or a local filesystem should not matter.
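With the "block" type today, a storage hook ends up doing something like
the sketch below (the hook name, mount point and choice of ext4 are
illustrative; storage-get is the juju hook tool):

```shell
#!/bin/sh
# data-storage-attached: the charm has to format and mount the raw device itself
set -e
device=$(storage-get location)   # e.g. /dev/xvdf for a "block" store
if ! blkid "$device" >/dev/null 2>&1; then
    mkfs.ext4 "$device"          # the charm picks and creates the filesystem
fi
mkdir -p /srv/data
mount "$device" /srv/data
```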

* finally I hit 2 small bugs:

https://bugs.launchpad.net/juju-core/+bug/1539684
https://bugs.launchpad.net/juju-core/+bug/1546492


If anybody is interested in more details just ask, I'm happy to discuss
or try things out. Just note that I will be off next week, so I will
most likely reply on the 29th.


Regards,
Jacek





Re: Unit-get private-address

2014-05-27 Thread Jacek Nykis

On 27/05/14 06:06, Sameer Zeidat wrote:
> Hello,
> 
> I've created a couple of charms that I deploy to both MAAS and
> hpcloud environments.
> 
> I added a second network interface, eth1, to some of my MAAS nodes
> and now unit-get private-address returns the ip of eth1 instead of
> eth0 on these nodes. How do I work around this without changing my
> charms?
> 
> Does Juju use the new networks settings in MAAS? That would be
> ideal, i.e. I define what is private and public in MAAS networks
> and juju's unit-get matches it when looking for an ip.
> 
> Thanks,

Hi Sameer,

It could be caused by this bug:
https://bugs.launchpad.net/juju-core/+bug/1314442

-- 
Jacek



Re: Best practices for "fat" charms

2014-04-02 Thread Jacek Nykis

On 01/04/14 21:01, Andreas Hasenack wrote:
> On Tue, Apr 1, 2014 at 4:07 PM, Jorge O. Castro 
> wrote:
>> Hi everyone,
>> 
>> Matt Bruzek and I have been doing some charm testing on a machine
>> that does not have general access to the internet. So charms that
>> pull from PPAs, github, etc. do not work.
>> 
>> We've been able to "fatten" the charms by doing things like
>> creating a /files directory in the charm itself and putting the
> 
> 
> 
> There is/was a concern from #is about this because git is used to 
> manage the charm content on the unit. If you have a fat tarball in
> it, and then upgrade the charm, it's like entropy, the total size
> will just keep increasing with every charm upgrade.
> 

Size is one problem, but there is also high memory use by git, which can
cause a service outage:
https://bugs.launchpad.net/juju-core/+bug/1232304/

-Jacek
