Re: Specify an existing security group as model config?
My two cents: typical real-world requirements vary. In the enterprise you might have tiering by architectural layer (front-end WAF/ELB ingress, WAF servers, a set of DMZ components/web servers, a set of app servers, a set of DBs), all structured out with connectivity models. Typically these map m:n, on a security-group basis, to the service model, based on the model's responsibilities and consumers.

On Fri, Jan 12, 2018 at 8:09 AM, Mark Shuttleworth wrote:
> On 12/22/2017 03:03 AM, Marco Ceppi wrote:
> > When it comes to scaling operations this can be tedious. I know there
> > are configurations for VPC-ID - is there also a similar security-group
> > setting where either the default model SG will be set based on user
> > input instead of created, or a setting where an additional "model"
> > security group can be set so instances have it in addition to the
> > model/instance security group?
>
> I think it makes sense that the model creation process might accept such
> a parameter, yes.
>
> Does a security group per model make sense, or should it be per
> application in the model (though that sounds like it might be wasteful)?
>
> Mark
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
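To make the m:n mapping above concrete, here is a rough sketch of how an operator-supplied "model" security group could be merged with the per-model group Juju auto-creates. The helper name, config key, and group IDs are all hypothetical; on AWS the combined list would ultimately be passed as the SecurityGroupIds parameter of ec2:RunInstances (e.g. via boto3).

```python
def build_security_group_ids(model_sg, extra_sgs):
    """Combine the auto-created model group with operator-managed groups,
    dropping duplicates while preserving order."""
    seen, combined = set(), []
    for sg in [model_sg, *extra_sgs]:
        if sg not in seen:
            seen.add(sg)
            combined.append(sg)
    return combined

# e.g. a hypothetical model config such as:
#   juju model-config extra-security-groups=sg-waf,sg-app-tier
params = {
    "ImageId": "ami-12345678",   # made-up AMI
    "MinCount": 1,
    "MaxCount": 1,
    "SecurityGroupIds": build_security_group_ids(
        "sg-juju-model", ["sg-waf", "sg-app-tier", "sg-juju-model"]),
}
print(params["SecurityGroupIds"])
```

With per-tier groups (WAF, DMZ, app, DB) managed outside Juju, instances would carry both the Juju-managed group and the enterprise connectivity-model groups.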
Re: Support AWS EIP
Hi James,

What's the use case you're using them for? Elastic IPs in AWS are a very limited commodity: you get about 5 per region per account by default. IMO it's generally not a recommended practice to depend on them, as effectively they represent public endpoints mapping to a single instance in AWS. Using ELB or Route 53 is typically better for scale-out. Note EIPs are distinct from ENIs (re multiple private addresses) and NATs (re shadow IPs).

PS. My wishlist for Juju and AWS would be support for non-static credentials, per best practices.

On Sun, Nov 6, 2016 at 5:18 AM Mark Shuttleworth wrote:

On 05/11/16 17:42, James Beedy wrote:
> How does everyone feel about extending the AWS provider to support elastic IPs?
>
> The capability to attach EIPs using Juju would alleviate one more manual step I have to perform from the AWS console every time I spin up an instance.
>
> I have created a feature request here ->
> https://bugs.launchpad.net/juju/+bug/1639459

Yes, this would be excellent. Conceptually at least, in Juju, we have a place for "the internet" as a dedicated Network (networks are collections of spaces) and for "shadow-ip addresses" (which are addresses on one network that tunnel to addresses on another network). These concepts give us elastic IPs very naturally, but they are also important for cross-model relations in the private cloud, and I think we should map out and implement this carefully as one coherent hybrid cloud operations story.

Mark
--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
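For reference, what "Juju attaching an EIP" would boil down to on the AWS side is an allocate-then-associate pair of API calls, with the low default quota making failures common. This sketch only assembles the request parameters (boto3 call names in comments; the instance ID and helper are illustrative, and no real API call is made):

```python
DEFAULT_EIP_QUOTA = 5  # per region per account, by default

def can_allocate(current_eip_count, quota=DEFAULT_EIP_QUOTA):
    """Guard: the default quota makes EIPs a scarce resource."""
    return current_eip_count < quota

def eip_requests(instance_id, domain="vpc"):
    """Build the parameter dicts for ec2:AllocateAddress and
    ec2:AssociateAddress (boto3: allocate_address / associate_address)."""
    allocate = {"Domain": domain}
    associate = {"InstanceId": instance_id,
                 "AllocationId": "<from allocate response>"}
    return allocate, associate

alloc, assoc = eip_requests("i-0abc1234")
```

As noted above, an ELB or Route 53 record is usually the better scale-out answer; an EIP pins a public endpoint to a single instance.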
Re: Planning for Juju 2.2 (16.10 timeframe)
On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth wrote:
> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
> *Operational concerns*
> * LDAP integration for Juju controllers now we have multi-user controllers
> * Support for read-only config
> * Support for things like passwords being disclosed to a subset of user/operators
> * LXD container migration
> * Shared uncommitted state - enable people to collaborate around changes
>   they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent set up the machine / vm / container
> according to those properties.

LDAP++. Also, as brought up on the user list: better support for AWS best-practice credential management, i.e. bootstrapping with transient credentials (STS role assume, which needs AWS_SECURITY_TOKEN support), and instance roles for state servers.

> *Core Model*
> * modelling individual services (i.e. each database exported by the db application)
> * rich status (properties of those services and the application itself)
> * config schemas and validation
> * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.

In priority order: relation config, config schemas/validation, rich status. Relation config is a huge boon to services that are multi-tenant to other services, as the workaround is to create either copies per tenant or intermediaries.

> *Storage*
> * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
> * object storage abstraction (probably just mapping to S3-compatible APIs)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.

It may be out of band, but with storage comes backups/snapshots. Also of interest is encryption on block and object storage, using cloud-native mechanisms where available.

> *Clouds and providers*
> * System Z and LinuxONE
> * Oracle Cloud
>
> There is also a general desire to revisit and refactor the provider
> interface. Now we have seen many cloud providers get done, we are in a
> better position to design the best provider interface. This would be a
> welcome area of contribution for someone new to the project who wants to
> make it easier for folks creating new cloud providers. We also see constant
> requests for a Linode provider that would be a good target for a refactored
> interface.
>
> *Usability*
> * expanding the set of known clouds and regions
> * improving the handling of credentials across clouds

Autoscaling: either tighter integration with cloud-native features, or a Juju-provided abstraction.
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
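The AWS_SECURITY_TOKEN support mentioned above comes down to the fact that STS temporary credentials are a triple (key id, secret, session token) rather than a pair: a tool supports them by honoring AWS_SESSION_TOKEN (and its older alias AWS_SECURITY_TOKEN) alongside the usual two variables. The env var names below are real AWS conventions; the helper and the credential values are illustrative.

```python
def sts_env(creds):
    """Map an STS credential triple (the shape returned by sts:AssumeRole
    under the "Credentials" key) onto the standard environment variables."""
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
        # Older SDKs and tools read the token from this variable instead:
        "AWS_SECURITY_TOKEN": creds["SessionToken"],
    }

env = sts_env({"AccessKeyId": "ASIAEXAMPLE",
               "SecretAccessKey": "s3cr3t",
               "SessionToken": "tok"})
```

A provider that only reads the first two variables cannot authenticate with transient credentials at all, which is why the token is the gating feature.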
Re: Fwd: AWS Cross Account Roles
On Fri, Mar 4, 2016 at 8:25 PM, Kapil Thangavelu <kap...@gmail.com> wrote:
>
> On Fri, Mar 4, 2016 at 7:27 PM, Mark Shuttleworth <m...@ubuntu.com> wrote:
>
>> On 04/03/16 12:17, Kapil Thangavelu wrote:
>> > They can be refreshed prior to expiration to get equivalent immortality;
>> > example using the pysdk:
>> > https://gist.github.com/kapilt/ac8e222081f63ba64e93
>> >
>> > Ideal usage is actually using IAM instance roles as well for instance
>> > credentials, which basically work the same way wrt refresh intervals, as
>> > permanent credentials on servers violate AWS best practices.
>>
>> To test my understanding, is the idea that one might need to provide
>> actual credentials when deploying a service or creating a model, but
>> then the system actually keeps a token which it keeps refreshing rather
>> than keeping the full credential?
>
> Not exactly. Permanent user API credentials are just as verboten in
> some orgs as system/machine credentials.
>
> Let's take a typical enterprise on AWS for example, and granted there is
> some variation here, but afaict this is pretty standard [0]: they'll auth
> via federated auth integration (SAML) to AWS for console access. For user
> API access, it's on the basis of temporary credentials from SSO login. These
> typically get written out in a standard format (~/.aws/credentials) and
> config (~/.aws/config) readable across SDKs, and can be referenced via a
> profile name as a shortcut to specifying the API key, secret, and session token.
> If the user is then provisioning a server that wants API access, they'll
> select an IAM instance role for the server instead of using their personal
> time-limited credentials for an application.
>
> Fwiw, given the wide variety of software that interacts with AWS, most
> enterprise companies have an exception process for creating long-standing
> keys for given apps, albeit with credential rotation.
>
> The other use case this comes up in, per the original email, is cross-account
> usage, which is fairly common either between orgs or within orgs
> that have multiple accounts. In that scenario the user or app uses their
> credentials to STS role-assume (i.e. get temporary credentials) into a
> different account they've been given access to. In some circles that's a
> best practice even for non-enterprise use, to guard the primary account (aka
> bastion accounts), per the second article in [0].
>
> A lot of the legwork for this comes for free with standard SDKs, including
> the golang one, which juju doesn't use; albeit almost all of those are
> autogen'd/dynamically constructed beyond the core, with the actual service
> APIs coming from JSON file descriptors.
>
> cheers,
> Kapil
>
> [0] - some additional articles on aws security
> - https://99designs.com/tech-blog/blog/2015/10/26/aws-vault/
> - https://cloudonaut.io/your-single-aws-account-is-a-serious-risk/
> - https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf

Also, fwiw, this was filed as a feature request/bug a while ago: https://bugs.launchpad.net/juju-core/+bug/1316602
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
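The cross-account role-assume described above maps onto a single sts:AssumeRole request. A sketch of the request parameters, with field names matching the STS API but the account ID, role name, and external ID made up; with boto3 this dict would be passed as `sts_client.assume_role(**params)`, and no call is issued here:

```python
def assume_role_params(account_id, role_name, external_id, session="juju"):
    """Build the sts:AssumeRole parameters for cross-account access."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": session,
        "ExternalId": external_id,   # guards against the confused-deputy problem
        "DurationSeconds": 3600,     # temporary credentials; 1 hour here
    }

params = assume_role_params("123456789012", "marketingadmin", "my-external-id")
```

The response carries the temporary AccessKeyId/SecretAccessKey/SessionToken triple for the target account, which is exactly why session-token support is the prerequisite for any of this.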
Re: Fwd: AWS Cross Account Roles
On Fri, Mar 4, 2016 at 7:27 PM, Mark Shuttleworth <m...@ubuntu.com> wrote:
> On 04/03/16 12:17, Kapil Thangavelu wrote:
> > They can be refreshed prior to expiration to get equivalent immortality;
> > example using the pysdk:
> > https://gist.github.com/kapilt/ac8e222081f63ba64e93
> >
> > Ideal usage is actually using IAM instance roles as well for instance
> > credentials, which basically work the same way wrt refresh intervals, as
> > permanent credentials on servers violate AWS best practices.
>
> To test my understanding, is the idea that one might need to provide
> actual credentials when deploying a service or creating a model, but
> then the system actually keeps a token which it keeps refreshing rather
> than keeping the full credential?

Not exactly. Permanent user API credentials are just as verboten in some orgs as system/machine credentials.

Let's take a typical enterprise on AWS for example, and granted there is some variation here, but afaict this is pretty standard [0]: they'll auth via federated auth integration (SAML) to AWS for console access. For user API access, it's on the basis of temporary credentials from SSO login. These typically get written out in a standard format (~/.aws/credentials) and config (~/.aws/config) readable across SDKs, and can be referenced via a profile name as a shortcut to specifying the API key, secret, and session token. If the user is then provisioning a server that wants API access, they'll select an IAM instance role for the server instead of using their personal time-limited credentials for an application.

Fwiw, given the wide variety of software that interacts with AWS, most enterprise companies have an exception process for creating long-standing keys for given apps, albeit with credential rotation.

The other use case this comes up in, per the original email, is cross-account usage, which is fairly common either between orgs or within orgs that have multiple accounts. In that scenario the user or app uses their credentials to STS role-assume (i.e. get temporary credentials) into a different account they've been given access to. In some circles that's a best practice even for non-enterprise use, to guard the primary account (aka bastion accounts), per the second article in [0].

A lot of the legwork for this comes for free with standard SDKs, including the golang one, which juju doesn't use; albeit almost all of those are autogen'd/dynamically constructed beyond the core, with the actual service APIs coming from JSON file descriptors.

cheers,
Kapil

[0] - some additional articles on aws security
- https://99designs.com/tech-blog/blog/2015/10/26/aws-vault/
- https://cloudonaut.io/your-single-aws-account-is-a-serious-risk/
- https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Fwd: AWS Cross Account Roles
They can be refreshed prior to expiration to get equivalent immortality; example using the pysdk: https://gist.github.com/kapilt/ac8e222081f63ba64e93

Ideal usage is actually using IAM instance roles as well for instance credentials, which basically work the same way wrt refresh intervals, as permanent credentials on servers violate AWS best practices.

On Fri, Mar 4, 2016 at 12:37 AM John Meinel wrote:
> At the moment I don't believe we do. We just use your access key and
> secret key to identify you to EC2 when we make requests. We don't support
> using temporary credentials via AssumeRole.
>
> For those of us wanting to know more, here is the AWS page:
> http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
>
> The big internal technical limitation is that AssumeRole-based access
> needs to be refreshed periodically (the temporary keys are good for at most
> 1 hour).
>
> John
> =:->
> On Mar 3, 2016 10:46 PM, "Paul Eipper" wrote:
>
>> Hello,
>>
>> Does Juju work with AWS Cross Account Roles? Specifically, IAM users
>> that need to set the "External ID" string to assume the role?
>>
>> AWS CLI support is enabled by configuring a profile:
>> https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html#cli-roles-xaccount
>>
>> and then specifying it on the command line:
>> ```
>> aws s3 ls --profile marketingadmin
>> ```
>>
>> Is something like that supported in the Juju EC2 environment config?
>>
>> att,
>> --
>> Paul Eipper
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
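The refresh-before-expiration trick referenced above (the pysdk gist) amounts to renewing the temporary credentials some margin before their expiry rather than reacting to a failure after it. A minimal, illustrative version of that check, using a hypothetical 5-minute margin:

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expiration, margin_minutes=5, now=None):
    """True once we are within `margin_minutes` of the credential expiry;
    a refresher loop would call sts:AssumeRole again at that point."""
    now = now or datetime.now(timezone.utc)
    return expiration - now <= timedelta(minutes=margin_minutes)

exp = datetime(2016, 3, 4, 12, 0, tzinfo=timezone.utc)
print(needs_refresh(exp, now=datetime(2016, 3, 4, 11, 57, tzinfo=timezone.utc)))  # True
```

Since AssumeRole credentials last at most an hour, this is the piece Juju's state servers would need to run continuously rather than treating the credential as immortal.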
Re: "environment" vs "model" in the code
On Mon, Jan 18, 2016 at 8:24 PM, Rick Harding <rick.hard...@canonical.com> wrote:
> No, there's not been a public note yet. It's work going into the 2.0
> updates currently.
>
> The gist of the reason is that as support for things such as networking,
> storage, and workloads expands out, the idea is that Juju is doing more to
> model your infrastructure and workloads vs an environment.

Networking and storage are very much part of an application's deployment environment, i.e. they're different between dev, stage, and prod. Not sure what workloads are (renamed or built on actions, I presume).

> So far it's helped with one of the issues that Juju has had, in that it takes
> time to explain what it's actually doing before folks 'get it'.
> Starting from the point of 'take what you have running and let Juju model
> it' seems to be clicking with new folks more.

So is it a verb, is it an instance/noun, and does it also apply to templates (previously known as bundles)? I'm curious to try out the re-branding on some guinea pigs. Re what's commonly running to model: autoscale groups, ELBs, multiple networks, security groups, IAM roles, RDS.

thanks,
Kapil

> On Mon, Jan 18, 2016 at 9:15 AM Kapil Thangavelu <kap...@gmail.com> wrote:
>
>> Out of curiosity, is there any public explanation of the reason for the
>> change? Environments map fairly naturally to various service topology
>> stages, i.e. my prod, qa, dev environments, while "model" is a rather
>> opaque term that doesn't convey much.
>>
>> On Thu, Jan 14, 2016 at 7:16 PM, Menno Smits <menno.sm...@canonical.com> wrote:
>>
>>> Hi all,
>>>
>>> We've committed to renaming "environment" to "model" in Juju's CLI and
>>> API, but what do we want to do in Juju's internals? I'm currently adding
>>> significant new model/environment-related functionality to the state
>>> package, which includes adding new database collections, structs and
>>> functions which could include either "env/environment" or "model" in their
>>> names.
>>>
>>> One approach could be that we only use the word "model" at the edges -
>>> the CLI, API and GUI - and continue to use "environment" internally. That
>>> way the naming of environment-related things in most of Juju's code and
>>> database stays consistent.
>>>
>>> Another approach is to use "model" for new work[1] with the hope that
>>> it'll eventually become the dominant name for the concept. This will
>>> however result in a long period of widespread inconsistency, and it's
>>> unlikely that we'll ever completely get rid of all uses of "environment".
>>>
>>> I think we need to arrive at some sort of consensus on the way to tackle
>>> this. FWIW, I prefer the former approach. Having good, consistent names for
>>> things is important[2].
>>>
>>> Thoughts?
>>>
>>> - Menno
>>>
>>> [1] - but what defines "new", and what do we do when making significant
>>> changes to existing code?
>>> [2] - http://martinfowler.com/bliki/TwoHardThings.html
--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: "environment" vs "model" in the code
Out of curiosity, is there any public explanation of the reason for the change? Environments map fairly naturally to various service topology stages, i.e. my prod, qa, dev environments, while "model" is a rather opaque term that doesn't convey much.

On Thu, Jan 14, 2016 at 7:16 PM, Menno Smits wrote:
> Hi all,
>
> We've committed to renaming "environment" to "model" in Juju's CLI and API,
> but what do we want to do in Juju's internals? I'm currently adding
> significant new model/environment-related functionality to the state
> package, which includes adding new database collections, structs and
> functions which could include either "env/environment" or "model" in their
> names.
>
> One approach could be that we only use the word "model" at the edges - the
> CLI, API and GUI - and continue to use "environment" internally. That way
> the naming of environment-related things in most of Juju's code and
> database stays consistent.
>
> Another approach is to use "model" for new work[1] with the hope that it'll
> eventually become the dominant name for the concept. This will however
> result in a long period of widespread inconsistency, and it's unlikely that
> we'll ever completely get rid of all uses of "environment".
>
> I think we need to arrive at some sort of consensus on the way to tackle
> this. FWIW, I prefer the former approach. Having good, consistent names for
> things is important[2].
>
> Thoughts?
>
> - Menno
>
> [1] - but what defines "new", and what do we do when making significant
> changes to existing code?
> [2] - http://martinfowler.com/bliki/TwoHardThings.html
--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: state of networking?
The network support listed in the 1.25 alpha release notes (https://lists.ubuntu.com/archives/juju-dev/2015-August/004721.html) looks pretty good; I'll give it a whirl.

On Tue, Sep 15, 2015 at 7:11 AM, Kapil Thangavelu <kap...@gmail.com> wrote:
> It's been hard to see much progress on this, and I wanted to check in wrt
> the current state.
>
> The requirement of public IPs on the subnets sort of defeats the purpose
> of supporting non-default VPCs. The use of a VPC is typically around
> network segmentation and isolation semantics, i.e. db and app tier subnets
> don't have public IPs by design. In fact, at larger orgs it's not typical
> that an app deployment team would even have access to rearrange the network
> topology on demand. The learned model of informing Juju of the network
> topology for a given env by defining/importing netspace/zone from extant
> subnets is more typical.
>
> cheers,
> kapil
>
> On Fri, Jul 24, 2015 at 3:44 PM, Dimiter Naydenov <dimiter.nayde...@canonical.com> wrote:
>
>> On 23.07.2015 22:57, Kapil Thangavelu wrote:
>> > I've talked to a few folk at some conferences, but I'm curious:
>> > what's been happening in networking?
>> >
>> > It feels like it's been a fairly long time with little visible progress
>> > on end-user features. Particularly I'm curious about AWS (i.e. the
>> > world's biggest cloud :-). More concretely:
>> > - can I use existing (non-default) VPCs?
>> > - can I create/use extant subnets across zones and specify them for services?
>> > - can I control routing between subnets, or alternatively control/enforce
>> >   iptables for a service based on extant relations (optional)?
>> >
>> > Afaics most of the network progress was in various client libs over the
>> > last year (and a MAAS-centric core network model)... are there any plans
>> > to switch to the AWS API SDK instead of maintaining a separate client lib?
>>
>> Hey Kapil,
>>
>> I can report some progress on the points you've asked about:
>> 1) non-default VPC support is mostly done - see bug
>> https://bugs.launchpad.net/juju-core/+bug/1321442, which I have mostly
>> finished fixing. In brief, there will be a "vpc-id" environ setting
>> that can be used to specify a non-default (but compatible) VPC to use.
>> By compatible at this stage I mean 2 things: at least one subnet per
>> AZ, and all subnets in the VPC have MapPublicIpOnLaunch set.
>> 2) the AWS VPC support is ongoing in a feature branch; the MVP
>> proposal will include: add an existing subnet to juju (make juju aware of
>> it); create a space including one or more subnets; deploy a service
>> within a space.
>> 3) it's on the roadmap to do more sophisticated
>> routing/ACL/firewalling between spaces, but it won't happen until the
>> 16.04 time frame most likely.
>>
>> HTH,
>> Dimiter
>> --
>> Dimiter Naydenov <dimiter.nayde...@canonical.com>
>> Juju Core Sapphire team <http://juju.ubuntu.com>
--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
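The "compatible VPC" rules described in the thread (at least one subnet per required AZ, and MapPublicIpOnLaunch set on every subnet) can be expressed as a check over the output shape of ec2:DescribeSubnets. This is illustrative only; the field names follow the EC2 API, but the subnet data and the helper are made up:

```python
def vpc_compatible(subnets, required_azs):
    """Apply the two stated compatibility rules to a list of subnet dicts
    shaped like the Subnets entries from ec2:DescribeSubnets."""
    azs_covered = {s["AvailabilityZone"] for s in subnets}
    all_public = all(s.get("MapPublicIpOnLaunch") for s in subnets)
    return required_azs <= azs_covered and all_public

subnets = [
    {"SubnetId": "subnet-1", "AvailabilityZone": "us-east-1a", "MapPublicIpOnLaunch": True},
    {"SubnetId": "subnet-2", "AvailabilityZone": "us-east-1b", "MapPublicIpOnLaunch": True},
]
print(vpc_compatible(subnets, {"us-east-1a", "us-east-1b"}))  # True
```

Kapil's objection above is precisely that the second rule fails by design in segmented VPCs, since db/app-tier subnets intentionally have MapPublicIpOnLaunch disabled.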
Re: state of networking?
It's been hard to see much progress on this, and I wanted to check in on the current state. The requirement of public IPs for the subnets somewhat defeats the purpose of supporting non-default VPCs. VPCs are typically used for network segmentation and isolation semantics, i.e. DB and app-tier subnets don't have public IPs by design. In fact, at larger orgs it's not typical that an app deployment team would even have access to rearrange the network topology on demand. The more common model is to inform Juju of the network topology for a given env by defining/importing spaces/zones from extant subnets.

cheers, kapil

On Fri, Jul 24, 2015 at 3:44 PM, Dimiter Naydenov <dimiter.nayde...@canonical.com> wrote:
> On 23.07.2015 22:57, Kapil Thangavelu wrote:
> > I've talked to a few folk at some conferences, but i'm curious what's been happening in networking? it feels like its been fairly long time w/ little visible progress on end user features. particularly i'm curious about aws (ie. the worlds biggest cloud :-).. more concretely - can i use existing (non default) vpcs? - can i create/use extant subnets across zones and specify them for services? - can i control routing between subnets or alternatively control/enforce iptables for a service based on extant relations (optional)? afaics most of the network progress was in various client libs over the last year (and a maas centric core network model)... are there any plans to switch out to the aws api sdk instead of maintaining a separate client lib?
> > thanks, Kapil
>
> Hey Kapil,
>
> I can report some progress on the points you've asked about:
> 1) non-default VPC support is mostly done - see bug https://bugs.launchpad.net/juju-core/+bug/1321442, which I have mostly finished fixing. In brief, there will be a "vpc-id" environ setting that can be used to specify a non-default (but compatible) VPC to use. By compatible at this stage I mean 2 things: at least one subnet per AZ, all subnets in the VPC have MapPublicIPOnLaunch set.
> 2) the AWS VPC support is ongoing in a feature branch, the MVP proposal will include: add existing subnet to juju (make juju aware of it); create a space including one or more subnets; deploy a service within a space.
> 3) it's on the roadmap to do more sophisticated routing/ACL/firewalling between spaces, but it won't happen until the 16.04 time frame most likely.
>
> HTH,
> Dimiter
> --
> Dimiter Naydenov <dimiter.nayde...@canonical.com>
> Juju Core Sapphire team <http://juju.ubuntu.com>

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
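As a sketch, the "vpc-id" environ setting described above would presumably live alongside the other EC2 settings in environments.yaml; the environment name, region, and VPC ID below are made-up examples:

```yaml
# environments.yaml (sketch; the vpc-id value is a placeholder)
environments:
  aws-vpc:
    type: ec2
    region: us-east-1
    # target an existing, non-default (but compatible) VPC, i.e. one with
    # at least one subnet per AZ and MapPublicIPOnLaunch set on all subnets
    vpc-id: vpc-0a1b2c3d
```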
Re: [ Docs ] - Charming with Docker
I think participating in the burgeoning docker ecosystem, by making it easier to write charms that utilize docker, is a worthwhile goal. I do have some concerns, though, about the complexity of the layering that's taking place in the charm ecosystem. I've found that juju has been fairly hard to teach and adapt to real-world usage, and the layering of frameworks at the charm-authoring level makes learning and productivity for new users even harder to achieve (ie. those docs reference charm-helpers, reactive charm-helpers, the docker reactive 'layer', and charm composition), all of which are fairly advanced concepts for a new user. Frankly, is all of that necessary to understand running a shell script https://github.com/juju-solutions/layer-docker/blob/master/scripts/install_docker.sh which is effectively the jujuized version of (curl get.docker.com | sh)? Now say I want to configure an insecure registry or any other docker CLI param - I have to break apart the layer abstraction anyway. It does seem like a useful intro to some of the advanced/additional concepts in the charm-authoring ecosystem, but at the same time the burden seems high (ie. a KISS violation) for how to run a docker container.

cheers, Kapil

On Fri, Sep 11, 2015 at 11:40 AM, Charles Butler <charles.but...@canonical.com> wrote:
> If anyone here is interested in delivering Docker App Containers with Juju, mbruzek and I have put together some documents around a hot new process using composer layers and the reactive framework.
>
> This is interesting because you the charm author will only be concerned with how to deliver your application layers logic. You don't need to worry about installing docker, or how to scaffold out a full charm boilerplate. This entire process reduces the total cost of ownership of the author to just managing their app layer + container.
>
> https://github.com/juju/docs/pull/672
>
> And we've constructed/linked to an example charm using this process.
> There's probably holes in this document, and welcome feedback directly on > the pull request to suss them out. > > We're fully interested in receiving your feedback about this, as its > important that we are properly engaging our users that are interested in > delivering their dockerized app with Juju, and that we've given you the > proper lessons to do so easily, and made the right decisions in tooling. > > All the best, > > > Charles Butler- Juju Charmer > Come see the future of datacenter orchestration: http://jujucharms.com > > -- > Juju mailing list > Juju@lists.ubuntu.com > Modify settings or unsubscribe at: > https://lists.ubuntu.com/mailman/listinfo/juju > > -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
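For reference, the one-liner the thread above says the layer effectively wraps, per Kapil's message; the insecure-registry daemon flag shown in the comment is illustrative, not something the layer configures:

```shell
# Roughly what layer-docker's install_docker.sh boils down to, per the thread
# (piping a remote script to sh carries the usual caveats):
curl -fsSL https://get.docker.com | sh

# Customizing the daemon afterwards (e.g. an insecure registry) means
# editing the daemon options yourself; the flag below is illustrative:
#   DOCKER_OPTS="--insecure-registry 10.0.0.0/24"   # /etc/default/docker
```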
state of networking?
I've talked to a few folks at some conferences, but I'm curious what's been happening in networking? It feels like it's been a fairly long time with little visible progress on end-user features. Particularly I'm curious about aws (ie. the world's biggest cloud :-). More concretely:
- can I use existing (non-default) vpcs?
- can I create/use extant subnets across zones and specify them for services?
- can I control routing between subnets, or alternatively control/enforce iptables for a service based on extant relations (optional)?

AFAICS most of the network progress over the last year was in various client libs (and a maas-centric core network model)... are there any plans to switch to the aws api sdk instead of maintaining a separate client lib?

thanks, Kapil

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Openstack bootstrap error
The console log of the instance would be useful (available via horizon or the cli). Re bootstrap: it creates a security group and launches an instance with user data, so the user-data functionality of the openstack cloud must be functional for this to work. The user data for the instance sets up ssh keys; the juju client then connects to the bootstrap server via ssh to complete initialization in the foreground. Subsequently, all API calls to the IaaS layer are made by the bootstrap server (or subsequent api servers if running w/ ha).

cheers, Kapil

On Thu, Jul 2, 2015 at 8:32 AM, dinesh.senap...@wipro.com wrote:
The image-metadata and agent-metadata URLs are addressable, because while juju bootstrap is running I can see the instance coming up successfully in the dashboard. I was even able to ping and ssh to that instance. But the problem is the bootstrap command ends up failing, showing a message; below is the screenshot of the error. I am not understanding what juju is trying to do after the instance comes up, because I am getting the above error at the end. Also please explain what juju bootstrap does in an openstack environment.

*From:* Marco Ceppi [mailto:ma...@ondina.co] *Sent:* Thursday, July 02, 2015 5:47 PM *To:* Dinesh Kumar Senapaty (WT01 - Global Media Telecom); m...@ubuntu.com; juju@lists.ubuntu.com *Subject:* Re: Openstack bootstrap error

While I'm not 100% confident, I believe that the image-metadata and agent-metadata URLs need to be URLs that are addressable from within the OpenStack cloud, as they're used by cloud-init when bootstrapping, which may explain why you're not completing the process. Marco

On Thu, Jul 2, 2015 at 7:21 AM dinesh.senap...@wipro.com wrote: I have tried manual provisioning and the local provider, and I was able to ssh to the machine.
Below is my 'environments.yaml' for openstack:

    openstack:
      type: openstack
      auth-url: http://10.200.8.203:5000/v2.0
      auth-mode: userpass
      use-floating-ip: true
      use-default-secgroup: true
      admin-secret: 81a1e7429e6847c4941fda7591246594
      image-metadata-url: file:///home/controller/opt/stack/images
      agent-metadata-url: file:///home/controller/opt/stack/tools
      network: demo-net
      region: regionOne
      username: admin
      password: password
      tenant-name: admin

-----Original Message----- From: Mark Shuttleworth [mailto:m...@ubuntu.com] Sent: Thursday, July 02, 2015 4:40 PM To: Dinesh Kumar Senapaty (WT01 - Global Media Telecom); juju@lists.ubuntu.com Subject: Re: Openstack bootstrap error

On 02/07/15 06:50, dinesh.senap...@wipro.com wrote:
Hi, I have been trying to bootstrap juju for an 'openstack' environment. The instance is created and a floating IP is associated, and I am able to ping that instance, but the bootstrap later fails when it attempts to connect, giving the error: failed to bootstrap environment: waited for 10m0s without being able to connect: ssh: connect to host. Can anyone help me regarding this issue?

Are you able to deploy trusty with MAAS and ssh to the machine? Are you otherwise able to bootstrap on public clouds?

Mark

The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.
www.wipro.com

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
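As suggested in the thread above, the instance console log can be pulled via the CLI, and bootstrap can be re-run verbosely; a sketch, where the instance name is a placeholder (find yours with `nova list`):

```shell
# List instances and grab the bootstrap node's console output
# (the instance name below is a placeholder).
nova list
nova console-log juju-openstack-machine-0

# Re-run the bootstrap with verbose logging to see where it stalls:
juju bootstrap --debug
```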
Re: Messaging to new users
On Wed, Jul 1, 2015 at 9:53 PM, Tim Penhey tim.pen...@canonical.com wrote:
> I have been wondering for a while how we message to new users. I raise this because I see quite a few messages on stack overflow that go something like this: I'm really new to Juju and I'm trying to set up MaaS. I feel that if we are getting to this, we are doing something wrong. Perhaps we should have better docs around initial evaluation testing? How do we direct people more to the local provider and manual provisioning on existing hardware over setting up their own MaaS to try Juju?

The unfortunate issue is that manual and local both target arbitrary environments, which means they are the least repeatable (local firewall, pre-existing software interference). Getting machine 0 to not be special in the local provider would help somewhat, as would firewall checks and foregrounding the lxc/lxd image download. In the meantime, directing new users to a cloud provider seems best whenever possible. Partly it also depends on their goal: if their intent is to get to an openstack setup, then becoming familiar with maas is also a useful endeavour, for which a virtualbox setup is useful.

cheers, Kapil

ps. Googling "getting started with juju" brings up this video from two years ago: https://www.youtube.com/watch?v=9h5hgfnZcBQ

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Openstack bootstrap error
Hmm, that's odd. Sometimes machine console logs take a little time to propagate (e.g. it takes ~10m after machine launch in aws). Another way to get more debug info is to bootstrap with --debug (https://jujucharms.com/docs/stable/troubleshooting#bootstrap-fails) and paste a link to the pastebin/gist here.

cheers, Kapil

On Thu, Jul 2, 2015 at 9:24 AM, dinesh.senap...@wipro.com wrote: Re-bootstrapped and checked the console log, it is empty

*From:* Kapil Thangavelu [mailto:kap...@gmail.com] *Sent:* Thursday, July 02, 2015 6:30 PM *To:* Dinesh Kumar Senapaty (WT01 - Global Media Telecom) *Cc:* Marco Ceppi; Mark Shuttleworth; Juju email list *Subject:* Re: Openstack bootstrap error

The console log of the instance would be useful (available via horizon or cli). re bootstrap, it creates a security group, and launches an instances with user data, the user data functionality of the openstack cloud must be functional for this to work. the user data for the instance setups ssh keys, the juju client will then connect to the bootstrap server via ssh to complete initialization in the foreground. Subsequently all apis cloud to the iaas layer are made by the bootstrap server (or subsequent api servers if running w/ ha). cheers, Kapil

On Thu, Jul 2, 2015 at 8:32 AM, dinesh.senap...@wipro.com wrote: The image-metadata and agent-metadata URLs are addressible because while juju bootstrap is running, I can see the instance coming up successfully in the dashboard. Even I was able to ping and ssh to that instance. But the problem is the bootstrap command ends up failing showing a message. Below is the screenshot of the error. I am not understanding what is juju trying to do after the instance coming up. Because I am getting the above error at the end. Also please explain what juju bootstrap does in openstack environment.
*From:* Marco Ceppi [mailto:ma...@ondina.co] *Sent:* Thursday, July 02, 2015 5:47 PM *To:* Dinesh Kumar Senapaty (WT01 - Global Media Telecom); m...@ubuntu.com; juju@lists.ubuntu.com *Subject:* Re: Openstack bootstrap error While I'm not 100% confident, I believe that the image-metadata and agent-metadata URLs need to be a URL that is addressible from within the OpenStack cloud as they're used by cloud-init when bootstrapping which may explain why you're not completing the process. Marco On Thu, Jul 2, 2015 at 7:21 AM dinesh.senap...@wipro.com wrote: I have tried on manual provisioning and local provider and I was able to ssh to the machine. Below is my 'Environments.yaml' for openstack openstack: type: openstack auth-url: http://10.200.8.203:5000/v2.0 auth-mode: userpass use-floating-ip: true use-default-secgroup: true admin-secret: 81a1e7429e6847c4941fda7591246594 image-metadata-url: file:///home/controller/opt/stack/images agent-metadata-url: file:///home/controller/opt/stack/tools network: demo-net region: regionOne username: admin password: password tenant-name: admin -Original Message- From: Mark Shuttleworth [mailto:m...@ubuntu.com] Sent: Thursday, July 02, 2015 4:40 PM To: Dinesh Kumar Senapaty (WT01 - Global Media Telecom); juju@lists.ubuntu.com Subject: Re: Openstack bootstrap error On 02/07/15 06:50, dinesh.senap...@wipro.com wrote: Hi, I have been trying to bootstrap juju for 'openstack' environment. The instance being created and floating ip is also associated and also I am able to ping that instance but later bootstrap fails when it is attempting to connect giving the error failed to bootstrap environment: waited for 10m0s without being able to connect: ssh: connect to host .Can anyone help me regarding this issue? Are you able to deploy trusty with MAAS and ssh to the machine? Are you otherwise able to bootstrap on public clouds? 
Mark

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: any ways to update unit public address ?
On Tue, May 26, 2015 at 4:36 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote:
> On Tue, May 26, 2015 at 11:45 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
> > Hi! Users sometimes change their server IP address; is it possible to change a unit's public address? Or is the only way to edit the mongodb database on the state server?
>
> Hi Vasiliy, Do you mean in relation settings? The charm's config-changed hook will be invoked when the machine addresses change. You can use this hook to update a unit's relation settings. There was a long thread about automatically updating the address, but we didn't go there because it would break proxy charms; charms that manage remote services, presenting addresses for remote machines.

Relation settings only propagate the private IP, which juju sets, and IMO it's a bug: juju should update the value and invoke the relation-changed hook if the address changes and the setting still has the previously set value (i.e. that way it still works with proxy charms). At least that was my summary going in and out of that monster thread. At the moment juju doesn't update the addresses it set, but it will invoke config-changed when addresses change, afair; effectively zero charms are ready to handle that, though.

Re public address: unit-get public-address makes it available as info inside the charm, but there isn't a mechanism per se to modify the address or be notified about changes to that information. You can manually modify the relation settings that convey private/public address info with juju run, a la https://gist.github.com/kapilt/a61efcb4eaef9e685397

It might be helpful if you could clarify the context in which the public address changed and how it's causing a problem, with a concrete example.
cheers, Kapil

Cheers, Andrew

-- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru

-- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
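A sketch of the `juju run` approach from the gist mentioned above; the unit name, relation id, and address are all placeholders (find the real relation id by running `relation-ids <relation-name>` on the unit):

```shell
# Show what the unit currently advertises on the relation
# (unit name and relation id below are placeholders).
juju run --unit mydb/0 'relation-get -r db:2 - mydb/0'

# Overwrite the private address the unit advertises on that relation
# (address is a placeholder).
juju run --unit mydb/0 'relation-set -r db:2 private-address=10.0.1.42'
```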
Re: Upcoming change in 1.24: tags in EC2
On Thu, May 21, 2015 at 10:26 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote:
> Hi all, Just a small announcement, in case anyone cares. In the EC2 provider, from 1.24, we will start tagging instances and volumes with their Juju-internal names and the Juju environment UUID. Instances, for example, will have a name of machine-0, machine-1, etc., corresponding to the ID in juju status. There is not currently an upgrade step to tag existing resources, so only newly created resources will be tagged. The environment UUID can be used to identify all instances and volumes that are part of an environment. This could be used for billing purposes; to charge the infrastructure costs for a Juju environment to a particular user/organisation. We will be looking at doing the same for OpenStack for 1.24 also. Cheers, Andrew

That's super awesome, and very helpful for real-world usage. A few suggestions: for users with multiple environments, seeing a bunch of machine-0 entries in the UI is rather confusing; I'd suggest prefixing with the env name. Potentially even more useful is naming the machines not for their pet names but their cattle names (workload name), i.e. name them after the primary unit that caused the machine to exist, or the first unit assigned to the machine (minus state servers).

cheers, Kapil

-- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Upcoming change in 1.24: tags in EC2
On Mon, May 25, 2015 at 9:23 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Tue, May 26, 2015 at 12:18 PM, Richard Harding rick.hard...@canonical.com wrote: On Tue, 26 May 2015, Andrew Wilkins wrote: On Tue, May 26, 2015 at 4:05 AM, Mark Shuttleworth m...@ubuntu.com wrote: On 25/05/15 18:57, Kapil Thangavelu wrote: That's super awesome, and very helpful for real world usage. A few suggestions, For users with multiple environments, seeing a bunch machine-0 in the ui, is rather confusing, i'd suggest prefixing with the env name. Potentially even more useful is actually naming the machines not for their pet names, but their cattle names (workload name), ie. name with the primary unit that caused the machine to exist, or was the first unit assigned to the machine (minus state servers). Agreed; for full chargeback we need environment uuid, for social debugging we need some sort of environment name, unit names and charm(s) deployed, including in containers on the machine. For EBS it would be the store name, uuid, and unit identity. Kapil, Mark, thanks for the suggestions. Sounds good, I'll look at doing that. A concern I have is that these resources can be reassigned (units added, volume assigned to different store) so those tags would then be misleading. That's the main reason why I avoided including information about the workload/store in the name. I suppose the benefit outweighs, and we could look at updating tags later on. Cheers, Andrew One suggestion is being careful about what tags might already exist on a machine that a user might have set through their own control UI. If Juju is tracking tags it sets we should make sure it never messed with ones it did not set. When we come to updating existing machines, we won't touch existing tags. The tags we do add, apart from Name, will be prefixed with Juju so they're obviously under the management of Juju. Change them at your own peril. 
This does highlight a problem of how we identify whether or not Juju can update the resource's Name tag, though. We would either never update it, and live with possibly-wrong machine names, or alternatively we'd have a sufficiently unique name format that Juju will replace only if it matches.

One option is to leave the machine id static at allocation (i.e. what it is now) and then track workloads dynamically in an additional tag under the juju namespace. At least with aws, this does degrade console usability, as the user will be looking at a flat list of machines and will have to poke into details to learn the workload on an individual machine. There's some mitigation in that the aws console added a nice feature earlier this year for browsing resource groups via query-by-tag, albeit with additional end-user setup. Compound values (all services on a machine) can be searched via contains; i.e. I could find all machines running units of service xyz in a given env with juju-env=uuid and juju-units contains xyz. There are some usability/discoverability issues with env uuid vs name. aws limits to 10 tags and 255-length utf8 values, and most orgs reserve some set of tags for their own use. The alternative, for dynamic values, means storing both the Name tag as desired vs last-known state and reconciling with the extant value to avoid the overwrite, or verifying the value's format convention as you said. In addition to the env tag, it would be very good to allow the user to specify static tags to be associated with all env resources, so that juju can interop with existing org classification and chargeback schemes. FWIW, GCE has a more traditional tag impl of just tag values, and instance names are static. GCE provides a chargeback mechanism out of the box at the GCE project level in addition to the account level.
cheers, Kapil

Cheers, Andrew

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
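For illustration of the tag-query workflow discussed in the thread above, here is an AWS CLI sketch; the tag key (`juju-env-uuid`) and UUID are assumptions based on the discussion, not confirmed Juju tag names - check the actual tags on your instances:

```shell
# Find all instances carrying a given Juju environment UUID tag
# (the tag key and UUID below are illustrative placeholders).
aws ec2 describe-instances \
  --filters 'Name=tag:juju-env-uuid,Values=1234abcd-0000-0000-0000-000000000000' \
  --query 'Reservations[].Instances[].{id:InstanceId,tags:Tags}'
```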
Re: Deploying Juju charms from Github
On Tue, Apr 14, 2015 at 6:09 AM, Robert Day robert@metaswitch.com wrote:
> Thanks for the response! I've:
> * found https://code.launchpad.net/~whitmo/juju-deployer/git-clone-fix and applied those changes to my local copy of juju-deployer
> * removed the "--depth 1" from vcs.py
> * corrected my bundle to the below
>
>     services:
>       clearwater-bono:
>         charm: clearwater-bono
>         branch: https://github.com/Metaswitch/clearwater-juju@46785ea8368a47c4351a516bb9a76763f6d4a952
>
> I then get this error:
>
>     2015-04-14 09:33:21 Using deployment clearwater
>     2015-04-14 09:33:21 Starting deployment of clearwater
>     2015-04-14 09:33:21 Invalid config charm clearwater-bono zone=clearwater.local
>     2015-04-14 09:33:21 Deployment stopped. run time: 0.52
>
> I'm pretty sure that this is because juju-deployer expects the charm's metadata.yaml file (or config.yaml, etc.) to be in precise/clearwater-bono/metadata.yaml, whereas due to the way my Git repository is laid out, it's actually in precise/clearwater-bono/charms/precise/clearwater-bono/metadata.yaml (the Git repository is checked out correctly to precise/clearwater-bono, but my charms are all in the charms/precise subdirectory of that repository; it's at https://github.com/Metaswitch/clearwater-juju if you want to see what I mean).

Specifically, the error is saying the key 'zone' is not found in the charm's config.yaml. If you have a single directory of charms in vcs, it's a bit easier to not have deployer do the vcs handling: check out the repo yourself, set the JUJU_REPOSITORY env var to the root of the checkout dir, omit the vcs info from the bundle config, and then just run juju-deployer. Granted, that has the downside of requiring additional instructions when sharing.

> I think I could correct this by having one repository per charm, but I'd like to keep all my charms in the same repository if possible; there are six or seven of them and they're quite closely related.
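A sketch of the checkout-it-yourself workflow described above; the repo URL is from the thread, while the bundle filename and charm subdirectory are illustrative:

```shell
# Clone the charm repo once and skip deployer's vcs handling entirely:
git clone https://github.com/Metaswitch/clearwater-juju.git
export JUJU_REPOSITORY=$PWD/clearwater-juju/charms

# The bundle then references local charms with no branch/vcs keys,
# and juju-deployer resolves them under $JUJU_REPOSITORY:
juju-deployer -c bundle.yaml
```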
It feels like having an option in the bundle to say "the charm files are in this subdirectory of the Git repository" would be useful; I might try and put that patch together, unless there are reasons why that wouldn't be a good idea (or unless something similar already exists). Something like:

    clearwater:
      services:
        clearwater-bono:
          charm: clearwater-bono
          branch: https://github.com/Metaswitch/clearwater-juju@46785ea8368a47c4351a516bb9a76763f6d4a952
          charms_subdirectory: charms/precise/clearwater-bono
        clearwater-sprout:
          charm: clearwater-sprout
          branch: https://github.com/Metaswitch/clearwater-juju@46785ea8368a47c4351a516bb9a76763f6d4a952
          charms_subdirectory: charms/precise/clearwater-sprout

Actually, I think a different syntax would be nicer:

    clearwater:
      charm_repository: https://github.com/Metaswitch/clearwater@987132asdf
      services:
        charm: clearwater-bono

That repo directory would get checked out and set as the JUJU_REPOSITORY for the rest of the services, i.e. it would propagate for all local charms automatically, but they could override as needed. Patches welcome.

cheers, Kapil

> Thanks, Rob
>
> From: Kapil Thangavelu [mailto:kap...@gmail.com] Sent: 14 April 2015 05:57 To: Robert Day Cc: juju@lists.ubuntu.com Subject: Re: Deploying Juju charms from Github
>
> There's a few things going on. The charm key in the bundle should just be the charm name. The git support has a bug (there's a pending merge proposal for this) re parent dir/checkout dir which is the cause of those tracebacks. The @ syntax works for revision and tags but not branches. Given its common for branch co-location with git unlike bzr that seems reasonable, the @rev specification will likely need some additional syntax to support branches. Another issue is deployer is optimizing for the deployment case so its taking shallow copies of repos (--depth 1 on git clone). With git that seems to preclude fetching remote branch info.
The shallow clone is probably of marginal utility given git's speed, with bzr its a significant speed up. Its going to take a few days to sort out changes to support this. -kapil On Mon, Apr 13, 2015 at 11:11 AM, Robert Day robert@metaswitch.com wrote: Hi all, I'm trying to deploy a Juju bundle and set of charms hosted on Github rather than Launchpad, but I'm running into some problems. Although juju-deployer does appear to support this, I haven't found many examples, so I'm not certain I'm doing the right thing - if anyone has this working, and could point me at their Github repository, that'd be great. . I'm using juju-deployer version 0.4.3-0ubuntu1~ubuntu12.04.1~ppa1, juju-core version 1.22.0-0ubuntu1~12.04.2~juju1 and Git version 2.3.3. The bundle I'm using is at https://raw.githubusercontent.com/Metaswitch/clearwater-juju/github_everywhere/charms/bundles/clearwater/bundle/bundles.yaml. The Github repository/branch is https://github.com/Metaswitch/clearwater-juju/, which contains several charms and the bundle
Re: Deploying Juju charms from Github
There's a few things going on. The charm key in the bundle should just be the charm name. The git support has a bug (there's a pending merge proposal for this) re parent dir/checkout dir which is the cause of those tracebacks. The @ syntax works for revision and tags but not branches. Given its common for branch co-location with git unlike bzr that seems reasonable, the @rev specification will likely need some additional syntax to support branches. Another issue is deployer is optimizing for the deployment case so its taking shallow copies of repos (--depth 1 on git clone). With git that seems to preclude fetching remote branch info. The shallow clone is probably of marginal utility given git's speed, with bzr its a significant speed up. Its going to take a few days to sort out changes to support this. -kapil On Mon, Apr 13, 2015 at 11:11 AM, Robert Day robert@metaswitch.com wrote: Hi all, I'm trying to deploy a Juju bundle and set of charms hosted on Github rather than Launchpad, but I'm running into some problems. Although juju-deployer does appear to support this, I haven't found many examples, so I'm not certain I'm doing the right thing - if anyone has this working, and could point me at their Github repository, that'd be great. . I'm using juju-deployer version 0.4.3-0ubuntu1~ubuntu12.04.1~ppa1, juju-core version 1.22.0-0ubuntu1~12.04.2~juju1 and Git version 2.3.3. The bundle I'm using is at https://raw.githubusercontent.com/Metaswitch/clearwater-juju/github_everywhere/charms/bundles/clearwater/bundle/bundles.yaml. The Github repository/branch is https://github.com/Metaswitch/clearwater-juju/, which contains several charms and the bundle. 
What I want is:
- to check out the dnsaas branch of https://github.com/Metaswitch/clearwater-juju.git
- to deploy the charm in the subdirectory 'charms/precise/clearwater-bono'

so I have the following in my bundle:

services:
  clearwater-bono:
    charm: charms/precise/clearwater-bono
    branch: https://github.com/Metaswitch/clearwater-juju@dnsaas

When I run juju-deployer, it fails with "No such file or directory" when trying to deploy the charm at https://github.com/Metaswitch/clearwater-juju/tree/dnsaas/charms/precise/clearwater-bono (though there's nothing special about this charm, it's just the first one in the bundle):

$ juju-deployer -c https://raw.githubusercontent.com/Metaswitch/clearwater-juju/github_everywhere/charms/bundles/clearwater/bundle/bundles.yaml
2015-04-13 13:50:20 Using deployment clearwater
2015-04-13 13:50:20 Starting deployment of clearwater
Traceback (most recent call last):
  File "/usr/bin/juju-deployer", line 9, in <module>
    load_entry_point('juju-deployer==0.4.3', 'console_scripts', 'juju-deployer')()
  File "/usr/local/lib/python2.7/dist-packages/deployer/cli.py", line 130, in main
    run()
  File "/usr/local/lib/python2.7/dist-packages/deployer/cli.py", line 228, in run
    importer.Importer(env, deployment, options).run()
  File "/usr/local/lib/python2.7/dist-packages/deployer/action/importer.py", line 188, in run
    self.get_charms()
  File "/usr/local/lib/python2.7/dist-packages/deployer/action/importer.py", line 63, in get_charms
    no_local_mods=self.options.no_local_mods)
  File "/usr/local/lib/python2.7/dist-packages/deployer/deployment.py", line 139, in fetch_charms
    os.mkdir(charm.series_path)
OSError: [Errno 2] No such file or directory: 'precise/charms/precise'

If I create that directory (with mkdir -p precise/charms/precise) I get a different error:

$ juju-deployer -c https://raw.githubusercontent.com/Metaswitch/clearwater-juju/github_everywhere/charms/bundles/clearwater/bundle/bundles.yaml
2015-04-13 13:50:58 Using deployment clearwater
2015-04-13 13:50:58 Starting deployment of clearwater
Traceback (most recent call last):
  File "/usr/bin/juju-deployer", line 9, in <module>
    load_entry_point('juju-deployer==0.4.3', 'console_scripts', 'juju-deployer')()
  File "/usr/local/lib/python2.7/dist-packages/deployer/cli.py", line 130, in main
    run()
  File "/usr/local/lib/python2.7/dist-packages/deployer/cli.py", line 228, in run
    importer.Importer(env, deployment, options).run()
  File "/usr/local/lib/python2.7/dist-packages/deployer/action/importer.py", line 188, in run
    self.get_charms()
  File "/usr/local/lib/python2.7/dist-packages/deployer/action/importer.py", line 63, in get_charms
    no_local_mods=self.options.no_local_mods)
  File "/usr/local/lib/python2.7/dist-packages/deployer/deployment.py", line 140, in fetch_charms
    charm.fetch()
  File "/usr/local/lib/python2.7/dist-packages/deployer/charm.py", line 132, in fetch
    self.vcs.update(self.rev)
  File "/usr/local/lib/python2.7/dist-packages/deployer/vcs.py", line 99, in update
    self._call(params, self.err_update)
  File "/usr/local/lib/python2.7/dist-packages/deployer/vcs.py", line 30, in _call
    args, cwd=cwd or
Re: Using juju with Scaleway
Prior to the rename they had a manual-provider-based cli plugin, but it looks like it needs to be updated for the latest api version: https://github.com/online-labs/juju-onlinelabs On Fri, Apr 3, 2015 at 10:59 AM Zygmunt Krynicki zygmunt.kryni...@canonical.com wrote: Hi, Is it possible to use juju with the recently announced [1] scaleway [2] ARM cloud? Best regards ZK [1] https://insights.ubuntu.com/2015/04/02/ubuntu-arm-and-the-public-cloud/ [2] https://www.scaleway.com/ -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
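Until such a plugin is updated, any host you can ssh into (including a Scaleway ARM machine) can be driven through juju's manual provider. A minimal environments.yaml sketch for juju 1.x, where the address and user below are placeholders:

```yaml
# environments.yaml sketch for the juju 1.x manual provider.
# bootstrap-host / bootstrap-user are placeholders -- substitute the
# public address and login user of an existing Ubuntu machine.
environments:
  scaleway:
    type: manual
    bootstrap-host: 1.2.3.4
    bootstrap-user: ubuntu
```

After `juju bootstrap`, additional machines can be enlisted with `juju add-machine ssh:ubuntu@<host>`.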
Re: Reg Juju Api
On Wed, Feb 25, 2015 at 8:05 PM, Rajendar K k.rajen...@gmail.com wrote: Hi Mark, Thanks for your kind reply. It helps me understand juju better. Here are my few queries. *(i) Each charm starts with a blank machine, like centos6 or trusty or windows8, and then does what it needs to do to add the service it describes to that machine. So for each cloud you just need to know what the blank machine image is for the OS versions your charms will use. In future we might support creating snapshots which can be reused for faster startup of additional machines. [Mark]* - I am very impressed with the drag-and-drop VM deployment. My question is about how the deployment of VMs is made from the charm store [the way I drag and drop from the charm store]. Is the image format handled at the charm store neutral, or how is it handled to cater across clouds? The charms specify a symbolic identifier for an OS name and version, aka the series, which is resolved in each cloud using simplestreams tools that map those symbolic names to actual images in that cloud. For all the major public clouds, simplestreams data is published for users. For private clouds or smaller public clouds, the user has to manage that process themselves. The case of smaller public clouds typically entails either re-using the manual provider (i.e. client-side api automation with ssh or userdata initialization) or writing a cloud provider for juju. There are several client-side plugins using the manual provider extant (digitalocean, softlayer, etc). The source tree for juju (https://github.com/juju/juju) contains the various provider implementations. (ii) Re configuration management (building relations): how is configuration management handled [not for creation of new charms]? For existing charms, how is the configuration made (by drawing the relation between the VMs)?
For e.g.: WordPress with MySQL - is the relation already pre-defined on each charm [how are the IP and hostname exchanged across the VMs]? Relations form a bi-directional communication channel across which this information is carried and actions taken. I.e. mysql, upon receiving a new client relation, will create a new logical db, user, and password and pass those along with its address over the relation to its client. The charms themselves declare their relations in metadata by interface; i.e. mysql declares it provides the mysql interface, wordpress declares it requires the mysql interface. Alternate implementations of those interfaces are readily interchangeable, e.g. using amazon aurora/rds for mysql. (iii) How to download and use charms from the charm store? Atm the charm store contains charm artifacts pulled from vcs in launchpad. There are a few mirrors or dual-published charms on github as well. I need to download a charm from the charm store and boot on KVM/hypervisor. The juju environment will automatically do that for you; else you can check one out via vcs and deploy it as a local charm (see docs). Also it would be useful if you could detail the format (image format) managed at the charm store. There are no images being managed by the charm store, just the symbolic identifier as described above, embedded in the charm typically at publishing time. cheers, Kapil with thanks and regards, Raj On Wed, Feb 25, 2015 at 10:19 PM, Mark Shuttleworth m...@ubuntu.com wrote: On 25/02/15 11:30, Rajendar K wrote: Quite new to this forum. Welcome! I would like to know the details about the juju API for communication with public clouds (Amazon, etc), and where I can download and start using it. There is code built into Juju that knows about each cloud; we call that a provider, it's like a driver for the cloud, and it maps what Juju needs to the API of that particular cloud.
Those are usually written in Go and built into Juju core itself; the libraries can typically be reused in your own Go project easily enough and we would take patches if they were helpful for others too. If you have a cloud that speaks an entirely new API, there is a short-cut to getting up and running, which is called a plug-in. The plug-in runs on the client, not the server, and basically allows you to use shell scripts that talk to your cloud, and have Juju call those when it needs to do things like start a new machine. The machines are started by your shell script, then Juju remotely logs in to the machine and manually configures it. There are a few sets of plug-ins for popular clouds that don't yet have full providers built-in to Juju. I have my own cloud infrastructure, is it possible to call those APIs for managing VMs across Cloud platforms? Yes, if your cloud talks a common API like AWS or OpenStack, then you can probably use the native API support built in to Juju, otherwise I would suggest you start with a plug-in and then write a Go
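The interface declarations described earlier in this thread (mysql provides, wordpress requires) live in each charm's metadata.yaml. Abridged sketches of what those declarations look like — the field shapes follow the standard charm metadata format, though the exact upstream files may differ:

```yaml
# mysql charm (abridged) -- declares that it provides the "mysql" interface
name: mysql
provides:
  db:
    interface: mysql

# wordpress charm (abridged) -- declares that it requires the same interface;
# "juju add-relation wordpress mysql" matches the two by interface name
name: wordpress
requires:
  db:
    interface: mysql
```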
Re: juju and openstack reseting secgroups automatically overnight
in some sense this is expected behavior: juju syncs the iaas resources it creates to its internal state. Re workarounds: at least for openstack (or ec2 vpc), if you want manually created security rules, you should ideally create a separate group + rules and attach it to the relevant instances. The other option, for services that are exposed, is to use 'juju run' on a service/unit/all machines to open-port 22. hth, -kapil On Thu, Feb 12, 2015 at 6:52 AM, Caio Begotti caio1...@gmail.com wrote: Thanks, Michael. I see you filed the bug last night (I went away after posting my message) but I just added some findings and my scenario to the report. In case others want to check it out: https://bugs.launchpad.net/juju-core/+bug/1420996 — Caio Begotti [ˈka.jo | be.ˈgɔ.t͡ʃi] On Wed, Feb 11, 2015 at 6:39 PM, Michael Nelson michael.nel...@canonical.com wrote: On Thu, Feb 12, 2015 at 5:39 AM, Caio Begotti caio1...@gmail.com wrote: Hi folks, I wonder if any of you have had this problem before, but Juju and Openstack are resetting my secgroup rules every night. I hope this is comprehensible without much detail as it involves private deployment info... I know this is not strictly speaking 100% Juju but anyway... I've just checked my ec2 test deployments, and I'm seeing the same behaviour on the secgroups there. Definitely worth a bug Caio (I'll do it if you don't get around to it, I don't see one at https://bugs.launchpad.net/juju-core/?field.searchtext=secgroup ). -Michael Juju creates the secgroup for Nova, right? I am manually setting a nova secgroup-add-rule for port 22 like the following: nova secgroup-add-rule groupname tcp 22 22 ipaddress/32 However, my other rules (ICMP etc) are kept between days, but SSH rules for port 22 are being reset and disappearing overnight. Is it a known issue or expected behavior with Juju and Openstack? I was told Juju or Openstack (no idea who is at fault here, really) might reset the secgroups from time to time (when exactly?)
if the specified port in the rule is not open in the Juju units. Ok, so I have created this charm https://jujucharms.com/u/caio1982/open-port/ and I confirm that now port 22 is open in all the related units whose IPs are in the secgroup rules. Still, all SSH rules for port 22 are being reset every single night. Does it make sense? Right now I have an extra secgroup rule for 0.0.0.0/0 too, just to see what happens tonight. I would really love to understand why Juju and Openstack are not playing nice together with my secgroup rules :-( — Caio Begotti [ˈka.jo | be.ˈgɔ.t͡ʃi]
Re: Availability of alpha Easy Juju on Azure disk image
That's pretty awesome for a cloud-integrated quickstart. Nice work. Sounds like it would work for the aws marketplace and gce click-to-deploy as well. cheers, Kapil On Wed, Feb 11, 2015 at 3:16 AM, Samuel Cozannet samuel.cozan...@canonical.com wrote: Thanks Marco, that is awesome :) For those interested, this image comes with a blog post on our insights site http://insights.ubuntu.com/2015/02/10/create-devops-magic-on-azure-with-canonicals-juju/ Also available on the Azure channel9 blog http://channel9.msdn.com/Blogs/AzurePartner/Guest-Post-Create-DevOps-Magic-on-Azure-with-Canonicals-Juju Have fun reading and don't forget to share the love! :) Best, Sam On 11 Feb 2015 at 04:26, Marco Ceppi ma...@ondina.co wrote: Hey Andrew, I agree with you on the feedback; since it's an HTML page at the moment it's hard to incorporate those items, but I've forked the repo and started building a lightweight Python app to provide that feedback to the user. Marco On Tue Feb 10 2015 at 9:37:48 PM Andrew Wilkins andrew.wilk...@canonical.com wrote: Hi Samuel, Looks neat. A few things: 1. Once the VM is ready, the entire landing page keeps refreshing at 1s intervals. That doesn't leave a lot of time to add the .publishsettings file and upload. 2. There's no feedback to say whether or not the .publishsettings file has been uploaded, and whether bootstrapping is underway. I guess I did it twice because of point 1, because now I have two juju bootstrap processes on the machine :) 3. When the GUI eventually came up, it wouldn't accept the password that the page displayed. This is related to point 2: the page gave me the link to the GUI for one env, and the password for the other. It'd be great if the upload-credentials bit greyed out once uploaded, and then under "3. Wait for a few minutes..." there was something describing the current status, e.g. Installing Juju, Deploying Juju GUI.
Cheers, Andrew On Tue, Feb 10, 2015 at 11:01 PM, Samuel Cozannet samuel.cozan...@canonical.com wrote: Dear All, Yesterday we released an alpha of a new Ubuntu image on MS Azure VM Depot: https://vmdepot.msopentech.com/Vhd/Show?vhdId=50248 If you use that image and spin up a VM with it, you'll be able to upload your Azure .publishsettings file to its web interface. From that moment, the VM will bootstrap a Juju environment and install the Juju GUI. The main web page auto-refreshes and presents the link to the Juju GUI with a password when it's ready. The whole process from upload to Juju GUI takes about 10min to complete, and does not require any knowledge of Juju or Ubuntu to start playing. It leaves you with a fully functional Juju environment using your default subscription as your main Azure provider. We hope you enjoy this new and easy way to start with Juju. We aim to make this image available from the Marketplace once we gather enough feedback and fix the remaining bugs. The whole code and explanations are available at https://github.com/SaMnCo/juju-azure. Please use that repo to send feedback while we make it better. You can also reply to this thread on the list to ask any question or make any feature request you may have.
Best, Samuel -- Samuel Cozannet Cloud, Big Data and IoT Strategy Team Business Development - Cloud and ISV Ecosystem Changing the Future of Cloud Ubuntu http://ubuntu.com / Canonical UK LTD http://canonical.com / Juju https://jujucharms.com samuel.cozan...@canonical.com mob: +33 616 702 389 skype: samnco Twitter: @SaMnCo_23
keeping state in units
Hi Folks, I just merged a new charm helper that I wanted to highlight. I've seen quite a few charms tracking state in various ways, from ad hoc files per setting to the config settings helper, which leads to some charms having a half-dozen state tracking files. Ideally charms can be written such that they just consider current state and effect the appropriate changes. However, it's quite natural for some charms to need local state, as they have to apply deltas from multiple sources against the state they've already achieved on a unit. The charmhelper unitdata.py provides a versioned, transactional kv store for a unit, backed by a single sqlite file. It provides methods for returning deltas against previously known values, to simplify delta calculation and application by the charm. Docs are embedded in the implementation, link below; feedback welcome. http://goo.gl/B0AdTR cheers, Kapil
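The idea — a sqlite-backed key/value store that can report deltas against previously stored values — can be sketched in a few lines. This is a minimal illustration of the concept, not the actual charmhelpers unitdata API (the class and method names here are hypothetical):

```python
import json
import sqlite3


class KV(object):
    """Minimal sketch of a sqlite-backed kv store with delta tracking,
    in the spirit of the unitdata helper described above.
    NOT the real charmhelpers API -- an illustrative sketch only."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "create table if not exists kv (key text primary key, data text)")

    def set(self, key, value):
        # Values are stored as JSON so any basic structure round-trips.
        self.conn.execute(
            "insert or replace into kv values (?, ?)", (key, json.dumps(value)))
        self.conn.commit()

    def get(self, key, default=None):
        row = self.conn.execute(
            "select data from kv where key = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else default

    def delta(self, mapping, prefix):
        """Return {key: (old, new)} for values that changed since the
        last call, and persist the new values -- the pattern a charm
        uses to act only on settings that actually changed."""
        changes = {}
        for k, v in mapping.items():
            full = "%s.%s" % (prefix, k)
            old = self.get(full)
            if old != v:
                changes[k] = (old, v)
                self.set(full, v)
        return changes
```

A charm-style usage: call `delta(config, "config")` in the config-changed hook and restart the service only when the returned dict is non-empty.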
new release of python-jujuclient
Hi Folks, I just pushed a new release of python-jujuclient (0.5.0) @ https://pypi.python.org/pypi/jujuclient Change highlights: docs on getting started, api signatures, and examples @ http://python-jujuclient.readthedocs.org It now has coverage for all the client facade apis (backups, ha, annotations, key managers, etc) and will auto-negotiate for the best version available (if any) against a given environment. It's compatible with Python 2 and Python 3. Enjoy, Kapil
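The "auto-negotiate for the best version available" behavior boils down to intersecting the facade versions the server advertises at login with the versions the client implements, and picking the highest. A sketch of that selection logic (a hypothetical helper for illustration, not python-jujuclient's actual code):

```python
def negotiate_facades(server_facades, client_supported):
    """Pick the highest facade version supported by both sides.

    server_facades: the 'Facades' list from the login response,
        e.g. [{"Name": "Charms", "Versions": [1]}, ...]
    client_supported: versions this client implements per facade,
        e.g. {"Charms": [0, 1]}
    Returns {facade_name: chosen_version}, omitting facades with
    no version in common.
    """
    chosen = {}
    for facade in server_facades:
        name = facade["Name"]
        common = set(facade["Versions"]) & set(client_supported.get(name, []))
        if common:
            chosen[name] = max(common)
    return chosen
```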
Re: Rackspace OpenStack configuration
re the delta from the juju openstack driver to rackspace: in a nutshell, a slightly different auth mechanism for keystone (trivial) and no api for network security groups (an optional openstack extension). fwiw, swift originated at rackspace, so they definitely have object storage, and they do have tenant-exposed neutron network capabilities. On Wed, Jan 28, 2015 at 2:32 AM, John Meinel j...@arbash-meinel.com wrote: Unfortunately the Openstack that Rackspace exposes is not quite the same as the official openstack releases. (I believe they don't expose storage, and a couple of the other APIs aren't the same.) John =:- On Wed, Jan 28, 2015 at 10:11 AM, Sajith Vijesekara saji...@hsenidmobile.com wrote: Hi all, I spent a lot of time trying to find a configuration guide for the Rackspace OpenStack public cloud. I have cloud space in Rackspace, so I want to bootstrap juju into that OpenStack cloud space. I have followed the general configuration for OpenStack (https://juju.ubuntu.com/docs/config-openstack.html), but I only got an access-key and username. So I need to know what the auth_url for Rackspace is and how to create a secret-key in Rackspace.

access-key: *
admin-secret: [generated key]
auth-mode: keypair
auth-url: https://identity.api.rackspacecloud.com/v2.0/
control-bucket: [generated key]
default-series: trusty
region: OS_REGION_NAME
tenant-name: OS_TENANT_NAME
type: openstack
use-floating-ip: false
username: myusername

Is this information enough to bootstrap juju in OpenStack? Thanks Sajith
Re: How best to install Spark?
On Wed, Jan 28, 2015 at 1:54 PM, Ken Williams ke...@theasi.co wrote: Hi Sam/Amir, I've been able to 'juju ssh spark-master/0' and I successfully ran the two simple examples for pyspark and spark-shell, ./bin/pyspark sc.parallelize(range(1000)).count() 1000 ./bin/spark-shell scala sc.parallelize(1 to 1000).count() 1000 Now I want to run some of the spark examples in the spark-examples*.jar file, which I have on my local machine. How do I copy the jar file from my local machine to the AWS machine ? I have tried 'scp' and 'juju scp' from the local command-line but both fail (below), root@adminuser:~# scp /tmp/spark-examples-1.2.0-hadoop2.4.0.jar ubuntu@ip-172-31-59:/tmp ssh: Could not resolve hostname ip-172-31-59: Name or service not known lost connection root@adminuser:~# juju scp /tmp/spark-examples-1.2.0-hadoop2.4.0.jar ubuntu@ip-172-31-59:/tmp ERROR exit status 1 (nc: getaddrinfo: Name or service not known) Any ideas ? juju scp /tmp/spark-examples-1.2.0-hadoop2.4.0.jar spark-master/0:/tmp Ken On 28 January 2015 at 17:29, Samuel Cozannet samuel.cozan...@canonical.com wrote: Glad it worked! I'll make a merge request to the upstream so that it works natively from the store asap. Thanks for catching that! Best, Samuel On Wed, Jan 28, 2015 at 6:15 PM, Ken Williams ke...@theasi.co wrote: Hi Sam (and Maarten), Cloning Spark 1.2.0 from github seems to have worked! I can install the Spark examples afterwards. Thanks for all your help! Yes - Andrew and Angie both say 'hi' :-) Best Regards, Ken On 28 January 2015 at 16:43, Samuel Cozannet samuel.cozan...@canonical.com wrote: Hey Ken, So I had a closer look at your Spark problem and found out what went wrong.
The charm available on the charm store is trying to download Spark 1.0.2, and the versions available on the Apache website are 1.1.0, 1.1.1 and 1.2.0. There is another version of the charm available on GitHub that will actually deploy 1.2.0:

1. On your computer, create the folders below:

cd ~
mkdir charms
mkdir charms/trusty
cd charms/trusty

2. Clone the Spark charm:

git clone https://github.com/Archethought/spark-charm spark

3. Deploy Spark from the local repository:

juju deploy --repository=~/charms local:trusty/spark spark-master
juju deploy --repository=~/charms local:trusty/spark spark-slave
juju add-relation spark-master:master spark-slave:slave

Worked on AWS for me just minutes ago. Let me know how it goes for you. Note that this version of the charm does NOT install the Spark examples. The files are present though, so you'll find them in /var/lib/juju/agents/unit-spark-master-0/charm/files/archive Hope that helps... Let me know if it works for you! Best, Sam On Wed, Jan 28, 2015 at 4:44 PM, Ken Williams ke...@theasi.co wrote: Hi folks, I'm completely new to juju so any help is appreciated. I'm trying to create a hadoop/analytics-type platform. I've managed to install the 'data-analytics-with-sql-like' bundle (using this command): juju quickstart bundle:data-analytics-with-sql-like/data-analytics-with-sql-like This is very impressive, and gives me virtually everything that I want (hadoop, hive, etc) - but I also need Spark.
The Spark charm (http://manage.jujucharms.com/~asanjar/trusty/spark) and bundle ( http://manage.jujucharms.com/bundle/~asanjar/spark/spark-cluster) however do not seem stable or available and I can't figure out how to install them. Should I just download and install the Spark tar-ball on the nodes in my AWS cluster, or is there a better way to do this ? Thanks in advance, Ken
Re: New charms client
the root issue ended up being a little different: my client wasn't explicitly passing api facade versions, which meant I was getting version 0 of the facade (per the go default int value). all of which worked fine except when the facade didn't have a version 0, as is the case for Annotations, Charms, and HA on trunk. thanks, Kapil On Sun, Jan 25, 2015 at 4:18 PM, roger peppe rogpe...@gmail.com wrote: On 25 January 2015 at 16:53, Kapil Thangavelu kapil.thangav...@canonical.com wrote: odd, i don't show any deltas (godeps/install and output below).. and i'm only getting it on a few of the facades (charms and annotations), not all. i'll play around with it a bit more in a bit. good to know about the functional api tests (i was wondering). thanks for the tips. kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u dependencies.tsv kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u dependencies.tsv kapil@realms-slice:~/src/github.com/juju/juju$ go install -v github.com/juju/juju kapil@realms-slice:~/src/github.com/juju/juju$ Note that that last line only installs the top-level juju Go package, not the juju command. Better would be go install github.com/juju/juju/... (or within the juju project directory, just go install ./...) to install everything. go install github.com/juju/juju/cmd/juju would install the command only. cheers, rog. On Sun, Jan 25, 2015 at 10:04 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Fri, Jan 23, 2015 at 11:32 PM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: I'm having some problems actually using this api - is it enabled, or does it need a feature flag? return self.rpc._rpc({ Type: Charms, Request: List, Params: {Names: names}}) gets jujuclient.EnvError: Env Error - Details: { u'Error': u'unknown object type Charms', u'ErrorCode': u'not implemented', u'RequestId': 1, u'Response': { }} The same code works for every other facade, using a trunk checkout. I do see the Charms facade in the login data, ie.
Did you run godeps -u dependencies.tsv? I was seeing weird behaviour similar to this (different facade tho), updated dependencies and it went away. Cheers, Andrew {u'EnvironTag': u'environment-fb933e3d-5293-486a-8ff9-7ac565271c35', u'Facades': [{u'Name': u'Action', u'Versions': [0]}, {u'Name': u'Agent', u'Versions': [0, 1]}, {u'Name': u'AllWatcher', u'Versions': [0]}, {u'Name': u'Annotations', u'Versions': [1]}, {u'Name': u'Backups', u'Versions': [0]}, {u'Name': u'CharmRevisionUpdater', u'Versions': [0]}, {u'Name': u'Charms', u'Versions': [1]}, On Mon, Jan 19, 2015 at 1:59 AM, Anastasia Macmood anastasia.macm...@canonical.com wrote: Hi I have just landed a new charms client. This client can list charms. The intention is to have a dedicated charms client for 1.23, deprecating the old client. However, at the moment the only ported method from the old client is CharmInfo. Sincerely Yours, Anastasia -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
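The root cause in this thread — omitting the facade version and getting Go's zero value — suggests always carrying an explicit `Version` field in each RPC payload. A sketch of building such a request (the field names follow the messages quoted above; the helper function itself is hypothetical):

```python
def make_rpc_request(facade, request, params, version, request_id):
    """Build a Juju API RPC payload with an explicit facade version,
    so the server doesn't fall back to version 0 (Go's default int).
    Hypothetical helper -- field names follow the quoted messages."""
    return {
        "Type": facade,
        "Version": version,
        "Request": request,
        "Params": params,
        "RequestId": request_id,
    }
```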
Re: A cautionary tale of names
On Mon, Jan 12, 2015 at 10:03 AM, roger peppe roger.pe...@canonical.com wrote: On 12 January 2015 at 15:43, Gustavo Niemeyer gust...@niemeyer.net wrote: A few quick notes: - Having an understandable name in a resource is useful It's also good to be clear about what a name actually signifies. Currently (unless things have changed since I last looked) it's entirely possible to start an environment with one name, then send the resulting .jenv file to someone else, who can store it under some other name and still access the environment under the different name. Local aliases/names are nice - no worry about global namespace clashes. But I agree that meaningful resource names are useful too. One possibility is that the UUID could incorporate the original environment name (I guess it would technically no longer be a UUID then, but UUID standards are overrated IMHO). Another possibility is to provide some other way to give a name at environment bootstrap time (e.g. a config option) that would be associated with resources created by the environment. This is effectively what happens, albeit implicitly: the name is associated at bootstrap and is used by the state server when provisioning resources. I.e. in this context (aws) we don't actually use native tag facilities (part of why all instances allocated by juju are missing names in the aws console), but instead use a security group for implicit tagging. The secgroup name corresponds to this initial bootstrap name; other users can name the env how they want, as further provisioning is done by the state servers, which will continue to use the initial bootstrap name. There are still niggles here around destroy-environment --force if it's client-side. The secgroup name in aws can be up to 255 chars. It would be good if we used tags better for aws resources (instances, drives, etc) as it can help usability (aws console) and cost accounting (it's very common to roll up charges by tags for chargeback).
-k
Re: Announcement: gocharm
very cool, thanks roger. On Fri, Dec 19, 2014 at 9:25 AM, roger peppe roger.pe...@canonical.com wrote: For those Juju fans that also like Go: http://rogpeppe.wordpress.com/2014/12/19/gocharm-juju-charms-in-go/ Enjoy!
separating out golang juju client from
one of the issues with having it in tree is that client usage falls under the AGPL. We want to have the client used widely under a more permissive license. I've already had contributions to other projects n'acked due to the license on our libraries. I'd like to see it moved to a separate repo so that's possible. Thoughts? cheers, Kapil
Re: separating out golang juju client from
On Fri, Dec 19, 2014 at 7:02 AM, Nate Finch nate.fi...@canonical.com wrote: While I am generally for using more permissive licenses, I'm not sure how useful that might be... most significant changes require modifications to both the client and the server, or at least to libraries used by both. That sort of misses the point of building apps that use the juju apis. Yes, the two packages need to be updated together for new changes, same as today. There's not that much code under cmd/juju compared to the whole rest of the repo. Again, it's not about that code; it's about building other applications and facilitating integrations. cheers, Kapil On Fri, Dec 19, 2014 at 6:03 AM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: one of the issues with having it in tree is that client usage falls under the AGPL. We want to have the client used widely under a more permissive license. I've already had contributions to other projects n'acked due to the license on our libraries. I'd like to see it moved to a separate repo so that's possible. Thoughts? cheers, Kapil
Re: vCloud Director Support
On Wed, Dec 17, 2014 at 7:23 AM, Vahric Muhtaryan vah...@doruk.net.tr wrote: Hello all, MAAS and Juju look like very good products. As an infrastructure, will you add vCloud Director? We are a VMware vCloud Air Network partner and I'm thinking of integrating juju with our platform. Are any dev or product managers on this list? There are dev and product managers on this list. A vmware provider is a substantial development and maintenance effort. It's definitely of interest, but it's not currently in scope on the core roadmap. Contributions for it would be welcome. VMware has been working on go api bindings which will make that work easier: https://github.com/vmware/govcloudair and https://github.com/vmware/govmomi cheers, Kapil -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Fwd: [Calico] Project Calico and Juju Charms
-- Forwarded message -- From: Cory Benfield cory.benfi...@metaswitch.com Date: Fri, Dec 12, 2014 at 4:26 AM Subject: [Calico] Project Calico and Juju Charms To: cal...@lists.projectcalico.org cal...@lists.projectcalico.org Hi everybody, I'm excited to announce that, after a fairly substantial amount of work, our first preview release of Juju Charms for Project Calico is available! Juju is a tool written by Canonical that makes it extremely easy to build and orchestrate complex services. One of its most powerful features is that it can combine with Metal-as-a-Service, another Canonical tool, to make it extremely simple to deploy OpenStack in a number of configurations. For the past couple of months we've been working on making it possible to build an OpenStack deployment using Juju that uses Calico to provide the OpenStack networking. Today marks the public availability of the first set of beta charms for doing just that. As of right now you can go online and get started with OpenStack Calico using Juju. It's never been easier! If you've already got a MAAS + Juju system up and running, you can get started right now. Attached to this email is the bundle we sent to the OpenStack Interoperability Lab. You should be able to drag and drop this bundle into your Juju GUI and immediately get OpenStack with Calico running. This particular bundle deploys OpenStack very densely, loading almost all the management services onto a single server, and using two more to act as compute nodes. If you want a more spread out deployment, feel free to edit the bundle: if you need assistance in doing that, please don't hesitate to ask for help on this list. If you're not familiar with Juju or MAAS and want to give them a try, you can find out more on the Juju website[0] and the MAAS website[1]. Both products are free and open source, and are absolutely worth checking out. The source code for the charms is available on Launchpad. The links are below[2-7]. 
In the next few weeks we aim to get our modifications to existing charms upstreamed into those existing charms. We also plan to add our custom charms to the charm store, though that will require a bit more work. The eventual goal is that you will be able to install OpenStack with Calico seamlessly with Juju, making it easier than ever to simplify and scale your OpenStack deployment. Keep watching this space! Cory (on behalf of the Project Calico team). [0]: https://jujucharms.com/ [1]: https://maas.ubuntu.com/ [2]: https://code.launchpad.net/~cory-benfield/charms/trusty/bird/trunk [3]: https://code.launchpad.net/~cory-benfield/charms/trusty/calico-acl-manager/trunk [4]: https://code.launchpad.net/~cory-benfield/charms/trusty/neutron-api/trunk [5]: https://code.launchpad.net/~cory-benfield/charms/trusty/nova-cloud-controller/trunk [6]: https://code.launchpad.net/~cory-benfield/charms/trusty/nova-compute/trunk [7]: https://code.launchpad.net/~cory-benfield/charms/trusty/neutron-calico/trunk ___ Calico mailing list cal...@lists.projectcalico.org http://lists.projectcalico.org/listinfo/calico -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
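[For readers who haven't met Juju bundles before: a deployer-style bundle is just YAML naming services, unit counts, and relations. The sketch below is illustrative only - it is not the bundle attached to the original mail, and the service names and counts are assumptions:]

```yaml
# Illustrative sketch of a deployer-style Juju bundle -- not the actual
# OIL bundle; names, revisions, and unit counts are made up.
openstack-calico:
  services:
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
    nova-compute:
      charm: cs:trusty/nova-compute
      num_units: 2
    neutron-calico:
      charm: cs:trusty/neutron-calico   # subordinate; no num_units
  relations:
    - [nova-cloud-controller, nova-compute]
    - [nova-compute, neutron-calico]
```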
Re: local provider
On Fri, Dec 12, 2014 at 11:26 AM, Nate Finch nate.fi...@canonical.com wrote: It seems like a lot of people get confused by Juju, because it is different from the tools they know. They want to deploy stuff with Juju, and so they get a machine from AWS/Digital Ocean/whatever, ssh into the machine, install juju, and run the local provider, and then wonder why they can't access their services from outside the machine. I think this stems from two things - one is that people are used to chef/puppet/etc where you ssh into the machine and then run the install there (note: I know nothing about these tools, so may be mis-characterizing them). Whereas with Juju, you are perfectly able to orchestrate an install on a remote machine in the cloud from your laptop. The other is the local provider. The intent of the local provider is to give users a way to easily try out Juju without needing to spin up real machines in the cloud. It's also very useful for testing out charms during charm development and/or testing service deployments. It's not very useful for real production environments... but some people still try to shoehorn it into that job. I think one easy thing we could do to better indicate the purpose of the local provider is to simply rename it. If we named it the demo provider, it would be much more clear to users that it is not expected to be used for deploying a production environment. This could be as easy as aliasing local to demo and making the default environments.yaml print out with the local provider renamed to demo. (feel free to s/demo/testing/ or any other not-ready-for-production word) What do you think? no, that's a bad idea, imo. First, as you say, it's many people's first experience with juju, and its deployment usage fits some folks' production needs very well (ie. "I have a big machine in the corner and juju can deploy workloads on it").
I think the issue is primarily one of implementation, and of the mindset among developers/implementers that we don't support it. Most of the reasons why it's different at an implementation level disappear with lxd, at which point we should support it for dev and prod. -k -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Custom AWS Endpoint / Region
It requires a custom compilation atm, as the list of valid aws endpoints [0] is embedded in a library dependency @ launchpad.net/goamz (the originator of 50+ forks on github). It would be nice to have the endpoints constructed at runtime from environments.yaml config parameters; the primary issue is that endpoint configuration is typically a bit more in-depth, even for ec2+s3, than an end user would typically engage with (as an example, https://github.com/mitchellh/goamz/blob/master/aws/aws.go). [0] https://github.com/juju/juju/blob/master/provider/ec2/config.go#L123 On Wed, Dec 3, 2014 at 9:06 AM, Michael Hempel michael.hem...@xmsoft.de wrote: Hello all, we are running an internal AWS-like cloud and I would like to use Juju to deploy services. Is there a way to configure a custom AWS-like endpoint which can be used in the environments.yaml? Any help is much appreciated. Thanks Michael -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
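[To make the constraint above concrete: the kind of runtime endpoint configuration being wished for might look like the following environments.yaml stanza. This is purely hypothetical - juju has no such ec2-endpoint/s3-endpoint keys today; the field shape is loosely borrowed from goamz's Region struct.]

```yaml
# HYPOTHETICAL sketch -- these endpoint keys do not exist in juju;
# they only illustrate what runtime endpoint config could look like.
environments:
  internal-cloud:
    type: ec2
    region: internal-1
    ec2-endpoint: https://ec2.internal.example.com   # made-up key
    s3-endpoint: https://s3.internal.example.com     # made-up key
    access-key: <your-key>
    secret-key: <your-secret>
```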
Re: juju vs. Puppet/Chef/Salt/Ansible/etc.
On Mon, Dec 8, 2014 at 6:35 PM, Eric Snow eric.s...@canonical.com wrote: The reaction I get most often from folks that aren't familiar with juju and skim through the juju site is that it looks like a competitor to the various configuration management tools out there like Puppet or Salt. However, my experience is that while they have some overlap, they sit at different layers. Agreed. I think the messaging on the sites could use some work. I just went through the front pages of juju.ubuntu.com and jujucharms.com and I couldn't tell you from those why it's not a config management tool. The word orchestration doesn't appear on the front page of either site! The new site is better but has gems like "Why use juju - 'Reduce workloads from days to minutes'" - what does that even mean, it's a runtime workload accelerator? Have I grown out of touch? Conceivably those projects have or are working on juju-like functionality that I'm not aware of. They aren't; there is, though, a whole new set of tools working on orchestration features, and while it may yet be a rather ambiguous term, they still advertise themselves as such. The growth of containers/docker has reinforced the value of orchestration tools, since image delivery obviates most config management; ie. having a bunch of containers that don't talk to each other across nodes is obviously a problem in want of a solution (discovery, connectivity, topology composition), aka orchestration. If not (or even if so), what's the best way to educate people on what juju is and how it will help them when they're already steeped in the lower-layer config management world? Explain orchestration as a higher-level construct which focuses on service management via iaas provisioning, service discovery, and service automation (db creation, etc.) in a reusable way, coupled with an ecosystem of service definitions that offers user-composed multi-node solutions.
Try showing them a deployer config/bundle and ask them to compare it to the comparable lower-level tool config. Related to that, how can we help those same folks wrap all their existing recipes, etc. in charms? It's got to be easy enough that they can justify the effort. michael's reply goes through the cm tool most used in charms, ansible. We've got production and example charms written with several different tools. In a nutshell: the cm tool is run solo against the single host, with facts/vars fed in via config and relations, and is executed in place of hooks. cheers, Kapil -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
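[As a rough illustration of the "cm tool run solo, fed by config and relation data, executed in place of hooks" pattern: a hook might shell out to ansible-playbook against localhost with juju-provided values as extra-vars. This is a sketch; the playbook path and variable names are made up.]

```python
import subprocess


def ansible_cmd(playbook, context):
    """Build an ansible-playbook invocation that runs against localhost,
    with juju config/relation values passed through as extra-vars."""
    extra_vars = " ".join("%s=%s" % (k, v) for k, v in sorted(context.items()))
    return ["ansible-playbook", "-c", "local", "-i", "localhost,",
            playbook, "--extra-vars", extra_vars]


# A config-changed hook would gather values (e.g. via config-get and
# relation-get) into `context` and then run:
#   subprocess.check_call(ansible_cmd("playbooks/site.yaml", context))
```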
Re: Dynamic osd-devices selection for Ceph charm
On Sat, Nov 29, 2014 at 11:25 AM, John McEleney john.mcele...@netservers.co.uk wrote: Hi all, I've been working on the Ceph charm with the intention of making it much more powerful when it comes to the selection of OSD devices. I wanted to knock a few ideas around to see what might be possible. The main problem I'm trying to address is that with the existing implementation, when a new SAS controller is added, or drive caddies get swapped around, drive letters (/dev/sd[a-z]) get swapped around. As the current charm just asks for a list of devices, and that list of devices is global across the entire cluster, it pretty-much requires all machines to be identical, and unchanging. I also looked into using /dev/disk/by-id, but found this to be too inflexible. Below I've pasted a patch I wrote as a stop-gap for myself. This patch allows you to list model numbers for your drives instead of /dev/ devices. It then dynamically generates the list of /dev/ devices on each host. The patch is pretty unsophisticated, but it solves my immediate problem. However, I think we can do better than this. I've been thinking that xpath strings might be a better way to go. I played around with this idea a little. This will give some idea how it could work:
==
root@ceph-store1:~# lshw -xml -class disk > /tmp/disk.xml
root@ceph-store1:~# echo 'cat //node[contains(product,"MG03SCA400")]/logicalname/text()' | xmllint --shell /tmp/disk.xml | grep '^/dev/'
/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
==
So, that takes care of selecting by model number. How about selecting drives that are larger than 3TB?
==
root@ceph-store1:~# echo 'cat //node[size>3000000000000]/logicalname/text()' | xmllint --shell /tmp/disk.xml | grep '^/dev/'
/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
==
Just to give some idea of the power of this, take a look at the info lshw compiles:
<node id="disk:3" claimed="true" class="disk" handle="GUID:-a5c7-4657-924d-8ed94e1b1aaa">
 <description>SCSI Disk</description>
 <product>MG03SCA400</product>
 <vendor>TOSHIBA</vendor>
 <physid>0.3.0</physid>
 <businfo>scsi@1:0.3.0</businfo>
 <logicalname>/dev/sdf</logicalname>
 <dev>8:80</dev>
 <version>DG02</version>
 <serial>X470A0XX</serial>
 <size units="bytes">4000787030016</size>
 <capacity units="bytes">5334969415680</capacity>
 <configuration>
  <setting id="ansiversion" value="6" />
  <setting id="guid" value="-a5c7-4657-924d-8ed94e1b1aaa" />
  <setting id="sectorsize" value="512" />
 </configuration>
 <capabilities>
  <capability id="7200rpm">7200 rotations per minute</capability>
  <capability id="gpt-1.00">GUID Partition Table version 1.00</capability>
  <capability id="partitioned">Partitioned disk</capability>
  <capability id="partitioned:gpt">GUID partition table</capability>
 </capabilities>
</node>
So, you could be selecting your drives by vendor, size, model, sector size, or any combination of these and other attributes. The only reason I didn't go any further with this idea yet is that lshw -C disk is incredibly slow. I tried messing around with disabling tests, but it still crawls along. I figure that this wouldn't be that big a deal if you could cache the resulting xml file, but that's not fully satisfactory either. What if I want to hot-plug a new hard-drive into the system? lshw would need to be run again. I thought that maybe udev could be used for doing this, but I certainly don't want udev running lshw once per drive at boot time as the drives are detected. I'm really wondering if anyone else has any advice on either speeding up lshw, or if there's any other simple way of pulling this kind of functionality off. Maybe I'm worrying too much about this.
As long as the charm only fires this hook rarely, and caches the data for the duration of the hook run, maybe I don't need to worry? i'm wondering if, instead of lshw and the time consumption there, we could continue with lsblk; there's a bit more information there (size, model, rotational, etc.) which seems to satisfy most of the lshw examples you've given, and it is relatively fast in comparison. ie. https://gist.github.com/kapilt/d0485d6fac3be6caaed2 another option: here's a script around a similar use case that does hierarchical info of drives from the controller on down and supports layered block devs. http://www.spinics.net/lists/raid/msg34460.html current implementation @ https://github.com/pturmel/lsdrv/blob/master/lsdrv cheers, Kapil John Patch to match against model number (NOT REGRESSION TESTED):
=== modified file 'config.yaml'
--- config.yaml 2014-10-06 22:07:41 +0000
+++ config.yaml 2014-11-29 15:42:41 +0000
@@ -42,16 +42,35 @@
 These devices are the range of devices that will
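[To sketch the lsblk direction: `lsblk -dnb -o NAME,SIZE,ROTA,MODEL` gives one line per whole disk with size in bytes, and is cheap compared to lshw. A charm could parse that and filter by model or size. A rough sketch; the column order, helper names, and sample data below are assumptions, not the charm's actual code.]

```python
def parse_lsblk(text):
    """Parse `lsblk -dnb -o NAME,SIZE,ROTA,MODEL` output: one whole-disk
    device per line, size in bytes; MODEL is last since it may contain spaces."""
    devices = []
    for line in text.strip().splitlines():
        name, size, rota, model = line.split(None, 3)
        devices.append({"dev": "/dev/" + name, "size": int(size),
                        "rotational": rota == "1", "model": model.strip()})
    return devices


def select_devices(devices, min_size=0, model=None):
    """Return device paths at least min_size bytes, optionally matching model."""
    return [d["dev"] for d in devices
            if d["size"] >= min_size and (model is None or model in d["model"])]


# Sample output (made up) standing in for a real lsblk invocation:
sample = """\
sda  4000787030016 1 MG03SCA400
sdb   240057409536 0 Samsung SSD 850
"""
disks = parse_lsblk(sample)
# e.g. all drives of a given model that are larger than 3TB:
big = select_devices(disks, min_size=3 * 10**12, model="MG03SCA400")
```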
Re: new release of jujucharms.com with docs and perf improvements.
one more suggestion, given the site is ssl.. spdy (future http/2) would be a significant improvement with connection reuse and pipelining. On Tue, Nov 25, 2014 at 8:45 PM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: definitely helps a bit on speed. already covered on irc, but please cache the html on the site. the design on the solutions page needs some work, either dropping the icons or at min. switching out to png and progressive icon loading (ala infinite page/scroll plugins). -k On Tue, Nov 25, 2014 at 7:11 PM, Rick Harding rick.hard...@canonical.com wrote: The full blog post is here: http://jujugui.wordpress.com/2014/11/26/jujucharms-com-large-sweeping-update-1/ The big thing is that we've gotten our first big iteration of the new site out the door, which includes a number of performance improvements, and the big thing to note is that the site is now ingesting the Juju docs ( https://jujucharms.com/docs/). We'll be working with folks to update links to the new site. The urls should be consistent, just with the different domain name. Currently they auto-ingest and update every 15min, like charms/bundles, and are rebuilt. We've talked with the docs team about making sure we can support their upcoming work towards versioned docs, and we'll also be working with UX on an experience for the start of a 'global juju search' that includes the charm/bundle data as well as docs and any other content as we move the new home site of Juju forward. Please make sure to try it out and file bugs at https://github.com/CanonicalLtd/jujucharms.com/issues I want to thank the team for their hard work on this update! -- Rick Harding Juju UI Engineering https://launchpad.net/~rharding @mitechie -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: RFC: unmounting /mnt on AWS instances
+smoser On Mon, Nov 17, 2014 at 8:13 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote: Hi all, I am working on introducing storage as a first-class primitive in Juju. Charms will be able to indicate that they require storage (block devices, filesystems...), and when you deploy that charm you will be able to specify some parameters in order to fulfil the storage requirement. One thing I'd like to do is provide users a means of assigning ephemeral disks to units. The AMIs we use on AWS currently auto-mount the first ephemeral disk (if there is one) at /mnt. I think Azure does something similar; not sure about other providers. Are you relying on /mnt being there? If so, would it cause you a headache if this were taken away, by performing a umount /mnt after booting? (Bear in mind you can still mount it afterwards if you want to). Cheers, Andrew -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
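[For charms that currently assume the ephemeral disk is at /mnt, a cheap defensive check is possible rather than trusting the AMI's auto-mount. A minimal sketch, not part of any existing charm:]

```python
import os


def ephemeral_mounted(path="/mnt"):
    """True if something is actually mounted at `path` (e.g. the first
    ephemeral disk that AWS AMIs currently auto-mount there)."""
    return os.path.ismount(path)


# A charm would check this before putting data under /mnt, and fall back
# (or mount the device itself) when it returns False.
```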
Re: Feature Request: show running relations in 'juju status'
On Mon, Nov 17, 2014 at 11:23 PM, Ian Booth ian.bo...@canonical.com wrote: On 17/11/14 15:47, Stuart Bishop wrote: On 17 November 2014 07:13, Ian Booth ian.bo...@canonical.com wrote: The new Juju Status work planned for this cycle will hopefully address the main concern about knowing when a deployed charm is fully ready to do the work for which it was installed, ie the current situation whereby a unit is marked as Started but is not ready. Charms are able to mark themselves as Busy and also set a status message to indicate they are churning and not ready to run. Charms can also indicate that they are Blocked and require manual intervention (eg a service needs a database and no relation has been established yet to provide the database), or Waiting (the database on which the service relies is busy but will resolve automatically when the database is available again). As long as the 'ready' state is managed by juju and not the unit, I'll stand happily corrected :-) The focus I'd seen had been on the unit declaring its own status, and there is no way for a unit to know that it is ready because it has no way of knowing that, for example, there are another 10 peer units being provisioned that will need to be related. You are correct that the initial scope of work is more about the unit, and less about the deployment as a whole. There are plans though to address the issue. We're throwing around the concept of a goal state, which is conceptually akin to looking forward in time to be able to inform units what relations they will expect to participate in and what units will be deployed. There'd likely be something like a relation-goals hook tool (to complement relation-list and relation-ids), as well as hook(s) for when the goal state changes. There's ongoing work in the uniter by William to get the architecture right so this work can be considered. There's still a lot of value in the current Juju Status work, but as you point out, it's not the full story. for clusters...
it's not a question of futures but of being informed of the known unit count to establish quorum, ie 1 to 3 or n+1. Leader election helps, but actually knowing the unit count is critical to being able to establish a clear state without throwing away data (aka a race on peers knowing quorum and leader), as ad hoc leader election has to throw away data from non-leaders who may already be serving clients due to lack of quorum knowledge. So although there are not currently plans to show the number of running hooks in the first phase of this work, mechanisms are being provided to allow charm authors to better communicate the state of their charms to give much clearer and more accurate feedback as to 1) when a charm is fully ready to do work, 2) if a charm is not ready to do work, why not. A charm declaring itself ready is part of the picture. What is more important is when the system is ready. You don't want to start pumping requests through your 'ready' webserver, only to have it torn away as a new block device is mounted on your database when its storage-joined hook is invoked and returned to 'ready' state again once the storage-changed hook has completed successfully. Also being thrown around is the concept of a new agent-state called Idle, which would be used when there are no pending hooks to run. There are plans as well for the next phase of the Juju status work to allow collaborating services to notify when they are busy, and mark relationships as down. So if the database had its storage-attached hook invoked, it would mark itself as Busy, mark its relation to the webserver as Down, thus allowing the webserver to put itself into Waiting. Or, if we are talking about the initial install phase, the database would not initially mark itself as Running until its declared storage requirements were met, so the webserver would go from Installing to Waiting and then to Running once the database became Running. status per future impl helps, as does explicitly marking units..
but pending cluster count is a missing and important property for properly establishing quorum in a peer relation going from one to n, and it is only resolved by knowing the recorded unit count for a service. cheers, Kapil -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
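[To make the quorum point concrete: with only "peers seen so far", a unit cannot distinguish "I am a 1-node cluster" from "my 2 peers just haven't joined yet". Given the total expected unit count (the kind of information the proposed goal-state/relation-goals mechanism would provide; no such tool exists yet), the check itself is trivial. A sketch:]

```python
def have_quorum(peers_seen, expected_units):
    """True once this unit plus the peers it has seen form a strict
    majority of the units the service is expected to have."""
    return (peers_seen + 1) * 2 > expected_units


# With expected_units=3, a unit that has seen 0 peers must wait rather
# than bootstrap a 1-node "cluster" and later throw away data on merge.
```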
Re: supplement open--port/close-port with ensure-these-and-only-these-ports?
On Sat, Nov 1, 2014 at 12:58 PM, John Meinel j...@arbash-meinel.com wrote: I believe there is already opened-ports to tell you what ports Juju is currently tracking. That's cool and news to me; it looks like it landed in trunk earlier, on october 2nd (ie 1.21), and hasn't made release notes or docs yet. Especially for charm environment changes we really need corresponding docs, as charm env changes are not easily discoverable otherwise. Really great to see that land, as it's been a common issue for charms and one that previously forced them into state management. cheers, Kapil As for the rest, open-port only takes a single port (or range), which means that if you wanted only 80 and 8080 open, you would need a different syntax (something that lets you specify multiple ports/ranges to be opened). I can see a point to it, but we do already have opened-ports if you're looking for the behavior you want. John =:- On Sat, Nov 1, 2014 at 6:13 PM, Aaron Bentley aaron.bent...@canonical.com wrote: Hi all, I take a stateful approach to writing charms. That is, my charms determine what the target state is, then take whatever actions are needed to get the unit into that state, paying as little attention to the current state as is possible. open-port / close-port require knowledge of the current state; if I know that I want only port 314 open, then I need to know whether any other ports are open and close them. In most cases, a charm only opens specific ports, so I know which ports to close. Right now, I'm writing an update to the Apache2 charm that would allow the user to specify which ports to serve http on, which means that when a user changes the port, I may need to close the old port and open the new one. If I want to use close-port / open-port, I need to track what ports are open. But juju already knows this, so I shouldn't have to track it separately; that violates DRY.
The smallest change would be to provide a way to list the open ports, so that charms can close any open ports they no longer want open. But that leaves a bunch of work for a stateful charm author. What they actually want is a command that ensures specific ports are open and closes all others. ensure-these-and-only-these-ports was the first thing I thought of, but we could extend open-port instead. open-port would need to accept multiple ports, not just ranges, and it would need to accept a --close-all-others flag that would close all open ports not listed. Does that seem like a sensible change? Aaron -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
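[With opened-ports available, the "ensure only these ports" behaviour Aaron wants can be built charm-side without tracking state: diff the reported set against the desired set. The diff logic below is the core; the surrounding hook-tool calls are a sketch of how a charm might use it, not an existing juju command.]

```python
import subprocess


def port_changes(current, wanted):
    """Given the set of currently open ports and the desired set,
    return (to_open, to_close): the minimal changes to converge."""
    return sorted(wanted - current), sorted(current - wanted)


def ensure_only_ports(wanted):
    # `opened-ports` reports the ports juju tracks for this unit (1.21+).
    current = set(subprocess.check_output(["opened-ports"]).decode().split())
    to_open, to_close = port_changes(current, set(wanted))
    for port in to_close:
        subprocess.check_call(["close-port", port])
    for port in to_open:
        subprocess.check_call(["open-port", port])
```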
Re: Handling relation-get gives null data issue
You've set up a race condition. First, you should be receiving data in relation-changed, not relation-joined. If you try to receive set values in relation-joined and the sending side comes up second, you'll never see the data. Also, you should make things resistant to races by exiting if the data you want is not present; relation-changed will be called again when the data on the remote side changes. ie. in pseudo code:

charm x / foo-relation-joined:
    relation-set x=1

charm y / bar-relation-changed:
    x=relation-get x
    if not x:
        # other side not ready; exit, and we'll be called again when it is.
        exit 0
    # do some stuff with the data

On Fri, Oct 31, 2014 at 6:06 AM, Malshan Peiris mals...@hsenidmobile.com wrote: Hi all, I have two charms which are joined by a relation and are supposed to send data both ways. In the relation-joined hook of the charms I have put the appropriate relation-set in one charm and relation-get in the other. The same is done vice versa. However, randomly at least one charm fails to get the values (hence has null values). I tried loops with the sleep command which would do relation-get till non-null values are returned, but they don't return data after a number of tries. There's no useful info in the machine and unit logs. Is there a standard/known way to handle this? environment: LXC on ubuntu 14.04 x86_64 juju: 1.20.0-trusty-amd64 Have a nice day.
-- Malshan Peiris, Implementation Engineer, hSenid Mobile Solutions (http://www.hsenidmobile.com/) -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: move already deployed service to new vps with different ip address.
On Tue, Oct 21, 2014 at 6:26 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote: 2014-10-21 14:18 GMT+04:00 Kapil Thangavelu kapil.thangav...@canonical.com: it looks like the machine agent isn't starting up, can you pastebin the log machine-0.log from /var/log/juju I found the issue - agent.conf has the internal ip address =(. Is it possible to manually enter all ip addresses / override it via the command line? For now I'm using sed to change the ip address. the other option to try, rather than using manual on localhost, is to use manual on lxc containers created within the host; this isolates all the juju components to things in containers off the lxc bridge, which won't change addresses as you image and re-instance. that still leaves one extra piece: writing something to scan status and forward traffic to exposed services via iptables. as an example of doing something like that https://github.com/kapilt/juju-lxc/blob/master/juju_lxc/add.py#L37 .. or alternatively via cli, something like:

lxc-create -t ubuntu-cloud -n trusty-base -- -r trusty -S ~/.ssh/id_rsa.pub
lxc-clone -B aufs trusty-base myenv-m1
lxc-clone -B aufs trusty-base myenv-m2
lxc-clone -B aufs trusty-base myenv-m3
lxc-start -d -n myenv-m1
...
juju bootstrap ubuntu@myenv-m1
juju add-machine ubuntu@myenv-m2
juju add-machine ubuntu@myenv-m3

-k -- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru jabber: v...@selfip.ru -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Set a juju state message
not at the moment; it is something being worked on right now, so hooks/charms can report more feedback regarding state and blockers directly to status. cheers, Kapil On Thu, Oct 16, 2014 at 10:08 AM, Stein Myrseth stein.myrs...@gmail.com wrote: Regarding agent-state / agent-state-info: in my install hook I check for certain conditions to be met before proceeding with the installation. If not met, I "exit 1" the hook and the hook fails. Is there any possible way to set the agent-state-info to a more appropriate message than 'hook failed: config-changed'? The agent-state: error will only tell the audience that an error has occurred. I would like to know if I can set the agent-state-info message to a descriptive text about what went wrong and how to fix it, or what's missing etc., which is easily readable from juju status and from the UI. Stein Myrseth -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Multiple Juju Relationships to a Single Charm
On Wed, Oct 15, 2014 at 10:42 AM, Maarten Ectors maarten.ect...@canonical.com wrote: Hi Kapil, The problem Mike is trying to solve is that one Apache charm might host multiple tenants and websites, and each website needs to be protected differently [e.g. different owner]. So might the solution be to have a subordinate charm per tenant, so that each subordinate charm can have the specific tenant configuration? Yup, that's a totally viable alternative: effectively move the config from a relation to a subordinate service instance. The one issue with subordinates for tenants is that they can't be removed, but you could potentially blank their config as mitigation. -k Thanks, Maarten Ectors Cloud, Big Data and IoT Strategy Director Ubuntu http://ubuntu.com / Canonical http://canonical.com UK LTD maarten.ect...@canonical.com On Sat, Oct 4, 2014 at 1:04 PM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: Hi Michael, Thanks for elaborating. Afaics, the crux is two-fold. The primary is being able to establish multiple relations between apache and identity providers, per virtual host. This is supported today via api and cli. In juju terminology, apache is an IDP interface requirer (aka client) and the IDP is a provider (aka server). Simply doing juju add-relation apache idp multiple times suffices to add multiple relations between apache and different identity providers. Part of the confusion about this may have been a result of the gui not supporting this. The algorithm i used in the gui for dimming non-valid relation targets tries to simplify the common case and provide a guide to users, and won't consider 'require' relation endpoints already satisfied as needing further relations established. Potentially the gui needs some sort of option/key press to enable an 'advanced' mode when creating relations that provides for this (i just filed bug http://pad.lv/1377414 for it).
The secondary issue is that providing for configuration of the virtualhost-idp mapping this way is currently tedious, as the config for the idp relation and virtualhost needs to flow from the service config (or another charm-accessible data source) and then be mapped onto the individual relation instances by the charm. This has come up in the context of other relation workflows/use cases as well. There are tentative plans to address it via relation configuration that can be provided by the admin and managed as part of the relation lifecycle, ie. add-relation apache idp --config=vhost=http://myapp.com acct=0123. Fwiw, the majority of the juju developers are sprinting this week on code and feature futures, and relation config is on the agenda. cheers, Kapil

On Fri, Oct 3, 2014 at 3:09 PM, Michael Schwartz m...@gluu.org wrote: Kapil, Here is a picture of a Juju Deployment of the Gluu Server: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju-screenshot-gluu-apache.png In this diagram, the Gluu Server is where the person is authenticated. It is the Central Identity Provider, or IDP. Everything's great, right? The Apache Server uses the Gluu Server for Authentication... nice and simple. The only problem: the world is not quite so simple. Apache has a widely used feature to support virtual hosting. So if you are an ISP, unless you want to deploy one apache server for every customer, the above relationship doesn't do you much good. In the real world, there are multiple IDPs. Many domains have their own IDP. Google is really just another domain on the Internet. Many companies also use google to authenticate their people. So in this diagram: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju_apache_charm.png I was showing a situation where a single Apache Web server might have multiple folders for different websites that it is serving, and each website may have a different IDP. Does that help? Can juju provide a nice interface or CLI controls for this?
thx, Mike

On 2014-10-03 13:30, Kapil Thangavelu wrote: not quite clear why you think it doesn't work; could you outline what you'd like to do and where the difficulty arises? a picture is worth a thousand words, but some words as context are useful to frame it. -k

On Fri, Oct 3, 2014 at 1:15 PM, Michael Schwartz m...@gluu.org wrote: Juju'ers: If you consider virtual hosting on a web server, each web folder may be a different client, who may have their own OpenID Provider. I made a quick diagram: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju_apache_charm.png As far as I can tell, there is no really good way to do this in Juju. Any ideas? thx, Mike - Michael Schwartz Gluu Founder / CEO @gluufederation

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
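The charm-side mapping Kapil describes above, where service config must flow onto individual relation instances until per-relation config (add-relation --config=...) exists, can be modeled in a few lines. A minimal Python sketch under assumed conventions: the "vhost-idp-map" config key and the "idp-N" relation ids are hypothetical, invented purely for illustration.

```python
def vhost_for_relation(service_config, relation_id):
    """Resolve which virtual host an idp relation instance serves.

    Parses a hypothetical "vhost-idp-map" service-config value of
    whitespace-separated "relation-id:vhost" pairs that an admin would
    maintain by hand until per-relation config lands in juju.
    """
    mapping = {}
    for pair in service_config.get("vhost-idp-map", "").split():
        rel_id, _, vhost = pair.partition(":")
        mapping[rel_id] = vhost
    return mapping.get(relation_id)

# two idp relations, each protecting a different virtual host
config = {"vhost-idp-map": "idp-0:site-a.example idp-1:site-b.example"}
```

Each relation hook would call this with its own relation id to pick out the vhost it should configure; unmapped relations get None and can be skipped.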
Re: Multiple Juju Relationships to a Single Charm
On Tue, Oct 21, 2014 at 7:05 AM, Kapil Thangavelu kapil.thangav...@canonical.com wrote:

On Wed, Oct 15, 2014 at 10:42 AM, Maarten Ectors maarten.ect...@canonical.com wrote: Hi Kapil, The problem Mike is trying to solve is that one Apache charm might host multiple tenants and websites, and each website needs to be protected differently [e.g. different owner]. So might the solution be to have a subordinate charm per tenant, so that each subordinate charm can have the specific tenant configuration?

Yup, that's a totally viable alternative: effectively move the config from a relation to a subordinate service instance. The one issue with subordinates for tenants is that they can't be removed, but you could potentially blank their config as mitigation.

Maarten brought up that we now allow subordinate relation removal, so that solution works well. -k
Re: move already deployed service to new vps with different ip address.
On Tue, Oct 21, 2014 at 8:08 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote:

2014-10-21 14:37 GMT+04:00 Kapil Thangavelu kapil.thangav...@canonical.com: the other option to try, rather than using manual on localhost, is to use manual on lxc containers created within the host. this isolates all the juju components to things in containers off the lxc bridge, which won't change addresses as you image and re-instance. that still leaves one extra piece: writing something to scan status and forward traffic to exposed services via iptables. as an example of doing something like that: https://github.com/kapilt/juju-lxc/blob/master/juju_lxc/add.py#L37 .. or alternatively via cli, something like:

lxc-create -t ubuntu-cloud -n trusty-base -- -r trusty -S ~/.ssh/id_rsa.pub
lxc-clone -B aufs trusty-base myenv-m1
lxc-clone -B aufs trusty-base myenv-m2
lxc-clone -B aufs trusty-base myenv-m3
lxc-start -d -n myenv-m1
...
juju bootstrap ubuntu@myenv-m1
juju add-machine ubuntu@myenv-m2
juju add-machine ubuntu@myenv-m3

This does not work =(

sudo lxc-clone -B aufs trusty-base juju
lxc_container: aufs is only for snapshot clones
lxc_container: failed getting pathnames for cloned storage: /var/lib/lxc/trusty-base/rootfs
lxc_container: Error copying storage
clone failed

try with -s, ie.

$ sudo lxc-clone -s -B aufs trusty trusty-new
Created container trusty-new as snapshot of trusty
Re: move already deployed service to new vps with different ip address.
those instructions were meant as pseudo.. hence the phrase 'something like'. for bootstrap you'll need to configure the manual provider per normal, via host and user in environments.yaml. for add-machine you'll need: juju add-machine ssh:user@container_ip

On Tue, Oct 21, 2014 at 9:22 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote: 2014-10-21 17:11 GMT+04:00 Kapil Thangavelu kapil.thangav...@canonical.com: try with -s, ie. $ sudo lxc-clone -s -B aufs trusty trusty-new Created container trusty-new as snapshot of trusty

Ok:

juju bootstrap ubuntu@juju
error: unrecognized args: [ubuntu@juju]

-- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru jabber: v...@selfip.ru
Re: mongodb log file size
That should be fine. the dictates here are from mongodb default semantics; we've tweaked them minorly, but for the most part they're per upstream recommendations. The amount of data juju uses is miniscule (1-2mb).. till juju 1.21, where we store charms in mongodb. cheers, Kapil

On Sun, Oct 19, 2014 at 2:35 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote: Hi again =). I found a discussion about the mongodb repl log size saying the minimum is 512Mb and the maximum 1024Mb (the max may be wrong...). Is it possible to reduce it to 128Mb? I understand the negative sides of this, but right now I deploy a single machine via juju and don't want to keep these logs big. Is it possible to shrink it now, and set a new size later if I need a big cluster? -- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru jabber: v...@selfip.ru
Re: mongodb log file size
we're trying to make providers easier to write. currently we use provider object storage, but a number of low-cost providers don't have it, and it complicates writing a provider (ie. not all openstack installs have object storage either). obviating the need, by making juju handle its own very limited object-storage needs, makes things a bit simpler. most charms are fairly small, minus those that bundle binaries. -k

On Mon, Oct 20, 2014 at 4:53 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote: 2014-10-20 21:16 GMT+04:00 Kapil Thangavelu kapil.thangav...@canonical.com: That should be fine. the dictates here are from mongodb default semantics; we've tweaked them minorly, but for the most part they're per upstream recommendations. The amount of data juju uses is miniscule (1-2mb).. till juju 1.21, where we store charms in mongodb.

Hmm, why do you need to store charms in mongodb =( ? -- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru jabber: v...@selfip.ru
Re: Azure US West having problems
possibly related to http://azure.microsoft.com/en-us/status/ : Starting at approximately 19:00 on the 18th Oct, 2014 UTC, a limited subset of customers may experience intermittent errors when attempting to access Azure Virtual Networks. Engineers are continuing with their manual recovery and have validated significant improvement as a result of their action plan. Customers may begin to see improvements to availability of their Virtual Networks. The next update will be provided in 2 hours or as events warrant. -k

On Mon, Oct 20, 2014 at 5:05 PM, Nate Finch nate.fi...@canonical.com wrote: This is a pretty major problem. It *seems* like it must be Azure's fault, but it would be good to get more information about it. If anyone cares to investigate, here's the bug: https://launchpad.net/bugs/1383310
Re: move already deployed service to new vps with different ip address.
On Wed, Oct 15, 2014 at 3:32 PM, John Meinel j...@arbash-meinel.com wrote: So if the machines are able to contact the API server, they should inform it of their new addresses, which should update the other charm to point at the new addresses. Note that with LXC you'd also have stable IP addresses for mysql and wordpress in their respective containers. (This might require 1.21, but I thought the IP address update changes landed in 1.20.)

i thought so too, but i don't see the address-update behavior in 1.20.9 fwiw. -k

John =:-

On Wed, Oct 15, 2014 at 4:21 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote: 2014-10-15 11:08 GMT+04:00 John Meinel j...@arbash-meinel.com: You could probably edit the /var/lib/juju/agents/unit-*/agent.conf and /var/lib/juju/agents/machine-*/agent.conf to change the IP addresses stored there (everyone needs to know how to get back to the API server). Generally the API server filters out 127.0.0.1 when reporting its possible addresses to other units, since *most* of the time they can't actually contact it at 127.*. In fact, the only time it works is when they are colocated; if you used containers or VMs, the 127.* address wouldn't ever work. And generally colocating your services with the API server is considered a security issue. (You have to give your cloud credentials to the API server if you want to let it start instances for you, but that information should not be available to the services you deploy.) If you did deploy into containers (like juju deploy --to lxc:0) then the services would be isolated, and likely the API server would get a 10.0.3.1 address, which could be preserved between packing it up and putting it somewhere else.

The problem is not in the state server: in the wordpress config i have the address 10.0.2.15. as i understand it, in relation-changed, when i attach mysql it gets its public_ip (which can't be 127.* as that is ignored). as i understand it, i need to delete the relation from wordpress to mysql and add it again.. or not?
-- Vasiliy Tolstov, e-mail: v.tols...@selfip.ru jabber: v...@selfip.ru
Re: how to get info about completing deploying charms
On Mon, Oct 13, 2014 at 10:49 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote: 2014-10-13 19:19 GMT+04:00 Marco Ceppi marco.ce...@canonical.com: So there's no current way to determine when an environment is idle in Juju. there's work being done to allow services to illuminate more than just the current states of PENDING, INSTALLED, STARTED, ERROR in Juju, which will help show where a service is in its lifecycle. However, all of that would be gathered from the juju status output. Currently only YAML and JSON are supported, but in 1.21 new options such as summary, oneline (comparable to git log --oneline), and tabular will be available for parsing.

Ok, so if i need to check that, for example, mysql and wordpress have completed and are running, i need to grep juju status for agent state and check that it says started three times (the machine, wordpress, and mysql), right?

My current best practice on this is to include a health check in the charm and manually invoke it via juju run, ie. juju run --service=wordpress ./health, and have the health hook return some structured data (json) to stdout. cheers, Kapil
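Kapil's juju run health-check pattern can be wrapped in a small script on the client side. A sketch under stated assumptions: the health hook printing JSON with a boolean "ok" field is hypothetical, and the result layout (a JSON list of per-unit dicts carrying the hook's stdout under "Stdout") is an assumption for illustration, not juju's documented output format.

```python
import json

def all_healthy(run_output):
    """Return True when every unit's health hook reported ok.

    run_output is a JSON list of per-unit results; each result carries
    the hook's own JSON on "Stdout" (assumed layout for illustration).
    An empty result set counts as unhealthy.
    """
    results = json.loads(run_output)
    if not results:
        return False
    return all(json.loads(r["Stdout"]).get("ok") for r in results)

# sample of what `juju run --service=wordpress ./health` output might
# look like under the assumed layout
sample = json.dumps([
    {"UnitId": "wordpress/0", "Stdout": '{"ok": true}'},
    {"UnitId": "mysql/0", "Stdout": '{"ok": true}'},
])
```

A deploy script could poll this until it returns True, instead of grepping juju status for "started" lines.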
Re: Multiple Juju Relationships to a Single Charm
Hi Michael, Thanks for elaborating. Afaics, the crux is twofold. The primary issue is being able to establish multiple relations between apache and identity providers, one per virtual host. This is supported today via the api and cli. In Juju terminology, apache is an IDP interface requirer (aka client) and the IDP is a provider (aka server). Simply doing juju add-relation apache idp multiple times suffices to add multiple relations between apache and different identity providers. Part of the confusion about this may have been a result of the gui not supporting this. The algorithm i used in the gui for dimming non-valid relation targets tries to simplify the common case and provide a guide to users, and won't consider 'require' relation endpoints that are already satisfied as needing further relations established. Potentially the gui needs some sort of option/key press to enable an 'advanced' mode when creating relations that provides for this (i just filed bug http://pad.lv/1377414 for it).

The secondary issue is that providing for configuration of the virtualhost-idp mapping this way is currently tedious, as the config for the idp relation and virtualhost needs to flow from the service config (or another charm-accessible data source) and then be mapped onto the individual relation instances by the charm. This has come up in the context of other relation workflows/use cases as well. There are tentative plans to address it via relation configuration that can be provided by the admin and managed as part of the relation lifecycle, ie. add-relation apache idp --config=vhost=http://myapp.com acct=0123. Fwiw, the majority of the juju developers are sprinting this week on code and feature futures, and relation config is on the agenda.
cheers, Kapil

On Fri, Oct 3, 2014 at 3:09 PM, Michael Schwartz m...@gluu.org wrote: Kapil, Here is a picture of a Juju Deployment of the Gluu Server: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju-screenshot-gluu-apache.png In this diagram, the Gluu Server is where the person is authenticated. It is the Central Identity Provider, or IDP. Everything's great, right? The Apache Server uses the Gluu Server for Authentication... nice and simple. The only problem: the world is not quite so simple. Apache has a widely used feature to support virtual hosting. So if you are an ISP, unless you want to deploy one apache server for every customer, the above relationship doesn't do you much good. In the real world, there are multiple IDPs. Many domains have their own IDP. Google is really just another domain on the Internet. Many companies also use google to authenticate their people. So in this diagram: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju_apache_charm.png I was showing a situation where a single Apache Web server might have multiple folders for different websites that it is serving, and each website may have a different IDP. Does that help? Can juju provide a nice interface or CLI controls for this? thx, Mike

On 2014-10-03 13:30, Kapil Thangavelu wrote: not quite clear why you think it doesn't work; could you outline what you'd like to do and where the difficulty arises? a picture is worth a thousand words, but some words as context are useful to frame it. -k

On Fri, Oct 3, 2014 at 1:15 PM, Michael Schwartz m...@gluu.org wrote: Juju'ers: If you consider virtual hosting on a web server, each web folder may be a different client, who may have their own OpenID Provider. I made a quick diagram: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju_apache_charm.png As far as I can tell, there is no really good way to do this in Juju. Any ideas?
thx, Mike - Michael Schwartz Gluu Founder / CEO @gluufederation m...@gluu.org
Re: Multiple Juju Relationships to a Single Charm
not quite clear why you think it doesn't work; could you outline what you'd like to do and where the difficulty arises? a picture is worth a thousand words, but some words as context are useful to frame it. -k

On Fri, Oct 3, 2014 at 1:15 PM, Michael Schwartz m...@gluu.org wrote: Juju'ers: If you consider virtual hosting on a web server, each web folder may be a different client, who may have their own OpenID Provider. I made a quick diagram: http://www.gluu.org/blog/wp-content/uploads/2014/10/juju_apache_charm.png As far as I can tell, there is no really good way to do this in Juju. Any ideas? thx, Mike - Michael Schwartz Gluu Founder / CEO @gluufederation
Re: Juju Scale Test
Unfortunately that's not very representative of the current implementation, as it was based on pyjuju, while the current implementation is in go and uses mongodb instead of zookeeper. -kapil

On Thu, Oct 2, 2014 at 9:40 AM, Charles Butler charles.but...@canonical.com wrote: There's this article which was published a while ago: https://maas.ubuntu.com/2012/06/04/scaling-a-2000-node-hadoop-cluster-on-ec2ubuntu-with-juju/ Hope this helps, Charles

On Thu, Oct 2, 2014 at 9:02 AM, Mike Sam mikesam...@gmail.com wrote: I was wondering what is the largest vm count that has been provisioned and deployed with juju in testing so far? In other words, what is the demonstrated scale that juju has proven to handle well so far? Thanks, Mike
disabling upgrades on new machines by default?
juju can save minutes per machine (especially against release images) if we turn off upgrades by default. At the moment in juju 1.21 (dev) there's a setting, os-enable-upgrade: false, that will do just that (apt-get update but not upgrade), but that's not the default. i wanted to raise the question of doing that by default.. thoughts, opinions.. heartbleeds / shellshocks? cheers, Kapil
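For reference, a minimal environments.yaml fragment with the setting described above. The option name and placement follow the message's wording for juju 1.21-dev; treat them as unverified against the released version, and note it only skips the apt-get upgrade step, while apt-get update still runs.

```yaml
environments:
  ec2:
    type: ec2
    # run apt-get update but skip apt-get upgrade on new machines
    os-enable-upgrade: false
```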
Re: ACTION: if you have admin hardcoded in scripts, this is a warning that things will change soon(ish)
That could be useful, assuming it has properties like not hanging on dead envs.. etc. at the moment, jenv-parsing clients are responsible for manually verifying connectivity to servers. Although it doesn't really address the issue for servers interacting with the api, ie. they'll need to have their ui or config modified to take user info or the output of api-info. Realistically, as i pointed out in a previous api-compatibility discussion, jenvs are part of the api atm. Fwiw, we've got roughly a half-dozen programs i can think of off the top of my head that use the api (landscape, community cloud installer, cloudfoundry orchestrator, deployer, gui, and others). In general we should try to keep the api compatible (or version it) for non-juju-binary clients, unless we're saying the api is private. And if we need to break compatibility, at minimum we should state why we're doing so, which is still missing in this thread. ie. why can't the bootstrap user have an alias to 'admin' for compatibility in this case? fwiw, looks like the latest jujuclient Environment.connect already uses the user in the jenv instead of hardcoding admin. -k

On Mon, Sep 29, 2014 at 5:10 AM, John Meinel j...@arbash-meinel.com wrote: I think we want a simpler single command to get everything you need to connect to the API. juju api-info or something like that, which essentially gives you the structured .jenv information that you would use (cert information, username, password, IP addresses, etc). John =:-

On Mon, Sep 29, 2014 at 12:54 AM, Tim Penhey tim.pen...@canonical.com wrote: On 26/09/14 20:39, Bjorn Tillenius wrote: On Fri, Sep 26, 2014 at 04:57:17PM +1200, Tim Penhey wrote: Hi folks, All environments that exist so far have had an admin user being the main (and only) user created in the environment, and it was used for all client connections. Code has landed in master now that makes this initial username configurable.
The juju client is yet to take advantage of this, but there is work due to be finished soon that does exactly that. Soon, the 'juju bootstrap' command will use the name of the currently logged-in user as the initial username to create [1].

What's the official way of getting the username in 1.20.8? I see 'juju api-endpoints', which returns the state servers, and 'juju get-environment', which returns a bunch of information, except the username. The only way I see is to get the .jenv file and parse it, but it feels a bit dirty. Is it guaranteed that the location and name of the file won't change, and that the format of it won't be changed in a way that breaks backwards compatibility?

We don't have one yet, but one command that was proposed was juju whoami. This would be pretty trivial to implement. There are a bunch of user commands that will be coming on-line soon. We won't land the change to the admin user until there is an easy way to determine what that name is. The change will not change the user for any existing environment, only newly bootstrapped ones. Tim
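Bjorn's parse-the-.jenv workaround can be sketched. This is an illustrative parser for flat top-level fields only; real .jenv files are nested YAML, so a proper YAML library is the sane choice in practice, and the sample content below is invented.

```python
def jenv_field(text, key):
    """Pull a top-level 'key: value' scalar out of jenv-style text.

    Only handles flat top-level fields; the nested structures in real
    .jenv files need an actual YAML parser.
    """
    for line in text.splitlines():
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    return None

# invented sample content; real files live under ~/.juju/environments/
sample_jenv = "user: admin\npassword: secret\nenviron-uuid: deadbeef"
```

As the thread notes, this couples the client to the jenv file's location and layout, which is exactly the fragility the proposed juju api-info / juju whoami commands would remove.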
Re: ACTION: if you have admin hardcoded in scripts, this is a warning that things will change soon(ish)
On Fri, Sep 26, 2014 at 12:57 AM, Tim Penhey tim.pen...@canonical.com wrote: Hi folks, All environments that exist so far have had an admin user being the main (and only) user created in the environment, and it was used for all client connections. Code has landed in master now that makes this initial username configurable. The juju client is yet to take advantage of this, but there is work due to be finished soon that does exactly that. Soon, the 'juju bootstrap' command will use the name of the currently logged-in user as the initial username to create [1]. So, for me, juju bootstrap would create the initial user tim (or thumper if I am logged in as my other user). If the current username is not translatable to a valid username, the command will fail and require the user to specify the name of the initial user on the command line: juju bootstrap --user eric. After talking with Rick this morning, he mentioned that 'juju quickstart' had admin hard coded, and there are bound to be other places too.

You're about to break a lot of api-using programs out there without having given a reason why. ie. can we have both admin and the logged-in user as the bootstrapping user? I've moved jujuclient to using jenv files for connections (as it's the only way to get the username, password, api servers, and cert info); i'll see about issuing an update for it to use the login user from there as well. The issue is for remote environments, ie. servers interacting with juju: there is no logged-in environment user (and no jenv), and no clear way to get one without changing user interfaces for the user to input one, if they even know it (was it thumper or tim)? cheers, Kapil
Re: I gave a quick writeup over Juju on Digital Ocean
andrew has almost finished the work on getting providers by default not using object storage (already quite functional).. The other missing piece is getting DO to install cloud-init into their default ubuntu images, and supporting their variation on the ec2 metadata api for retrieving userdata within cloud-init.

On Mon, Sep 22, 2014 at 12:15 PM, Sebastian sebas5...@gmail.com wrote: Yeah! DO rocks, and like Nate said, can't wait for the real provider!!! Here's a quote about that from Kapil: Its possible *(although unscheduled atm)* a real provider could be done for DO when the provider object-storage requirements are dropped from juju, which is work that's scheduled. The coreos announcement also coincided with the release of the new userdata facility on DO (smoser verified it's basically ec2 compatible), which means cloud-init support. In a future version of this plugin i intend to use the userdata facilities, as it will speed up non-bootstrap machine allocation; at the moment each machine has to be ssh'd into. Product owner! move this story to the top of the backlog please! :) Cheers, Sebas.

2014-09-22 12:47 GMT-03:00 Nate Finch nate.fi...@canonical.com: Nice! Digital Ocean really is super fast. my Discourse charm takes 21(!) minutes to deploy on an AWS m1.small and 7 minutes on a DO 2GB droplet, which is 2/3rds the price of the amazon instance. Kapil's digital ocean plugin really makes it all (relatively) seamless, too. I look forward to when we can make a real provider for Digital Ocean, and I can take out that relatively part :)

On Mon, Sep 22, 2014 at 11:42 AM, Charles Butler charles.but...@canonical.com wrote: http://blog.dasroot.net/juju-digital-ocean-awesome/ Juju on Digital Ocean, WOW! That's all I have to say. Digital Ocean is one of the fastest cloud hosts around with their SSD-backed virtual machines. To top it off, their billing is a no-nonsense, straightforward model: $5/mo for their lowest-end server, with 1TB of included traffic.
That's enough to scratch just about any itch you might have with the cloud. Speaking of scratching itches, if you haven't checked out Juju yet, now you have a *prime, low cost cloud provider* with which to test the waters. Spinning up droplets with Juju is very straightforward, and offers you a hands-on approach to service orchestration that's affordable enough for a weekend hacker to whet their appetite. Not to mention, Juju is currently the #1 project on their API Integration listing! http://goo.gl/m6u781 In about 11 minutes, we will go from zero to deployed infrastructure for a scale-out blog (much like the one you're reading right now).
PSA: please don't use transactions for single-doc unobserved atomic updates
It sort of misses the point of why we're doing client-side transactions. MongoDB has built-in atomic operations on an individual document. We use client-side txns (multiple orders of magnitude slower) for multi-document txns *and/or* things we want to observe for watches. -k -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
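A minimal sketch of the distinction Kapil draws. The collection and field names below are hypothetical, and the op dictionaries only loosely mirror the shape of mgo/txn operations; this is illustrative, not Juju's actual state code:

```python
def single_doc_update(unit_id, new_status):
    """A filter/modifier pair that MongoDB applies atomically to one
    document on the server side; no client-side txn machinery needed."""
    return {"_id": unit_id}, {"$set": {"status": new_status}}

def multi_doc_txn(ops):
    """Client-side txn ops (shaped loosely like mgo/txn's Op): only
    needed when several documents must change together, or when the
    change must be observable by watchers."""
    return [
        {"C": coll, "Id": doc_id, "Assert": assertion, "Update": update}
        for (coll, doc_id, assertion, update) in ops
    ]
```

The first form is the cheap path for the "single doc unobserved" case; the second is the slow path reserved for the two cases the message names.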
Re: Revision file on charms
On Mon, Sep 8, 2014 at 4:00 PM, Simon Davy bloodearn...@gmail.com wrote: On 8 September 2014 15:40, Matt Bruzek matthew.bru...@canonical.com wrote: José, I just had a conversation about that with the Ecosystems team. You are correct, the revision file in the charm directory is no longer used and we can delete those files on future updates. So this is interesting, as we are looking to start using the revision file more explicitly, as part of our automation. We deploy from local repositories, with charms in bzr, and one thing we would like is to know explicitly which *code* revision is currently deployed in an environment. So we were thinking of having a pre-deploy step that wrote $(bzr revno) into the revision file of each charm, just before deploy/upgrade. That way, the revision reported by juju status gives us the exact code revision of the current charm. This would pave the way for us to diff a current environment with a desired one and see which charms need changing as part of a deploy. Does this make sense? Is there some other method of knowing the code branch/revision a charm is currently running? You need to keep in mind that the revision file will never match a local charm version in the state server (it will be at least one higher than that). This goes back to removing the need for users to manage the revision file contents while in development, or to pass in the upgrade flag during dev; the need was obviated by having the state server control the revision of the charm. I filed https://bugs.launchpad.net/juju-core/+bug/1313016 to cover this case for deployer, so it could annotate vcs charms with their repo and rev, since local charm revisions are useless for repeatability: they're independent of content and determined solely by the state server (with the revision file serving as a hint for the minimum sequence value), and the available charms in the state server are not introspectable. 
cheers, Kapil -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
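Simon's proposed pre-deploy step could be sketched like this (a hypothetical helper, not part of juju or deployer; note Kapil's caveat that the state server treats the revision file only as a minimum sequence hint):

```python
import pathlib
import subprocess

def stamp_revision(charm_dir, get_revno=None):
    """Write the charm's VCS revision into its revision file so that
    `juju status` reflects the code revision that was deployed."""
    if get_revno is None:
        # Default: ask bzr for the branch revno, per the workflow above.
        get_revno = lambda d: subprocess.check_output(
            ["bzr", "revno"], cwd=d).decode().strip()
    path = pathlib.Path(charm_dir) / "revision"
    path.write_text("%s\n" % get_revno(charm_dir))
    return path.read_text().strip()
```

Since the state server may bump the stored revision above this value, a diff against a desired environment should treat the stamped number as a lower bound rather than an exact match.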
Re: new release of juju digital ocean plugin (0.5.1)
It's possible (although unscheduled atm) a real provider could be done for DO once the provider object storage requirements are dropped from juju, which is work that's scheduled. The CoreOS announcement also coincided with the release of the new userdata facility on DO (smoser verified it's basically EC2 compatible), which means cloud-init support. In a future version of this plugin I intend to use the userdata facilities, as it will speed up non-bootstrap machine allocation; at the moment each machine has to be ssh'd into. cheers, Kapil On Fri, Sep 5, 2014 at 2:04 PM, Sebastian sebas5...@gmail.com wrote: Awesome! Thanks!! Do you think this plugin will become a real provider, considering the amount of people that are using Digital Ocean now? Something interesting: https://www.digitalocean.com/company/blog/coreos-now-available-on-digitalocean Cheers!, Sebas. On 05/09/2014 11:41, Kapil Thangavelu kapil.thangav...@canonical.com wrote: Hi Folks, I wanted to send out an announcement of the new version of the juju digital ocean plugin, 0.5.1. Docs and download info are on the project page: http://github.com/kapilt/juju-digitalocean It's been in fairly regular use by lots of folks and is stable. It takes about 3-4m to bootstrap a new environment, and additional machine creation is multi-threaded for speed. I'm regularly using it to create dozen+ machine environments in a few minutes. Some cool tricks that can be done with it are cross-region environments ( juju add-machine --constraints=region=nyc3 juju add-machine --constraints=region=ams ). Regarding this release and a changelog: - OS images for ubuntu are looked up via the API when adding machines; DO has started updating their images with a bit more frequency, and the previous static mapping in the plugin has been removed. Additional latency is about 1s. - new list-machines subcommand which uses the DO API to show all machines and their DO details in tabular format (see readme for examples). 
- destroy-environment --force both of which bypass juju and use the digital ocean api Roadmap on futures - digital ocean has a new api version that supports userdata and ipv6 ps. if you don't have a digital ocean account signups via my affiliate link are appreciated. https://www.digitalocean.com/?refcode=5df4b80c84c8 -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
new release of juju digital ocean plugin (0.5.1)
Hi Folks, I wanted to send out an announcement of the new version of the juju digital ocean plugin, 0.5.1. Docs and download info are on the project page: http://github.com/kapilt/juju-digitalocean It's been in fairly regular use by lots of folks and is stable. It takes about 3-4m to bootstrap a new environment, and additional machine creation is multi-threaded for speed. I'm regularly using it to create dozen+ machine environments in a few minutes. Some cool tricks that can be done with it are cross-region environments ( juju add-machine --constraints=region=nyc3 juju add-machine --constraints=region=ams ). Regarding this release and a changelog: - OS images for ubuntu are looked up via the API when adding machines; DO has started updating their images with a bit more frequency, and the previous static mapping in the plugin has been removed. Additional latency is about 1s. - new list-machines subcommand which uses the DO API to show all machines and their DO details in tabular format (see readme for examples). - destroy-environment --force both of which bypass juju and use the digital ocean api. Roadmap on futures: digital ocean has a new API version that supports userdata and ipv6. PS: if you don't have a digital ocean account, signups via my affiliate link are appreciated: https://www.digitalocean.com/?refcode=5df4b80c84c8 -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: juju local bootstrap from tip
On a similar topic (local on tip): I was debugging pitti's lxc environment on a utopic host with him earlier today, and the culmination of several rounds of debugging and bug filing revealed this one: Bug #1364069: local provider must transform localhost in apt proxy address amd64 apport-bug utopic juju-core (Ubuntu):New https://launchpad.net/bugs/1364069 On Mon, Sep 1, 2014 at 2:53 AM, John Meinel j...@arbash-meinel.com wrote: Right, I think it has to *know* about the target, which is obviously an issue here. But we still do *heavily* encourage (probably just outright require) cloud-archive:tools for running Juju agents on Precise. John =:- On Mon, Sep 1, 2014 at 10:46 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Mon, Sep 1, 2014 at 2:04 PM, John Meinel j...@arbash-meinel.com wrote: I thought --target-release was supposed to just change the priorities and prefer a target, not require it. We need it because we add cloud-archive:tools, but we explicitly pin it to lower priority because we don't want to mess up charms that we are installing. Sorry, I should have included the error message. andrew@precise:~$ sudo apt-get --option=Dpkg::Options::=--force-confold --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install --target-release precise-updates/cloud-tools mongodb-server Reading package lists... E: The value 'precise-updates/cloud-tools' is invalid for APT::Default-Release as such a release is not available in the sources (if I apt-add-repository cloud-archive:tools, it's happy) John =:- On Mon, Sep 1, 2014 at 9:57 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Mon, Sep 1, 2014 at 1:50 PM, John Meinel j...@arbash-meinel.com wrote: The version of mongodb in Precise is too old (2.2.4?). Ah, so it is. The entry in apt-cache I was looking at has main in it, but it's actually from the juju/stable PPA. 
we require a version at least 2.4.6 (which is in cloud-archive:tools and is what we use when bootstrapping Precise instances in the cloud). It is recommended that if you are running local on Precise that you should have cloud-archive:tools in your apt list. The problem is, code-wise it's currently a requirement. Should we drop --target-release for local? I'm not apt-savvy enough to know what the right thing to do here is. John =:- On Mon, Sep 1, 2014 at 9:16 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Mon, Sep 1, 2014 at 12:53 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote: Works fine on my trusty laptop, but I'm also getting a new error when I try bootstrapping on precise: 2014-09-01 04:51:27 INFO juju.utils.apt apt.go:132 Running: [apt-get --option=Dpkg::Options::=--force-confold --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install --target-release precise-updates/cloud-tools mongodb-server] 2014-09-01 04:51:37 ERROR juju.utils.apt apt.go:166 apt-get command failed: unexpected error type *errors.errorString args: []string{apt-get, --option=Dpkg::Options::=--force-confold, --option=Dpkg::options::=--force-unsafe-io, --assume-yes, --quiet, install, --target-release, precise-updates/cloud-tools, mongodb-server} 2014-09-01 04:51:37 ERROR juju.cmd supercommand.go:323 cannot install mongod: apt-get failed: unexpected error type *errors.errorString Bootstrap failed, destroying environment I'm looking into it at the moment. So that error message was unhelpful, and I'll fix that, but the underlying issue is that the agent is expecting to install mongodb-server from cloud-archive:tools, and the Makefile does not add that repo. I'm not sure it *should* add it either. Is there something wrong with the one in main? After all, that's where the juju-local package's dependency was resolved. 
Cheers, Andrew On Sat, Aug 30, 2014 at 8:19 PM, Matthew Williams matthew.willi...@canonical.com wrote: Hi Folks, I thought I'd try looking into the lxc failing to create machines bug: https://bugs.launchpad.net/juju-core/+bug/1363143 If I wanted to do a local deploy using tip, I thought it would be as simple as doing make install then juju bootstrap; is that correct? It doesn't seem to work for me; are there any steps I'm missing? Just to be annoying, I've just shut down my precise vm so I can't paste the errors I get here. I'll follow up with pastes next week. Matty -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Thoughts on Dense Container testing
I went with a different approach and wrote a charm to do the overlay network using CoreOS's newly released rudder http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt It works across all providers (including manual and the digitalocean and softlayer manual-based plugins), using UDP for encapsulation. cheers, Kapil ps. ec2 is/was broken today due to an archive error across regions. On Wed, Aug 27, 2014 at 9:52 AM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel john.mei...@canonical.com wrote: So I played around with manually assigning IP addresses to a machine, and using BTRFS to make the LXC instances cheap in terms of disk space. I had success bringing up LXC instances that I created directly; I haven't gotten to the point where I could use Juju for the intermediate steps. See the attached document for the steps I used to set up several addressable containers on an instance. However, I feel pretty good that Container Addressability would actually be pretty straightforward to achieve with the new Networker. We need to make APIs for requesting an Address for a new container available, but then we can configure all of the routing stuff without too much difficulty. Also of note: because we are using MASQUERADE in order to route the traffic, it doesn't require putting the bridge (br0) directly onto eth0. So it depends on whether MaaS will play nicely with routing rules: if you assign an IP address into a container on a machine, will the routes end up routing the traffic there? (I think it will, but we'd have to test to confirm it.) Ideally, I'd rather do the same thing everywhere, rather than have containers routed one way in MaaS and a different way on EC2. It may be that in the field we need to not Masquerade, so I'm open to feedback here. 
I wrote this up a bit like how I would want to use dense containers for scale testing, since you can then deploy actual workloads into each of these LXCs if you wanted (and had the horsepower :). I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC containers, and was able to connect to them from another machine running inside the VPC. Thanks for exploring this, John. I'm excited about utilizing something like this for regular scale testing on the cheap (10 instances for 1 hr on spot markets with 200 containers per test ~ a 2k machine/unit env). FWIW, I use Ansible to automate the provisioning and machine setup (aws/lxc/btrfs/ebs volume for btrfs) in ec2 via https://github.com/kapilt/juju-lxc/blob/master/ec2.yml . There are some other scripts in there (add.py) for provisioning the container with userdata (i.e. automating key installation and machine setup) which can obviate/automate several of these steps. Either ebs or instance ephemeral disk (ssd) is preferable, I think, to a loopback dev for perf testing. Re uniform networking handling: it still feels like we're exploring here; it's unclear whether we have the knowledge base to dictate a common mechanism yet. cheers, Kapil -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Thoughts on Dense Container testing
On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel john.mei...@canonical.com wrote: So I played around with manually assigning IP addresses to a machine, and using BTRFS to make the LXC instances cheap in terms of disk space. I had success bringing up LXC instances that I created directly; I haven't gotten to the point where I could use Juju for the intermediate steps. See the attached document for the steps I used to set up several addressable containers on an instance. However, I feel pretty good that Container Addressability would actually be pretty straightforward to achieve with the new Networker. We need to make APIs for requesting an Address for a new container available, but then we can configure all of the routing stuff without too much difficulty. Also of note: because we are using MASQUERADE in order to route the traffic, it doesn't require putting the bridge (br0) directly onto eth0. So it depends on whether MaaS will play nicely with routing rules: if you assign an IP address into a container on a machine, will the routes end up routing the traffic there? (I think it will, but we'd have to test to confirm it.) Ideally, I'd rather do the same thing everywhere, rather than have containers routed one way in MaaS and a different way on EC2. It may be that in the field we need to not Masquerade, so I'm open to feedback here. I wrote this up a bit like how I would want to use dense containers for scale testing, since you can then deploy actual workloads into each of these LXCs if you wanted (and had the horsepower :). I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC containers, and was able to connect to them from another machine running inside the VPC. Thanks for exploring this, John. I'm excited about utilizing something like this for regular scale testing on the cheap (10 instances for 1 hr on spot markets with 200 containers per test ~ a 2k machine/unit env). 
FWIW, I use Ansible to automate the provisioning and machine setup (aws/lxc/btrfs/ebs volume for btrfs) in ec2 via https://github.com/kapilt/juju-lxc/blob/master/ec2.yml . There are some other scripts in there (add.py) for provisioning the container with userdata (i.e. automating key installation and machine setup) which can obviate/automate several of these steps. Either ebs or instance ephemeral disk (ssd) is preferable, I think, to a loopback dev for perf testing. Re uniform networking handling: it still feels like we're exploring here; it's unclear whether we have the knowledge base to dictate a common mechanism yet. cheers, Kapil -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: First customer pain point pull request - default-hook
Hmm, there are three distinct threads here. default-hook: charms that do so symlink 0-100% of their hooks to one hook (in practice everything, sometimes minus install, as the hook infrastructure needs pkgs), most typically implemented via a dispatch table. something-changed: completely orthogonal to the default-hook merge request, and in practice full of exceptions, but useful as an optimization of event collapsing for charms capable of event coalescence. Periodic async framework-invoked hooks: metrics and health checks. AFAICS best practice around these would be not invoking default-hook, which is a lifecycle event handler, while these are periodic polls; yes, it's possible, but it conflates different roles. cheers, Kapil. Also +1 to default-hook using JUJU_HOOK_NAME. On Tue, Aug 19, 2014 at 6:10 PM, Gustavo Niemeyer gust...@niemeyer.net wrote: On Tue, Aug 19, 2014 at 6:58 PM, Matthew Williams matthew.willi...@canonical.com wrote: Something to be mindful of is that we will shortly be implementing a new hook for metering (likely called collect-metrics). This hook differs slightly from the others in that it will be called periodically (e.g. once every hour) with the intention of sending metrics for that unit to the state server. I'm not sure it changes any of the details in this feature or the pr, but I thought you should be aware of it. Yeah, that's a good point. I wonder how reliable the use of default-hook will be, as it's supposed to run whenever any given hook doesn't exist, so charms using that feature should expect _any_ hook to be called there, even hooks they don't know about, or that don't even exist yet. The charms that symlink into a single hook seem to be symlinking a few things, not everything. It may well turn out that default-hook will lead to brittle charms. 
gustavo @ http://niemeyer.net -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
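The dispatch-table pattern Kapil mentions (every hook symlinked to one entry point that routes on the hook name) can be sketched like this. The hook names and handler bodies here are illustrative, not from any real charm; per the +1 above, the name could come from JUJU_HOOK_NAME rather than the symlink's basename:

```python
import os
import sys

def dispatch(hook_name, handlers, default=None):
    """Route a hook invocation to its handler; fall back to `default`
    (the proposed default-hook behavior) when no specific handler
    exists, and silently ignore the hook when there is no default."""
    handler = handlers.get(hook_name, default)
    if handler is None:
        return None  # no handler and no default: the hook is a no-op
    return handler()

# Illustrative handlers for a couple of lifecycle hooks.
HANDLERS = {
    "install": lambda: "installed packages",
    "config-changed": lambda: "re-rendered config",
}

if __name__ == "__main__":
    # All hooks symlink to this file; the hook name is the symlink's
    # basename, or JUJU_HOOK_NAME if the environment provides it.
    name = os.environ.get("JUJU_HOOK_NAME") or os.path.basename(sys.argv[0])
    dispatch(name, HANDLERS, default=lambda: "rebuilt world")
```

This also shows why Gustavo's brittleness concern is real: a catch-all `default` runs for hooks the author never anticipated, which is exactly the "rebuild the world on every possible hook" behavior John describes.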
Re: First customer pain point pull request - default-hook
That doc implies a completely different style of authoring, i.e. a rewrite of most extant (95%) charms that use symlinks to a single implementation. There is a minority that do indeed reconsider all current state from juju on each hook invocation, in which case this level of optimization is useful, but it's orthogonal to solving the tedium for current authors that the pull request is addressing. -k On Sun, Aug 17, 2014 at 6:28 AM, John Meinel j...@arbash-meinel.com wrote: The main problem with having a hook that just fires instead of the others is that you end up firing a hook a whole bunch of times where it essentially does nothing, because it is still waiting for some other hook for it to actually be ready. The something-changed proposal essentially collapses the 10 calls to various hooks into a single firing. William has thought much more about it, so I'd like him to fill in any details I've missed. John =:- On Sun, Aug 17, 2014 at 1:59 PM, Nate Finch nate.fi...@canonical.com wrote: That's an interesting document, but I feel like it doesn't really explain the problem it's trying to solve. Why does a single entry point cause a lot of boilerplate (I presume he means code boilerplate)? Isn't it just a switch on the name of the hook? What does it mean when a new hook is introduced? Doesn't the charm define what hooks it has? And wouldn't the aforementioned switch mean that any new hook (whatever that means) would be ignored the same way it would be if the hook file wasn't there? Can someone explain to me what exactly the problem is? On Sun, Aug 17, 2014 at 1:30 AM, John Meinel j...@arbash-meinel.com wrote: I'd just like to point out that William has thought long and hard about this problem, and about what semantics make the most sense (does it get called for any hook, does it always get called, does it only get called when the hook doesn't exist, etc). 
I feel like he had some really good decisions on it: https://docs.google.com/a/canonical.com/document/d/1V5G6v6WgSoNupCYcRmkPrFKvbfTGjd4DCUZkyUIpLcs/edit# default-hook sounds (IMO) like it may run into problems where we do logic based on whether a hook exists or not. There are hooks being designed, like leader-election and address-changed, that might have side effects, and default-hook should (probably?) not get called for those. I'd just like us to make sure that we actually think about (and document) what hooks fall into this, and make sure that it always makes sense to rebuild the world on every possible hook (which is how charm writers will be implementing default-hook, IMO). John =:- On Sat, Aug 16, 2014 at 1:02 AM, Aaron Bentley aaron.bent...@canonical.com wrote: On 14-08-15 04:36 PM, Nate Finch wrote: There's a new hook in town: default-hook. If it exists and a hook gets called that doesn't have a corresponding hook file, default-hook gets called with the name of the original hook as its first argument (arg[1]). That's it. Nice! Thank you. 
Aaron -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: getting rid of all-machines.log
On Fri, Aug 15, 2014 at 7:36 AM, Gabriel Samfira gsamf...@cloudbasesolutions.com wrote: I think this thread has become a bit lengthy, and we have started to lose perspective on what we are actually trying to accomplish. agreed. AFAICS that's: how do we support logging on Windows. Gustavo's idea to save the logs to mongo is awesome; it works across platforms, allows immense flexibility, and would give people a powerful tool. We should definitely aspire to get that done sooner rather than later. However, at this point in time it's only an idea, without a clear blueprint. agreed. What Nate is proposing *already exists*; it's tangible, proposed as a PR, and improves the way juju handles logs. You'll have to be more specific; there's been a shotgun of statements in this thread, touching on logstash, aggregation removal, rsyslog removal, log rotation, deferring to stderr/stdout, 12-factor apps, working with HA state servers, etc. The only thing I see missing, that might ease people's minds, is a --log-file option (that works with --debug) to actually enforce the usage of a log file. If we omit that option, then juju should just log to stdout/stderr. So we get to keep what we have, but also solve a huge PITA on Windows or any other platform that has limitations in this respect, with a minimal change... AFAICS you're referencing your branch (https://github.com/gabriel-samfira/syslog), which will directly send logs to a remote aggregating syslog server on windows nodes. With regard to the default service behavior on windows: where does stdout/stderr go for a service? Isn't the expected behavior for a windows service to use the event log facility? I.e., to be in line with expected windows behavior and extant juju semantics, shouldn't we have multiple handlers on the log facility, one to rsyslog and one to the event log on windows? Please keep in mind that it's better to move forward no matter how small the steps, than to just stand still while we figure out the perfect logging system. 
I would much rather have windows support today than 2 months from now, when someone actually gets around to implementing a *new* logging system. This should not be a discussion about which logging system is _best_. This should be a discussion about which logging system is _better_ and available *now*. Otherwise we risk getting caught up in details and losing sight of our actual goal. Just my 2 cents. Regards, Gabriel Thanks, Kapil On 14.08.2014 23:47, Kapil Thangavelu wrote: On Thu, Aug 14, 2014 at 2:14 PM, Nate Finch nate.fi...@canonical.com wrote: I didn't bring up 12 factor, it's irrelevant to my argument. I'm trying to make our product simpler and easier to maintain. That is all. If there's another cross-platform solution that we can use, I'd be happy to consider it. We have to change the code to support Windows. I'd rather the diff be +50 -150 than +75 -0. I don't know how to state it any simpler than that. The abrogation of responsibility, which is what I see you advocating for in this thread, also makes our product quite a lot less usable imo... Our product is a distributed system with emergent behavior. Having a debug log is one of the most useful things you can have to observe the system; back in the py days it was one of the most used features, and it was just a simple dump to the db with querying. It's unfortunate that the ability to use it usefully didn't land in core till recently, and did so in broken fashion (still requiring internal tag names for filtering), or lots more people would be using it. Gustavo's suggestion of storing the structured log data in mongo sounds really good to me. Yes, features are work and require code, but that sort of implementation is also cross-platform portable. The current implementation and proposed alternatives I find somewhat ridiculous in that we basically dump structured data into an unstructured format only to reparse it every time we look at it (or ingest it into logstash), given that we already have the structured data. 
Asking people to set up one of those distributed log aggregation systems and configure them is a huge task, and anyone suggesting punting that to an end user or charm developer has never set one up themselves, I suspect. I.e., an analogy imo: http://xahlee.info/comp/i/fault-tolerance_NoSQL.png As for the operations folks who do have them: we can continue sending messages to local syslog and let them collect per their preference. -k On Thu, Aug 14, 2014 at 1:35 PM, Gustavo Niemeyer gustavo.nieme...@canonical.com wrote: On Thu, Aug 14, 2014 at 1:35 PM, Nate Finch nate.fi...@canonical.com wrote: On Thu, Aug 14, 2014 at 12:24 PM, Gustavo Niemeyer gustavo.nieme...@canonical.com wrote: Why support two things when you can support just one? Just to be clear, you really mean why support two existing and well known things when I can implement a third thing, right
Re: Port ranges - restricting opening and closing ranges
Agreed. To be clear: imo, close-port shouldn't error unless there's a type mismatch on inputs, i.e. none of the posited scenarios in this thread should result in an error. -k On Tue, Aug 5, 2014 at 8:34 PM, Gustavo Niemeyer gust...@niemeyer.net wrote: On Tue, Aug 5, 2014 at 4:18 PM, roger peppe rogpe...@gmail.com wrote: close ports 80-110 - error (mismatched port range?) I'd expect the ports to be closed here, and also on 0-65536. gustavo @ http://niemeyer.net -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Port ranges - restricting opening and closing ranges
IMO, no, it's a no-op; the end state is still the same. If it's an error, then we have partial failure modes to consider against ranges. On Tue, Aug 5, 2014 at 1:25 PM, David Cheney david.che...@canonical.com wrote: Yes, absolutely. On Tue, Aug 5, 2014 at 8:33 PM, Domas Monkus domas.mon...@canonical.com wrote: A follow-up question: should closing a port that was not previously opened result in an error? Domas On Fri, Jun 27, 2014 at 2:13 PM, Matthew Williams matthew.willi...@canonical.com wrote: +1 on an opened-ports hook tool, I've added it to the task list On Fri, Jun 27, 2014 at 9:41 AM, William Reade william.re...@canonical.com wrote: Agreed. Note, though, that we'll want to give charms a way to know what ports they have already opened: I think this is a case where look-before-you-leap maybe beats easier-to-ask-forgiveness-than-permission (and the consequent requirement that error messages be parsed...). An opened-ports hook tool should do the trick. On Thu, Jun 26, 2014 at 9:18 PM, Gustavo Niemeyer gust...@niemeyer.net wrote: +1 to Mark's point. Handling exact matches is much easier, and does not prevent a fancier feature later, if there's ever the need. On Thu, Jun 26, 2014 at 3:38 PM, Mark Ramm-Christensen (Canonical.com) mark.ramm-christen...@canonical.com wrote: My belief is that as long as the error messages are clear, and it is easy to close 8000-9000 and then open 8000-8499 and 8600-9000, we are fine. Of course it is nicer if we can do that automatically for you, but I don't see why we can't add that later, and I think there is value in keeping a port-range as an atomic data object either way. --Mark Ramm On Thu, Jun 26, 2014 at 2:11 PM, Domas Monkus domas.mon...@canonical.com wrote: Hi, Matthew Williams and I are working on support for port ranges in juju. There is one question that the networking model document does not answer explicitly, and the simplicity (or complexity) of the implementation depends greatly on that. 
Should we only allow units to close exactly the same port ranges that they have opened? That is, if a unit opens the port range [8000-9000], can it later close ports [8500-8600], effectively splitting the previously opened port range in half? Domas -- gustavo @ http://niemeyer.net -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
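The semantics the thread converges on can be modeled in a few lines: port ranges are atomic objects that must be closed by exact match (per Mark and Gustavo), and closing a range that isn't open is an idempotent no-op rather than an error (per Kapil). This is only a sketch of those semantics, not juju's implementation:

```python
def open_range(opened, start, end):
    """Record an exact port range. Ranges are atomic: (8000, 9000) is
    distinct from the pair (8000, 8499) + (8500, 9000)."""
    opened.add((start, end))
    return opened

def close_range(opened, start, end):
    """Close by exact match only. If the exact range isn't open, the
    desired end state already holds, so this is a no-op, not an error;
    erroring would introduce partial-failure modes against ranges."""
    opened.discard((start, end))
    return opened
```

Under this model, splitting [8000-9000] means closing it and reopening [8000-8499] and [8600-9000], exactly the manual workflow Mark describes.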
Re: api/cli compatibility between juju minor versions
There's an extant version incompatibility between 1.18 and 1.20 that was highlighted during the 1.19 dev cycle and remains unaddressed until the unreleased 1.21 (http://pad.lv/1311227). We should treat compatibility breakage as a blocker for stable releases. Also, in addition to the api/cli, environment .jenv files need to adhere to compatibility constraints, as they're the only reasonable mechanism for a client to connect (credentials, certs, addresses). On Mon, Jul 28, 2014 at 10:44 AM, Curtis Hovey-Canonical cur...@canonical.com wrote: Thank you everyone. I am glad we are promising compatibility. We are not fully testing minor-to-minor compatibility yet. The feature isn't even scheduled to start yet, but we have added some tests and a lot of infrastructure to support it. I may need to stop work on other things to deliver more of these tests now. -- Curtis Hovey Canonical Cloud Development and Operations http://launchpad.net/~sinzui
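For reference, a .jenv file is a small YAML document. A sketch of the kind of fields it carries is below; key names and values are illustrative from a 1.x-era client and may differ between versions.

```yaml
# Illustrative sketch of an environments/<env>.jenv file; exact keys vary
# by juju version, but these are the pieces a client needs to connect.
user: admin
password: 86f2f2ae5c57b2ff371b69cbc4ba2f6d
environ-uuid: 2b0a3bb4-3a7a-4d0e-9f3a-1c6f2d8e9a01
state-servers:
- 10.0.3.1:17070
ca-cert: |
  -----BEGIN CERTIFICATE-----
  (environment CA certificate)
  -----END CERTIFICATE-----
```

Breaking the shape of this file between minor versions strands any client that relies on it to reach the state servers.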
Re: juju as a web service
awesome, looking forward to seeing it. On Wed, Jul 9, 2014 at 11:15 AM, Sebastian sebas5...@gmail.com wrote: Juju as a service is a real need, i started that project some months ago. we are planning to release an MVP in the next two months :) Abs, Sebas. Em 09/07/2014 11:47, Adam Stokes adam.sto...@ubuntu.com escreveu: We also have a python3 version hosted at: https://github.com/Ubuntu-Solutions-Engineering/macumba It's what Ubuntu Openstack Installer is switching to for its internal juju communication. On Wed, Jul 9, 2014 at 9:28 AM, Mark Shuttleworth m...@ubuntu.com wrote: On 09/07/14 13:52, Andrew Wilkins wrote: Juju has a websocket API that you can use to perform the same operations as the CLI can. There are Go and Python client APIs. The Go one is in the core repository, and the Python one is here: https://launchpad.net/python-jujuclient Batteries included. No extra charge :) Mark -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- [ Adam Stokes ] -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Device mappings in EC2
On Mon, Jul 14, 2014 at 10:09 AM, Henning Eggers henn...@keeeb.com wrote: Hi, this is a follow-up to these two: http://askubuntu.com/questions/457282/why-do-ec2-instances-provisioned-with-juju-no-longer-include-additional-storage https://bugs.launchpad.net/juju-core/+bug/1280852 The new m3 EC2 instances come with fast SSD instance (ephemeral) storage volumes. To make these available, a device mapping has to be specified at instance launch time. AFAICT there is no such option in juju. On askubuntu Jorge suggests specifying constraints in such a way that an old instance type is selected. In the LP bug, on the other hand, Kapil is calling for the complete removal of these old instance types. These two seem to be counter-productive. ;-) What I would need is a way to specify a device mapping when launching a service or maybe a machine. I don't think it would make sense to change this on a per-unit basis, assuming that units are configured identically. Is there any solution for this in the pipeline? my concern/complaint on the removal of old types predates the ability to actually specify which instance type you want; doing it via cpu-power/cores/mem/disk constraints was confusing and indirect, and often needed source inspection to determine the actual behavior. the latest jujus now allow directly specifying the instance type, ie exactly what you want. i think we should have a separate bug for the need for block dev mapping on these m3 instance types (juju does specify a block dev map when launching, but nothing specific around the m3 instance types). cheers, Kapil
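To illustrate what's missing: on EC2, instance-store (ephemeral) volumes only appear if they are named in the launch request's block-device mapping. Juju already sends a block-device map at launch; the sketch below shows the ephemeral entries an m3 launch would additionally need (boto3-style request shape; device names are illustrative).

```python
# Block-device mapping entries exposing an m3 instance's ephemeral SSD
# volumes, in the shape the EC2 RunInstances API expects (boto3-style
# dicts). Device names are illustrative.
block_device_mappings = [
    {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
    {"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"},
]

# A client would pass this at launch time, e.g. with boto3:
# ec2.run_instances(ImageId="ami-...", InstanceType="m3.xlarge",
#                   BlockDeviceMappings=block_device_mappings,
#                   MinCount=1, MaxCount=1)
print(len(block_device_mappings))  # 2
```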
Re: access environment.yaml data from the hooks
On Fri, Jul 11, 2014 at 4:44 AM, Tudor Rogoz ro...@adobe.com wrote: Hi all, Is it possible to access the juju environment properties directly from the hooks? More precisely, I want to have access to the AWS credentials (defined in the environments.yaml file) directly from the hooks, is this possible? I can work around the situation by defining specific config properties and duplicating the information there; this way I can get the data by calling the ‘config-get’ function. But I’m just wondering if maybe there would be a cleaner way to achieve this. Ideas? Juju doesn't allow for extraction of provider credentials from the state server as a security measure. It's typically much better to define these as charm config properties, because you can use a separate iam account that's permission-scoped to the usage you want rather than proliferating a more privileged account. Even better is using iam roles ( http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) with manual provisioning and workload placement (deploy --to) against the ec2 provider and avoiding the credential management entirely. cheers, Kapil
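A sketch of the charm-config approach Kapil describes, with hypothetical option names: expose a narrowly scoped IAM user's keys as charm options in config.yaml, rather than reusing the provider credentials.

```yaml
# Hypothetical charm config.yaml fragment: scoped IAM credentials as options.
options:
  aws-access-key-id:
    type: string
    default: ""
    description: Access key for an IAM user scoped to just this charm's needs.
  aws-secret-access-key:
    type: string
    default: ""
    description: Secret key for the same scoped IAM user.
```

Operators would set these with `juju set <service> aws-access-key-id=<key>`, and hooks would read them with `config-get aws-access-key-id`.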
Re: Relation addresses
addresses are just keys in a unit relation data bag. relation-get is the cli tool to retrieve either self or related units databag key/values (ie for self's address in the rel $ relation-get $JUJU_UNIT_NAME private-address). unit-get is used to retrieve the current iaas properties of a unit. my point regarding binding and addresses was more that we're forward thinking bug fixes by introducing a bunch of user facing stuff without having completely thought/designed or started implementation on the proposed solution that is the reason we're exposing additional things to users. instead i'd rather we just fix the bug, and actually implement the feature when we get around to implementing the feature. By the time we get around to implementing it (which for this cycle is against a single provider) we may have a different implementation and end-user exposed surface in mind. moreover the user facing (charm author) aspects of the changes as currently in the spec are going to be confusing, ie. relation hooks are always called for remote units, except for this one case which is special. additionally i'd prefer we have a plan for maintaining backwards compatibilities with proxy charms that are already extant. -k On Wed, Jun 18, 2014 at 1:44 AM, John Meinel j...@arbash-meinel.com wrote: Well, given it is unit-get shouldn't it be more relation-get private-address ? The issue is *that* is give me the private-address for the other side of this relation. Which is not quite what you want. And while I think it is true that many things won't be able to handle binding to more than one ip address (its either everything with 0.0.0.0 or one thing), I think we should at least make it *possible* for well formed services to behave the way we would like. 
John =:- On Wed, Jun 18, 2014 at 6:12 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: On Tue, Jun 17, 2014 at 11:35 PM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com wrote: ... In a nutshell: - There will be a new hook, relation-address-changed, and a new tool called address-get. This seems less than ideal, we already have standards ways of getting this data and being notified of its change. introducing non-orthogonal ways of doing the same lacks value afaics or at least any rationale in the document. So maybe the spec isn't very clear, but the idea is that the new hook is called on the unit when *its* private address might have changed, to give it a chance to respond. After which, relation-changed is called on all the associated units to let them know that the address they need to connect to has changed. It would be possible to just roll relation-address-changed into config changed. or another unit level change hook (unit-address-changed), again the concerns are that we're changing the semantics of relation hooks to something fundamentally different for this one case (every other relation hook is called for a remote unit) and that we're doing potentially redundant event expansion and hook queuing as opposed to coalescing/executing the address set change directly at the unit scope level. The reason it is called for each associated unit is because the network model means we can actually have different addresses (be connected on a different network) for different things related to me. e.g. I have a postgres charm related to application on network A, but related to my-statistics-aggregator on network B. The address it needs to give to application should be different than the address given to my-statistics-aggregator. And, I believe, the config in pg_hba.conf would actually be different. thanks, that scenario would be useful to have in the spec doc. 
As long as we're talking about unimplemented features guiding current bug fixes, realistically there's quite a lot of software that only knows how to listen on one address, so for network scoped relations to be more than advisory would also need juju to perform some form of nftables/iptables mgmt. Its feels a bit slippery that we'd be exposing the user to new concepts and features that are half-finished and not backwards-compatible for proxy charms as part of a imo critical bug fix. the two perspectives of addresses for self vs related also seem to be a bit muddled. a relation hook is called in notification of a remote unit change, but now we're introducing one that behaves in the opposite manner of every other, and we're calling it redundantly for every relation instead of once for the unit? - The hook will be called when the relation's address has changed, and the tool can be called to obtain the address. If the hook is not implemented, the private-address setting will be updated. Otherwise it is down to you to decide how you want to react to address changs (e.g. for proxy charms, probably just don't do anything.) perhaps
Re: Relation addresses
On Wed, Jun 18, 2014 at 5:21 PM, William Reade william.re...@canonical.com wrote: On Wed, Jun 18, 2014 at 7:05 PM, Kapil Thangavelu kapil.thangav...@canonical.com wrote: addresses are just keys in a unit relation data bag. relation-get is the cli tool to retrieve either self or related units databag key/values (ie for self's address in the rel $ relation-get $JUJU_UNIT_NAME private-address). unit-get is used to retrieve the current iaas properties of a unit. Yes: unit-get retrieves iaas properties; and relation-get retrieves unit properties; but self's private-address is *not* put in self's relation data bag for the benefit of self; it's for the remote units that *react* to changes in that data bag. Its not a write only bag and we don't constrain reads. Charms can retrieve their own relation properties when evaluating a remote relation change. address is simply a key in that bag. The benefit to self/local unit, and to all charm authors was the one boilerplate property that every single one of them needed to provide/relation-set was effectively handled by the framework. Afaics it also makes it easier for us to do some of the sdn relation binding because we provide that value else we'd be rewriting all extant charms to support it. Using `relation-get $JUJU_UNIT_NAME private-address` is Doing It Wrong: the canonical way to get that data is `unit-get private-address`, and the problem is not that we don't magically update the relation data bag: the problem is that we don't provide a means to know when the relation's data bag should be updated. it sort of depends why your retrieving wrt to if its wrong, if a unit want's its own address then retrieving it directly from unit-get is clearly correct. if wants to reason about the address its advertising to related units, then retrieving from the relation is valid. Agreed re the issue being lack of updates. 
but adding -r to the unit-get seems to be more conflating of the relation data bags and iaas properties associated to a set of unit addresses. per the original network sketch i'd imagine in a multiple network and address world unit-get would grow facilities for retrieving list of networks and addresses. as for relation to network or route binding, it also seems its missing the notion of retrieving the named network on the rel.. ie either more framework relation properties.. or ideally this could get shuffled into relation-config or exposed more explicitly. Honestly, it's kinda bad that we prepopulate private-address *anyway*. It's helpful in the majority of cases, but it's straight-up wrong for proxy charms. its debatable, given that it would be simply boilerplate for the majority, its seems reasonable and its been easy for proxy charms to explicitly set what they want the actual value to be. Afaics the only real issue to-date has been juju isn't updating the property its populating. I don't want to take on the churn caused by reversing that decision; but equally I don't want to fix it with magical rewrites of the original magic writes. to me its a question of ownership.. if the framework owned the value by providing it, then the framework is responsible for updating the value till such time as the charm takes ownership by writing a new one. my point regarding binding and addresses was more that we're forward thinking bug fixes by introducing a bunch of user facing stuff without having completely thought/designed or started implementation on the proposed solution that is the reason we're exposing additional things to users. instead i'd rather we just fix the bug, and actually implement the feature when we get around to implementing the feature. By the time we get around to implementing it (which for this cycle is against a single provider) we may have a different implementation and end-user exposed surface in mind. 
That's not impossible; but I don't think it's a good reason to pick an approach at odds with our current best judgment of where we're heading. but we're not heading there yet at best we're still doing plumbing afaics, and we're going to expose all this end user machinery which we'll have to support before we even started on the path and under the seeming aegis of providing a bug fix that could be addressed much more simply without exposing additional concepts and hooks to charm authors. ie. to me the analogy is the plumbing to the sink is stopped up, and instead of calling a plumber to clean the pipes, we're doing a renovation. yes we may want to do a renovation in the future, but that's no reason we shouldn't just fix the sink till we start it. moreover the user facing (charm author) aspects of the changes as currently in the spec are going to be confusing, ie. relation hooks are always called for remote units, except for this one case which is special. I don't agree that this one case is special; relation hooks are called in response to changes in a local unit's view
Re: Relation addresses
On Tue, Jun 17, 2014 at 12:39 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: Hi all, I've started looking into fixing https://bugs.launchpad.net/juju-core/+bug/1215579. The gist is, we currently set private-address in relation settings when a unit joins, but never update it. I've had some preliminary discussions with John, William and Dimiter, and came up with the following proposal: https://docs.google.com/a/canonical.com/document/d/1jCNvS7sSMZqtSnup9rDo3b2Wwgs57NimqMorXr9Ir-o/edit If you're a charm author, particularly if you work on proxy charms, please take a look at this and let me know of any concerns or suggestions. I have opened up comments on the doc. In a nutshell: - There will be a new hook, relation-address-changed, and a new tool called address-get. This seems less than ideal; we already have standard ways of getting this data and being notified of its change. introducing non-orthogonal ways of doing the same lacks value afaics, or at least any rationale in the document. the two perspectives of addresses for self vs related also seem to be a bit muddled. a relation hook is called in notification of a remote unit change, but now we're introducing one that behaves in the opposite manner of every other, and we're calling it redundantly for every relation instead of once for the unit? - The hook will be called when the relation's address has changed, and the tool can be called to obtain the address. If the hook is not implemented, the private-address setting will be updated. Otherwise it is down to you to decide how you want to react to address changes (e.g. for proxy charms, probably just don't do anything.) perhaps there is a misunderstanding of proxies, but things that set their own address have taken responsibility for it. ie juju only updates private-address if it provided it, else it's the charm's responsibility. fwiw, i think this could use some additional discussion. 
Re: Relation addresses
On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com wrote: ... In a nutshell: - There will be a new hook, relation-address-changed, and a new tool called address-get. This seems less than ideal, we already have standards ways of getting this data and being notified of its change. introducing non-orthogonal ways of doing the same lacks value afaics or at least any rationale in the document. So maybe the spec isn't very clear, but the idea is that the new hook is called on the unit when *its* private address might have changed, to give it a chance to respond. After which, relation-changed is called on all the associated units to let them know that the address they need to connect to has changed. It would be possible to just roll relation-address-changed into config changed. or another unit level change hook (unit-address-changed), again the concerns are that we're changing the semantics of relation hooks to something fundamentally different for this one case (every other relation hook is called for a remote unit) and that we're doing potentially redundant event expansion and hook queuing as opposed to coalescing/executing the address set change directly at the unit scope level. The reason it is called for each associated unit is because the network model means we can actually have different addresses (be connected on a different network) for different things related to me. e.g. I have a postgres charm related to application on network A, but related to my-statistics-aggregator on network B. The address it needs to give to application should be different than the address given to my-statistics-aggregator. And, I believe, the config in pg_hba.conf would actually be different. thanks, that scenario would be useful to have in the spec doc. 
As long as we're talking about unimplemented features guiding current bug fixes, realistically there's quite a lot of software that only knows how to listen on one address, so for network scoped relations to be more than advisory would also need juju to perform some form of nftables/iptables mgmt. Its feels a bit slippery that we'd be exposing the user to new concepts and features that are half-finished and not backwards-compatible for proxy charms as part of a imo critical bug fix. the two perspectives of addresses for self vs related also seem to be a bit muddled. a relation hook is called in notification of a remote unit change, but now we're introducing one that behaves in the opposite manner of every other, and we're calling it redundantly for every relation instead of once for the unit? - The hook will be called when the relation's address has changed, and the tool can be called to obtain the address. If the hook is not implemented, the private-address setting will be updated. Otherwise it is down to you to decide how you want to react to address changs (e.g. for proxy charms, probably just don't do anything.) perhaps there is a misunderstanding of proxies, but things that set their own address have taken responsibility for it. ie juju only updates private address if it provided it, else its the charms responsibility. fwiw, i think this could use some additional discussion. So one of the reasons is that it takes some double handling of values to know if the existing value was the one that was what we last set it. And there is the possibility that it has changed 2 times, and it was the value we set it to, but that was the address before this one and we just haven't gotten to update it. There was a proposal that we could effectively have 2 fields this is the private address you are sharing, which might be empty and this is the private address we set which is where we put our data. And we return the second value if the first is still nil. 
Or we set it twice, and we only set the first one if it matches what was in the second one, etc. All these things are possible, but in the discussions we had it seemed simpler to not have to track extra data for marginal benefit. Things which are proxy charms know that they are, and they found the right address to give in the past, and they simply do the same thing again when told that we want to change their address. there's lots of other implementation complexity in juju that we don't leak, we just try to present a simple interface to it. we'd be breaking existing proxy charms if we update the values out from the changed values. The simple basis of update being you touched you own it and if you didn't it updates, is simple, explicit, and backwards compatible imo. There's also the question of why the other new hook (relation-created) is needed or how it relates to this functionality, or why the existing unit-get private-address needs to be supplemented by address-get. chers, Kapil -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
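The ownership rule being argued for ("you touched it, you own it") is easy to state precisely. A toy model follows; this is not Juju's code, and the class and method names are invented for illustration.

```python
class PrivateAddress:
    """Toy model of the update rule argued above: the framework owns the
    private-address value it pre-populated until the charm writes its own,
    after which framework-driven updates stop. Names are invented."""

    def __init__(self, machine_address):
        self.value = machine_address   # framework pre-populates the key
        self.charm_owned = False

    def charm_set(self, value):
        self.value = value             # the charm takes ownership
        self.charm_owned = True

    def machine_address_changed(self, new_address):
        if not self.charm_owned:       # only update values juju still owns
            self.value = new_address

# An ordinary charm keeps getting framework updates:
plain = PrivateAddress("10.0.0.5")
plain.machine_address_changed("10.0.0.9")
# A proxy charm that advertised its own endpoint is left alone:
proxy = PrivateAddress("10.0.0.5")
proxy.charm_set("backend.example.com")
proxy.machine_address_changed("10.0.0.9")
print(plain.value, proxy.value)  # 10.0.0.9 backend.example.com
```

The appeal of this rule is that it needs no extra bookkeeping beyond "did the charm ever write the key", and existing proxy charms keep working unchanged.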
Re: This is why we should make go get work on trunk
just as it fails for many other projects: etcd, docker, serf, consul, etc. most larger projects run afoul of trying to do cowboy dependency management, and either adopt one of the extant tools for managing deps, with a non-standard install explained to users in the readme, or vendor their deps. -k On Fri, Jun 6, 2014 at 5:05 PM, Nate Finch nate.fi...@canonical.com wrote: (Resending since the list didn't like my screenshots) https://twitter.com/beyang/statuses/474979306112704512 https://github.com/juju/juju/issues/43 Any tooling that exists for Go projects is going to default to doing go get. Developers at all familiar with Go are going to use go get. People are going to do go get github.com/juju/juju and it's going to fail to build, and that's a terrible first impression. Yes, we can update the README to tell people to run godeps after running go get, but many people are not going to read it until after they get the error building. Here's my suggestion: we make go get work on trunk and still use godeps (or whatever) for repeatable builds of release branches. There should never be a time when tip of trunk and all dependent repos don't build. This is exceedingly easy to avoid. Go crypto (which I believe is what is failing above) is one of the few repos we rely on that isn't directly controlled by us. We should fork it so we can control when it updates (since the people maintaining it seem not to care about making breaking API changes). -Nate
Re: Ubuntu Online Summit
On Sat, May 31, 2014 at 8:22 AM, brian mullan bmullan.m...@gmail.com wrote: Jorge Would there be a possibility of doing a session on Juju in Local Provider mode using LXC. There is so much interest in containers today and I think it would be useful to show juju-gui deployed by juju into a container in local mode then start a bundle or some other such demo. Docker is a hot topic right now and I think its important to help people understand what Juju can do with containers.I think what Juju is doing (and could do with LXC/Local Provider) in regards to service orchestration goes beyond what docker, chef, puppet, ansible etc can provide (or at least provide as easy as Juju). I personally would like to see the above but where a remote user could point a browser to the host and get the juju-gui (running in a container)... as I've struggled to try to get nginx reverse-proxy working to do this and so far its only partially successful. This should help for remote access to containers in a local provider or remote provider via iptable port forwards, https://github.com/cmars/juju-nat There's some work being done on directly integrating with containers provider networking. Note on the maas provider containers are directly available on the host network. cheers, Kapil -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: --constraints root-disk=16384M fails in EC2
i hadn't realized it was doing block dev maps, but yeah that's readily apparent in going through the source. cool. On Thu, May 29, 2014 at 9:47 PM, Ian Booth ian.bo...@canonical.com wrote: If a root disk constraint is specified, Juju will translate that into a block device mapping request when the instance is started. Hence we do start an instance with the required root disk size but the subsequent constraints matching fails. That's my understanding anyway. On 30/05/14 11:42, Kapil Thangavelu wrote: fwiw. all the ubuntu cloud images root disks in ec2 have 8gb of disk size by default, juju doesn't reallocate the root volume size when creating an instance (if it did cloudinit will auto resize the root fs if its created with a larger root vol). On Thu, May 29, 2014 at 8:26 PM, Ian Booth ian.bo...@canonical.com wrote: Hi Stein This does appear to be a bug in Juju's constraints handling for EC2. I'd have to do an experiment to confirm, but certainly reading the code appears to show a problem. Given how EC2 works, in that Juju asks for the specified root disk size when starting an instance, I don't have a workaround that I can think of to share with you. The fix for this would be relatively simple to implement and so can be done in time for the next stable release (1.20) which is due in a few weeks. Alternatively, we hope to have a new development release out next week (1.19.3). I'll try to get any fix done in time for that also. I've raised bug 1324729 for this issue. On Fri 30 May 2014 09:29:15 EST, GMail wrote: Trying to deploy a charm with some extra root disk space. When using the root-disk constraint defined above I get the following error: '(error: no instance types in us-east-1 matching constraints cpu-power=100 root-disk=16384M)' I’m deploying a bundle with the following constraints: constraints: mem=4G arch=amd64”, but need more disk-space then the default provided. Any suggestions ? 
Stein Myrseth Bjørkesvingen 6J 3408 Tranby mob: +47 909 62 763 mailto:stein.myrs...@gmail.com -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
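For what that fix amounts to: a root-disk constraint has to be translated into an EBS root-volume entry in the launch request's block-device mapping, and EC2 sizes volumes in whole GiB. A sketch follows (hypothetical helper, boto3-style request shape; the root device name varies by AMI).

```python
import math

def root_disk_mapping(root_disk_mib, device="/dev/sda1"):
    """Hypothetical helper: translate a root-disk constraint (in MiB, as in
    root-disk=16384M) into an EBS root-volume block-device mapping entry.
    EC2 volumes are sized in whole GiB, so round up."""
    size_gib = math.ceil(root_disk_mib / 1024)
    return {"DeviceName": device, "Ebs": {"VolumeSize": size_gib}}

# root-disk=16384M becomes a request for a 16 GiB root volume:
print(root_disk_mapping(16384))  # {'DeviceName': '/dev/sda1', 'Ebs': {'VolumeSize': 16}}
```

With the root volume sized at launch, cloud-init then grows the root filesystem to fill it on first boot, as noted above.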
Re: juju bootstrap error
juju contacts that server in attempt to download the binaries it uses for agents on machines. you avoid the lookup and download there by using juju bootstrap --upload-tools On Fri, May 23, 2014 at 8:46 AM, boyd yang boyd.y...@gmail.com wrote: Hi David, The error still exists after few days. Why does the juju bootstrap need to communicate to that server? 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... cpu-checker is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... bridge-utils is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... rsyslog-gnutls is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... juju-mongodb is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. curl: (56) SSL read: error::lib(0):func(0):reason(0), errno 104 tools from https://streams.canonical.com/juju/tools/releases/juju-1.18.3-trusty-amd64.tgzdownloaded: HTTP 200; time 1692.359s; size 6979584 bytes; speed 4124.000 bytes/s 2014-05-23 11:32:09 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: rc: 1 Stopping instance... 2014-05-23 11:32:09 ERROR juju.cmd supercommand.go:305 rc: 1 On Wed, May 21, 2014 at 6:46 PM, David Cheney david.che...@canonical.comwrote: I am sorry, the bootstrap machine was not able to communicate with the server, streams.canonical.com. Please try again, I hope this error is temporary. On Wed, May 21, 2014 at 7:32 PM, boyd yang boyd.y...@gmail.com wrote: Hello, Juju bootstrap gives below error: ... bridge-utils is already the newest version. 
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... rsyslog-gnutls is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Reading package lists... Building dependency tree... Reading state information... juju-mongodb is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. curl: (56) SSL read: error::lib(0):func(0):reason(0), errno 104 tools from https://streams.canonical.com/juju/tools/releases/juju-1.18.3-trusty-amd64.tgz downloaded: HTTP 200; time 1471.629s; size 7094272 bytes; speed 4820.000 bytes/s 2014-05-21 08:29:45 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: rc: 1 Stopping instance... 2014-05-21 08:29:45 ERROR juju.cmd supercommand.go:305 rc: 1 -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: [Proposal] Requiring Go 1.2 across the board
fwiw the only interim release still under support is S (along with lts releases of L, P, T). interim releases get 9 months of support, and S expires in July. https://wiki.ubuntu.com/Releases On Fri, May 16, 2014 at 9:15 AM, Gustavo Niemeyer gust...@niemeyer.netwrote: On Fri, May 16, 2014 at 4:08 AM, David Cheney david.che...@canonical.com wrote: This is a proposal that we raise the minimum Go spec from Go 1.1 to Go 1.2. Sounds sensible. [1] I am ignoring the intermediate, non LTS series', as there are no charms for them, nor do CTS offer support for them. If this is unacceptable, anything which applies to Precise wrt. backports, also applies to Q, R and S. I suppose we do want people using these releases on their own machines to be able to use juju, at least as a client. What's the proposed mechanism for getting it to them? gustavo @ http://niemeyer.net -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev -- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: Questions about the integration of the Outscale cloud provider into juju-core
It's come up before (RPC providers, shell-script providers), but it doesn't quite fit with the upgrade and distribution model in juju-core and Go at the moment. It is possible to layer on top of the manual provider using a client-side plugin to effectively automate machine creation for a given provider; I've published Digital Ocean and SoftLayer providers using that mechanism. It has some caveats compared to a native provider, but it's still useful and functional.

cheers, Kapil

On Mon, May 5, 2014 at 6:12 PM, Sebastian sebas5...@gmail.com wrote: Taking the opportunity with this topic: what about making this pluggable, like provider plugins, not in the core? Sebas.

2014-05-05 18:44 GMT-03:00 Adam Stokes adam.sto...@ubuntu.com: I'd probably start here: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/files/head:/provider/ This can give you an idea of how the ec2 implementation is done.

On Mon, May 5, 2014 at 10:56 AM, Benoît Canet benoit.ca...@irqsave.net wrote: Hello, I am a developer planning to add support for the Outscale cloud to juju-core. The Outscale cloud implements most of the EC2 API. Do the Juju maintainers have any guidance on how the support should be written? Best regards, Benoît Canet, Nodalink

-- [ Adam Stokes ]

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: juju manual bootstrap - does it work?
Hi Brian,

One part of Andrew's reply that may have been overlooked is verifying a passwordless sudo setup for the 'bootstrap-user', i.e. that the following works:

ssh me@server sudo true

cheers, Kapil

On Sun, May 4, 2014 at 5:59 AM, brian mullan bmullan.m...@gmail.com wrote: Andrew... sorry, but don't spend any more time troubleshooting this. I'm going to blow that server away and start over. If I end up in the same place with the same problem I'll send another email, but I've already spent way too much time trying to get this server to work via manual juju bootstrap. brian

On Sun, May 4, 2014 at 7:22 AM, brian mullan bmullan.m...@gmail.com wrote: Thanks Andrew... the information you asked for is inline.

On Sat, May 3, 2014 at 10:12 PM, Andrew Wilkins andrew.wilk...@canonical.com wrote:

On Sat, May 3, 2014 at 2:11 PM, brian mullan bmullan.m...@gmail.com wrote: I've tried for 2 days to get this to work and I'm stumped. Using my laptop with Ubuntu 14.04 desktop and a remote server with a fresh Ubuntu 14.04 server install: I am the only account on both systems, I have both ssh and sudo access on both, and I can ssh-login to the server just fine. I even set up passwordless ssh from the laptop to the server.

When you did that, did you use ~/.ssh/id_rsa or something else?

I tried this two different ways:

$ ssh-keygen -t rsa
$ ssh-add
$ ssh-copy-id my_login_ID@server_ip

then tried juju bootstrap each time... when that didn't work I removed those keys and used the following, which didn't work either:

$ ssh-keygen
$ ssh-add
$ ssh-copy-id my_login_ID@server_ip

But with either of the above, passwordless ssh works for me if I just ssh to the server in a terminal window (example: ssh my_login_ID@server_ip logs me directly into the server with no password prompt).

ssh me@server logs me directly into it just fine.

And ssh me@server sudo true works, without prompting?

On the laptop I've installed juju.

Just to be clear, you're on 1.18.x?

yes... v1.18.1

$ juju --version
1.18.1-trusty-amd64

Created configuration template environments.yaml with:

default: manual
manual:
  type: manual
  # bootstrap-host holds the host name of the machine where the
  # bootstrap machine agent will be started.
  bootstrap-host: server_ip
  # bootstrap-user specifies the user to authenticate as when
  # connecting to the bootstrap machine. It defaults to
  # the current user.
  # bootstrap-user: my_username_id
  # storage-listen-ip specifies the IP address that the
  # bootstrap machine's Juju storage server will listen
  # on. By default, storage will be served on all
  # network interfaces.
  # storage-listen-ip:
  # storage-port specifies the TCP port that the
  # bootstrap machine's Juju storage server will listen
  # on. It defaults to 8040.
  # storage-port: 8040

On my laptop I execute the following:

$ juju switch manual
$ juju bootstrap

Juju appears to connect to the server OK, but I keep getting asked for a password??

Would you mind doing this again with --debug and replying with the output?
bmullan@brians-juju:~$ juju bootstrap --debug
2014-05-04 11:11:00 INFO juju.cmd supercommand.go:297 running juju-1.18.1-trusty-amd64 [gc]
2014-05-04 11:11:00 DEBUG juju.environs.configstore disk.go:64 Making /home/bmullan/.juju/environments
2014-05-04 11:11:00 INFO juju.environs.manual init.go:139 initialising 173.39.236.162, user
2014-05-04 11:11:00 DEBUG juju.utils.ssh ssh.go:234 using OpenSSH ssh client
2014-05-04 11:11:00 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o StrictHostKeyChecking no -o PasswordAuthentication no -i /home/bmullan/.juju/ssh/juju_id_rsa -i /home/bmullan/.ssh/id_rsa ubuntu@173.39.236.162 sudo -n true
Password:
2014-05-04 11:11:49 INFO juju.environs.manual init.go:150 ubuntu user is already initialised
2014-05-04 11:11:49 INFO juju.provider.manual provider.go:33 initialized ubuntu user
2014-05-04 11:11:50 DEBUG juju.provider.manual environ.go:194 using ssh storage at host ubuntu@173.39.236.162 dir /var/lib/juju/storage
2014-05-04 11:11:50 DEBUG juju.utils.ssh ssh.go:234 using OpenSSH ssh client
2014-05-04 11:11:50 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o StrictHostKeyChecking no -o PasswordAuthentication no -i /home/bmullan/.juju/ssh/juju_id_rsa -i /home/bmullan/.ssh/id_rsa ubuntu@173.39.236.162 sudo -n /bin/bash
Password:
2014-05-04 11:12:39 DEBUG juju.utils.ssh ssh.go:234 using OpenSSH ssh client
2014-05-04 11:12:39 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o StrictHostKeyChecking no -o
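The repeated Password: prompts in the debug log above come from `sudo -n` failing on the target host. A minimal sketch of the passwordless-sudo setup being suggested, assuming the `ubuntu` user that juju initializes (the sudoers file name and paths are illustrative):

```shell
# On the target server: allow the bootstrap user to sudo without a password.
# juju runs commands such as `sudo -n true`, which fail unless NOPASSWD is set.
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/90-juju-ubuntu
sudo chmod 440 /etc/sudoers.d/90-juju-ubuntu

# Verify from the client: this must succeed with no password prompt.
ssh ubuntu@server_ip sudo -n true
```

This is a config fragment for a remote host, not something to run blindly; check the sudoers syntax with `visudo -c` before relying on it.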
Re: What happened to pinned bootstrap
On Fri, Apr 18, 2014 at 11:34 AM, Aaron Bentley aaron.bent...@canonical.com wrote:

On 14-04-18 06:28 AM, William Reade wrote: As for automatically upgrading: it's clearly apparent that there's a compelling case for not *always* doing so. But the bulk of patch releases *will* be server-side bug fixes, and it's not great if we generally fail to deliver those to casual users.

I think that users should upgrade their clients in order to get bug fixes. I think that users who don't upgrade their client are expecting to get a locked-down experience, bugs and all.

And how does that work with multi-user environments? Divergence is inevitable. I don't think it's a good idea to default to deploying untested software combinations, especially when using a tested software combination will give a superior experience (i.e. client-side bug fixes).

We need better client API compatibility on minor versions. The exception is bootstrap, for which an exact match seems reasonable for *dev* versions. For stable versions we should be compatible across micro releases (and test the same); --version still seems good for users who want exact behavior/reproduction.

Even though you don't intend to introduce incompatibilities with old clients in patchlevel updates, we're human and mistakes happen. CI found lots of compatibility-breaking mistakes in the 1.17 series, and I'm sure there were many more that were caught by code review and juju-core's unit tests. The way to be certain we don't introduce such incompatibilities is testing with every patchlevel of the client, and that scales an already-big workload linearly with the number of patchlevels.

For stable I think we need to go there: client versions across multiple clients and server versions will diverge across a stable series. AFAIK we don't throw incompatibility flags/errors for older clients using the 1.16 API against 1.18 API servers, but perhaps we should.
To me a followup to that is to distribute binaries (statically linked) for the client as well, so that people can get a newer client version as needed for their platform.

There is value in using the latest patchlevel of the agent code. There is risk in using untested client/agent combinations. It is hard to weigh one against the other, and I say we don't have to: we can get the value without introducing the risk by upgrading the client.

I'm inclined to go with (1) a --version flag for bootstrap, accepting only major.minor = client tools, and (2) a --dry-run flag for upgrade-juju (with the caveat that you'd need to pass its result in as --version to the next command, because new tools *could* be published in the gap between the dry run and the, uh, wet one). Aaron, even though this doesn't promote your use case to the default, I think this'll address your issues; confirm?

--version would be an improvement, but we have a workaround, so it's not /that/ important. It's really the users I'm thinking of, the ones who care about reproducibility. I'd honestly rather have --bootstrap-host, because the lack of it is making our testing of the manual provider a bit weird.

Aaron

-- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: juju-nat: Easy NAT routing for services in LXC containers
Unrelated, but in a similar vein: if you need access to services in a remote or VirtualBox-local environment, you can sshuttle to the remote and route the LXC bridge addresses:

sshuttle -r ubuntu@remote_machine 10.0.3.0/24

cheers, Kapil

On Thu, Apr 17, 2014 at 4:59 AM, Andrew Wilkins andrew.wilk...@canonical.com wrote: I don't have an immediate need for this, but just wanted to say it looks very handy. Thanks! Cheers, Andrew

On Thu, Apr 17, 2014 at 1:08 PM, Casey Marshall casey.marsh...@canonical.com wrote: All, I'd like to share a small set of juju plugins I've developed: https://github.com/cmars/juju-nat The juju nat-* commands automate the tedious process of port forwarding and routing for services deployed into LXC containers. I like to use it with the manual provider for my personal stuff, or standing up test environments -- I'll deploy a bunch of services '--to lxc:0', then 'juju nat-expose' the ports I need public. You'll need to compile from Go sources to try it out for now. If there's sufficient interest, I could distribute some binaries. Cheers, Casey

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: fast containers dev workflow with juju 1.18
It's built into the juju local provider; this email was mostly a PSA about its availability, albeit lacking explicit instructions for setting it up. My basic workflow on a new machine for getting juju local using this is roughly as follows:

mkfs.btrfs /dev/sdx
mount -o relatime,compress=lzo /dev/sdx /var/lib/lxc
sudo apt-get install squid-deb-proxy
juju bootstrap local
juju set-env apt-http-proxy=http://10.0.3.1:8000

# The first container on a machine takes a long time (independent of environment):
# lxc will download and cache an ubuntu cloud image, and juju will create a template container.
juju deploy wordpress

# The lack of feedback during this download is a known issue; you can also explicitly
# download the image yourself to /var/lib/lxc/cloud-$series/image_name.
# After that one-time slowness, creating machines/containers should be very quick.
juju deploy mysql
juju add-relation mysql wordpress

There's also support for aufs overlay directories, but they are not fully compatible with all charms.

cheers, Kapil

On Fri, Apr 11, 2014 at 5:17 PM, Serge E. Hallyn se...@hallyn.com wrote: Quoting Kapil Thangavelu (kapil.thangav...@canonical.com): Hi Folks, instructions on getting a speedy workflow with local provider (a container per second) Hey Kapil, sounds interesting - but I don't see a link? Is it hidden in plain sight? With juju's builtin local provider (running 1.18): if you have btrfs at /var/lib/lxc and install squid-deb-proxy on the host, and run juju set-env apt-http-proxy=http://10.0.3.1:8000 you should be able to use the local provider with significant speed improvements (clone for containers, package caches; you also need to modify the squid-deb-proxy conf to allow for PPA access). That won't include nested containers or device access (open juju bugs for that); you can however use add-machine kvm:0 to get a kvm machine in the local provider for greater machine access on particular workloads.
You'll need to configure the 'network-bridge' option in environments.yaml to point to virbr0 so lxc and kvm come up on the same network. The first container will see some lag as juju downloads and populates the lxc cloud images into /var/cache/lxc and creates a template container for subsequent cloning.

cheers, Kapil

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
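Regarding the note above about modifying the squid-deb-proxy conf to allow PPA access: a minimal sketch, assuming the default Ubuntu layout for squid-deb-proxy (treat the ACL file path as an assumption for your version):

```shell
# squid-deb-proxy only fetches from domains listed in its mirror ACL;
# ppa.launchpad.net is not included by default, so PPA-based charms stall.
echo "ppa.launchpad.net" | sudo tee -a /etc/squid-deb-proxy/mirror-dstdomain.acl

# Reload the proxy so the new ACL entry takes effect.
sudo service squid-deb-proxy restart
```

After this, containers using apt-http-proxy=http://10.0.3.1:8000 should be able to install from PPAs through the cache as well.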
Re: Best practices for fat charms
FWIW, deployer bundles have support for an explicit build phase for this reason: basically, a build hook in charms is run prior to deploying. I'd like to push it a bit further, to deployer bundles as an archive format that can be completely self-contained for an app.

On Tue, Apr 1, 2014 at 3:07 PM, Jorge O. Castro jo...@ubuntu.com wrote: Hi everyone, Matt Bruzek and I have been doing some charm testing on a machine that does not have general access to the internet, so charms that pull from PPAs, github, etc. do not work. We've been able to fatten the charms by doing things like creating a /files directory in the charm itself and putting the package/tarball/jar file in there. Given the networking issues that we might face in production environments, we should start thinking about best practices for having charms with payloads instead of pulling from a network source. Marco has some ideas on how we can generalize this and he will respond to this thread. -- Jorge Castro Canonical Ltd. http://juju.ubuntu.com/ - Automate your Cloud Infrastructure

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: Best practices for fat charms
If you're trying to do this in an automated fashion, juju supports proxies, and possibly with an intelligent proxy you could do something a bit more automated; otherwise it's going to require a lot of auditing. You could even skip the additional steps of modifying all the charms: have the intelligent proxy work in offline mode with its cache, serving the files back when deploying in an offline setup. The issue is then going to be HTTPS URLs.

On Tue, Apr 1, 2014 at 4:05 PM, Matt Bruzek matthew.bru...@canonical.com wrote:

Thanks Jorge. Not sure we want to call them fat charms; maybe enterprise charms. Here is my approach when making a charm work on an enterprise or limited network:

1) Find out which hook downloads the packages that we are unable to access (wget, curl, or special PPA repositories). The enterprise network will block these requests, often resulting in a charm hook failing.
2) Download the necessary packages from a system that has access.
3) Upload the packages to the locked-down system, copying the packages to a directory in the local charm.
4) Edit the local charm hooks to check for the package in the local directory first; if it does not exist, the charm would continue to download the files (using wget, curl, or a custom PPA).

I believe we could provide a charm-tools method that does something like this, and we could use it in charms to create enterprise charms that are able to be used in limited network environments. However, this creates an interesting problem that I have not figured out a good way to resolve yet (your feedback requested): if the packages exist within the charm, the URL will never be used. Some charms allow the user to configure the download URL and sha1sum of the package; other charms do not allow this level of customization. For charms that have a config option for the URL, we could change it to also accept a file:// transport and some kind of $CHARM_DIR variable, and use http:// or https:// for normal URLs.
But what to do with the charms that do not allow the URLs to be configurable, or the charms that use a custom PPA repository?

- Matthew Bruzek matthew.bru...@canonical.com

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
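The local-first fallback described in step 4 above can be sketched as a small POSIX-sh hook helper (the function name, file layout, and variables here are hypothetical, not from any real charm):

```shell
#!/bin/sh
# fetch_package: prefer a payload shipped in the charm's files/ directory;
# fall back to downloading only when no local copy exists.
# Prints "local" or "downloaded" so the hook can log which path was taken.
fetch_package() {
    charm_dir="$1"; pkg="$2"; dest="$3"; url="$4"
    if [ -f "$charm_dir/files/$pkg" ]; then
        # Offline/enterprise path: use the payload bundled with the charm.
        cp "$charm_dir/files/$pkg" "$dest"
        echo "local"
    else
        # Network path: behave like the existing wget-based hooks.
        wget -q -O "$dest" "$url" && echo "downloaded"
    fi
}
```

A hook would then call, e.g., `fetch_package "$CHARM_DIR" myapp.tar.gz /tmp/myapp.tar.gz "$download_url"` and verify the sha1sum afterwards either way.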
Re: Best practices for fat charms
Just to be clear: the HTTPS URL issue is solvable with the intelligent-proxy approach; it's just a bit more work, and client/library support isn't always great. http://wiki.squid-cache.org/Features/HTTPS

-- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Re: What happened to pinned bootstrap
Sounds like a great case being made for --upload-tools by default.

On Sun, Mar 30, 2014 at 12:23 AM, John Meinel j...@arbash-meinel.com wrote: I thought at one point we were explicitly requiring that we bootstrap exact versions of tools (so juju CLI 1.17.2 would only bootstrap a 1.17.2 set of tools). We at least did "1.17 will only bootstrap 1.17", but looking at the code we still always deploy the latest 1.17 (which broke all the 1.17 series of CLI, because 1.17.7 has an incompatible required flag). There is an argument that we can't get away with such a thing in a stable series anyway, so it isn't going to be a problem. Mostly, though, I had thought that we did exact matching, but I can see from the code that is clearly not true. Would it be very hard to do so? I think William had a very interesting idea that CLI bootstrap would always only bootstrap the exact version of tools, but could set the AgentVersion to the latest stable minor version, so it essentially bootstraps and then immediately upgrades. (With the big benefit that the upgrade process to migrate from old versions to new versions gets run.) This could be a distraction from the other stuff we're working on, but it doesn't look that hard to implement, and it would avoid some of this semi-accidental breaking of old tools. John =:-

-- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
Re: What happened to pinned bootstrap
On Sun, Mar 30, 2014 at 4:17 PM, Ian Booth ian.bo...@canonical.com wrote: On 31/03/14 02:11, Kapil Thangavelu wrote: sounds like a great case being made for --upload-tools by default. --upload-tools does happen automatically on bootstrap, but only if no matching, pre-built tools are found. So, if a 1.19 client were used to bootstrap and only 1.18 tools were available, upload-tools would be done automatically.

On trunk it currently just falls back to the latest major/minor tools version match found in streams, AFAICS (i.e. a 1.17.8 trunk client bootstraps a 1.17.7 env), which may or may not be compatible, and is a backwards version movement, though it matches up with the major/minor match you write of.

As John points out, tools matching is done based on major.minor version number. My understanding was that X.Y.Z should be compatible with X.Y.W where W != Z. So 1.17.6 clients should have been compatible with 1.17.7 tools. If we break compatibility, then we should have incremented the minor version number. Or, in this case, given we didn't want to do that, ensured 1.17.7 tools were backwards compatible with 1.17.6 clients. Note that we used to just match tools on major version. This was correctly deemed unworkable, and so the move to major.minor matching was at the time considered to be sufficient, so long as we coded for such compatibility. I think the core issue here was just a simple mistake and/or misunderstanding of the version compatibility policies in place. If the situation has highlighted the need for a change in policy, that's fine, but we then need to agree that we need to be stricter on tools matching.

There are still a few issues with simplestreams vs --upload-tools even when it works perfectly: additional steps and maintenance for private clouds, and zero visibility into the version chosen when upgrading juju (i.e. no dry run). Thankfully, as of last week, private cloud setup is publicly documented for initial bootstrap.
Still, all told, the appeal of "just use the binary I'm running" is that it's incredibly transparent and obvious what the result will be, and it always works; I've basically hardwired it to avoid ambiguity. I know many of us have spent a few hours helping users debug tools issues, but perhaps this is just the last step on the road to working reliably and transparently. In that case, I'd suggest that for dev versions we default to a major/minor/micro match, that stable keep the major/minor match (or do the same), and that we never go backwards on versions when bootstrapping. For the transparency aspect, having a flag/plugin/CLI command to find what version juju will pick for a given environment on bootstrap or upgrade would be good.

cheers, kapil

-- Juju-dev mailing list Juju-dev@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
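The major.minor tools-matching policy debated in this thread (patch levels may differ, major and minor must agree) can be sketched as follows. This is a toy shell model for illustration, not juju's actual version-matching code:

```shell
#!/bin/sh
# compatible_tools: succeed when a client version and a tools version share
# the same major.minor, the matching policy described in this thread.
compatible_tools() {
    client_mm=$(echo "$1" | cut -d. -f1,2)
    tools_mm=$(echo "$2" | cut -d. -f1,2)
    [ "$client_mm" = "$tools_mm" ]
}
```

Under this policy a 1.17.6 client accepts 1.17.7 tools but rejects 1.18.0 tools, which is exactly the gap (compatible-on-paper, untested-in-practice combinations) the thread is debating.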