Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Keith Bray
Has an email been posted to the [heat] community for their input?  Maybe I
missed it.

Thanks,
-Keith

On 6/2/16, 9:42 AM, "Hongbin Lu"  wrote:

>Madhuri,
>
>It looks like both of us agree on the idea of having a heterogeneous set of
>nodes. For the implementation, I am open to alternatives (I supported the
>work-around idea because I cannot think of a feasible implementation
>purely using Heat, unless Heat supports "for" logic, which is very unlikely
>to happen. However, if anyone can think of a pure Heat implementation, I
>am totally fine with that).
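>
>(For illustration: the closest pure-Heat construct is OS::Heat::ResourceGroup,
>which only stamps out identical copies of a single resource definition, so a
>heterogeneous bay needs a separate group per node class. A minimal sketch,
>with required server properties such as image/flavor omitted:
>
>  resources:
>    nodes_az1:
>      type: OS::Heat::ResourceGroup
>      properties:
>        count: 3
>        resource_def:
>          type: OS::Nova::Server
>          properties: {availability_zone: AZ1}
>    nodes_az2:
>      type: OS::Heat::ResourceGroup
>      properties:
>        count: 2
>        resource_def:
>          type: OS::Nova::Server
>          properties: {availability_zone: AZ2}
>
>Adding or resizing a node class means editing the template each time, which
>is exactly the missing "for" logic.)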
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: June-02-16 12:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi Hongbin,
>> 
>> I also like the idea of having a heterogeneous set of nodes, but IMO such
>> features should not be implemented in Magnum, as that would again deviate
>> Magnum from its roadmap. Instead, we should leverage Heat (or maybe
>> Senlin) APIs for the same.
>> 
>> I vote +1 for this feature.
>> 
>> Regards,
>> Madhuri
>> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, June 2, 2016 3:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Personally, I think this is a good idea, since it can address a set of
>> similar use cases like the ones below:
>> * I want to deploy a k8s cluster across 2 availability zones (in future,
>> 2 regions/clouds).
>> * I want to spin up N nodes in AZ1 and M nodes in AZ2.
>> * I want to scale the number of nodes in a specific AZ/region/cloud. For
>> example, add/remove K nodes in AZ1 (with AZ2 untouched).
>> 
>> The use cases above should be very common and universal. To
>> address them, Magnum needs to support provisioning a
>> heterogeneous set of nodes at deploy time and managing them at runtime.
>> It looks like the proposed idea (manually managing individual nodes or
>> individual groups of nodes) can address this requirement very well.
>> Besides the proposed idea, I cannot think of an alternative solution.
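>> 
>> For concreteness, managing a group of nodes per AZ might look something
>> like this (syntax purely illustrative; no such commands exist in Magnum
>> today):
>> 
>>   magnum node-group-create --bay mybay --availability-zone AZ1 --count 5
>>   magnum node-group-create --bay mybay --availability-zone AZ2 --count 3
>>   magnum node-group-update mybay AZ1 --count 7   # scale AZ1 only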
>> 
>> Therefore, I vote to support the proposed idea.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Hongbin Lu
>> > Sent: June-01-16 11:44 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> > managing the bay nodes
>> >
>> > Hi team,
>> >
>> > A blueprint was created for tracking this idea:
>> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> > nodes . I won't approve the BP until there is a team decision on
>> > accepting/rejecting the idea.
>> >
>> > From the discussion at the design summit, it looks like everyone is OK
>> > with the idea in general (with some disagreement on the API style).
>> > However, from the last team meeting, it looks like some people disagree
>> > with the idea fundamentally, so I re-raised this ML thread to re-discuss.
>> >
>> > If you agree or disagree with the idea of manually managing the Heat
>> > stacks (that contains individual bay nodes), please write down your
>> > arguments here. Then, we can start debating on that.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > > -Original Message-
>> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> > > Sent: May-16-16 5:28 AM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > The discussion at the summit was very positive around this
>> > > requirement, but as this change will have a large impact on Magnum,
>> > > it will need a spec.
>> > >
>> > > On the API side of things, I was thinking of a slightly more generic
>> > > approach to incorporate other lifecycle operations into the same API.
>> > > Eg:
>> > > magnum bay-manage <bay> <operation>
>> > >
>> > > magnum bay-manage <bay> reset --hard
>> > > magnum bay-manage <bay> rebuild
>> > > magnum bay-manage <bay> node-delete <node>
>> > > magnum bay-manage <bay> node-add --flavor <flavor>
>> > > magnum bay-manage <bay> node-reset <node>
>> > > magnum bay-manage <bay> node-list
>> > >
>> > > Tom
>> > >
>> > > From: Yuanying OTSUKA 
>> > > Reply-To: "OpenStack Development Mailing List (not for usage
>> > > questions)" 
>> > > Date: Monday, 16 May 2016 at 01:07
>> > > To: "OpenStack Development Mailing List (not for usage questions)"
>> > > 
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > Hi,
>> > >
>> > > I think users also want to specify which node to delete,
>> > > so we should manage “nodes” individually.
>> > >
>> > > For example:
>> > > $ magnum node-create --bay …
>> > > $ magnum node-list --bay
>> > > $ magnum node-delete $NODE_UUID
>> > >
>> > > A

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Keith Bray
Thanks Amrith… I’m glad to see it hasn’t changed much since I was involved
with Trove in its early days.  What you are describing makes sense, and I
view it as an LCD for managing common things across the database types,
not an LCD for the database interaction performed by the user/client
interacting with the application database.  This parallels where I think
Magnum should sit, which is general management of the COEs (reinitialize
bay, backup bay, maybe even common configuration of bays, etc. etc.), and
not an LCD for user/application interaction with the COEs.  It’s a grey
area for sure, as should “list containers” on a bay be a common
abstraction?  I think it’s too early to tell… and, to be clear for all
folks, I’m not opposed to the LCD existing.  I just don’t want it to be
required for the operator to run it at this time as part of Magnum given
how quickly the COE technology landscape is evolving.  So, optional
support, or separate API/Project make the most sense to me, and can always
be merged in as part of the Magnum project at a future date once the
technology landscape settles.  RDBMS has been fairly standard for a while.

Thanks for all the input.  The context helps.

-Keith



On 4/22/16, 6:40 AM, "Amrith Kumar"  wrote:

>For those interested in one aspect of this discussion (a common compute
>API for bare-metal, VM's and containers), there's a review of a spec in
>Trove [1], and a session at the summit [2].
>
>Please join [2] if you are able.
>
> Trove Container Support
> Thursday, April 28, 9:50am-10:30am
> Hilton Austin - MR 406
>
>Keith, more detailed answer to one of your questions is below.
>
>Thanks,
>
>-amrith
>
>
>[1] https://review.openstack.org/#/c/307883/4
>[2] 
>https://www.openstack.org/summit/austin-2016/summit-schedule/events/9150
>
>> -Original Message-
>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>> Sent: Thursday, April 21, 2016 5:11 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> 100% agreed on all your points… with the addition that the level of
>> functionality you are asking for doesn’t need to be baked into an API
>> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
>> providing the easy-button app deployment — Magnum isn’t and shouldn’t
>>be a
>> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
>> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
>> etc. can all provide this by pulling together the underlying API
>> services/technologies to give users the easy app deployment buttons.   I
>> don’t think Magnum should do everything (or next thing we know we’ll be
>> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve
>>gotten
>> carried away).  Hopefully my position is understood, and no problem if
>> folks disagree with me.  I’d just rather compartmentalize domain
>>concerns
>> and scope Magnum to something focused, achievable, agnostic, and easy
>>for
>> operators to adopt first. User traction will not be helped by increasing
>> service/operator complexity.  I’ll have to go look at the latest Trove
>>and
>> Sahara APIs to see how LCD is incorporated, and would love feedback from
>
>[amrith] Trove provides a common, database agnostic set of API's for a
>number of common database workflows including provisioning and lifecycle
>management. It also provides abstractions for common database topologies
>like replication and clustering, and management actions that will
>manipulate those topologies (grow, shrink, failover, ...). It provides
>abstractions for some common database administration activities like user
>management, database management, and ACL's. It allows you to take backups
>of databases and to launch new instances from backups. It provides a
>simple way in which a user can manage the configuration of databases (a
>subset of the configuration parameters that the database supports, the
>choice the subset being up to the operator) in a consistent way. Further
>it allows users to make configuration changes across a group of databases
>through the process of associating a 'configuration group' to database
>instances.
>
>The important thing about this is that there is a desire to provide all
>of the above capabilities through the Trove API and make these
>capabilities database agnostic. The actual database specific
>implementations are within Trove and largely contained in a database
>specific guest agent that performs the database specific actions to
>achieve the end result that the user requests.

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
100% agreed on all your points… with the addition that the level of
functionality you are asking for doesn’t need to be baked into an API
service such as Magnum.  I.e., Magnum doesn’t have to be the thing
providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
etc. can all provide this by pulling together the underlying API
services/technologies to give users the easy app deployment buttons.   I
don’t think Magnum should do everything (or next thing we know we’ll be
trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
carried away).  Hopefully my position is understood, and no problem if
folks disagree with me.  I’d just rather compartmentalize domain concerns
and scope Magnum to something focused, achievable, agnostic, and easy for
operators to adopt first. User traction will not be helped by increasing
service/operator complexity.  I’ll have to go look at the latest Trove and
Sahara APIs to see how LCD is incorporated, and would love feedback from
Trove and Sahara operators on the value vs. customer confusion or operator
overhead they get from those LCDs if they are required parts of the
services.

Thanks,
-Keith

On 4/21/16, 3:31 PM, "Fox, Kevin M"  wrote:

>There are a few reasons, but the primary one that affects me is that it's
>from the app-catalog use case.
>
>To gain user support for a product like OpenStack, you need users. The
>easier you make it to use, the more users you can potentially get.
>Traditional Operating Systems learned this a while back. Rather then make
>each OS user have to be a developer and custom deploy every app they want
>to run, they split the effort in such a way that Developers can provide
>software through channels that Users that are not skilled Developers can
>consume and deploy. The "App" culture in the mobile space it the epitome
>of that at the moment. My grandmother fires up the app store on her
>phone, clicks install on something interesting, and starts using it.
>
>Right now, that's incredibly difficult in OpenStack. You have to find the
>software you're interested in, figure out which components you're going to
>consume (nova, magnum, which COE, etc.), then use those APIs to launch
>some resource. Then after that resource is up, you have to switch
>tools and use those tools to further launch things, ansible or
>kubectl or whatever, then further deploy things.
>
>What I'm looking for is a unified enough API that a user can go into
>Horizon, go to the app catalog, find an interesting app, click
>install/run, and then get a link to a service they can click on and start
>consuming the app they wanted in the first place. The number of users who
>could use such an interface and consume OpenStack resources is several
>orders of magnitude greater than the number who can manually deploy
>something a la the procedure in the previous paragraph. More of that is
>good for Users, Developers, and Operators.
>
>Does that help?
>
>Thanks,
>Kevin
>
>
>
>From: Keith Bray [keith.b...@rackspace.com]
>Sent: Thursday, April 21, 2016 1:10 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>abstraction for all COEs
>
>If you don't want a user to have to choose a COE, can't we just offer an
>option for the operator to mark a particular COE as the "Default COE" that
>could be defaulted to if one isn't specified in the Bay create call?  If
>the operator didn't specify a default one, then the CLI/UI must submit one
>in the bay create call otherwise it would fail.
>
>Kevin, can you clarify why you have to write scripts to deploy a container
>to the COE?   It can be made easy for the user to extract all the
>runtime/env vars needed for a user to just do "docker run …"  and poof,
>container running on Swarm on a Magnum bay.  Can you help me understand
>the script part of it?   I don't believe container users want an
>abstraction between them and their COE CLI… but, what I believe isn't
>important.  What I do think is important is that we not require OpenStack
>operators to run that abstraction layer to be running a "magnum compliant"
>service.  It should either be an "optional" API add-on or a separate API
>or separate project.  If some folks want an abstraction layer, then great,
>feel free to build it and even propose it under the OpenStack ecosystem.
>But, that abstraction would be a "proxy API" over the COEs, and doesn't
>need to be part of Magnum's offering, as it would be targeted at the COE
>interactions and not the bay interactions (which is where Magnum scope

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
If you don't want a user to have to choose a COE, can't we just offer an
option for the operator to mark a particular COE as the "Default COE" that
could be defaulted to if one isn't specified in the Bay create call?  If
the operator didn't specify a default one, then the CLI/UI must submit one
in the bay create call otherwise it would fail.
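
For illustration, that operator knob could be as small as a config option
(section and option names hypothetical; no such setting exists in Magnum
today):

   # magnum.conf -- hypothetical operator default, for illustration only
   [bay]
   default_coe = kubernetes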

Kevin, can you clarify why you have to write scripts to deploy a container
to the COE?   It can be made easy for the user to extract all the
runtime/env vars needed for a user to just do "docker run …"  and poof,
container running on Swarm on a Magnum bay.  Can you help me understand
the script part of it?   I don't believe container users want an
abstraction between them and their COE CLI… but, what I believe isn't
important.  What I do think is important is that we not require OpenStack
operators to run that abstraction layer to be running a "magnum compliant"
service.  It should either be an "optional" API add-on or a separate API
or separate project.  If some folks want an abstraction layer, then great,
feel free to build it and even propose it under the OpenStack ecosystem.
But, that abstraction would be a "proxy API" over the COEs, and doesn't
need to be part of Magnum's offering, as it would be targeted at the COE
interactions and not the bay interactions (which is where Magnum scope is
best focused).  I don't think Magnum should play in both these distinct
domains (Bay interaction vs. COE interaction).  The former (bay
interaction) is an infrastructure cloud thing (fits well with OpenStack),
the latter (COE interaction) is an obfuscation of emerging technologies,
which gets into the Trap that Adrian mentioned.  The abstraction layer
API will forever and always be drastically behind in trying to keep up
with the COE innovation.

In summary, an abstraction over the COEs would be best served as a
different effort.  Magnum would be best focused on bay interactions and
should not try to pick a COE winner or require an operator to run a
lowest-common-denominator API abstraction.

Thanks for listening to my soap-box.
-Keith



On 4/21/16, 2:36 PM, "Fox, Kevin M"  wrote:

>I agree with that, and that's why providing some bare minimum abstraction
>will help the users not have to choose a COE themselves. If we can't
>decide, why can they? If all they want to do is launch a container, they
>should be able to script up "magnum launch-container foo/bar:latest" and
>get one. That script can then be relied upon.
>
>Today, they have to write scripts to deploy to the specific COE they have
>chosen. If they chose Docker, and something better comes out, they have
>to go rewrite a bunch of stuff to target the new, better thing. This puts
>a lot of work on others.
>
>Do I think we can provide an abstraction that prevents them from ever
>having to rewrite scripts? No. There are a lot of features in the COE
>world in flight right now and we don't want to solidify an API around them
>yet. We shouldn't even try that. But can we cover a few common things
>now? Yeah.
>
>Thanks,
>Kevin
>
>From: Adrian Otto [adrian.o...@rackspace.com]
>Sent: Thursday, April 21, 2016 7:32 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>abstraction for all COEs
>
>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>>wrote:
>>
>> Thierry Carrez wrote:
>>> Adrian Otto wrote:
 This pursuit is a trap. Magnum should focus on making native container
 APIs available. We should not wrap APIs with leaky abstractions. The
 lowest common denominator of all COEs is a remarkably low-value API
 that adds considerable complexity to Magnum that will not
 strategically advance OpenStack. If we instead focus our effort on
 making the COEs work better on OpenStack, that would be a winning
 strategy. Support and complement our various COE ecosystems.
>>
>> So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>> COEs work better on OpenStack' but I do dislike the part about COEs
>>(plural) because it is once again the old non-opinionated problem that
>>we (as a community) suffer from.
>>
>> Just my 2 cents, but I'd almost rather we pick one COE and integrate
>>that deeply/tightly with openstack, and yes if this causes some part of
>>the openstack community to be annoyed, meh, too bad. Sadly I have a
>>feeling we are hurting ourselves by continuing to try to be everything
>>and not picking anything (it's a general thing we, as a group, seem to
>>be good at, lol). I mean I get the reason to just support all the
>>things, but it feels like we as a community could just pick something,
>>work together on figuring out how to pick one, using all these bright
>>leaders we have to help make that possible (and yes this might piss some
>>people off, too bad). Then work toward making that something great and
>>move on…
>
>The key issue preventing the selection of only one

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
The work on the plug-ins can still be done by Magnum core contributors (or
anyone). My point is that the work doesn’t have to be code-coupled to
Magnum except via the plug-in interface, which, like Heat resources,
should be relatively straightforward. Creating the plug-in framework in
this way allows for leverage of work by non-Magnum contributors and re-use
of Chef/Ansible/Heat/PickYourFavoriteHere tool for infra configuration and
orchestration.  

-Keith

On 4/20/16, 6:03 PM, "Hongbin Lu"  wrote:

>
>
>> -Original Message-----
>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>> Sent: April-20-16 6:13 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> Magnum doesn't have to preclude tight integration for single COEs you
>> speak of.  The heavy lifting of tight integration of the COE into
>> OpenStack (so that it performs optimally with the infra) can be modular
>> (where the work is performed by plug-in models to Magnum, not performed
>> by Magnum itself). The tight integration can be done by leveraging
>> existing technologies (Heat and/or choose your DevOps tool of choice:
>> Chef/Ansible/etc.). This allows interested community members to focus on
>> tight integration of whatever COE they want, focusing specifically on
>
>I agree that tight integration can be achieved by a plugin, but I think
>the key question is who will do the work. If tight integration needs to
>be done, I wonder why it is not part of the Magnum efforts. From my point
>of view, pushing the work out doesn't seem to address the original pain,
>which is some users don't want to explore the complexities of individual
>COEs.
>
>> the COE integration part, contributing that integration focus to Magnum
>> via plug-ins, without having to actually know much about Magnum, but
>> instead
>> contribute to the COE plug-in using DevOps tools of choice.   Pegging
>> Magnum to one-and-only-one COE means there will be a Magnum2, Magnum3,
>> etc. project for every COE of interest, all with different ways of
>> kicking off COE management.  Magnum could unify that experience for
>> users and operators, without picking a winner in the COE space — this
>> is just like Nova not picking a winner between VM flavors or OS types.
>> It just facilitates instantiation and management of things.  Opinion
>> here:  The value of Magnum is in being a light-weight/thin API,
>> providing modular choice and plug-ability to COE provisioning and
>> management, thereby providing operators and users choice of COE
>> instantiation and management (via the bay concept), where each COE can
>> be as tightly or loosely integrated as desired by different plug-ins
>> contributed to perform the COE setup and configurations.  So, Magnum
>> could have two or more swarm plug-in options contributed to the
>> community. One overlays generic swarm on VMs.
>> The other swarm plug-in could instantiate swarm tightly integrated to
>> neutron, keystone, etc. onto bare metal.  Magnum just facilitates a
>> plug-in model with a thin API to offer choice of COE instantiation and
>> management.
>> The plug-in does the heavy lifting using whatever methods desired by
>> the curator.
>> 
>> That's my $0.02.
>> 
>> -Keith
>> 
>> On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:
>> 
>> >Thierry Carrez wrote:
>> >> Adrian Otto wrote:
>> >>> This pursuit is a trap. Magnum should focus on making native
>> >>> container APIs available. We should not wrap APIs with leaky
>> >>> abstractions. The lowest common denominator of all COEs is a
>> >>> remarkably low-value API that adds considerable complexity to Magnum
>> >>> that will not strategically advance OpenStack. If we instead focus
>> >>> our effort on making the COEs work better on OpenStack, that would
>> >>> be a winning strategy. Support and complement our various COE
>> >>> ecosystems.
>> >
>> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and
>> 'making
>> >COEs work better on OpenStack' but I do dislike the part about COEs
>> >(plural) because it is once again the old non-opinionated problem that
>> >we (as a community) suffer from.
>> >
>> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
>> >that deeply/tightly with openstack, and yes if this causes some part
>> of
>> >the openstack community to 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
+1

From: "Fox, Kevin M" mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, April 20, 2016 at 6:14 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

I think Magnum is much closer to Sahara or Trove in its workings. Heat is 
orchestration; that's what the COE does.

Sahara has plugins to deploy various Hadoop-like clusters, get them 
assembled into something useful, and has a few abstraction APIs like "submit a 
job to the deployed hadoop cluster queue."

Trove has plugins to deploy various databasey things, both SQL and 
NoSQL. It has a few abstractions over all of them for cluster maintenance, 
backups, and db and user creation.

If all Magnum did was deploy a COE, you could potentially just use Heat to do 
that.

What I want to do is have Heat hooked in closely enough through Magnum that 
Heat templates can deploy COE templates through Magnum Resources. Heat tried to 
do that with a docker resource driver directly, and it's messy, racy, and 
doesn't work very well. Magnum's in a better position to establish a 
communication channel between Heat and the COE due to its back channel into the 
vms, bypassing Neutron network stuff.
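
For what it's worth, Heat already has basic Magnum resources (OS::Magnum::Bay 
and OS::Magnum::BayModel), so the bay half of this looks roughly like the 
sketch below (property list abbreviated); the missing piece is a resource that 
can push a COE-level template, e.g. a Kubernetes manifest, into that bay:

  resources:
    my_bay:
      type: OS::Magnum::Bay
      properties:
        baymodel: my-baymodel
        node_count: 3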

Thanks,
Kevin

From: Georgy Okrokvertskhov 
[gokrokvertsk...@mirantis.com]
Sent: Wednesday, April 20, 2016 3:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum is focused on installation and management of COEs, it will be 
unclear how much it differs from Heat and other generic orchestration.  
It looks like most of the current Magnum functionality is provided by Heat. 
A Magnum focus on deployment will potentially lead to another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for 
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray 
<keith.b...@rackspace.com> wrote:
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE into
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself). The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc.). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only-one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space — this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc. onto bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow" 
<harlo...@fastmail.com> wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-20 Thread Keith Bray
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE into
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself). The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc.). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only-one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space — this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc. onto bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>COEs work better on OpenStack' but I do dislike the part about COEs
>(plural) because it is once again the old non-opinionated problem that
>we (as a community) suffer from.
>
>Just my 2 cents, but I'd almost rather we pick one COE and integrate
>that deeply/tightly with openstack, and yes if this causes some part of
>the openstack community to be annoyed, meh, too bad. Sadly I have a
>feeling we are hurting ourselves by continuing to try to be everything
>and not picking anything (it's a general thing we, as a group, seem to
>be good at, lol). I mean I get the reason to just support all the
>things, but it feels like we as a community could just pick something,
>work together on figuring out how to pick one, using all these bright
>leaders we have to help make that possible (and yes this might piss some
>people off, too bad). Then work toward making that something great and
>move on...
>
>>
>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>>
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>>
>


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-19 Thread Keith Bray
Sure… I can clarify with a few additional thoughts:

1) I wouldn’t recommend that it be required for the operator to offer
this API. Representing a view of both providing managed services for
private cloud customer-on-premise installations of upstream OpenStack and
as a business owner with responsibility to operate Magnum for internal
usage within my own employer, I would prefer not to have to operate and
service a unified abstraction API that obfuscates all the benefit of
choice of the native COEs, which is the choice being provided to the end
user who is specifically selecting one COE over another when they
instantiate a bay (unless they pick the “Default” operator choice).  Maybe
a unified abstraction API is a separate project?  OpenStack services get
complicated very quickly and try to do too much.  At a minimum, I would
recommend it be an optional API, not required, and any overhead of
database or other necessary service components should be minimized to not
impact operators who do not want to offer it because it negates the point
of COE choice.  My ideal state is it would be a separate project entirely.

2) I’d like for folks who want the lowest common denominator API to chime
in with why they want it, and whether they need it to be part of Magnum or
not. I don’t intend to argue with folks who want it… I assume their
reasons are justified, but I would want to find out why it needs to be
part of the Magnum API. Offering choice in COEs and then getting out of
the way (which I believe Magnum should do) is at odds with abstracting the
differentiation of the COE choice via a unified API.  If there aren’t good
arguments for the "why a unified API needs to be integrated in Magnum",
then have it be separate from a code perspective and not required for
running the Magnum service.  When we talk about APIs and whether a service
is supported by one vendor or another, it is generally easiest to think
about the entire API; The API is either supported in its entirety or the
service isn’t compatible with OpenStack.  If some folks believe a lowest
common denominator API should exist, but there aren’t compelling arguments
for why it must be a required part of the Magnum API then we should
probably consider them as separate projects.  At this point, I am not
compelled to be in favor of integrating a unified API in Magnum when doing
so is a fundamentally different direction than the route Magnum has been
headed down.  By offering choice of COE, and trying not to get in
the way of that, Magnum provides relevant choice of platform in a very
rapidly changing technology landscape.

Thank you for asking for clarification.  I’d really like to hear thoughts
from anyone who wants the unified API as to why it would need to be part
of Magnum, especially when doing so means chasing rapidly changing
technologies (hard to keep up with continued abstraction) and not offering
the deep value of their differentiation.

Kind regards,
-Keith



On 4/19/16, 8:58 PM, "Hongbin Lu"  wrote:

>I am going to clarify one thing. Users will always have access to native
>APIs provided by individual COEs, regardless of the existence of the
>common abstraction. In other words, the proposed common abstraction layer
>is an addition, not a replacement.
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>> Sent: April-19-16 7:17 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> I would recommend against implementing a lowest common denominator API
>> for the COEs "right now."   It's too early to tell if the COEs are going
>> to be seen as a commodity (where in the long run they may all perform
>> relatively equal for the majority of workloads — in which case why do
>> you care to have choice in COE?), or continue to be
>> specialized/differentiated.  If you assume having choice to provision
>> more than one COE using the same system is valuable, then it is logical
>> that users value the differentiation in the COEs in some way. If they
>> are differentiated, and you value that, then you likely want to avoid
>> the lowest-common-denominator API because that abstracts you from the
>> differentiation that you value.
>> 
>> Kind regards,
>> -Keith
>> 
>> 
>> 
>> On 4/19/16, 10:18 AM, "Hongbin Lu"  wrote:
>> 
>> >Sorry, it is too late to adjust the schedule now, but I don't mind
>> >having a pre-discussion here. If you have opinions/ideas on this topic
>> >but cannot attend the session [1], we'd like to have your inputs in this
>> >ML or in the etherpad [2]. This will help to set the stage for the session.

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-19 Thread Keith Bray
I would recommend against implementing a lowest common denominator API for
the COEs "right now."   It's too early to tell if the COEs are going to be
seen as a commodity (where in the long run they may all perform relatively
equal for the majority of workloads — in which case why do you care to
have choice in COE?), or continue to be specialized/differentiated.  If
you assume having choice to provision more than one COE using the same
system is valuable, then it is logical that users value the
differentiation in the COEs in some way. If they are differentiated, and
you value that, then you likely want to avoid the
lowest-common-denominator API because that abstracts you from the
differentiation that you value.

Kind regards,
-Keith



On 4/19/16, 10:18 AM, "Hongbin Lu"  wrote:

>Sorry, it is too late to adjust the schedule now, but I don't mind
>having a pre-discussion here. If you have opinions/ideas on this topic but
>cannot attend the session [1], we'd like to have your inputs in this ML or
>in the etherpad [2]. This will help to set the stage for the session.
>
>For background, Magnum supports provisioning Container Orchestration
>Engines (COEs), including Kubernetes, Docker Swarm and Apache Mesos, on
>top of Nova instances. After the provisioning, users need to use the
>native COE APIs to manage containers (and/or other COE resources). In the
>Austin summit, we will have a session to discuss if it makes sense to
>build a common abstraction layer for the supported COEs. If you think it
>is a good idea, it would be great to elaborate on the details. For example,
>answering the following questions could be useful:
>* Which abstraction(s) you are looking for (i.e. container, pod)?
>* What are your use cases for the abstraction(s)?
>* How the native APIs provided by individual COEs doesn't satisfy your
>requirements?
>
>If you think it is a bad idea, I would love to hear your inputs as well:
>* Why it is bad?
>* If there is no common abstraction, how to address the pain of
>leveraging native COE APIs as reported below?
>
>[1] 
>https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102
>[2] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>> Sent: April-18-16 6:13 PM
>> To: OpenStack Development Mailing List (not for usage questions);
>> Flavio Percoco
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> I'd love to attend, but this is right on top of the app catalog meeting.
>> I think the app catalog might be one of the primary users of a cross
>> COE API.
>> 
>> At minimum we'd like to be able to store URLs for
>> Kubernetes/Swarm/Mesos templates and have an API to kick off a workflow
>> in Horizon to have Magnum start up a new instance of the template
>> the user selected.
>> 
>> Thanks,
>> Kevin
>> 
>> From: Hongbin Lu [hongbin...@huawei.com]
>> Sent: Monday, April 18, 2016 2:09 PM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
>> questions)
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> Hi all,
>> 
>> Magnum will have a fishbowl session to discuss if it makes sense to
>> build a common abstraction layer for all COEs (kubernetes, docker swarm
>> and mesos):
>> 
>> https://www.openstack.org/summit/austin-2016/summit-
>> schedule/events/9102
>> 
>> Frankly, this is a controversial topic since I heard agreements and
>> disagreements from different people. It would be great if all of you
>> can join the session and share your opinions and use cases. I wish we
>> will have a productive discussion.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Flavio Percoco [mailto:fla...@redhat.com]
>> > Sent: April-12-16 8:40 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Cc: foundat...@lists.openstack.org
>> > Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> > One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> >
>> > On 11/04/16 16:53 +, Adrian Otto wrote:
>> > >Amrith,
>> > >
>> > >I respect your point of view, and agree that the idea of a common
>> > >compute API is attractive... until you think a bit deeper about what
>> > >that would mean. We seriously considered a "global" compute API at the
>> > >time we were first contemplating Magnum. However, what we came to
>> > >learn through the journey of understanding the details of how such a
>> > >thing would be implemented is that such an API would either be (1) the
>> > >lowest common denominator (LCD) of all compute types, or (2) an
>> > >exceedingly complex

[openstack-dev] [app-catalog] [solum] Base Image tagging vs. App tagging

2015-06-18 Thread Keith Bray
Hi folks,

I had to leave the app-catalog IRC meeting early today, but I read back through 
the logs.   I wanted to bring up a point about Apps vs. Components, and 
determination of what is an app and tagging.  I don't think it's any more black 
and white with Solum language packs than it is with Glance images.

As an example, a solum user can create a language pack called Ubuntu, LAMP, 
Wordpress, DockerRegistry, or anything else. In fact, any Docker image in the 
public Docker Registry could become a Solum language pack.   A language pack 
can be a base run-time where the user then layers app code on top, or it can be 
a run-time with application code already installed that the user just layers on 
changes to the app code.  Applications and application components can be 
pre-installed on solum language packs.   Solum layers on the controlled 
workflow to integrate a user's CI/CD options of choice, where Solum's 
controlled workflow instills the CI/CD gates (e.g. tests must pass before we 
push your app live to production) and ensures proper Heat template selection to 
match the appropriate reference architecture for the type of app being deployed.
Think of Solum as integrating Heat, Auto-scale, Git, Mistral, and up-leveling 
application deployment to the cloud such that an end-user just needs to specify 
a language pack, a git repo, and optionally a test command and application run 
command.   If a base language pack has everything needed to get started, it can 
be used standalone with an empty git repo, or Solum could set up a git repo 
automatically with the base app code (e.g. Wordpress).
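
In CLI terms, the end-to-end flow above collapses to something like this for 
the end user (command syntax illustrative rather than exact):

   $ solum languagepack create wordpress-lp https://example.com/lp-repo.git
   $ solum app create --languagepack wordpress-lp --repo https://example.com/app.git
   $ solum app deploy my-app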

So, I want to challenge the notion that it's a clear line for solum language 
packs to not be tagged apps and that glance images are the only artifacts in 
the gray area.

Thanks,
-Keith


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Keith Bray
Hi Kevin,

We absolutely envision languagepack artifacts being made available via
apps.openstack.org (ignoring for a moment that the name may not be a
perfect fit, particularly for things like vanilla glance images ... Is it
an OS or an App? ...  catalog.openstack.org might be more fitting).
Anyway, there are two stages for language packs: unbuilt and built.  If
it's in an unbuilt state, then it's really a Dockerfile + any accessory
files that the Dockerfile references.   If it's in a built state, then
it's a Docker image (same as what is found on Dockerhub, I believe).  I
think there will need to be more discussion to know what users prefer,
built vs. unbuilt, or both options (where unbuilt is often a collection of
files, best managed in a repo like github, vs. built, which is best
provided as direct links from a single source like Dockerhub).

-Keith

On 6/17/15 1:58 PM, "Fox, Kevin M"  wrote:

>This question may be off on a tangent, or may be related.
>
>As part of the application catalog project, (http://apps.openstack.org/)
>we're trying to provide globally accessible resources that can be easily
>consumed in OpenStack Clouds. How would these global Language Packs fit
>in? Would the url record in the app catalog be required to point to an
>Internet facing public Swift system then? Or, would it point to the
>source git repo that Solum would use to generate the LP still?
>
>Thanks,
>Kevin
>
>From: Randall Burt [randall.b...@rackspace.com]
>Sent: Wednesday, June 17, 2015 11:38 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
>for operatorlanguagepacks
>
>Yes. If an operator wants to make their LP publicly available outside of
>Solum, I was thinking they could just make GET's on the container public.
>That being said, I'm unsure if this is realistically do-able if you still
>have to have an authenticated tenant to access the objects. Scratch that;
>http://blog.fsquat.net/?p=40 may be helpful.
>
>On Jun 17, 2015, at 1:27 PM, Adrian Otto 
> wrote:
>
>> To be clear, Randall is referring to a swift container (directory).
>>
>> Murali has a good idea of attempting to use swift client first, as it
>>has performance optimizations that can speed up the process more than
>>naive file transfer tools. I did mention to him that wget does have a
>>retiree feature, and that we could see about using curl instead to allow
>>for chunked encoding as additional optimizations.
>>
>> Randall, are you suggesting that we could use swift client for both
>>private and public LP uses? That sounds like a good suggestion to me.
>>
>> Adrian
>>
>>> On Jun 17, 2015, at 11:10 AM, Randall Burt
>>> wrote:
>>>
>>> Can't an operator make the target container public therefore removing
>>>the need for multiple access strategies?
>>>
>>>  Original message 
>>> From: Murali Allada
>>> Date:06/17/2015 11:41 AM (GMT-06:00)
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> Subject: [openstack-dev] [Solum] Supporting swift downloads for
>>>operator languagepacks
>>>
>>> Hello Solum Developers,
>>>
>>> When we were designing the operator languagepack feature for Solum, we
>>>wanted to make use of public urls to download operator LPs, such as
>>>those available for CDN-backed swift containers we have at Rackspace,
>>>or any publicly accessible url. This would mean that when a user
>>>chooses to build applications on top of a languagepack provided by
>>>the operator, we use a url to 'wget' the LP image.
>>>
>>> Recently, we have started noticing a number of failures because of
>>>corrupted docker images downloaded using 'wget'. The docker images work
>>>fine when we download them manually with a swift client and use them.
>>>The corruption seems to be happening when we try to download a large
>>>image using 'wget' and there are dropped packets or intermittent
>>>network issues.
>>>
>>> My thinking is to start using the swift client to download operator
>>>LPs by default instead of wget. The swift client already implements
>>>retry logic, downloading large images in chunks, etc. This means we
>>>would not get the niceties of using publicly accessible urls. However,
>>>the feature will be more reliable and robust.
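>>>
>>> Roughly the difference, for illustration (names made up; the exact
>>>swiftclient flags should be verified against the version we deploy):
>>>
>>>   # today: plain single-shot fetch; a drop mid-transfer corrupts the image
>>>   wget https://cdn.example.com/lp-container/my-lp.tar.gz
>>>
>>>   # proposed: the swift client downloads in chunks and retries on failure
>>>   swift --retries 5 download lp-container my-lp.tar.gz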
>>>
>>> The implementation would be as follows:
>>>  • We'll use the existing service tenant configuration available
>>>in the solum config file to authenticate and store operator
>>>languagepacks using the swift client. We were using a different tenant
>>>to build and host LPs, but now that we require the tenant's credentials
>>>in the config file, it's best to reuse the existing service tenant
>>>creds. Note: If we don't, we'll have 3 separate tenants to maintain:
>>>  • Service tenant
>>>  • Operator languagepack tenant
>>>  • Global admin tenant
>>>  • I'll keep the option to download the operator languagepacks
>>>from a 

[openstack-dev] [Solum] Roadmap additions

2015-06-17 Thread Keith Bray
Hi Adrian,

As an FYI, I took a shot at updating the OpenStack wiki view of the Roadmap for 
Solum per IRC and developer collaboration, where good progress has been made 
over the last cycle delivering on features that make Solum usable in a 
production OpenStack system environment.  This is, of course, one view of a 
feature set for the next milestone.  As always, community input, collaboration, 
and contribution is welcome.

https://wiki.openstack.org/wiki/Solum/HighLevelRoadmap

Kind regards,
-Keith


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
CLIs should get versioned like any other contract and allow for change (not be 
restricted in stone to what's already out there).  With Solum, we have less to 
worry about as we are at the early phases of adoption and growth.  To someone's 
earlier point, you can have --non-interactive flags which allow shell 
scripting, or --interactive which provides a more positive human interaction 
experience (defaulting either way, but my $0.02 is you default to human 
interaction, as even the shell scripters start there to learn/test the 
capabilities manually before scripting).  I think projects can solve for both, 
it just takes a willingness to do so.  To the extent that can be tackled in the 
new unified OpenStack client, that would be fantastic!
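
Concretely, the pattern I have in mind (flags and prompt are hypothetical, not 
current Solum CLI syntax):

   $ solum app delete my-app
   This will also delete the app's logs. Continue? [y/N]

   $ solum app delete my-app --non-interactive   # script-friendly: never prompts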

-Keith

From: "Fox, Kevin M" <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 7:05 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

It sounded like the push was: CLIs for interactive use; if you want to script, use 
python. My assertion was: developers script in python, users/admins script in 
shell usually. Not arguing against making the CLI user experience more pleasant 
for interactive users, but realize shell is the way most users/admins will 
script since that is what they are accustomed to.

Now, unfortunately there's probably a lot of scripts out there today, and if 
you make things more interactive, you risk breaking them horribly if you start 
requiring them to be interactive by default  :/ That's not an easily solved 
problem. The best way I can think of is to fix it in the new unified openstack client, 
and give the interactive binary a new name to run interactive mode. Shell 
scripts can continue to use the existing stuff without fear of breakage.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 4:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Kevin, I agree with your break out, except I think you are missing a 3rd 
category.   100's of public cloud support specialists, developers, and product 
management folks use the CLI without scripts every day in supporting the 
OpenStack services and customers.  Using and interacting with the CLI is how 
folks learn the OpenStack services. The CLIs can be painful for those users 
when they actually want to learn the service, not shell script around it.

-Keith

From: "Fox, Kevin M" <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 6:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

-1. There are developers and there are users/admins. The former tend to write 
in python; the latter, shell.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
Kevin, I agree with your break out, except I think you are missing a 3rd 
category. Hundreds of public cloud support specialists, developers, and product 
management folks use the CLI without scripts every day in supporting the 
OpenStack services and customers.  Using and interacting with the CLI is how 
folks learn the OpenStack services. The CLIs can be painful for those users 
when they actually want to learn the service, not shell-script around it.

-Keith

From: Fox, Kevin M [kevin@pnnl.gov]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Tuesday, June 16, 2015 6:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

-1. There are developers and there are users/admins. The former tend to write 
in python. the latter, shell.

Thanks,
Kevin
____
From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto [adrian.o...@rackspace.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
That makes sense Randall... a sort of "Novice mode" vs. "Expert mode."
I definitely want to see OpenStack get easier to use, and lower the
barrier to entry. If projects only cater to developers, progress will be
slower than it could be.

-Keith

On 6/16/15 4:52 PM, "Randall Burt"  wrote:

>While I agree with what you're saying, the way the OpenStack clients are
>traditionally written/designed, the CLI *is* the SDK for those users who
>want to do scripting in a shell rather than in Python. If we go with your
>suggestion, we'd probably also want to have the ability to suppress those
>prompts for folks that want to shell script.
>
>On Jun 16, 2015, at 4:42 PM, Keith Bray 
> wrote:
>
>> Isn't that what the SDK is for?   To chip in with a Product Management
>>type hat on, I'd think the CLI should be primarily focused on user
>>experience interaction, and the SDK should be primarily targeted for
>>developer automation needs around programmatically interacting with the
>>service.   So, I would argue that the target market for the CLI should
>>not be the developer who wants to script.
>> 
>> -Keith
>> 
>> From: Adrian Otto 
>> Reply-To: "OpenStack Development Mailing List (not for usage
>>questions)" 
>> Date: Tuesday, June 16, 2015 12:24 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we
>>delete an app?
>> 
>>> Interactive choices like that one can make it more confusing for
>>>developers who want to script with the CLI. My preference would be to
>>>label the app delete help text to clearly indicate that it deletes logs
>> 




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto [adrian.o...@rackspace.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-15 Thread Keith Bray
Regardless of what the API defaults to, could we have the CLI prompt/warn so 
that the user easily knows that both options exist?  Is there a precedent 
within OpenStack for a similar situation?

E.g.:
> solum app delete MyApp
  Do you want to also delete your logs? (default is Yes): [YES/no]
  NOTE: if you choose No, application logs will remain on your account.
  Depending on your service provider, you may incur on-going storage charges.
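
(A sketch of how such a prompt could coexist with shell scripting, assuming 
hypothetical --keep-logs and --yes flags; these flag names are illustrative, 
not the actual Solum CLI:)

import argparse
import sys

parser = argparse.ArgumentParser(prog="solum")
parser.add_argument("name", help="application to delete")
parser.add_argument("--keep-logs", action="store_true",
                    help="keep application logs after deletion")
parser.add_argument("--yes", action="store_true",
                    help="assume Yes to all prompts (for shell scripts)")
args = parser.parse_args()

delete_logs = not args.keep_logs
if delete_logs and not args.yes and sys.stdin.isatty():
    reply = input("Do you want to also delete your logs? [YES/no]: ")
    delete_logs = reply.strip().lower() not in ("n", "no")
if not delete_logs:
    print("NOTE: application logs will remain on your account; your "
          "provider may charge for ongoing storage.")
# ... then call the Solum API with args.name and delete_logs ...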

Thanks,
-Keith

From: Devdatta Kulkarni [devdatta.kulka...@rackspace.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Monday, June 15, 2015 9:56 AM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?


Yes, the log deletion should be optional.

The question is what should be the default behavior. Should the default be to 
delete the logs and provide a flag to keep them, or keep the logs by default 
and provide an override flag to delete them?

Delete-by-default is consistent with the view that when an app is deleted, all 
its artifacts are deleted (the app's meta data, the deployment units (DUs), and 
the logs). This behavior is also useful in our current state when the app 
resource and the CLI are in flux. For now, without a way to specify a flag, 
either to delete the logs or to keep them, delete-by-default behavior helps us 
clean all the log files from the application's cloud files container when an 
app is deleted.

This is very useful for our CI jobs. Without this, we end up with lots of log 
files in the application's container, and have to resort to separate scripts to 
delete them after an app is deleted.


Once the app resource and CLI stabilize it should be straightforward to change 
the default behavior if required.


- Devdatta



From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Friday, June 12, 2015 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that determines 
whether your logs are deleted or not by default (set to off initially), and an 
optional parameter to our DELETE calls that allows for the opposite action from 
the default to be specified if the user wants to override it at the time of the 
deletion. Thoughts?
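
(To make the proposal concrete, a minimal sketch of such a call; the endpoint 
and the delete_logs parameter name are hypothetical, not the actual Solum API:)

import requests

# The tenant-level default lives server-side; the caller can override
# it per request with an explicit query parameter.
resp = requests.delete(
    "https://solum.example.com/v1/apps/MyApp",   # hypothetical endpoint
    params={"delete_logs": "false"},             # hypothetical override parameter
    headers={"X-Auth-Token": "..."},             # placeholder token
)
resp.raise_for_status()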

Thanks,

Adrian


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
Maybe.  I'm not up to speed on defcore/refstack requirements. But, to put
the question on the table, do folks want the OpenStack App Catalog to only
support the lowest common denominator of artifacts and cloud
capabilities, or instead allow for showcasing all that is possible when
using cloud technology that major vendors have adopted but is not yet
part of refstack/defcore?

-Keith

On 5/27/15 6:58 PM, "Fox, Kevin M"  wrote:

>Should RefStack be involved here? To integrate tightly with the App
>Catalog, the Cloud Provider would be required to run RefStack against
>their cloud, the results getting registered to an App Catalog service in
>that Cloud. The App Catalog UI in Horizon could then filter out from the
>global App Catalog any apps that would be incompatible with their cloud.
>I think the Android app store works kind of like that...
>
>Thanks,
>Kevin
>____
>From: Keith Bray [keith.b...@rackspace.com]
>Sent: Wednesday, May 27, 2015 4:41 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps
>
>In-line responses.  Thanks for chipping in Monty.
>-Keith
>
>On 5/27/15 6:03 PM, "Monty Taylor"  wrote:
>
>>On 05/27/2015 06:35 PM, Keith Bray wrote:
>>> Joe, regarding apps-catalog for any app deployable on OpenStack
>>> (regardless of deployment technology), my two cents is that is a good
>>> idea.  I also believe, however, that the app-catalog needs to evolve
>>> first with features that make it super simple to understand which
>>> artifacts will work on which clouds (out-of-the-box) vs. needing
>>> additional required dependencies or cloud operator software.   My
>>> guess is there will be a lot of discussions related to defcore,
>>> and/or tagging artifacts with known public/private cloud
>>> distributions  the artifacts are known to work on. To the extent an
>>> openstack operator or end user has to download/install 3rd party or
>>> stack forge or non defcore openstack components in order to deploy an
>>> artifact, the more sophisticated and complicated it becomes and we
>>> need a way to depict that for items shown in the catalog.
>>>
>>> For example, I'd like to see a way to tag items in the catalog as
>>> known-to-work on HP or Rackspace public cloud, or known to work on
>>> RDO.  Even a basic Heat template optimized for one cloud won't
>>> necessarily work on another cloud without modification.
>>
>>That's an excellent point - I have two opposing thoughts to it.
>>
>>a) That we have to worry about the _vendor_ side of that is a bug and
>>should be fixed. Since all clouds already have a service catalog,
>>mapping out a "this app requires trove" should be easy enough. The other
>>differences are ... let's just say as a user they do not provide me value
>
>I wouldn't call it a bug.  By design, Heat is pluggable with different
>resource implementations. And, different cloud run different plug-ins,
>hence a template written for one cloud won't necessarily run on another
>cloud unless that cloud also runs the same Heat plug-ins.
>
>>
>>b) The state you describe is today's reality, and as much as wringing
>>out hands and spitting may feel good, it doesn't get us anywhere. You
>>do, in _fact_ need to know those things to use even basic openstack
>>functions today- so we might as well deal with it.
>
>I don't buy the argument of you need to know those things to make
>openstack function, because:  The catalog _today_ is targeted more at the
>end user, not the operator.  The end user shouldn't need to know whether
>trove is or is not set up, let alone how to do it.  Maybe that isn't the
>intention of the catalog, and probably worth sorting out.
>
>>
>>I'll take this as an opportunity to point people towards work in this
>>direction grew out of a collaboration between infra and ansible:
>>
>>http://git.openstack.org/cgit/openstack-infra/shade/
>>and
>>http://git.openstack.org/cgit/openstack/os-client-config
>>
>>os-client-config knows about the differences between the clouds. It has,
>>sadly, this file:
>>
>>http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_config/vendors.py
>>
>>Which lists as much knowledge as we've figured out so far about the
>>differences between clouds.
>>
>>shade presents business logic to users so that they don't have to know.
>>For instance:
>
>I'm all

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
Kevin, I like your vision.  Today we have images, Heat templates, and Murano 
packages.  What are your thoughts on how to manage additions?  Should it be 
restricted to things in the OpenStack namespace under the big tent?  E.g., I'd 
like to see Solum language packs get added to the app-catalog.  Solum is 
currently in stackforge, but I believe it meets all the criteria to enter the 
OpenStack namespace.  We plan to propose it soon. Folks from various companies 
did a lot of work over the past few summits to clearly distinguish Heat, 
Murano, Mistral, and Solum as differentiated enough to co-exist and add value 
to the ecosystem.

Thanks,
-Keith

From: Fox, Kevin M [kevin@pnnl.gov]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Wednesday, May 27, 2015 6:27 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

I'd say, tools that utilize OpenStack, like the knife openstack plugin, are not 
something that you would probably go to the catalog to find. And also, the 
recipes that you would use with knife would not be specific to OpenStack in any 
way, so you would just be duplicating the config management system's own 
catalog in the OpenStack catalog, which would be error prone. Duplicating all 
the chef recipes, docker containers, puppet stuff, and so on is a lot of 
work...

The vision I have for the Catalog (I can be totally wrong here, let's please 
discuss) is a place where users (non computer scientists) can visit after 
logging into their Cloud, pick some app of interest, hit launch, and optionally 
fill out a form. They then have a running piece of software, provided by the 
greater OpenStack Community, that they can interact with, and their Cloud can 
bill them for. Think of it as the Apple App Store for OpenStack.  Having a 
reliable set of deployment engines (Murano, Heat, whatever) involved is 
critical to the experience I think. Having too many of them though will mean it 
will be rare to have a cloud that has all of them, restricting the utility of 
the catalog. Too much choice here may actually be a detriment.

If chef, or whatever other configuration management system, became multitenant 
aware, integrated into OpenStack, and provided by the Cloud providers, then 
maybe it would fit into the app store vision?

Thanks,
Kevin

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Wednesday, May 27, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps



On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo [ca...@mirantis.com] wrote:
I want to start off by thanking everyone who joined us at the first
working session in Vancouver, and those folks who have already started
adding content to the app catalog. I was happy to see the enthusiasm
and excitement, and am looking forward to working with all of you to
build this into something that has a major impact on OpenStack
adoption by making it easier for our end users to find and share the
assets that run on our clouds.

Great job. This is very exciting to see, I have been wanting something like 
this for some time now.


The catalog: http://apps.openstack.org
The repo: https://github.com/stackforge/apps-catalog
The wiki: https://wiki.openstack.org/wiki/App-Catalog

Please join us via IRC at #openstack-app-catalog on freenode.

Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
Serg Melikyan.

I’ve started a doodle poll to vote on the initial IRC meeting
schedule, if you’re interested in helping improve and build up this
catalog please vote for the day/time that works best and get involved!
http://doodle.com/vf3husyn4bdkui8w

At the summit we managed to get one planning session together. We
captured that on etherpad[1], but I’d like to highlight here a few of
the things we talked about working on together in the near term:

-More information around asset dependencies (like clarifying
requirements for Heat templates or Glance images for instance),
potentially just by providing better guidance in what should be in the
description and attributes sections.
-With respect to the assets that are listed in the catalog, there’s a
need to account for tagging, rating/scoring, and a way to have
comments or a forum for each asset so potential users can interact
outside of the gerrit review system.
-Supporting more resource types (Sahara, Trove, Tosca, others)

What about expanding the scope of the application catalog to any application 
that can run *on* OpenStack, versus the implied scope of applications that can 
be deployed *by* (heat, murano, etc.) OpenStack and *on* OpenStack services 
(nova, cinder etc.). This would mean adding room for Ansible roles that 
provision openstack resource

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, "Monty Taylor"  wrote:

>On 05/27/2015 06:35 PM, Keith Bray wrote:
>> Joe, regarding apps-catalog for any app deployable on OpenStack
>> (regardless of deployment technology), my two cents is that is a good
>> idea.  I also believe, however, that the app-catalog needs to evolve
>> first with features that make it super simple to understand which
>> artifacts will work on which clouds (out-of-the-box) vs. needing
>> additional required dependencies or cloud operator software.   My
>> guess is there will be a lot of discussions related to defcore,
>> and/or tagging artifacts with known public/private cloud
>> distributions  the artifacts are known to work on. To the extent an
>> openstack operator or end user has to download/install 3rd party or
>> stack forge or non defcore openstack components in order to deploy an
>> artifact, the more sophisticated and complicated it becomes and we
>> need a way to depict that for items shown in the catalog.
>> 
>> For example, I'd like to see a way to tag items in the catalog as
>> known-to-work on HP or Rackspace public cloud, or known to work on
>> RDO.  Even a basic Heat template optimized for one cloud won't
>> necessarily work on another cloud without modification.
>
>That's an excellent point - I have two opposing thoughts to it.
>
>a) That we have to worry about the _vendor_ side of that is a bug and
>should be fixed. Since all clouds already have a service catalog,
>mapping out a "this app requires trove" should be easy enough. The other
>differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And, different clouds run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.

>
>b) The state you describe is today's reality, and as much as wringing
>out hands and spitting may feel good, it doesn't get us anywhere. You
>do, in _fact_ need to know those things to use even basic openstack
>functions today- so we might as well deal with it.

I don't buy the argument that you need to know those things to make
openstack function, because the catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.

>
>I'll take this as an opportunity to point people towards work in this
>direction grew out of a collaboration between infra and ansible:
>
>http://git.openstack.org/cgit/openstack-infra/shade/
>and
>http://git.openstack.org/cgit/openstack/os-client-config
>
>os-client-config knows about the differences between the clouds. It has,
>sadly, this file:
>
>http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_config/vendors.py
>
>Which lists as much knowledge as we've figured out so far about the
>differences between clouds.
>
>shade presents business logic to users so that they don't have to know.
>For instance:

I'm all +1 on different artifact types with different deployment
mechanisms, including Ansible, in case that wasn't clear, as long as the
app-catalog lets the consumer know what they are in for and sets
expectations.  I'm not clear on how the infra stuff works, but agree we
don't want cloud-specific logic... I especially don't want the application
architect authors (e.g. the folks writing Heat templates and/or Murano
packages) to have to account for cloud-specific checks in their authoring
files. It'd be better to automate this on the catalog testing side at
best, or use author submission + voting as a low-cost human method (though
not without problems in upkeep).

>
>import shade
>cloud = shade.openstack_cloud()
>cloud.create_image(
>    name='ubuntu-trusty',
>    filename='ubuntu-trusty.qcow2',
>    wait=True)
>
>Should upload an image to an openstack cloud no matter the deployer
>choices that are made.
>
>The new upstream ansible modules build on this - so if you say:
>
>os_server: name=ubuntu-test flavor_ram=1024 image='Ubuntu 14.04 LTS'
>   config_drive=yes
>
>It _should_ just work. Of course, image names and image content across
>clouds vary - so you probably want:
>
>os_image: name=ubuntu-trusty file=ubuntu-trusty.qcow2 wait=yes
>  register=image
>os_server: name=ubuntu-test flavor_ram=1024 image={{ image.id }}
> 

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
Joe, regarding apps-catalog for any app deployable on OpenStack (regardless of 
deployment technology), my two cents is that it is a good idea.  I also 
believe, however, that the app-catalog needs to evolve first with features that 
make it super simple to understand which artifacts will work on which clouds 
(out-of-the-box) vs. needing additional required dependencies or cloud operator 
software.  My guess is there will be a lot of discussions related to defcore, 
and/or tagging artifacts with the known public/private cloud distributions that 
the artifacts are known to work on. To the extent an openstack operator or end 
user has to download/install 3rd-party or stackforge or non-defcore openstack 
components in order to deploy an artifact, the more sophisticated and 
complicated it becomes, and we need a way to depict that for items shown in the 
catalog.

For example, I'd like to see a way to tag items in the catalog as known-to-work 
on HP or Rackspace public cloud, or known to work on RDO.  Even a basic Heat 
template optimized for one cloud won't necessarily work on another cloud 
without modification.

Thanks,
-Keith

From: Joe Gordon [joe.gord...@gmail.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Wednesday, May 27, 2015 5:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps



On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo [ca...@mirantis.com] wrote:
I want to start off by thanking everyone who joined us at the first
working session in Vancouver, and those folks who have already started
adding content to the app catalog. I was happy to see the enthusiasm
and excitement, and am looking forward to working with all of you to
build this into something that has a major impact on OpenStack
adoption by making it easier for our end users to find and share the
assets that run on our clouds.

Great job. This is very exciting to see, I have been wanting something like 
this for some time now.


The catalog: http://apps.openstack.org
The repo: https://github.com/stackforge/apps-catalog
The wiki: https://wiki.openstack.org/wiki/App-Catalog

Please join us via IRC at #openstack-app-catalog on freenode.

Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
Serg Melikyan.

I’ve started a doodle poll to vote on the initial IRC meeting
schedule, if you’re interested in helping improve and build up this
catalog please vote for the day/time that works best and get involved!
http://doodle.com/vf3husyn4bdkui8w

At the summit we managed to get one planning session together. We
captured that on etherpad[1], but I’d like to highlight here a few of
the things we talked about working on together in the near term:

-More information around asset dependencies (like clarifying
requirements for Heat templates or Glance images for instance),
potentially just by providing better guidance in what should be in the
description and attributes sections.
-With respect to the assets that are listed in the catalog, there’s a
need to account for tagging, rating/scoring, and a way to have
comments or a forum for each asset so potential users can interact
outside of the gerrit review system.
-Supporting more resource types (Sahara, Trove, Tosca, others)

What about expanding the scope of the application catalog to any application 
that can run *on* OpenStack, versus the implied scope of applications that can 
be deployed *by* (heat, murano, etc.) OpenStack and *on* OpenStack services 
(nova, cinder etc.). This would mean adding room for Ansible roles that 
provision openstack resources [0]. And more generally it would reinforce the 
point that there is no 'blessed' method of deploying applications on OpenStack, 
you can use tools developed specifically for OpenStack or tools developed 
elsewhere.


[0] 
https://github.com/ansible/ansible-modules-core/blob/1f99382dfb395c1b993b2812122761371da1bad6/cloud/openstack/os_server.py

-Discuss using glance artifact repository as the backend rather than
flat YAML files
-REST API, enable searching/sorting, this would ease native
integration with other projects
-Federated catalog support (top level catalog including contents from
sub-catalogs)
- I’ll be working with the OpenStack infra team to get the server and
CI set up in their environment (though that work will not impact the
catalog as it stands today).

I am pleased to see moving this to OpenStack Infra is a high priority.

A quick nslookup of http://apps.openstack.org shows it is currently hosted on 
linode at http://nb-23-239-6-45.fremont.nodebalancer.linode.com/. And last I 
checked, linode isn't OpenStack powered.  apps.openstack.org is a great example 
of the type of application that should be easy to deploy with OpenStack, since 
as far as I can tell it just needs a web server

Re: [openstack-dev] [App-Catalog] Planning/working session Wednesday in Vancouver

2015-05-26 Thread Keith Bray
Chris, 

I am interested in getting more involved.  Is there any effort already in
place to run this like a regular project, with IRC meetings, etc.?  What
are the channels, etc., by which I can get involved?

Thanks,
-Keith

On 5/20/15 7:24 AM, "Christopher Aedo"  wrote:

>[Cross-posting to both dev and operators list because I believe this
>is important to both groups]
>
>For those of us who have been working on the OpenStack Community App
>Catalog (http://apps.openstack.org) yesterday was really exciting.  We
>had a chance to do a quick demo and walk through during they keynote,
>followed by a longer talk in the afternoon (slides here:
>http://www.slideshare.net/aedocw/openstack-community-app-catalog-httpappsopenstackorg)
>
>The wiki page with more details is here:
>https://wiki.openstack.org/wiki/App-Catalog
>
>If you are in Vancouver and are interested in helping improve the
>Community App Catalog please join us for this working session:
>
>http://sched.co/3Rk4 (11:50 room 116/117)
>Etherpad: https://etherpad.openstack.org/YVR-app-catalog-plans
>
>If you can't join the session but have ideas or thoughts you would
>like to see discussed please add them to the etherpad.  I've put down
>a few of the ideas that have come up so far, but it's definitely not a
>comprehensive list.
>
>We got really great feedback yesterday afternoon and found a lot of
>people are interested in contributing to the catalog and working
>together to add improvements.  Hopefully you can join us today!
>
>-Christopher
>




Re: [openstack-dev] [Solum] Should app names be unique?

2015-03-11 Thread Keith Bray
Dev, thanks for bringing up the item about Heat enforcing unique stack names. 
My mistake on thinking it supported non-unique stack names.  I remember it 
working early on, but it probably got changed/fixed somewhere along the way.

My argument in IRC was one based on consistency with related/similar 
projects... So, as Murali pointed out, if things aren't consistent within 
OpenStack, then that certainly leaves much more leeway in my opinion for Solum 
to determine its own path without concern for falling in line with what the 
other projects have done (since a precedent can't be established).

To be honest, though, I don't agree with the argument about github.  Github 
(and also Heroku) are using URLs, which are unique IDs.  I caution against 
conflating a URL with a name; a URL in the case of github serves both 
purposes, but each (both a name and an ID) has merit as a standalone 
representation.

I am happy to give my support to enforcing unique names as the Solum default, 
but I continue to highly encourage things be architected in a way that 
non-unique names could be supported in the future on at least a per-tenant 
basis, should that need become validated by customer asks.

Kind regards,
-Keith

From: Murali Allada [murali.all...@rackspace.com]
Reply-To: "openstack-dev@lists.openstack.org" [openstack-dev@lists.openstack.org]
Date: Wednesday, March 11, 2015 2:12 PM
To: "openstack-dev@lists.openstack.org" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Solum] Should app names be unique?


The only reason this came up yesterday is because we wanted Solum's 'app create' 
behavior to be consistent with other openstack services.


However, if heat has a unique stack name constraint and glance\nova don't, then 
the argument of consistency does not hold.


I'm still of the opinion that we should have a unique name constraint for apps 
and languagepacks within a tenants namespace, as it can get very confusing if a 
user creates multiple apps with the same name.


Also, customer research done here at Rackspace has shown that users prefer 
using 'names' rather than 'UUIDs'.


-Murali




From: Devdatta Kulkarni [devdatta.kulka...@rackspace.com]
Sent: Wednesday, March 11, 2015 2:48 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum] Should app names be unique?


Hi Solum team,


In yesterday's team meeting, the question came up of whether Solum should 
enforce a unique app name constraint within a tenant.


As a reminder, in Solum one can create an 'app' using:

solum app create --plan-file  --name 


Currently Solum does support creating multiple apps with the same name.

However, in yesterday's meeting we were debating/discussing whether this should 
be the case.

The meeting log is available here:

http://eavesdrop.openstack.org/meetings/solum_team_meeting/2015/solum_team_meeting.2015-03-10-21.00.log.html



To set the context for discussion, consider the following:

- heroku does not allow creating another app with the same name as that of an 
already existing app

- github does not allow creating another repository with the same name as that 
of an already existing repo


Thinking about why this might be the case for heroku, one aspect that comes to 
mind is the setting of a 'remote' using the app name. When we do a 'git push', 
it happens to this remote. When we don't specify a remote in the 'git push' 
command, git defaults to using the 'origin' remote. Even if multiple remotes 
with the same name were possible, when using an implicit command such as 'git 
push', in which some of the input comes from the context, the system would not 
be able to disambiguate which remote to use. So requiring unique names ensures 
that there is no ambiguity when using such implicit commands. This might also 
be the reason why on github we cannot create a repository with an already 
existing name.

But this is just a guess for why unique names might be required. I could be 
totally off.
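
(For illustration, the ambiguity argument in shell terms, with a hypothetical 
remote name:)

$ git remote add solum git@solum.example.com:myapp.git   # hypothetical remote
$ git push                  # implicit: git must resolve exactly one remote
$ git push solum master     # explicit: still resolved purely by the name 'solum'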

I think Solum's use case is similar.

Agreed that Solum currently does not host application repositories, and so 
there is no question of Solum-generated remotes. But allowing non-unique app 
names might make it difficult to support such a feature in the future.

As an aside, I checked what position other Openstack services take on this 
issue.
1) Heat enforces unique stack-name constraint.
2) Nova does not enforce this constraint.


So it is clear that within Openstack there is no consistency on this issue.


What should Solum do?


Thoughts?


Best regards,

Devdatta



Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-11 Thread Keith Bray


On 6/11/14 2:43 AM, "Steven Hardy"  wrote:

>>IMO, when a template author marks a parameter as hidden/secret, it seems
>>incorrect to store that information in plain text.
>
>Well I'd still question why we're doing this, as my previous questions
>have
>not been answered:
>- AFAIK nova user-data is not encrypted, so surely you're just shifting
>the
>  attack vector from one DB to another in nearly all cases?

Having one system (e.g. Nova) not as secure as it could be isn't a reason
to not secure another system as best we can. For every attack vector you
close, you have another one to chase. I'm concerned that the merit of the
feature is being debated, so let me see if I can address that:

We want to use Heat to launch customer facing stacks.  In a UI, we would
prompt customers for Template inputs, including for example: Desired
Wordpress Admin Password, Desired MySQL password, etc. The UI then makes
an API call to Heat to orchestrate instantiation of the stack.  With Heat
as it is today, these customer specified credentials (as template
parameters) would be stored in Heat's database in plain text. As a Heat
Service Administrator, I do not need nor do I want the customer's
Wordpress application password to be accessible to me.  The application
belongs to the customer, not to the infrastructure provider.  Sure, I
could blow the customer's entire instance away as the service provider.
But, if I get fired or leave the company, I could no longer blow away 
their instance... If I leave the company, however, I could have taken a
copy of the Heat DB with me, or had looked that info up in the Heat DB
before my exit, and I could then externally attack the customer's
Wordpress instance.  It makes no sense for us to store user specified
creds unencrypted unless we are administering the customer's Wordpress
instance for them, which we are not.  We are administering the
infrastructure only.  I realize the encryption key could also be stolen,
but in a "production" system the encryption key access gets locked down to
a VERY small set of folks and not all the people that administer Heat
(that's part of good security practices and makes auditing of a leaked
encryption key much easier).
  

>- Is there any known way for heat to "leak sensitive user data", other
>than
>  a cloud operator with admin access to the DB stealing it?  Surely cloud
>  operators can trivially access all your resources anyway, including
>  instances and the nova DB/API so they have this data anyway.

Encrypting the data in the DB also helps in case a leak of arbitrary DB 
data does surface in Heat.  We are not aware of any issues with Heat today
that could leak that data... But, we never know what vulnerabilities will
be introduced or discovered in the future.


At Rackspace, individual cloud operators cannot trivially access all
customer cloud resources.  When operating a large cloud at scale, service
administrators' operations and capabilities are limited to the systems
they work on.  While I could impersonate a user via Heat and do lots of
bad things across many of their resources, each of the other systems
(Nova, Databases, Auth, etc.) audits who is doing what on behalf of
which customer, so I can't do something malicious to a customer's Nova
instance without the Auth System Administrators ensuring that HR knows I
would be the person to blame.  Similarly, a Nova system administrator
can't delete a customer's Heat stack without our Heat administrators
knowing who is to blame.  We have checks and balances across our systems
and purposefully segment our possible attack vectors.

Leaving sensitive customer data unencrypted at rest provides many more
options for that data to get in the wrong hands or be taken outside the
company.  It is quick and easy to do a MySQL dump if the DB linux system
is compromised, which has nothing to do with Heat having a vulnerability.

Our ask is to provide an optional way for the service operator to allow
template authors to choose what data is sensitive; if the data is
marked sensitive, it gets encrypted by the Heat system instead of being
stored in plain text.
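
(For example, HOT already lets a template author mark a parameter as
sensitive; under this proposal the marked value would also be encrypted at
rest instead of stored in plain text. A minimal sketch:)

parameters:
  wordpress_admin_password:
    type: string
    description: Desired Wordpress admin password, supplied by the customer
    hidden: true   # masked in API/CLI output; the proposal adds encryption at rest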

I hope this helps.  Vijendar, feel free to take some of this write-up and
include it in a gerrit review of the feature blueprint.

Thanks,
-Keith




Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Keith Bray
Steve, agreed.  Your description I believe is the conclusion that the community 
came to when this was previously discussed, and we managed to get the 
implementation of parameter grouping and ordering [1] that you mentioned, which 
has been very helpful.  I don't think we landed the keywords blueprint [2], 
which may be controversial because it is essentially unstructured. I wanted to 
make sure Mike had the links for historical context, but certainly understand 
and appreciate your point of view here.  I wasn't able to find the email 
threads to point Mike to, but assume they exist in the list archives somewhere.
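
(For reference, the shape of the parameter grouping and ordering feature [1] 
in a HOT file, as a minimal sketch:)

parameter_groups:
- label: Database settings
  description: Credentials for the application database
  parameters:
  - db_username
  - db_password

parameters:
  db_username:
    type: string
  db_password:
    type: string
    hidden: true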

We proposed another specific piece of template data [3]; I can't remember 
whether it was met with resistance or we just didn't get to implementing it, 
since we knew we would have to store other data specific to our use cases in 
other files anyway.  We decided to go with storing our extra information in a 
catalog (really just a Git repo with a README.MD [4]) for now, until we can 
implement acceptable catalog functionality somewhere like Glance, hopefully in 
the Juno cycle.  When we want to share the template, we share all the files in 
the repo (inclusive of the README.MD).  It would be more ideal if we could 
share a single file (package) inclusive of the template, the corresponding 
help text, and any other UI hint info that would be helpful.

I expect service providers to have differing views of the extra data they want 
to store with a template... so it'd just be nice to have a way for service 
providers to store their unique data along with a template that is easy to 
share and is part of the template package.  We bring up portability and 
structured data often, but I'm starting to realize that portability of a 
template breaks down unless every service provider runs exactly the same Heat 
resources, image IDs, flavor types, etc. I'd like to drive more standardization 
of image and template data into Glance so that in HOT we can just declare 
things like "Linux, Flavor Ubuntu, latest LTS, minimum 1Gig" and automatically 
discover and choose the right image to provision, or error if a suitable match 
cannot be found.

The Murano team has been hinting at wanting to solve a similar problem, but 
with a broader vision: a complex multi-application declaration perspective that 
crosses multiple templates, or a layer above that matches against the 
capabilities Heat resources provide and the capabilities a catalog of templates 
provides (and mixes that with capabilities the cloud API services provide). 
I'm not yet convinced that can't be done with a parent Heat template, since we 
already have the declarative constructs and language well defined, but I 
appreciate the use case and perspective those folks are bringing to the 
conversation.

[1] https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
 https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering

[2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

[3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
https://wiki.openstack.org/wiki/Heat/UI#Help_Text

[4] Ex. Help Text accompanying a template in README.MD format:
https://github.com/rackspace-orchestration-templates/docker

-Keith

From: Steven Dake [sd...@redhat.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Thursday, April 3, 2014 10:30 AM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [heat] metadata for a HOT

On 04/02/2014 08:41 PM, Keith Bray wrote:
https://wiki.openstack.org/wiki/Heat/StackMetadata

https://wiki.openstack.org/wiki/Heat/UI

-Keith

Keith,

Taking a look at the UI specification, I thought I'd take a look at adding 
parameter grouping and ordering to the hot_spec.rst file.  That seems like a 
really nice constrained use case with a clear way to validate that folks aren't 
adding magic to the template for their custom environments.  During that, I 
noticed it is already implemented.

What is nice about this specific use case is it is something that can be 
validated by the parser.  For example, the parser could enforce that parameters 
in the parameter-groups section actually exist as parameters in the parameters 
section.  Essentially this particular use case *enforces* good heat template 
implementation without an opportunity for HOT template developers to jam 
customized data blobs into the template.

Stack keywords on the other hand doesn't necessarily follow this model.  I 
understand the use case, but it would be possible to jam unstructured metadata 
into the template.  That said, the limitations on the ja

Re: [openstack-dev] [heat] metadata for a HOT

2014-04-02 Thread Keith Bray
https://wiki.openstack.org/wiki/Heat/StackMetadata

https://wiki.openstack.org/wiki/Heat/UI

-Keith

From: Lingxian Kong [anlin.k...@gmail.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Wednesday, April 2, 2014 9:31 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [heat] metadata for a HOT

Is there any relevant wiki or specification doc?


2014-04-03 4:45 GMT+08:00 Mike Spreitzer [mspre...@us.ibm.com]:
I would like to suggest that a metadata section be allowed at the top level of 
a HOT.  Note that while resources in a stack can have metadata, there is no way 
to put metadata on a stack itself.  What do you think?
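
(Presumably something like the following; this top-level section is the 
proposal itself, not existing HOT syntax:)

heat_template_version: 2013-05-23

metadata:                # proposed top-level section (hypothetical syntax)
  author: Example Team
  keywords: [wordpress, lamp, demo]

resources:
  # ... as usual ...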

Thanks,
Mike




--
---
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; 
anlin.k...@gmail.com


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-26 Thread Keith Bray


On 3/25/14 11:55 AM, "Ruslan Kamaldinov"  wrote:

>* Murano DSL will focus on:
>  a. UI rendering


One of the primary reasons I am opposed to using a different DSL/project
to accomplish this is that the person authoring the HOT template is
usually the system architect, and this is the same person who has the
technical knowledge to know what technologies you can swap in/out and
still have that system/component work, so they are also the person who
can/should define the "rules" of what component building blocks can and
can't work together.  There has been an overwhelmingly strong preference
from the system architects/DevOps/ApplicationExperts I [1] have talked to
for the ability to have control over those rules directly within the HOT
file or immediately along-side the HOT file but feed the whole set of
files to a single API endpoint.  I'm not advocating that this extra stuff
be part of Heat Engine (I understand the desire to keep the orchestration
engine clean)... But from a barrier to adoption point-of-view, the extra
effort for the HOT author to learn another DSL and use yet another system
(or even have to write multiple files) should not be underestimated.
These people are not OpenStack developers, they are DevOps folks and
Application Experts.  This is why the Htr[2] project was proposed and
threads were started to add extra data to the HOT template that the Heat engine
could essentially ignore, but would make defining UI rendering and
component connectivity easy for the HOT author.

I'm all for contributions to OpenStack, so I encourage the Murano team to
continue doing its thing if they find it adds value to themselves or
others. However, I'd like to see the Orchestration program support the
surrounding things the users of the Heat engine want/need from their cloud
system instead of having those needs met by separate projects seeking
incubation. There are technical ways to keep the core engine "clean" while
having the Orchestration Program API Service move up the stack in terms of
cloud user experience.

>  b. HOT generation
>  c. Setup other services (like put Mistral tasks to Mistral and bind
> them with events)
>
>Speaking about new DSL for Murano. We're speaking about Application
>Lifecycle
>Management. There are a lot of existing tools - Heat/HOT, Python, etc,
>but none
>of them was designed with ALM in mind as a goal.

Solum[3] is specifically designed for ALM and purpose built for
OpenStack... It has declared that it will generate HOT templates and setup
other services, including putting together or executing supplied workflow
definition (using Mistral if applicable).  Like Murano, Solum is also not
an OpenStack incubated project, but it has been designed with community
collaboration (based on shared pain across multiple contributors) with the
ALM goal in mind from the very beginning.

-Keith


[1] I regularly speak with DevOps, Application Specialists, and cloud
customers, specifically about Orchestration and Heat.. HOT is somewhat
simple enough for the most technical of them (DevOps & App Specialists) to
grasp and have interest in adopting, but their is strong push back from
the folks I talk to about having to learn one more thing... Since Heat
adopters are exactly the same people who have the knowledge to define the
overall system capabilities including component connectivity and how UI
should be rendered, I'd like to keep it simple for them. The more we can
do to have the Orchestration service look/feel like one thing (even if
it's Engine + Other things under the hood), or reuse other OpenStack core
components (e.g. Glance) the better for adoption.
[2] https://wiki.openstack.org/wiki/Heat/htr
[3] http://solum.io





Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-18 Thread Keith Bray
Hi Ruslan,

I did not intend to suggest that definition of things like billing rules
should necessarily be supported syntax in Heat. Murano is certainly able
to develop whatever features it would like, including an alternative DSL.
I have a preference towards minimizing DSLs across the OpenStack
ecosystem, if possible. I hope that helps clear up my position.  If the
goal is to specify pertinent information that a billing system could
consume, I default back to wanting to specify/store that information as
associated data in the catalog with the application definition (e.g. HOT
artifact in Glance).  If the billing/rules logic crosses multiple HOT
artifacts, I find that very interesting and would enjoy reading about a
specific use-case.

As for Trove and Savanna, I view these as application "Services."
Service registry/discovery isn't what I thought Murano was aiming to be,
but I've been wrong about what Murano is/isn't/wants-to-be many times
before, so my apologies if I'm misunderstanding again. I am a bit more
confused now, however, given that discovery/definition of exposed software
services is entering the conversation?

Thank you for considering my comments,
-Keith

On 3/18/14 7:32 PM, "Ruslan Kamaldinov"  wrote:

>- definition of an application which is already exposed via REST API.
>Think of
>  something like Sahara (ex. Savanna) or Trove developed in-house for
>internal
>  company needs. app publishers wouldn't be happy if they'll be forced to
>  develop a new resource for Heat
>- definition of billing rules for an application




Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-18 Thread Keith Bray
Georgy,

In consideration of the "can you express it" instead of the "who will generate 
it," I see Heat's HOT evolving to support the expression of complex multi-tier 
architectures and applications (I would argue you can already do this today, 
perhaps with some additional features desired, e.g. Ability to define cloud 
workflows and workflow execution rules which could come when we have a workflow 
service like Mistral).  Therefore, I would encourage Murano contributors to 
consider whether they can help make Heat sufficiently cover desired use cases.  
I have never viewed Heat templates as isolated components of a multi-tier 
architecture.  Instead, a single template or a combination of 
master/subordinate templates together (using references, nesting, or inclusion) 
could express the complete architecture, both infrastructure and applications.

If I've read your previous comments and threads correctly, you desire a way to 
express System Lifecycle Management across multiple related applications or 
components, whereby you view the System as a grouping of independently 
developed and/or deployed (but systematically related) "components," and you 
you view Components as individual disconnected Heat templates that 
independently describe different application stacks of the System.  Did I get 
that correct?   If so, perhaps the discussion here is one of "scope" of what 
can or should be expressed in a Heat template. Is it correct to state that your 
argument is that a separate system (such as Murano) should be used to express 
System Lifecycle Management as I've defined it here?  If so, why could we not 
use the Heat DSL to also define the System?  The System definition could be 
logically separated out into its own text file... But, we'd have a common DSL 
syntax and semantics for both lower level and higher level component 
interaction (a building block effect of sorts).

As for "who will generate it," ( with "it" being the Heat multi-tier 
application/infrastructure definition) I think that question will go through a 
lot more evolution and could be any number of sources: e.g. Solum, Murano, 
Horizon, Template Author with a text editor, etc.

Basically, I'm a +1 for as few DSLs as possible. I support the position that we 
should evolve HOT if needed vs. having two separate DSLs that are both related 
to expressing application and infrastructure semantics.

Workflow is quite interesting ... Should we be able to express imperative 
workflow semantics in HOT?  Or, should we only be able to declare workflow 
configurations that get configured in a service like Mistral whereby Mistral's 
execution of a workflow may need to invoke Heat hooks or Stack Updates?  Or, 
some other solution?

I look forward to a design discussion on all this at the summit... This is fun 
stuff to think about!

-Keith

From: Georgy Okrokvertskhov [gokrokvertsk...@mirantis.com]
Reply-To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Date: Tuesday, March 18, 2014 1:49 PM
To: "OpenStack Development Mailing List (not for usage questions)" [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

I see this in the following way - who will generate HOT template for my complex 
multi-tier applications when I have only templates for components?


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-10 Thread Keith Bray
I want to echo Clint's responses... We do run close to Heat master here at
Rackspace, and we'd be happy to set up a non-voting job to notify when a
review would break Heat on our cloud if that would be beneficial.  Some of
the breaks we have seen were things that simply weren't caught in
code review (a human-intensive effort): they were specific to the way we
configure Heat for large-scale cloud use, yet applicable to the entire Heat
project and not necessarily service-provider specific.

-Keith
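
(For reference, a sketch of the three heat.conf settings at issue in the 
thread quoted below; the values are placeholders, and the option names are as 
introduced by the instance-users work:)

[DEFAULT]
# Keystone domain that holds the per-stack users Heat creates
stack_user_domain = <id of the dedicated heat domain>
# Domain-admin credentials Heat uses to manage those users
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>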

On 3/10/14 5:19 PM, "Clint Byrum"  wrote:

>Excerpts from Steven Hardy's message of 2014-03-05 04:24:51 -0800:
>> On Tue, Mar 04, 2014 at 02:06:16PM -0800, Clint Byrum wrote:
>> > Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
>> > > Hi all,
>> > > 
>> > > As some of you know, I've been working on the instance-users
>>blueprint[1].
>> > > 
>> > > This blueprint implementation requires three new items to be added
>>to the
>> > > heat.conf, or some resources (those which create keystone users)
>>will not
>> > > work:
>> > > 
>> > > https://review.openstack.org/#/c/73978/
>> > > https://review.openstack.org/#/c/76035/
>> > > 
>> > > So on upgrade, the deployer must create a keystone domain and
>>domain-admin
>> > > user, add the details to heat.conf, as already been done in
>>devstack[2].
>> > > 
>> > > The changes requried for this to work have already landed in
>>devstack, but
>> > > it was discussed to day and Clint suggested this may be unacceptable
>> > > upgrade behavior - I'm not sure so looking for guidance/comments.
>> > > 
>> > > My plan was/is:
>> > > - Make devstack work
>> > > - Talk to tripleo folks to assist in any transition (what prompted
>>this
>> > >   discussion)
>> > > - Document the upgrade requirements in the Icehouse release notes
>>so the
>> > >   wider community can upgrade from Havana.
>> > > - Try to give a heads-up to those maintaining downstream heat
>>deployment
>> > >   tools (e.g stackforge/puppet-heat) that some tweaks will be
>>required for
>> > >   Icehouse.
>> > > 
>> > > However some have suggested there may be an openstack-wide policy
>>which
>> > > requires people's old config files to continue working indefinitely
>>on
>> > > upgrade between versions - is this right?  If so where is it
>>documented?
>> > > 
>> > 
>> > I don't think I said indefinitely, and I certainly did not mean
>> > indefinitely.
>> > 
>> > What is required though, is that we be able to upgrade to the next
>> > release without requiring a new config setting.
>> 
>> So log a warning for one cycle, then it's OK to expect the config after
>> that?
>> 
>
>Correct.
>
>> I'm still unclear if there's an openstack-wide policy on this, as the
>>whole
>> time-based release with release-notes (which all of openstack is
>>structured
>> around and adheres to) seems to basically be an uncomfortable fit for
>>folks
>> like tripleo who are trunk chasing and doing CI.
>>
>
>So we're continuous delivery focused, but we are not special. HP Cloud
>and Rackspace both do this, and really anyone running a large cloud will
>most likely do so with CD, as the value proposition is that you don't
>have big scary upgrades, you just keep incrementally upgrading and
>getting newer, better code. We can only do this if we have excellent
>testing, which upstream already does and which the public clouds all
>do privately as well of course.
>
>Changes like the one that was merged last week in Heat turn into
>stressful fire drills for those deployment teams.
>
>> > Also as we scramble to deal with these things in TripleO (as all of
>>our
>> > users are now unable to spin up new images), it is clear that it is
>>more
>> > than just a setting. One must create domain users carefully and roll
>>out
>> > a new password.
>> 
>> Such are the pitfalls of life at the bleeding edge ;)
>> 
>
>This is mildly annoying as a stance, as that's not how we've been
>operating with all of the other services of OpenStack. We're not crazy
>for wanting to deploy master and for wanting master to keep working. We
>are a _little_ crazy for wanting that without being in the gate.
>
>> Seriously though, apologies for the inconvenience - I have been asking
>>for
>> feedback on these patches for at least a month, but clearly I should've
>> asked harder.
>> 
>
>Mea culpa too, I did not realize what impact this would have until it
>was too late.
>
>> As was discussed on IRC yesterday, I think some sort of (initially
>>non-voting)
>> feedback from tripleo CI to heat gerrit is pretty much essential given
>>that
>> you're so highly coupled to us or this will just keep happening.
>> 
>
>TripleO will be in the gate some day (hopefully soon!) and then this
>will be less of an issue as you'd see failures early on, and could open
>bugs and get us to fix our issue sooner.
>
>However you'd still need to provide the backward compatibility for a
>single cycle. Servers aren't upgraded instantly, and keystone may not be
>ready for this v3/domain change until after users ha

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Keith Bray
Hey, yeah, so I started the Convection wiki and discussed that idea at the 
Portland summit in 2013.  Mirantis picked up the Convection proposal and 
decided to run with it in a project named Mistral, from what I understand.  
There were apparently some concerns over name infringement, which is why the 
name Convection was not kept when they began their Mistral proof-of-concept.  
Convection was an "idea proposed."  There is no code for Convection, and no 
active contributors to Convection.  In the spirit of collaboration, if you 
are interested in contributing to a workflow system, I would encourage you to 
get involved in Mistral and see if it fits your needs... If not, I'm sure the 
Mistral (and other openstack) folks would love to have a discussion about that.

Kind regards,
-Keith

From: Joshua Harlow <harlo...@yahoo-inc.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, February 28, 2014 2:46 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, 
W Chan <m4d.co...@gmail.com>
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Convection? Afaik u guys are building convection (convection was just an idea, 
I see mistral as the POC/impl) ;)

https://wiki.openstack.org/wiki/Convection#NOTICE:_Similar_project_-.3E_Mistral

So questions around taskflow:

  1.  Correct, u put it in your task. There were previous ideas/work done by the 
team @ https://etherpad.openstack.org/p/BrainstormFlowConditions but from 
previous people that have built said systems it was determined that there 
actually wasn't much use for conditionals (yet). As for expression evaluation, 
I'm not sure what that means; being a library, any type of expression 
evaluation is just whatever u can imagine in python. Conditional tasks (and 
such) being managed by taskflow's engines is something we can reconsider & 
might even be possible, but this is imho dangerous territory that is being 
approached; expression evaluation plus conditional branching and loops is 
basically a language specification ;)
  2.  I don't see taskflow managing a catalog (currently); that seems out of 
scope for a library that provides the execution and resumption parts (any 
consumer of taskflow should be free to define and organize their catalog as 
they choose).
  3.  Negative, taskflow is an execution and state-management library (not a 
full framework imho) that helps build the upper layers that services like 
mistral can use (or nova, or glance or…). I don't feel it's the right place to 
have taskflow force a DSL onto people, since the underlying primitives that can 
form an upper-level DSL are more service/app level choices (heat has its DSL, 
mistral has theirs, both are fine, and both likely can take advantage of the 
same taskflow execution and state-management primitives in their service).

Hope that helps :)

-Josh

From: W Chan <m4d.co...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, February 28, 2014 at 12:02 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

All,
This is a great start.  I think the sooner we have this discussion the better.  
Any uncertainty in the direction/architecture here is going to stall progress.  
How about Convection?  What's the status of the Convection project, and where 
is it heading?  Should we have a similar discussion with the contributors of 
that project?

Joshua,
I have a few questions about TaskFlow.
1) How does it handle conditional loops and expression evaluation for decision 
branching?  I've looked at the TaskFlow wiki/code briefly and it's not obvious. 
 I assume it would be logic that the user will embed within a task?
2) How about a predefined catalog of standard tasks (i.e. REST call, SOAP call, 
Email task, etc.)?  Is that within the scope of TaskFlow or up to TaskFlow 
consumers like Mistral?
3) Does TaskFlow have its own DSL?  The examples provided are mostly code based.

Thanks.
Winson




On Fri, Feb 28, 2014 at 10:54 AM, Joshua Harlow 
<harlo...@yahoo-inc.com> wrote:
Sounds good,

Lets connect, the value of central oslo-connected projects is that shared 
libraries == shared pain. Duplicating features and functionality is always 
more pain. In the end we are a community, not silos, so it seems like before 
mistral goes down the path of duplicating more and more features (I understand 
the desire to POC mistral and learn what mistral wants to become, and all that) 
we should start the path to working together. I personally am worried that 
mistral will start to apply for incubation and then the question will come up 
as to this (mistral was doing POC, kept on doing POC

Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Keith Bray
Have you considered writing Heat resource plug-ins that perform (or configure 
within other services) instance snapshots, backups, or whatever other 
maintenance workflow possibilities you want that don't exist?  Then these 
maintenance workflows you mention could be expressed in the Heat template, 
forming a single place for the application architecture definition, including 
the configuration for services that need to be application-aware throughout the 
application's life.  As you describe things in Murano, I interpret that you are 
layering application-architecture-specific information and workflows into a DSL 
in a layer above Heat, which means information pertinent to the application as 
an ongoing concern would be disjoint.  Fragmenting the information necessary to 
wholly define an infrastructure/application architecture could make it 
difficult to share the application and modify the application stack.
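For example, a deployer-written plug-in might then be consumed from a template 
like this (sketch only; OS::Custom::InstanceSnapshot is an invented type 
standing in for whatever the plug-in author defines):

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: fedora-19-i386-heat-cfntools
      flavor: m1.small

  nightly_snapshot:
    type: OS::Custom::InstanceSnapshot   # hypothetical plug-in resource
    properties:
      server: {get_resource: app_server}
      schedule: "0 2 * * *"              # illustrative property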

I would be interested in a library that allows for composing Heat templates 
from "snippets" or "fragments" of pre-written Heat DSL... The library's job 
could be to ensure that the snippets, when combined, create a valid Heat 
template free from conflict amongst resources, parameters, and outputs.  The 
interaction with the library, I think, would belong in Horizon, and the 
"Application Catalog" and/or "Snippets Catalog" could be implemented within 
Glance.

>>>Also, there may be workflow steps which are not covered by Heat by design. 
>>>For example, an application publisher may include creating instance snapshots, 
>>>data migrations, backups etc. in the deployment or maintenance workflows. I 
>>>don't see how these may be done by Heat, while Murano should definitely 
>>>support these scenarios.

From: Alexander Tivelkov <ativel...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, February 24, 2014 12:18 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Murano] Object-oriented approach for defining 
Murano Applications

Hi Stan,

It is good that we are on a common ground here :)

Of course this can be done by Heat. In fact, it will be, in the very same 
manner as it always was; I am pretty sure we've discussed this many times 
already. When Heat Software Config is fully implemented, it will be possible to 
use it instead of our Agent execution plans for software configuration - in the 
very same manner as we use "regular" heat templates for resource allocation.

Heat does indeed support template composition - but we don't want our end-users 
to have to learn how to do that: we want them just to combine existing 
applications at a higher level. Murano will use the template composition under 
the hood, but only in the way which is designed by the application publisher. 
If the publisher has decided to configure the software using Heat Software 
Config, then this option will be used. If some other (probably legacy) way of 
doing this was preferred, Murano should be able to support that and allow 
creating such workflows.

Also, there may be workflow steps which are not covered by Heat by design. For 
example, an application publisher may include creating instance snapshots, data 
migrations, backups etc. in the deployment or maintenance workflows. I don't 
see how these may be done by Heat, while Murano should definitely support these 
scenarios.

So, as a conclusion, Murano should not be thought of as a Heat alternative: it 
is a different tool located at a different layer of the stack, aimed at a 
different user audience - and, most importantly, using Heat underneath.


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 8:36 PM, Stan Lagun 
<sla...@mirantis.com> wrote:
Hi Alex,

Personally I like the approach and how you explain it. I just would like to 
know your opinion on how this is better than someone writing a Heat template 
that creates Active Directory, let's say with one primary and one secondary 
controller, and then publishing it somewhere. Since Heat does support software 
configuration as of late, and has the concept of environments [1] (which Steven 
Hardy generously pointed out in another mailing thread can be used for 
composition as well), it seems like everything you said can be done by Heat alone

[1]: 
http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html


On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov 
<ativel...@mirantis.com> wrote:
Sorry folks, I didn't put the proper image url. Here it is:


https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov 
<ativel...@mirantis.com> wrote:

Hi,


I would like to initiate one more discussion about an approach we selected to 
solve a particular problem in Murano.

The problem statement is 

Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-17 Thread Keith Bray

Can someone elaborate further on the things that Murano is intended to solve 
within the OpenStack ecosystem?   My observation has been that Murano has 
changed from a Windows-focused deployment service to a metadata application 
catalog workflow thing (I fully admit this may be an invalid observation).  
It's unclear to me what OpenStack pain/use-cases are to be solved by "complex 
object composition, description of data types, contracts..."

Your thoughts would be much appreciated.

Thanks,
-Keith

From: Renat Akhmerov <rakhme...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, February 17, 2014 1:33 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Murano] Need a new DSL for Murano

Clint,

We're collaborating with Murano. We may need to do it in a way that others 
could see it though. There are several things here:

  *   Murano doesn’t really have a “workflow engine” similar to Mistral’s. 
People get confused by that, but it’s just legacy terminology; I think 
Murano folks were going to rename this component to be more precise about it.
  *   Mistral DSL doesn’t seem to be a good option for solving the tasks that 
Murano is intended to solve. Specifically I mean things like complex object 
composition, description of data types, contracts and so on. As Alex and Stan 
mentioned, the Murano DSL tends to grow into a full programming language.
  *   Most likely Mistral will be used in Murano for implementation; at least 
we see where it would be valuable. But Mistral is not so mature yet, and we need 
to keep working hard and be patient :)

Anyway, we keep thinking about how to make both languages look similar, or at 
least about the possibility to use them seamlessly if needed (calling Mistral 
workflows from Murano DSL or vice versa).

Renat Akhmerov
@ Mirantis Inc.

On 16 Feb 2014, at 05:48, Clint Byrum 
<cl...@fewbar.com> wrote:

Excerpts from Alexander Tivelkov's message of 2014-02-14 18:17:10 -0800:
Hi folks,

Murano matures, and we are getting more and more feedback from our early
adopters. The overall reception is very positive, but at the same time
there are some complaints as well. By now the most significant complaint is
that it is hard to write workflows for application deployment and maintenance.

The current version of the workflow definition markup really has some design
drawbacks which limit its potential adoption. They are caused by the fact
that it was never intended for Application Catalog use-cases.


Just curious, is there any reason you're not collaborating on Mistral
for this rather than both having a workflow engine?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Keith Bray
Jay, I don't see reduction.  I count -Glance and +Murano in your email, which 
is a net-zero addition of projects, I think. Did I miss something?  Template 
catalog functionality could go into Heat in the short term with no new project 
additions. It could be built in a way that makes it easy to break out and move 
elsewhere in the future - similar to how scaling resources are being incubated 
in Heat - written so that it could run standalone or be broken out if needed.

-Keith

On Dec 5, 2013 11:35 PM, Jay Pipes  wrote:
On 12/05/2013 04:25 PM, Clint Byrum wrote:
> Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
>>> Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum 
   wrote:

> Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
>> Why not just use glance?
>>
>
> I've asked that question a few times, and I think I can collate the
> responses I've received below. I think enhancing glance to do these
> things is on the table:
>
> 1. Glance is for big blobs of data not tiny templates.
> 2. Versioning of a single resource is desired.
> 3. Tagging/classifying/listing/sorting
> 4. Glance is designed to expose the uploaded blobs to nova, not users
>
> My responses:
>
> 1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about 
 the size of data and the frequency of reads/writes might be an issue, but 
 I admit to ignorance about those details in Glance.

>>>
>>> Optimizations can be improved for various use cases. The design, however,
>>> has no assumptions that I know about that would invalidate storing blobs
>>> of yaml/json vs. blobs of kernel/qcow2/raw image.
>>
>> I think we are getting out into the weeds a little bit here. It is important 
>> to think about these apis in terms of what they actually do, before the 
>> decision of combining them or not can be made.
>>
>> I think of HeatR as a template storage service, it provides extra data and 
>> operations on templates. HeatR should not care about how those templates are 
>> stored.
>> Glance is an image storage service, it provides extra data and operations on 
>> images (not blobs), and it happens to use swift as a backend.
>>
>> If HeatR and Glance were combined, it would result in taking two very 
>> different types of data (template metadata vs image metadata) and mashing 
>> them into one service. How would adding the complexity of HeatR benefit 
>> Glance, when they are dealing with conceptually two very different types of 
>> data? For instance, should a template ever care about the field "minRam" 
>> that is stored with an image? Combining them adds huge development 
>> complexity with a very small operations payoff, and OpenStack is already 
>> so operationally complex that the added burden of HeatR as a separate 
>> service would be negligible. Only clients of Heat will ever care about data 
>> and operations on templates, so I move that HeatR become its own service, 
>> or become part of Heat.
>>
>
> I spoke at length via G+ with Randall and Tim about this earlier today.
> I think I understand the impetus for all of this a little better now.
>
> Basically what I'm suggesting is that Glance is only narrow in scope
> because that was the only object that OpenStack needed a catalog for
> before now.
>
> However, the overlap between a catalog of images and a catalog of
> templates is quite comprehensive. The individual fields that matter to
> images are different than the ones that matter to templates, but that
> is a really minor detail isn't it?
>
> I would suggest that Glance be slightly expanded in scope to be an
> object catalog. Each object type can have its own set of fields that
> matter to it.
>
> This doesn't have to be a minor change to glance to still have many
> advantages over writing something from scratch and asking people to
> deploy another service that is 99% the same as Glance.

My suggestion for long-term architecture would be to use Murano for
catalog/metadata information (for images/templates/whatever) and move
the block-streaming drivers into Cinder, and get rid of the Glance
project entirely. Murano would then become the catalog/registry of
objects in the OpenStack world, Cinder would be the thing that manages
and streams blocks of data or block devices, and Glance could go away.
Imagine it... OpenStack actually *reducing* the number of projects
instead of expanding! :)

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements & roadmap

2013-11-25 Thread Keith Bray
Thanks Steve.  I appreciate your input. I have added the use cases for all to 
review:
https://wiki.openstack.org/wiki/Heat/StackMetadata

What are the next steps to drive this to resolution?

Kind regards,
-Keith

From: Steve Baker <sba...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, November 25, 2013 11:47 PM
To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements & 
roadmap

On 11/26/2013 03:26 PM, Keith Bray wrote:

On 11/25/13 5:46 PM, "Clint Byrum" <cl...@fewbar.com> 
wrote:



Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:


Hi Steve,

As one of the UI developers driving the requirements behind these new
blueprints I wanted to take a moment to assure you and the rest of the
Openstack community that the primary purpose of pushing these
requirements
out to the community is to help improve the User Experience for Heat for
everyone. Every major UI feature that I have implemented for Heat has
been
included in Horizon, see the Heat Topology, and these requirements
should
improve the value of Heat, regardless of the UI.


Stack/template metadata
We have a fundamental need to have the ability to reference some
additional metadata about a template that Heat does not care about.
There
are many possible use cases for this need but the primary point is that
we
need a place in the template where we can iterate on the schema of the
metadata without going through a lengthy design review. As far as I
know,
we are the only team attempting to actually productize Heat at the
moment
and this means that we are encountering requirements and requests that
do
not affect Heat directly but simply require Heat to allow a little
wiggle
room to flesh out a great user experience.



Wiggle room is indeed provided. But reviewers need to understand your
motivations, which is usually what blueprints are used for. If you're
getting push back, it is likely because your blueprints do not make the
use cases and long-term vision obvious.


Clint, can you be more specific on what is not clear about the use case?
What I am seeing is that the use case of meta data is not what is being
contested, but that the Blueprint of where meta data should go is being
contested by only a few (but not all) of the core devs.  The Blueprint for
in-template metadata was already approved for Icehouse, but now that work
has been delivered on the implementation of that blueprint, the blueprint
itself is being contested:
   https://blueprints.launchpad.net/heat/+spec/namespace-stack-metadata
I'd like to propose that the blueprint that has been accepted go forth
with the code that exactly implements it, and if there are alternative
proposals and appropriate reasons for the community to come to consensus
on a different approach, that we then iterate and move the data (deprecating
the older feature if necessary, e.g. if that decision comes after
Icehouse; if a different/better implementation comes before Icehouse,
then no harm done).


I don't think the Heat project has ever set any expectations over what it means 
for a blueprint to be Approved. Given that the PTL can approve blueprints 
(that's me =) but anyone in heat-core can legitimately -2 any review, I don't 
think it is realistic to expect Approved to mean anything other than "something 
that is worthy of starting to work on". Nova has adopted a policy of only 
approving blueprints with full specifications. That would avoid situations like 
this but I'd like to avoid that until Heat is more mature and that kind of 
process is really necessary.

How a blueprint is progressed after approval depends entirely on the feature 
and the people involved. This could be one of:
1) Implement it already, it's trivial!
2) Write enough of a specification to convince enough core developers that it 
has value
3) Have list, irc and summit discussions for some amount of time, then do 2) or 
1)

In this case 1) has proven to be not enough, so I would recommend 2). I don't 
think this will come to 3) but we seem to be well on the way ;)

I've linked this blank wiki page to the blueprint so a spec containing use 
cases can go there.

There is precedence for an optional metadata section that can contain
any
end-user data in other Openstack projects and it is necessary in order
to
iterate quickly and provide value to Heat.



Nobody has said you can't have meta-data on stacks, which is what other
projects use.



There are many use cases that can be discussed here, but I wanted to
reiterate an initial discussion point that, by definition,
"stack/template_metadata" does not have any hard requirements in terms
of
schema or what do

Re: [openstack-dev] [heat][horizon]Heat UI related requirements & roadmap

2013-11-25 Thread Keith Bray


On 11/25/13 5:46 PM, "Clint Byrum"  wrote:

>Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:
>> Hi Steve,
>> 
>> As one of the UI developers driving the requirements behind these new
>> blueprints I wanted to take a moment to assure you and the rest of the
>> Openstack community that the primary purpose of pushing these
>>requirements
>> out to the community is to help improve the User Experience for Heat for
>> everyone. Every major UI feature that I have implemented for Heat has
>>been
>> included in Horizon, see the Heat Topology, and these requirements
>>should
>> improve the value of Heat, regardless of the UI.
>> 
>> 
>> Stack/template metadata
>> We have a fundamental need to have the ability to reference some
>> additional metadata about a template that Heat does not care about.
>>There
>> are many possible use cases for this need but the primary point is that
>>we
>> need a place in the template where we can iterate on the schema of the
>> metadata without going through a lengthy design review. As far as I
>>know,
>> we are the only team attempting to actually productize Heat at the
>>moment
>> and this means that we are encountering requirements and requests that
>>do
>> not affect Heat directly but simply require Heat to allow a little
>>wiggle
>> room to flesh out a great user experience.
>> 
>
>Wiggle room is indeed provided. But reviewers need to understand your
>motivations, which is usually what blueprints are used for. If you're
>getting push back, it is likely because your blueprints do not make the
>use cases and long-term vision obvious.

Clint, can you be more specific on what is not clear about the use case?
What I am seeing is that the use case of meta data is not what is being
contested, but that the Blueprint of where meta data should go is being
contested by only a few (but not all) of the core devs.  The Blueprint for
in-template metadata was already approved for Icehouse, but now that work
has been delivered on the implementation of that blueprint, the blueprint
itself is being contested:
   https://blueprints.launchpad.net/heat/+spec/namespace-stack-metadata
I'd like to propose that the blueprint that has been accepted go forth
with the code that exactly implements it, and if there are alternative
proposals and appropriate reasons for the community to come to consensus
on a different approach, that we then iterate and move the data (deprecating
the older feature if necessary, e.g. if that decision comes after
Icehouse; if a different/better implementation comes before Icehouse,
then no harm done).
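For concreteness, the section being proposed is deliberately schema-free; a 
sketch of the sort of thing a UI might record follows (every field name here 
is illustrative only, not a proposed standard):

template_metadata:
  short_description: Wordpress
  category: blog                               # illustrative field name
  icon_url: http://example.com/wordpress.png   # illustrative field name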


>
>> There is precedence for an optional metadata section that can contain
>>any
>> end-user data in other Openstack projects and it is necessary in order
>>to
>> iterate quickly and provide value to Heat.
>> 
>
>Nobody has said you can't have meta-data on stacks, which is what other
>projects use.
>
>> There are many use cases that can be discussed here, but I wanted to
>> reiterate an initial discussion point that, by definition,
>> "stack/template_metadata" does not have any hard requirements in terms
>>of
>> schema or what does or does not belong in it.
>> 
>> One of the initial use cases is to allow template authors to categorize
>> the template as a specific "type".
>> 
>> template_metadata:
>> short_description: Wordpress
>> 
>> 
>
>Interesting. Would you support adding a "category" keyword to python so
>we don't have to put it in setup.cfg and so that the egg format doesn't
>need that section? Pypi can just parse the python to categorize the apps
>when they're uploaded. We could also have a file on disk for qcow2 images
>that we upload to glance that will define the meta-data.
>
>To be more direct, I don't think the templates themselves are where this
>meta-data belongs. A template is self-aware by definition, it doesn't
>need the global metadata section to tell it that it is WordPress. For
>anything else that needs to be globally referenced there are parameters.
>Having less defined inside the template means that you get _more_ wiggle
>room for your template repository.

Clint, you are correct that the Template does not need to know what it is.
 It's every other service (and users of those services) that a Template
passes through or to that would care to know what it is. We are suggesting
we put that meta data in the template file and expressly ignore it for
purposes of parsing the template language in the Heat engine, so we agree
it is not a necessary part of the template.  Sure, we could encode the
metadata info in a separate catalog...  but, take the template out of the
catalog and now all that useful associated data is lost or would need to
be recreated by someone or some service.  That does not make the template
portable, and that is a key aspect of what we are trying to achieve (all
user-facing clients, like Horizon, or humans reading the file, can take
advantage). We don't entirely know yet what is most useful in portability
and wh

Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration

2013-11-15 Thread Keith Bray
The way I view 2 vs. 4 is that 2 is more complicated and you don't gain
any benefit of availability.  If, in 2, your global heat endpoint is down,
you can't update the whole stack.  You have to work around it by talking
to Heat (or the individual service endpoints) in the region that is still
alive.

4 is much simpler in that only one Heat instance is involved.  If Heat is
down, you still have just as bad/good a workaround, which is to talk to
service endpoints in the region that is still available.  If you want to
use Heat in that region to do it, you can Adopt the resources into a Heat
stack in that region. I don't see how 2 is "Robust against failure of
whole region" because if the region on the left part of the picture in 2
goes down, you can't manage your global stack or any of the resources in
the left region that are part of that global stack.  All you could manage
is a subset of resources by manipulating the substack in the right region,
but you can do this in 4 as well using Adopt.  4 is a simpler starting use
case and easier (IMO) for a user of the system to digest, and has the HUGE
advantage of being able to orchestrate deploying resources to multiple
regions without a service operator having to have Heat setup and installed
in EVERY region.  This is particularly important for a private cloud/heat
person trying to deploy to a public cloud.  If doing so requires that the public
cloud operator have Heat running, then you can't deploy there.  If no Heat
in that region is required, then you can use your own instance of Heat to
deploy to any available openstack cloud.  That is a HUGE benefit of 4.
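(For concreteness, the per-nested-stack context that Bartosz describes in the 
quoted thread below might look roughly like this in a template; the resource 
type and property names are illustrative, since the syntax was still being 
designed at the time:)

resources:
  east_substack:
    type: OS::Heat::Stack             # illustrative nested-stack type name
    properties:
      template: {get_file: app_tier.yaml}
      context:                        # proposed, not yet existing, syntax
        region_name: region-east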

-Keith

On 11/15/13 2:58 PM, "Zane Bitter"  wrote:

>On 15/11/13 18:24, Bartosz Górski wrote:
>> Hi Thomas,
>>
>> Each of the two engines will be able to create resources in both
>>regions.
>> We do not need to add anything in the heat client.
>>
>> Right now when you want to create a new stack (using heat client or
>> directly API) you need to provide:
>> - stack name
>> - template
>> - parameters (if needed)
>> - tenant
>> - username
>> - password
>> - region name (optional)
>>
>> The last four (tenant, username, password and region_name) we will call the
>> default context.
>> This context is used in Heat to configure all the openstack clients for
>> the other services.
>> Username, password and tenant are used for authentication.
>> Region name is used to get the appropriate API endpoint from the keystone
>> catalog for each other openstack service (like nova).
>> In the case of one region you do not need to specify it because there is
>> only one endpoint for each service.
>> In the multi-region case we have more than one, and the region name is used
>> to get the correct one.
>>
>> Each nested stack has its own set of openstack clients (nova client,
>> neutron client, ... etc.) inside the heat engine.
>> Right now for all of them the default context is used to configure the
>> clients which will be used to create resources.
>> There is no option to change the default context for now. What I'm
>> trying to do is to add the possibility to define a different
>> context inside the template file. A new context can be passed to a nested
>> stack resource to create a client set with different
>> endpoints to call. The heat engine will get the appropriate endpoints from
>> the keystone catalog for the specified region name.
>>
>> So from the heat engine point of view there is no big change in the
>> workflow. Heat will parse the template, create the
>> dependency graph and start creating resources in the same way as
>> usual. When it needs to create a nested
>> stack with a different context it will just use a different set of openstack
>> clients (e.g. it will call services in the other region).
>>
>> So to sum up, each of the two heat engines will be able to create
>> resources in both regions if a different context is
>> specified. If only the default context is used, heat will create all
>> resources in the same region where it is located.
>
>So, to be clear, this is option (4) from the diagram I put together here:
>https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_H
>eat/The_Missing_Diagram
>
>It's got a couple of major problems:
>
>* When a whole region goes down, you can lose access to the Heat
>instance that was managing still-available resources. This makes it more
>or less impossible to use Heat to manage a highly-available global
>application.
>
>* Instances have to communicate back to the Heat instance that owns them
>(e.g. for WaitConditions), and it's not yet clear that this is feasible
>in general.
>
>There are also a number of other things I really don't like about this
>solution (listed on the wiki page), though reasonable people may disagree.
>
>cheers,
>Zane.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-28 Thread Keith Bray
In-line comments.

On 10/28/13 5:43 PM, "Steve Baker"  wrote:

>On 10/26/2013 05:25 AM, Clint Byrum wrote:
>> Excerpts from Angus Salkeld's message of 2013-10-24 18:48:16 -0700:
>>> On 24/10/13 11:54 +0200, Patrick Petit wrote:
 Hi Clint,
 Thank you! I have few replies/questions in-line.
 Cheers,
 Patrick
 On 10/23/13 8:36 PM, Clint Byrum wrote:
> I think this fits into something that I want for optimizing
> os-collect-config as well (our in-instance Heat-aware agent). That is
> a way for us to wait for notification of changes to Metadata without
> polling.
 Interesting... If I understand correctly that's kind of a replacement of
 cfn-hup... Do you have a blueprint pointer or something more
 specific? While I see the benefits of it, in-instance notifications
 is not really what we are looking for. We are looking for a
 notification service that exposes an API whereby listeners can
 register for Heat notifications. AWS Alarming / CloudFormation has
 that. Why not Ceilometer / Heat? That would be extremely valuable for
 those who build PaaS-like solutions above Heat. To say it bluntly,
 I'd like to suggest we explore ways to integrate Heat with Marconi.
>>> Yeah, I am trying to do a PoC of this now. I'll let you know how
>>> it goes.
>>>
>>> I am trying to implement the following:
>>>
>>> heat_template_version: 2013-05-23
>>> parameters:
>>>key_name:
>>>  type: String
>>>flavor:
>>>  type: String
>>>  default: m1.small
>>>image:
>>>  type: String
>>>  default: fedora-19-i386-heat-cfntools
>>> resources:
>>>config_server:
>>>  type: OS::Marconi::QueueServer
>>>  properties:
>>>image: {get_param: image}
>>>flavor: {get_param: flavor}
>>>key_name: {get_param: key_name}
>>>
>>>configA:
>>>  type: OS::Heat::OrderedConfig
>>>  properties:
>>>marconi_server: {get_attr: [config_server, url]}
>>>hosted_on: {get_resource: serv1}
>>>script: |
>>>  #!/bin/bash
>>>  logger "1. hello from marconi"
>>>
>>>configB:
>>>  type: OS::Heat::OrderedConfig
>>>  properties:
>>>marconi_server: {get_attr: [config_server, url]}
>>>hosted_on: {get_resource: serv1}
>>>depends_on: {get_resource: configA}
>>>script: |
>>>  #!/bin/bash
>>>  logger "2. hello from marconi"
>>>
>>>serv1:
>>>  type: OS::Nova::Server
>>>  properties:
>>>image: {get_param: image}
>>>flavor: {get_param: flavor}
>>>key_name: {get_param: key_name}
>>>user_data: |
>>>  #!/bin/sh
>>>  # poll /v1/queues/{hostname}/messages
>>>  # apply config
>>>  # post a response message with any outputs
>>>  # delete request message
>>>
>> If I may diverge this a bit, I'd like to consider the impact of
>> hosted_on on reusability in templates. hosted_on feels like an
>> anti-pattern, and I've never seen anything quite like it. It feels wrong
>> for a well contained component to then reach out and push itself onto
>> something else which has no mention of it.
>>
>> I'll rewrite your template as I envision it working:
>>
>> resources:
>>config_server:
>>  type: OS::Marconi::QueueServer
>>  properties:
>>image: {get_param: image}
>>flavor: {get_param: flavor}
>>key_name: {get_param: key_name}
>>
>>configA:
>>  type: OS::Heat::OrderedConfig
>>  properties:
>>marconi_server: {get_attr: [config_server, url]}
>>script: |
>>  #!/bin/bash
>>  logger "1. hello from marconi"
>>
>>configB:
>>  type: OS::Heat::OrderedConfig
>>  properties:
>>marconi_server: {get_attr: [config_server, url]}
>>depends_on: {get_resource: configA}
>>script: |
>>  #!/bin/bash
>>  logger "2. hello from marconi"
>>
>>serv1:
>>  type: OS::Nova::Server
>>  properties:
>>image: {get_param: image}
>>flavor: {get_param: flavor}
>>key_name: {get_param: key_name}
>>components:
>>  - configA
>>  - configB
>>user_data: |
>>  #!/bin/sh
>>  # poll /v1/queues/{hostname}/messages
>>  # apply config
>>  # post a response message with any outputs
>>  # delete request message
>>
>> This only becomes obvious why it is important when you want to do this:
>>
>> configC:
>>   type: OS::Heat::OrderedConfig
>>   properties:
>> script: |
>>   #!/bin/bash
>>   logger "?. I can race with A, no dependency needed"
>>
>> serv2:
>>   type: OS::Nova::Server
>>   properties:
>>   ...
>>   components:
>> - configA
>> - configC
>>
>> This is proper composition, where the caller defines the components, not
>> the callee. Now you can re-use configA with a different component in the
>> same template. As we get smarter we 

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-24 Thread Keith Bray
Hi Thomas, here's my opinion:  Heat and Solum contributors will work
closely together to figure out where specific feature implementations
belong... But, in general, Solum is working at a level above Heat.  To
write a Heat template, you have to know about infrastructure setup and
configuration settings of infrastructure and API services.  I believe
Solum intends to provide the ability to tweak and configure the amount of
complexity that gets exposed or hidden so that it becomes easier for cloud
consumers to just deal with their application and not have to necessarily
know or care about the underlying infrastructure and API services, but
that level of detail can be exposed to them if necessary. Solum will know
what infrastructure and services to set up to run applications, and it
will leverage Heat and Heat templates for this.

The Solum project has been very vocal about leveraging Heat under the hood
for the functionality and vision of orchestration that it intends to
provide.  It seems, based on this thread (and +1 from me), enough people
are interested in having Heat provide some level of software
orchestration, even if it's just bootstrapping other CM tools and
coordinating the "when are you done", and I haven't heard any Solum folks
object to Heat implementing software orchestration capabilities... So, I'm
looking forward to great discussions on this topic for Heat at the summit.
 If you recall, Adrian Otto (who announced project Solum) was also the one
who was vocal at the Portland summit about the need for HOT syntax.  I
think both projects are on a good path with a lot of fun collaboration
time ahead.

Kind regards,
-Keith

On 10/24/13 7:56 AM, "Thomas Spatzier"  wrote:

>Hi all,
>
>maybe a bit off track with respect to latest concrete discussions, but I
>noticed the announcement of project "Solum" on openstack-dev.
>Maybe this is playing on a different level, but I still see some relation
>to all the software orchestration discussions we are having. What are your
>opinions on this?
>this?
>
>BTW, I just posted a similar short question in reply to the Solum
>announcement mail, but some of us have mail filters and might read [Heat]
>mail with higher prio, and I was interested in the Heat view.
>
>Cheers,
>Thomas
>
>Patrick Petit  wrote on 24.10.2013 12:15:13:
>> From: Patrick Petit 
>> To: OpenStack Development Mailing List
>,
>> Date: 24.10.2013 12:18
>> Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal
>>
>> Sorry, I clicked the 'send' button too quickly.
>>
>> On 10/24/13 11:54 AM, Patrick Petit wrote:
>> > Hi Clint,
>> > Thank you! I have few replies/questions in-line.
>> > Cheers,
>> > Patrick
>> > On 10/23/13 8:36 PM, Clint Byrum wrote:
>> >> Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
>> >>> Dear Steve and All,
>> >>>
>> >>> If I may add up on this already busy thread to share our experience
>> >>> with
>> >>> using Heat in large and complex software deployments.
>> >>>
>> >> Thanks for sharing Patrick, I have a few replies in-line.
>> >>
>> >>> I work on a project which precisely provides additional value at the
>> >>> articulation point between resource orchestration automation and
>> >>> configuration management. We rely on Heat and chef-solo respectively
>> >>> for
>> >>> these base management functions. On top of this, we have developed
>>an
>> >>> event-driven workflow to manage the life-cycles of complex software
>> >>> stacks whose primary purpose is to support middleware components as
>> >>> opposed to end-user apps. Our use cases are peculiar in the sense
>that
>> >>> software setup (install, config, contextualization) is not a
>>one-time
>> >>> operation issue but a continuous thing that can happen any time in
>> >>> life-span of a stack. Users can deploy (and undeploy) apps long time
>> >>> after the stack is created. Auto-scaling may also result in an
>> >>> asynchronous apps deployment. More about this later. The framework
>we
>> >>> have designed works well for us. It clearly refers to a PaaS-like
>> >>> environment which I understand is not the topic of the HOT software
>> >>> configuration proposal(s) and that's absolutely fine with us.
>However,
>> >>> the question for us is whether the separation of software config
>>from
>> >>> resources would make our life easier or not. I think the answer is
>> >>> definitely yes but at the condition that the DSL extension preserves
>> >>> almost everything from the expressiveness of the resource element.
>>In
>> >>> practice, I think that a strict separation between resource and
>> >>> component will be hard to achieve because we'll always need a little
>> >>> bit
>> >>> of application's specific in the resources. Take for example the
>> >>> case of
>> >>> the SecurityGroups. The ports open in a SecurityGroup are
>>application
>> >>> specific.
>> >>>
>> >> Components can only be made up of the things that are common to all
>> >> users
>> >> of said component. Also components would, if I understand the concept
>

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-23 Thread Keith Bray

I think this picture is relevant to Heat context:
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U/edit


As more and more types of compute (containers, VMs, bare metal) and other
resources (geographically dispersed) become available from the cloud with
boarder capabilities (e.g. regionally dispersed backups,
failover/recovery, etc.), the concept of scheduling and optimizing
resource placement becomes more important, particularly when a customer
wants to deploy an application that has multiple underlying resource needs
but doesn't want to know (or care) about specifying the details of those
resources and their placement.

I'm not advocating that this does or does not belong in Heat (in general
I think Stack resource placement, region, etc., belongs with the template
author or authoring system, and I think physical resource placement
belongs with the underlying service, Nova, Trove, etc.), but I appreciate
Mike including Heat on this. I for one would vote that we consider this
"in-context" for discussion purposes, regardless of action.  Placement
coordination across disparate resource services is likely to become a more
prominent problem, and given Heat has the most holistic view of the
application topology stack within the cloud, Heat may have something to
offer in being a piece of the solution.

Kind regards,
-Keith


On 9/23/13 11:22 AM, "Zane Bitter"  wrote:

>On 15/09/13 09:19, Mike Spreitzer wrote:
>> But first I must admit that I am still a newbie to OpenStack, and still
>> am missing some important clues.  One thing that mystifies me is this: I
>> see essentially the same thing, which I have generally taken to calling
>> holistic scheduling, discussed in two mostly separate contexts: (1) the
>> (nova) scheduler context, and (2) the ambitions for heat.  What am I
>> missing?
>
>I think what you're missing is that the only person discussing this in
>the context of Heat is you. Beyond exposing the scheduling parameters in
>other APIs to the user, there's nothing here for Heat to do.
>
>So if you take [heat] out of the subject line then it will be discussed
>in only one context, and you will be mystified no longer. Hope that helps
>:)
>
>cheers,
>Zane.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Keith Bray
Steve, I think I see where I introduced some confusion...   Below, when
you draw:
User -> Trove -> (Heat -> Nova)
I come at it from a view that the version of Nova that Trove talks to (via
Heat or not) is not necessarily a publicly available Nova endpoint (i.e.
not in the catalog), although it *could* be. For example, there are
reasons that Trove may provision to an internal-only Nova endpoint that
is tricked out with a custom scheduler or virt driver (e.g. containers) or
special DB-performant hardware, etc.  This Nova endpoint would be
different from the Nova endpoint in the end-user's catalog.  But, I
realize that Trove could interact with the catalog endpoint for Nova as
well. I'm sorry for the confusion I introduced by how I was thinking about
that.  I guess this is one of those differences between a default
OpenStack setup vs. how a service provider might want to run the system
for scale and performance.  The cool part is, I think Heat and all these
general services can work in a variety of cool configurations!
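As a sketch of the top layer only: a user-facing Trove resource in a Heat 
template was still under discussion at the time, so the type name and 
properties below are illustrative rather than a landed API:

resources:
  prod_db:
    type: OS::Trove::Instance        # illustrative resource type
    properties:
      name: prod_db
      flavor: 1GB
      size: 30                       # volume size in GB, illustrative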

-Keith  

On 9/12/13 2:30 AM, "Steven Hardy"  wrote:

>On Thu, Sep 12, 2013 at 01:07:03AM +, Keith Bray wrote:
>> There is context missing here.  heat==>trove interaction is through the
>> trove API.  trove==>heat interaction is a _different_ instance of Heat,
>> internal to trove's infrastructure setup, potentially provisioning
>> instances.   Public Heat wouldn't be creating instances and then telling
>> trove to make them into databases.
>
>Well that's a deployer decision, you wouldn't need (or necessarily want)
>to
>run an additional heat service (if that's what you mean by "instance" in
>this case).
>
>What you may want is for the trove-owned stacks to be created in
>a different tenant (owned by the trove service user in the services
>tenant?)
>
>So the top level view would be:
>
>User -> Trove -> (Heat -> Nova)
>
>Or if the user is interacting via a Trove Heat resource
>
>User -> Heat -> Trove -> (Heat -> Nova)
>
>There is nothing circular here, Trove uses Heat as an internal
>implementation detail:
>
>* User defines a Heat template, and passes it to Heat
>* Heat parses the template and translates a Trove resource into API calls
>* Trove internally defines a stack, which is passes to Heat
>
>In the last step, although Trove *could* just pass on the user token it
>has
>from the top level API interaction to Heat, you may not want it to,
>particularly in public cloud environments.
>
>Steve
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-11 Thread Keith Bray
There is context missing here.  heat==>trove interaction is through the
trove API.  trove==>heat interaction is a _different_ instance of Heat,
internal to trove's infrastructure setup, potentially provisioning
instances.   Public Heat wouldn't be creating instances and then telling
trove to make them into databases.

At least, that's what I understand from conversations with the Trove
folks.  I could be wrong here also.

-Keith

On 9/11/13 11:11 AM, "Joshua Harlow"  wrote:

>Sure,
>
>I was thinking that since heat would do autoscaling per se, then heat
>would, say, ask trove to make more databases (autoscale policy here); then
>this would cause trove to actually call back into heat to make more
>instances.
>
>Just feels a little weird, idk.
>
>Why didn't heat just make those instances "on behalf of trove" to begin
>with and then tell trove "make these instances into databases". Then
>trove doesn't really need to worry about calling into heat to do the
>instance creation "work", and trove can just worry about converting those
>"blank instances " into databases (for example).
>
>But maybe I am missing other context also :)
>
>Sent from my really tiny device...
>
>On Sep 11, 2013, at 8:04 AM, "Clint Byrum"  wrote:
>
>> Excerpts from Joshua Harlow's message of 2013-09-11 01:00:37 -0700:
>>> +1
>>> 
>>> The assertions are not just applicable to autoscaling but to software
>>>in general. I hope we can make autoscaling "just enough" simple to work.
>>> 
>>> The circular heat<=>trove example is one of those that does worry me a
>>>little. It feels like something is not structured right if that it is
>>>needed (rube goldberg like). I am not sure what could be done
>>>differently, just my gut feeling that something is "off".
>> 
>> Joshua, can you elaborate on "the circular heat<=>trove example"?
>> 
>> I don't see Heat and Trove's relationship as circular. Heat has a Trove
>> resource, and (soon? now?) Trove can use Heat to simplify its control
>> of underlying systems. This is a stack, not a circle, or did I miss
>> something?
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Where does "Shelving" belong

2013-06-25 Thread Keith Bray
I tend toward this shelving feature having general use and applicability
to the Nova service.  If this shelving feature existed in Nova, users of
Heat could certainly make use of it through operations on their Stack, but
if something has applicability to a specific service, that feature should
exist in that service.

As for implementation of such a feature, you may want to take a look at
the TaskFlow[1][2] project: a Python library for OpenStack that makes
task and flow (a.k.a. workflow) execution easy, consistent, and reliable.
It's a work in progress, but coming along quickly and hopes to centralize
this needed functionality across many of the OpenStack projects.

[1] https://wiki.openstack.org/wiki/TaskFlow
[2] https://github.com/stackforge/taskflow

-Keith

On 6/25/13 9:22 AM, "Andrew Laski"  wrote:

>I have a couple of reviews up to introduce the concept of shelving an
>instance into Nova.  The question has been raised as to whether or not
>this belongs in Nova, or more rightly belongs in Heat.  The blueprint
>for this feature can be found at
>https://blueprints.launchpad.net/nova/+spec/shelve-instance, but to make
>things easy I'll outline some of the goals here.
>
>The main use case that's being targeted is a user who wishes to stop an
>instance at the end of a workday and then restart it again at the start
>of their next workday, either the next day or after a weekend.  From a
>service provider standpoint the difference between shelving and stopping
>an instance is that the contract allows removing that instance from the
>hypervisor at any point so unshelving may move it to another host.
>
> From a user standpoint what they're looking for is:
>
>The ability to retain the endpoint for API calls on that instance.  So
>v2//servers/ continues to work after the instance
>is unshelved.
>
>All networking, attached volumes, admin pass, metadata, and other user
>configurable properties remain unchanged when shelved/unshelved.  Other
>properties like task/vm/power state, host, *_at, may change.
>
>The ability to see that instance in their list of servers when shelved.
>
>
>
>Again, the objection that has been raised is that it seems like
>orchestration and therefore would belong in Heat.  While this is
>somewhat similar to a snapshot/destroy/rebuild workflow there are
>certain properties of shelving in Nova that I can't see how to reproduce
>by handling this externally.  At least not without exposing Nova
>internals beyond a comfortable level.
>
>So I'd like to understand what the thinking is around why this belongs
>in Heat, and how that could be accomplished.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev