Re: [openstack-dev] [heat] [nova] How should a holistic scheduler relate to Heat?

2014-04-03 Thread Mike Spreitzer
Clint Byrum  wrote on 04/03/2014 07:01:16 PM:

> ... The whole question raises many more
> questions, and I wonder if there's just something you haven't told us
> about this use case. :-P

Yes, I seem to have made a muddle of things by starting in one corner of a 
design space.  Let me try to reset this conversation and start from the 
beginning and go slowly enough.  I have adjusted the email subject line to 
describe the overall discussion and invite Nova people, who should also 
participate because this involves the evolution of the Nova API.

Let's start with the simple exercise of designing a resource type for the 
existing server-groups feature of Nova, and then consider how to take one 
evolutionary step forward (from sequential to holistic scheduling).  By 
"scheduling" here I mean simply placement, not a more sophisticated thing 
that includes time as well.

The server-groups feature of Nova (
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension) 
allows a Nova client to declare a group (just the group as a thing unto 
itself, not listing its members) and associate placement policies with it, 
and include a reference to the group in each Nova API call that creates a 
member of the group --- thereby putting those instances in that group, for 
the purpose of letting the scheduling for those instances take the group's 
policies into account.  The policies currently supported are affinity and 
anti-affinity.  This does what might be called sequential scheduling: when 
an instance is created, its placement decision can take into account its 
group's policies and the placement decisions already made for instances 
previously created, but cannot take into account the issues of placing 
instances that have yet to be created.
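For example, with python-novaclient (just a sketch; the 'nova' client object
is assumed to be already authenticated, and the image and flavor names are
placeholders):

group = nova.server_groups.create(name='my-group',
                                  policies=['anti-affinity'])
image = nova.images.find(name='some-image')
flavor = nova.flavors.find(name='m1.small')
for i in range(3):
    # Each boot carries a reference to the group; the scheduler applies the
    # group's policy against the members placed so far.
    nova.servers.create(name='member-%d' % i, image=image, flavor=flavor,
                        scheduler_hints={'group': group.id})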

We can define a Heat resource type for a server-group.  Such a resource 
would include its policy set, and not its members, among its properties. 
In the Heat snippet for an OS::Nova::Server there could be a reference to 
a server-group resource.  This directly reflects the API outlined above, 
the dependencies run in the right direction for that API, and it looks to 
me like a pretty simple and clear design. Do not ask me whether a 
server-group's attributes include its members.
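For illustration, such a snippet might look roughly like the following, shown
here as the equivalent Python/JSON structure rather than YAML; the resource
type name OS::Nova::ServerGroup is only an assumed name for this sketch, not
a settled design:

resources = {
    'my_group': {
        'type': 'OS::Nova::ServerGroup',            # assumed type name
        'properties': {'policies': ['anti-affinity']},
    },
    'server_1': {
        'type': 'OS::Nova::Server',
        'properties': {
            'image': 'some-image',
            'flavor': 'm1.small',
            # the reference to the server-group resource
            'scheduler_hints': {'group': {'get_resource': 'my_group'}},
        },
    },
}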

If the only placement policies are anti-affinity policies and all servers 
are eligible for the same places then I think that there is no advantage 
in scheduling holistically.  But I am interested in a broader set of 
scenarios, and for those holistic scheduling can get better results than 
sequential scheduling in some cases.

Now let us consider how to evolve the Nova API so that a server-group can 
be scheduled holistically.  That is, we want to enable the scheduler to 
look at both the group's policies and its membership, all at once, and 
make a joint decision about how to place all the servers (instances) in 
the group.  There is no agreed answer here yet, but let me suggest one 
that I hope can move this discussion forward.  The key idea is to first 
associate not just the policies but also a description of the group's 
members with the group, then get the joint scheduling decision made, then 
let the client orchestrate the actual creation of the servers.  This could 
be done with a two-step API: one step creates the group, given its 
policies and member descriptions, and in the second step the client makes 
the calls that cause the individual servers to be made; as before, each 
such call includes a reference to the group --- which is now associated 
(under the covers) with a table that lists the chosen placement for each 
server.  The server descriptions needed in the first step are not as 
extensive as the descriptions needed in the second step.  For example, the 
holistic scheduler would not care about the user_data of a server.  We 
could define a new data structure for member descriptions used in the 
first step (this would probably be a pared-down version of what is used in 
the second step).
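To make that concrete, the two steps might look roughly like this (request
bodies shown as Python dicts; this is purely an illustration of the proposal,
not an existing Nova API, and every field name here is hypothetical):

# Step 1: declare the group, its policies, and pared-down member
# descriptions, so placement can be decided jointly up front.
create_group_request = {
    'server_group': {
        'name': 'web-tier',
        'policies': ['anti-affinity'],
        'members': [
            {'name': 'web-1', 'flavor': 'm1.small', 'image': 'fedora-20'},
            {'name': 'web-2', 'flavor': 'm1.small', 'image': 'fedora-20'},
        ],
    },
}

# Step 2: the client creates each server as usual, referencing the group;
# the scheduler looks up the placement already chosen for that member.
create_server_request = {
    'server': {
        'name': 'web-1',
        'flavor': 'm1.small',
        'image': 'fedora-20',
        'user_data': '...',      # a detail the holistic scheduler ignores
        'group': 'GROUP_UUID',   # reference to the group created in step 1
    },
}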

Now let us consider how to expose this through Heat.  We could take a 
direct approach: modify our original server-group resource type so that 
its properties include not only the policy set but also the list of member 
descriptions, and the rest remains unchanged.  That would work, but it 
would be awkward for template authors.  They now have to write two 
descriptions of each server --- with no help at authoring time for 
ensuring the requisite consistency between the two descriptions.  Of 
course, the Nova API is no better regarding consistency, it can (at best) 
check for consistency when it sees the second description of a given 
server.  But the Nova API is imperative, while a Heat template is intended 
to be declarative.  I do not like double description because it adds bulk 
and creates additional opportunities for mistakes (compared to single 
description).

How can we avoid double-description?  A few ideas come to mind.

One approach involves a change in the Heat engine's f

Re: [openstack-dev] [Mistral] Next crack at real workflows

2014-04-03 Thread Renat Akhmerov
Dmitri, nice work; I will study these carefully early next week. I would ask
other folks to do the same (especially Nikolay).

Renat Akhmerov
@ Mirantis Inc.

On 03 Apr 2014, at 06:22, Dmitri Zimine  wrote:

> Two more workflows drafted - cloud cron, and lifecycle, version 1. 
> 
> The mindset questions are: 
> 1) Is the DSL syntax expressive enough, and convenient, to handle real use
> cases?
> 2) most importantly: what are the implied workflow capabilities which make it 
> all work? 
>  
> * Take a look here: 
> https://github.com/dzimine/mistral-workflows/tree/add-usecases
> 
> * Leave your comments - generally, or  line-by-line, in the pull request  
> https://github.com/dzimine/mistral-workflows/pull/1/files
> 
> * Fork, do your own modifications and do another pull request. 
> 
> * Or just reply with your comments  in email (lazy option :)) 
> 
> NOTE: please keep this thread for specific comments on DSL and workflow 
> capabilities, create another thread if changing topic. Thanks! 
> 
> DZ> 



Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-04-03 Thread Renat Akhmerov

On 04 Apr 2014, at 07:33, Kirill Izotov  wrote:

>> Then, we can make task executor interface public and allow clients to
>> provide their own task executors. It will be possible then for Mistral
>> to implement its own task executor, or several, and share the
>> executors between all the engine instances.
> I'm afraid that if we start to tear apart the TaskFlow engine, it would
> quickly become a mess to support. Besides, the amount left to integrate
> after we throw out the engine might be so small that it would prove the
> whole integration to be merely nominal, and we are back to square one.
> Anyway, task execution is the part that bothers me least; the graph action
> and the engine itself are where the pain will be.

I would love to see something additional (boxes and arrows) explaining this
approach. Sorry, I'm having a hard time following the idea.

>> That is part of our public API, it is stable and good enough. Basically,
>> I don't think this API needs any major change.
> 
>> But whatever should and will be done about it, I daresay all that work
>> can be done without affecting API more then I described above.
> 
> I completely agree that we should not change the public API of the sync
> engine, especially the one in helpers. What we need is, on the contrary, a
> low-level construct that would do the things I stated previously, but would
> be part of the public API of TaskFlow, so we can be sure it works exactly
> the same way it worked yesterday.

I’m 99.9% sure we’ll have to change API because all we’ve been discussing 
so far made me think this is a key point going implicitly through all our 
discussions: without have a public method like “task_done” we won’t build truly 
passive/async execution model. And it doesn’t matter wether it uses futures, 
callbacks or whatever else inside.
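Just to illustrate the shape of what I mean (a rough sketch only, not existing
TaskFlow or Mistral code; all names are made up):

class PassiveEngine(object):
    # The engine never blocks: an external worker calls task_done() whenever
    # a long-running action finishes, and the engine only answers the
    # question "what can run next".
    def __init__(self, workflow):
        self.workflow = workflow      # task name -> list of follow-up tasks
        self.results = {}

    def start(self, first_task):
        return [first_task]

    def task_done(self, task, result):
        self.results[task] = result
        return self.workflow.get(task, [])

def run_to_completion(engine, first_task, executor):
    # The trivial synchronous model, layered on top of the passive one.
    pending = engine.start(first_task)
    while pending:
        task = pending.pop()
        pending.extend(engine.task_done(task, executor(task)))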

And again, just to repeat: if we are able to deal with all the challenges
that the passive/async execution model exposes, then other models can be
built trivially on top of it.

@Ivan,

Thanks for joining the conversation. It looks like we really need your active
participation, since you're the one who knows all the TaskFlow internals and
concepts very well. As for what you wrote about futures and callbacks, it
would be helpful to see some illustration of your idea.

Renat Akhmerov
@ Mirantis Inc.


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Lau
2014-04-04 12:46 GMT+08:00 Jay Pipes :

> On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
> > Thanks Jay and Chris for the comments!
> >
> > @Jay Pipes, I think that we still need to enable "one nova compute
> > live migration" as one nova compute can manage multiple clusters and
> > VMs can be migrated between those clusters managed by one nova
> > compute.
>
> Why, though? That is what I am asking... seems to me like this is an
> anti-feature. What benefit does the user get from moving an instance
> from one VCenter cluster to another VCenter cluster if the two clusters
> are on the same physical machine?
>
@Jay Pipes, for VMWare, one physical machine (ESX server) can only belong
to one VCenter cluster, so we may have the following scenario.
DC
 |
 |---Cluster1
 |      |
 |      |---host1
 |
 |---Cluster2
        |
        |---host2

Then, when using the VCDriver, I can use one nova compute to manage both
Cluster1 and Cluster2, but this means I cannot migrate a VM from host2 to
host1 ;-(

The bp was introduced by
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service

>
> Secondly, why is it that a single nova-compute manages multiple VCenter
> clusters? This seems like a hack to me... perhaps someone who wrote the
> code for this or knows the decision behind it could chime in here?
>
> >  For cell, IMHO, each "cell" can be treated as a small "cloud" but not
> > a "compute", each "cell cloud" should be able to handle VM operations
> > in the small cloud itself. Please correct me if I am wrong.
>
> Yes, I agree with you that a cell is not a compute. Not sure if I said
> otherwise in my previous response. Sorry if it was confusing! :)
>
> Best,
> -jay
>
> > @Chris, "OS-EXT-SRV-ATTR:host" is the host where nova compute is
> > running and "OS-EXT-SRV-ATTR:hypervisor_hostname" is the hypervisor
> > host where the VM is running. Live migration is now using "host" for
> > live migration. What I want to do is enable migration with one "host"
> > and the "host" managing multiple "hypervisors".
> >
> >
> > I'm planning to draft a bp for review which depend on
> > https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory
> >
> >
> > Thanks!
> >
> >
> >
> > 2014-04-04 8:03 GMT+08:00 Chris Friesen :
> > On 04/03/2014 05:48 PM, Jay Pipes wrote:
> > On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:
> > Hi,
> >
> > Currently with VMWare VCDriver, one nova
> > compute can manage multiple
> > clusters/RPs, this caused cluster admin cannot
> > do live migration
> > between clusters/PRs if those clusters/PRs
> > managed by one nova compute
> > as the current live migration logic request at
> > least two nova
> > computes.
> >
> >
> > A bug [1] was also filed to trace VMWare live
> > migration issue.
> >
> > I'm now trying the following solution to see
> > if it is acceptable for a
> > fix, the fix wants enable live migration with
> > one nova compute:
> > 1) When live migration check if host are same,
> > check both host and
> > node for the VM instance.
> > 2) When nova scheduler select destination for
> > live migration, the live
> > migration task should put (host, node) to
> > attempted hosts.
> > 3) Nova scheduler needs to be enhanced to
> > support ignored_nodes.
> > 4) nova compute need to be enhanced to check
> > host and node when doing
> > live migration.
> >
> > What precisely is the point of "live migrating" an
> > instance to the exact
> > same host as it is already on? The failure domain is
> > the host, so moving
> > the instance from one "cluster" to another, but on the
> > same host is kind
> > of a silly use case IMO.
> >
> >
> > Here is where precise definitions of "compute node",
> > "OS-EXT-SRV-ATTR:host", and
> > "OS-EXT-SRV-ATTR:hypervisor_hostname", and "host" as
> > understood by novaclient would be nice.
> >
> > Currently the "nova live-migration" command takes a "host"
> > argument. It's not clear which of the above this corresponds
> > to.
> >
> > My understanding is that one nova-compute process can manage
> > multiple VMWare physical hosts.  So it could make sense to
> > support live migration betwee

Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Mark Washenberger
On Thu, Apr 3, 2014 at 10:50 AM, Keith Bray wrote:

>  Steve, agreed.  Your description I believe is the conclusion that the
> community came to when this was previously discussed, and we managed to get
> the implementation of parameter grouping and ordering [1] that you
> mentioned which has been very helpful.  I don't think we landed the
> keywords blueprint [2], which may be controversial because it is
> essentially unstructured. I wanted to make sure Mike had the links for
> historical context, but certainly understand and appreciate your point of
> view here.  I wasn't able to find the email threads to point Mike to, but
> assume they exist in the list archives somewhere.
>
>  We proposed another specific piece of template data [3] which I can't
> remember whether it was met with resistance or we just didn't get to
> implementing it since we knew we would have to store other data specific to
> our use cases in other files anyway.   We decided to go with storing our
> extra information in a catalog (really just a Git repo with a README.MD[4]) 
> for now  until we can implement acceptable catalog functionality
> somewhere like Glance, hopefully in the Juno cycle.  When we want to share
> the template, we share all the files in the repo (inclusive of the
> README.MD).  It would be more ideal if we could share a single file
> (package) inclusive of the template and corresponding help text and any
> other UI hint info that would helpful.  I expect service providers to have
> differing views of the extra data they want to store with a template... So
> it'd just be nice to have a way to account for service providers to store
> their unique data along with a template that is easy to share and is part
> of the template package.  We bring up portability and structured data
> often, but I'm starting to realize that portability of a template breaks
> down unless every service provider runs exactly the same Heat resources,
> same image IDs, flavor types, etc. I'd like to drive more standardization
> of data for image and template data into Glance so that in HOT we can just
> declare things like "Linux, Flavor Ubuntu, latest LTS, minimum 1Gig" and
> automatically discover and choose the right image to provision, or error if
> a suitable match can not be found.
>

Yes, this is exactly the use case that has been driving our consideration
of the artifacts resource in Glance.

You mentioned discovery of compatible resources. I think it's an important
use case, but I think the export and import approach can also be very
useful and I'd like to believe it is the general solution to cloud
portability.


>  The Murano team has been hinting at wanting to solve a similar problem,
> but with a broader vision from a complex-multi application declaration
> perspective that crosses multiple templates or is a layer above just
> matching to what capabilities Heat resources provide and matching against
> capabilities that a catalog of templates provide (and mix that with
> capabilities the cloud API services provide).  I'm not yet convinced that
> can't be done with a parent Heat template since we already have the
> declarative constructs and language well defined, but I appreciate the use
> case and perspective those folks are bringing to the conversation.
>
>  [1]
> https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
>  https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering
>
>  [2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
> https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords
>
>  [3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
> https://wiki.openstack.org/wiki/Heat/UI#Help_Text
>
>  [4] Ex. Help Text accompanying a template in README.MD format:
> https://github.com/rackspace-orchestration-templates/docker
>
>  -Keith
>
>   From: Steven Dake 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, April 3, 2014 10:30 AM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [heat] metadata for a HOT
>
>   On 04/02/2014 08:41 PM, Keith Bray wrote:
>
> https://wiki.openstack.org/wiki/Heat/StackMetadata
>
>  https://wiki.openstack.org/wiki/Heat/UI
>
>  -Keith
>
>  Keith,
>
> Taking a look at the UI specification, I thought I'd take a look at adding
> parameter grouping and ordering to the hot_spec.rst file.  That seems like
> a really nice constrained use case with a clear way to validate that folks
> aren't adding magic to the template for their custom environments.  During
> that, I noticed it is already implemented.
>
> What is nice about this specific use case is it is something that can be
> validated by the parser.  For example, the parser could enforce that
> parameters in the parameter-groups section actually exist as parameters in
> the parameters section.  Essentially this pa

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Pipes
On Fri, 2014-04-04 at 11:08 +0800, Jay Lau wrote:
> Thanks Jay and Chris for the comments!
> 
> @Jay Pipes, I think that we still need to enable "one nova compute
> live migration" as one nova compute can manage multiple clusters and
> VMs can be migrated between those clusters managed by one nova
> compute.

Why, though? That is what I am asking... seems to me like this is an
anti-feature. What benefit does the user get from moving an instance
from one VCenter cluster to another VCenter cluster if the two clusters
are on the same physical machine?

Secondly, why is it that a single nova-compute manages multiple VCenter
clusters? This seems like a hack to me... perhaps someone who wrote the
code for this or knows the decision behind it could chime in here?

>  For cell, IMHO, each "cell" can be treated as a small "cloud" but not
> a "compute", each "cell cloud" should be able to handle VM operations
> in the small cloud itself. Please correct me if I am wrong.

Yes, I agree with you that a cell is not a compute. Not sure if I said
otherwise in my previous response. Sorry if it was confusing! :)

Best,
-jay

> @Chris, "OS-EXT-SRV-ATTR:host" is the host where nova compute is
> running and "OS-EXT-SRV-ATTR:hypervisor_hostname" is the hypervisor
> host where the VM is running. Live migration is now using "host" for
> live migration. What I want to do is enable migration with one "host"
> and the "host" managing multiple "hypervisors".
> 
> 
> I'm planning to draft a bp for review which depend on
> https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory 
> 
> 
> Thanks!
> 
> 
> 
> 2014-04-04 8:03 GMT+08:00 Chris Friesen :
> On 04/03/2014 05:48 PM, Jay Pipes wrote:
> On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:
> Hi,
> 
> Currently with VMWare VCDriver, one nova
> compute can manage multiple
> clusters/RPs, this caused cluster admin cannot
> do live migration
> between clusters/PRs if those clusters/PRs
> managed by one nova compute
> as the current live migration logic request at
> least two nova
> computes.
> 
> 
> A bug [1] was also filed to trace VMWare live
> migration issue.
> 
> I'm now trying the following solution to see
> if it is acceptable for a
> fix, the fix wants enable live migration with
> one nova compute:
> 1) When live migration check if host are same,
> check both host and
> node for the VM instance.
> 2) When nova scheduler select destination for
> live migration, the live
> migration task should put (host, node) to
> attempted hosts.
> 3) Nova scheduler needs to be enhanced to
> support ignored_nodes.
> 4) nova compute need to be enhanced to check
> host and node when doing
> live migration.
> 
> What precisely is the point of "live migrating" an
> instance to the exact
> same host as it is already on? The failure domain is
> the host, so moving
> the instance from one "cluster" to another, but on the
> same host is kind
> of a silly use case IMO.
> 
> 
> Here is where precise definitions of "compute node",
> "OS-EXT-SRV-ATTR:host", and
> "OS-EXT-SRV-ATTR:hypervisor_hostname", and "host" as
> understood by novaclient would be nice.
> 
> Currently the "nova live-migration" command takes a "host"
> argument. It's not clear which of the above this corresponds
> to.
> 
> My understanding is that one nova-compute process can manage
> multiple VMWare physical hosts.  So it could make sense to
> support live migration between separate VMWare hosts even if
> they're managed by a single nova-compute process.
> 
> Chris
> 
> 
> 
> 
> 
> 
> -- 
> Thanks,
> 
> 
> Jay
> 

Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-03 Thread Mike Perez
On 01:32 Fri 04 Apr , Bohai (ricky) wrote:
> > -Original Message-
> > From: Mike Perez [mailto:thin...@gmail.com]
> > Sent: Friday, April 04, 2014 1:20 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Cinder]a question about
> > os-volume_upload_image
> > 
> > What use case is that exactly? I mentioned earlier the original purpose was 
> > for
> > knowing if something was bootable. I'm curious on how else this is being 
> > used.
> > 
> 
> An image has properties, for example hw_scsi_model=virtio-scsi, or other
> user-specified properties.
> In process (1), cinder has saved the image properties in the cinder DB.
> I hope that in process (2), cinder could provide an option to save the
> properties back to the new glance image.
> Without this ability, the user has to find all the properties in the origin
> image and set them back by hand.
> This is useful when the user just wants to make a small modification to an
> origin image.
> 
> An image --(1)--> cinder volume --(2)--> A new image

Hi Ricky,

Thanks for further explaining that to me. It does make sense, but I'm wondering
if there would be cases that people can think of where the properties specified
could become outdated because of changes that happen to the volume.  In glance
the properties make sense because an image is uploaded and has properties set.
The image is not going to be changed. If you want to change something about
that image, you upload a new one with the specified properties. A volume, by
contrast, is constantly having blocks changed. Is there potential for that?
Could this problem also arise if a volume is migrated to a different backend?

-- 
Mike Perez



Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-03 Thread Steve Baker
On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
> Hi Steve,
>
> I think this is exactly the place where we have a boundary between
> Murano catalog and HOT.
>
> In your example one can use an abstract resource type and specify a
> concrete implementation via an environment file. This is how it will be
> done in the final stage in Murano too.
>
> Murano will solve another issue. In your example the user has to know which
> template to use as a provider template. In Murano this will be done in
> the following way:
> 1) The user selects an app which requires a DB.
> 2) Murano sees this requirement for a DB and does a search in the app
> catalog to find all apps which expose this functionality. Murano uses
> app package definitions for that.
> 3) The user selects in the UI the specific DB implementation he wants to use.
>
> As you see, in the Murano case the user has no prior knowledge of the
> available apps\templates and uses the catalog to find them. The search
> criteria can be quite complex, using different application attributes.
> If we think about moving the application definition to HOT format, it
> should provide all the necessary information for the catalog.
>
> In order to search for apps in a catalog which uses HOT format, we need
> something like this:
>
> One needs to define abstract resource like
> OS:HOT:DataBase
>
> Then, in each DB implementation of the DB resource, one has to somehow
> refer to this abstract resource as a parent, like
>
> Resource OS:HOT:MySQLDB
>   Parent: OS:HOT:DataBase
>
> Then the catalog part can use this information and build a list of all
> apps\HOTs containing resources whose parent is OS:HOT:DataBase.
>
> That is what we are looking for. As you see, in this example I am not
> talking about the version and other attributes which might be required
> for the catalog.
>

This sounds like a vision for Murano that I could get behind. It would
be a tool which allows fully running applications to be assembled and
launched from a catalog of Heat templates (plus some app lifecycle
workflow beyond the scope of this email).

We could add type interfaces to HOT but I still think duck typing would
be worth considering. To demonstrate, let's assume that when a template
gets cataloged, metadata is also indexed about what parameters and
outputs the template has. So for the case above:
1) User selects an app to launch from the catalog
2) Murano performs a heat resource-type-list and compares that with the
types in the template. The resource-type list is missing
My::App::Database for a resource named my_db
3) Murano analyses the template and finds that My::App::Database is
assigned 2 properties (db_username, db_password) and elsewhere in the
template is a {get_attr: [my_db, db_url]} attribute access.
4) Murano queries glance for templates, filtering by templates which
have parameters [db_username, db_password] and outputs [db_url] (plus
whatever appropriate metadata filters)
5) Glance returns 2 matches. Murano prompts the user for a choice
6) Murano constructs an environment based on the chosen template,
mapping My::App::Database to the chosen template
7) Murano launches the stack

Sure, there could be a type interface called My::App::Database which
declares db_username, db_password and db_url, but since a heat template
is in a readily parsable declarative format, all required information is
available to analyze, both during glance indexing and app launching.
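A rough sketch of that analysis step, assuming the templates in the catalog
are plain HOT YAML (the function names here are illustrative only, not an
existing Murano or Glance API):

import yaml

def index_template(hot_text):
    # Metadata that could be recorded when a template is cataloged in glance.
    tpl = yaml.safe_load(hot_text)
    return {
        'parameters': sorted(tpl.get('parameters', {})),
        'outputs': sorted(tpl.get('outputs', {})),
        'resource_types': sorted(r['type']
                                 for r in tpl.get('resources', {}).values()),
    }

def find_providers(catalog, wanted_params, wanted_outputs):
    # catalog: iterable of (template_id, info) pairs from index_template().
    for template_id, info in catalog:
        if (set(wanted_params) <= set(info['parameters']) and
                set(wanted_outputs) <= set(info['outputs'])):
            yield template_id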
 

>
>
> On Wed, Apr 2, 2014 at 3:30 PM, Steve Baker wrote:
>
> On 03/04/14 10:39, Ruslan Kamaldinov wrote:
> > This is a continuation of the "MuranoPL questions" thread.
> >
> > As a result of ongoing discussions, we figured out that
> definition of layers
> > which each project operates on and has responsibility for is not
> yet agreed
> > and discussed between projects and teams (Heat, Murano, Solum (in
> > alphabetical order)).
> >
> > Our suggestion and expectation from this working group is to
> have such
> > a definition of layers, agreement on it and an approach of
> reaching it.
> >
> > As a starting point, we suggest the following:
> >
> > There are three layers of responsibility:
> > 1. Resources of the cloud
> > 2. Applications of the cloud
> > 3. Layers specific for Murano and Solum (to be further discussed and
> >clarified, for this discussion it's out of scope)
> >
> > Layer 1 is obviously covered by Heat.
> >
> > Most of the disagreement is around layer 2. Our suggestion is to
> figure out
> > the way where we can have common engine, DSL and approach to
> apps description.
> > For this we'll take HOT and TOSCA as a basis and will work on
> addition of
> > functionality missing from Murano and Solum point of view.
> >
> > We'll be happy if existing Heat team continue working on it,
> having the full
> > control of the project, provided that we agree on functionality
> missing there
> > from Murano and Solum point of view and

Re: [openstack-dev] [OpenStack][neutron][docs] netconn-api doc - adding a new chapter

2014-04-03 Thread Tom Fifield
Feel free to submit doc patches that don't build for review - docs 
reviewers are known to fix markup for you :)


On 04/04/14 11:11, Rajdeep Dua wrote:

I was trying to modify netconn-api docs with a new chapter.
Added a file in v2.0/ch_neutron_python_client.xml

modified:   v2.0/neutron-api-guide.xml by adding the following line

  

It gives a compilation error on executing "mvn generate-sources"

Details about the error

http://pastebin.com/LqTKb0Ct

Thanks
Rajdeep








Re: [openstack-dev] [OpenStack][neutron][docs] netconn-api doc - adding a new chapter

2014-04-03 Thread Anne Gentle
Reading further, I also wanted to be sure you are placing that information
in the correct location.

We have an End User Guide and references for python-neutronclient:
http://docs.openstack.org/user-guide/content/neutron_client_sample_commands.html
http://docs.openstack.org/cli-reference/content/neutronclient_commands.html
http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html

Plus that guide contains a chapter about the Python SDK, such as:
http://docs.openstack.org/user-guide/content/sdk_auth_neutron.html

The neutron-api document is much more for a dev who wants to understand the
spec for the API itself, not for end-users of the API.

Let me know what you're attempting to document so we can find a good place
for it.

Anne


On Thu, Apr 3, 2014 at 10:11 PM, Rajdeep Dua  wrote:

> I was trying to modify netconn-api docs with a new chapter.
> Added a file in v2.0/ch_neutron_python_client.xml
>
> modified:   v2.0/neutron-api-guide.xml by adding the following line
>
>  
>
> It gives a compilation error on executing "mvn generate-sources"
>
> Details about the error
>
> http://pastebin.com/LqTKb0Ct
>
> Thanks
> Rajdeep
>
>
>


[openstack-dev] [oslo] Some Thoughts on Log Message ID Generation Blueprint

2014-04-03 Thread Peng Wu
Hi,

  Recently I read the "Separate translation domain for log messages"
blueprint [1], and I found that we can store both the English log messages
and the translated log messages with some configuration.

  I am an i18n software engineer, and we are thinking about the "Add message
IDs for log messages" blueprint [2]. My thought is that if we can store
both the English log messages and the translated log messages, we can skip
the need for log message ID generation.
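  As a rough illustration of the idea (plain gettext only; the 'nova-log'
translation domain name is just an assumption for this sketch):

import gettext
import logging

_log_translate = gettext.translation('nova-log', fallback=True).gettext
LOG = logging.getLogger(__name__)

def log_info(msg, *args):
    # Keep the original English message (stable and easy to search), and
    # emit the translated one alongside it, so no generated message ID is
    # needed to map one back to the other.
    LOG.info(msg, *args)
    LOG.info(_log_translate(msg), *args)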

  I also commented on the "Add message IDs for log messages" blueprint [2].

  If the servers always store the English log messages, maybe we don't need
the "Add message IDs for log messages" blueprint [2] anymore.

  Feel free to comment on this proposal.

Thanks,
  Peng Wu

Refer URL:
[1]
https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
[2] https://blueprints.launchpad.net/oslo/+spec/log-messages-id





Re: [openstack-dev] [OpenStack][neutron][docs] netconn-api doc - adding a new chapter

2014-04-03 Thread Anne Gentle
Hi Rajdeep,
We have tox tests for the documentation that will check the syntax. Run
this:

tox -v -e checksyntax


to see if your xml source is correct.

Hope that gets you further along, let us know how it goes.

Anne


On Thu, Apr 3, 2014 at 10:11 PM, Rajdeep Dua  wrote:

> I was trying to modify netconn-api docs with a new chapter.
> Added a file in v2.0/ch_neutron_python_client.xml
>
> modified:   v2.0/neutron-api-guide.xml by adding the following line
>
>  
>
> It gives a compilation error on executing "mvn generate-sources"
>
> Details about the error
>
> http://pastebin.com/LqTKb0Ct
>
> Thanks
> Rajdeep
>
>
>


[openstack-dev] [OpenStack][neutron][docs] netconn-api doc - adding a new chapter

2014-04-03 Thread Rajdeep Dua
I was trying to modify netconn-api docs with a new chapter.
Added a file in v2.0/ch_neutron_python_client.xml

modified:   v2.0/neutron-api-guide.xml by adding the following line

 

It gives a compilation error on executing "mvn generate-sources"

Details about the error

http://pastebin.com/LqTKb0Ct

Thanks
Rajdeep


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Lau
Thanks Jay and Chris for the comments!

@Jay Pipes, I think that we still need to enable "one nova compute live
migration", since one nova compute can manage multiple clusters and VMs can
be migrated between those clusters managed by that one nova compute. As for
cells, IMHO, each "cell" can be treated as a small "cloud" rather than a
"compute"; each "cell cloud" should be able to handle VM operations within
that small cloud itself. Please correct me if I am wrong.

@Chris, "OS-EXT-SRV-ATTR:host" is the host where nova compute is running
and "OS-EXT-SRV-ATTR:hypervisor_hostname" is the hypervisor host where the
VM is running. Live migration currently uses "host" as the target. What I
want to do is enable migration with one "host" where that "host" manages
multiple "hypervisors".

I'm planning to draft a bp for review which depends on
https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory

Thanks!


2014-04-04 8:03 GMT+08:00 Chris Friesen :

> On 04/03/2014 05:48 PM, Jay Pipes wrote:
>
>> On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:
>>
>>> Hi,
>>>
>>> Currently with VMWare VCDriver, one nova compute can manage multiple
>>> clusters/RPs; this means a cluster admin cannot do live migration
>>> between clusters/RPs if those clusters/RPs are managed by one nova
>>> compute, as the current live migration logic requires at least two nova
>>> computes.
>>>
>>>
>>> A bug [1] was also filed to trace VMWare live migration issue.
>>>
>>> I'm now trying the following solution to see if it is acceptable for a
>>> fix; the fix wants to enable live migration with one nova compute:
>>> 1) When live migration check if host are same, check both host and
>>> node for the VM instance.
>>> 2) When nova scheduler select destination for live migration, the live
>>> migration task should put (host, node) to attempted hosts.
>>> 3) Nova scheduler needs to be enhanced to support ignored_nodes.
>>> 4) nova compute need to be enhanced to check host and node when doing
>>> live migration.
>>>
>>
>> What precisely is the point of "live migrating" an instance to the exact
>> same host as it is already on? The failure domain is the host, so moving
>> the instance from one "cluster" to another, but on the same host is kind
>> of a silly use case IMO.
>>
>
> Here is where precise definitions of "compute node",
> "OS-EXT-SRV-ATTR:host", and "OS-EXT-SRV-ATTR:hypervisor_hostname", and
> "host" as understood by novaclient would be nice.
>
> Currently the "nova live-migration" command takes a "host" argument. It's
> not clear which of the above this corresponds to.
>
> My understanding is that one nova-compute process can manage multiple
> VMWare physical hosts.  So it could make sense to support live migration
> between separate VMWare hosts even if they're managed by a single
> nova-compute process.
>
> Chris
>
>
>



-- 
Thanks,

Jay


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-04-03 Thread Mohammad Banikazemi

Nader,

During the last ML2 IRC weekly meeting [1] having per-MD extensions was
mentioned. This is an important topic in my opinion. You may want to add a
proposal for a design session on this topic at [2] and/or add this topic to
the agenda for the next ML2 weekly meeting [3] for further discussion.

Mohammad

[1]
http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-04-02-16.00.log.html

[2] http://summit.openstack.org
[3] https://wiki.openstack.org/wiki/Meetings/ML2



From:   Nader Lahouti 
To: "OpenStack Development Mailing List (not for usage questions)"
, Andre Pech
,
Date:   04/03/2014 01:16 PM
Subject:Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting
_original_network in NetworkContext:



Thanks a lot Andre for the reply.
My comments inline:

On Wed, Apr 2, 2014 at 12:37 PM, Andre Pech 
wrote:



  On Fri, Mar 28, 2014 at 6:44 PM, Nader Lahouti 
  wrote:
   Hi Mathieu,

   Thanks a lot for your reply.

   Even in the neutron/neutron/db/db_base_plugin_v2.py: create_network()
   passes network object:
    def create_network(self, context, network):
        """Handle creation of a single network."""
        # single request processing
        n = network['network']    # <-- 'n' has all the network info (including extensions)
        # NOTE(jkoelker) Get the tenant_id outside of the session to avoid
        #                unneeded db action if the operation raises
        tenant_id = self._get_tenant_id_for_create(context, n)
        with context.session.begin(subtransactions=True):
            args = {'tenant_id': tenant_id,
                    'id': n.get('id') or uuidutils.generate_uuid(),
                    'name': n['name'],
                    'admin_state_up': n['admin_state_up'],
                    'shared': n['shared'],
                    'status': n.get('status', constants.NET_STATUS_ACTIVE)}
            network = models_v2.Network(**args)  # <<= 'network' does not include extensions
            context.session.add(network)
        return self._make_network_dict(network, process_extensions=False)
    Even if process_extensions is set to True, we still have the issue.

    If using original_network causes confusion, can we add a new parameter
    and use it in the mechanism driver?
    Also, I haven't received any reply from Salvatore.

  Yes, not re-using original_network would be my preference.


Will add new parameter to avoid re-using original_network.


    * Another issue with the Ml2Plugin regarding the extensions is that the
    neutron API can fail as it cannot find any handler in the plugin for
    requests such as get/update/create/delete.


    For instance, I added 'config_profile' as an extension to the network
    resource and got this error.


    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource File "/opt/stack/neutron/neutron/api/v2/base.py", line 273, in index
    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource     return self._items(request, True, parent_id)
    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource File "/opt/stack/neutron/neutron/api/v2/base.py", line 226, in _items
    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource     obj_getter = getattr(self._plugin, self._plugin_handlers[self.LIST])
    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource AttributeError: 'Ml2Plugin' object has no attribute 'get_config_profiles'
    2014-03-28 12:27:02.495 TRACE resource.py: neutron.api.v2.resource

    We need to either (1) make the Ml2Plugin code aware of such an attribute
    (e.g. by adding another base class, which may not be a good solution), or
    (2) make changes in neutron/neutron/api/v2/base.py so that if Ml2Plugin
    is in use the attributes are retrieved from the mechanism driver, or
    (3) any other idea?

    I already implemented (2) in my private repo to fix this error.
    Also, I have already opened a BP for supporting extensions in MDs, and
    if it is okay I can include the fix as part of that BP.

  Yes, we don't really have great support for extensions today, so fixing
  this in the context of making extensions work in general with ML2 and
  MechanismDrivers sounds great.

  Thanks for taking this on,

Sure. Will add it as part of
https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions

Thanks,
Nader.



  Andre



   Thanks,
   Nader.



   On Fri, Mar 28, 2014 at 8:22 AM, Mathieu Rohon 
   wrote:
 hi nader,

  I don't think this parameter could be used in this case. As Andre said,
  the original-network is useful for update and delete commands. It
  would lead to misunderstandings if we used this param in other cases,
  and particularly in create commands.
 I'm still thinking that the result of  su

Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-03 Thread Lingxian Kong
2014-04-04 9:32 GMT+08:00 Bohai (ricky) :

>
> An image has properties, for example hw_scsi_model=virtio-scsi, or other
> user-specified properties.
> In process (1), cinder has saved the image properties in the cinder DB.
> I hope that in process (2), cinder could provide an option to save the
> properties back to the new glance image.
> Without this ability, the user has to find all the properties in the
> origin image and set them back by hand.
> This is useful when the user just wants to make a small modification to an
> origin image.
>
> An image --(1)--> cinder volume --(2)--> A new image
>
>
Yeah, this is what I want, but I wonder whether it's reasonable for somebody
else.


-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-03 Thread Steve Baker
On 04/04/14 14:05, Michael Elder wrote:
> Hello,
>
> I'm looking for insights about the interaction between keystone and
> the software configuration work that's gone into Icehouse in the last
> month or so.
>
> I've found that when using software configuration, the KeystoneV2 is
> broken because the server.py#_create_transport_credentials()
> explicitly depends on KeystoneV3 methods.
>
> Here's what I've come across:
>
> In the following commit, the introduction of
> _create_transport_credentials() on server.py begins to create a user
> for each OS::Nova::Server resource in the template:
>
> commit b776949ae94649b4a1eebd72fabeaac61b404e0f
> Author: Steve Baker 
> Date:   Mon Mar 3 16:39:57 2014 +1300
> Change: https://review.openstack.org/#/c/77798/
>
> server.py lines 470-471:
>
> if self.user_data_software_config():
>     self._create_transport_credentials()
>
> With the introduction of this change, each server resource which is
> provisioned results in the creation of a new user ID. The call
> delegates through to stack_user.py lines 40-54:
>
>
> def _create_user(self):
>     # Check for stack user project, create if not yet set
>     if not self.stack.stack_user_project_id:
>         project_id = self.keystone().create_stack_domain_project(
>             self.stack.id)
>         self.stack.set_stack_user_project_id(project_id)
>
>     # Create a keystone user in the stack domain project
>     user_id = self.keystone().create_stack_domain_user(
>         username=self.physical_resource_name(),  ## HERE THE USERNAME IS SET TO THE RESOURCE NAME
>         password=self.password,
>         project_id=self.stack.stack_user_project_id)
>
>     # Store the ID in resource data, for compatibility with SignalResponder
>     db_api.resource_data_set(self, 'user_id', user_id)
>
> My concerns with this approach:
>
> - Each resource is going to result in the creation of a unique user in
> Keystone. That design point seems hardly tenable if an organization is
> provisioning a large number of templates every day.
Compared to the resources consumed by creating a new nova server (or a
keystone token!), I don't think creating new users will present a
significant overhead.

As for creating users bound to resources, this is something heat has
done previously but we're doing it with more resources now. With havana
heat (or KeystoneV2) those users will be created in the same project as
the stack launching user, and the stack launching user needs admin
permissions to create these users.
> - If you attempt to set your resource names to some human-readable
> string (like "web_server"), you get one shot to provision the
> template, wherein future attempts to provision it will result in
> exceptions due to duplicate user ids. 
This needs a bug raised. This isn't an issue on KeystoneV3 since the
users are created in a project which is specific to the stack. Also for
v3 operations the username is ignored as the user_id is used exclusively.
>
> - The change prevents compatibility between Heat on Icehouse and
> KeystoneV2.
Please continue to test this with KeystoneV2. However any typical
icehouse OpenStack should really have the keystone v3 API enabled. Can
you explain the reasons why yours isn't?



Re: [openstack-dev] [heat] Problems with software config and Heat standalone configurations

2014-04-03 Thread Steve Baker
On 04/04/14 14:26, Michael Elder wrote:
> Hello,
>
> While adopting the latest from the software configurations in
> Icehouse, we discovered an issue with the new software configuration
> type and its assumptions about using the heat client to perform behavior.
>
> The change was introduced in:
>
> commit 21f60b155e4b65396ebf77e05a0ef300e7c3c1cf
> Author: Steve Baker 
> Change: https://review.openstack.org/#/c/67621/
>
> The net is that the software config type in software_config.py lines
> 147-152 relies on the heat client to create/clone software
> configuration resources in the heat database:
>
> def handle_create(self):
>     props = dict(self.properties)
>     props[self.NAME] = self.physical_resource_name()
>
>     sc = self.heat().software_configs.create(**props)  ## HERE THE HEAT CLIENT IS CREATING A NEW SOFTWARE_CONFIG TO MAKE EACH ONE IMMUTABLE
>     self.resource_id_set(sc.id)
>
> My concerns with this approach:
>
> When used in standalone mode, the Heat engine receives headers which
> are used to drive authentication (X-Auth-Url, X-Auth-User, X-Auth-Key,
> ..):
>
> curl -i -X POST -H 'X-Auth-Key: password' -H 'Accept:
> application/json' -H 'Content-Type: application/json' -H 'X-Auth-Url:
> http://[host]:5000/v2.0' -H
> 'X-Auth-User: admin' -H 'User-Agent: python-heatclient' -d '{...}'
> http://10.0.2.15:8004/v1/{tenant_id}
>
> In this mode, the heat config file indicates standalone mode and can
> also indicate multicloud support:
>
> # /etc/heat/heat.conf
> [paste_deploy]
> flavor = standalone
>
> [auth_password]
> allowed_auth_uris = http://[host1]:5000/v2.0,http://[host2]:5000/v2.0
> multi_cloud = true
>
> Any keystone URL which is referenced is unaware of the orchestration
> engine which is interacting with it. Herein lies the design flaw.
It's not so much a design flaw; it's a bug where a new piece of code
interacts poorly with a mode that currently has few users and no
integration test coverage.

>
> When software_config calls self.heat(), it resolves clients.py's heat
> client:
>
> def heat(self):
>     if self._heat:
>         return self._heat
>
>     con = self.context
>     if self.auth_token is None:
>         logger.error(_("Heat connection failed, no auth_token!"))
>         return None
>     # try the token
>     args = {
>         'auth_url': con.auth_url,
>         'token': self.auth_token,
>         'username': None,
>         'password': None,
>         'ca_file': self._get_client_option('heat', 'ca_file'),
>         'cert_file': self._get_client_option('heat', 'cert_file'),
>         'key_file': self._get_client_option('heat', 'key_file'),
>         'insecure': self._get_client_option('heat', 'insecure')
>     }
>
>     endpoint_type = self._get_client_option('heat', 'endpoint_type')
>     endpoint = self._get_heat_url()
>     if not endpoint:
>         endpoint = self.url_for(service_type='orchestration',
>                                 endpoint_type=endpoint_type)
>     self._heat = heatclient.Client('1', endpoint, **args)
>
>     return self._heat
>
> Here, an attempt to look up the orchestration URL (which is already
> executing in the context of the heat engine) comes up wrong because
> Keystone doesn't know about this remote standalone Heat engine.
>
If you look at self._get_heat_url() you'll see that the heat.conf
[clients_heat] url will be used for the heat endpoint if it is set. I
would recommend setting that for standalone mode. A devstack change for
HEAT_STANDALONE would be helpful here.
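For example, something along these lines in the standalone engine's
heat.conf (the host and port here are just placeholders):

# /etc/heat/heat.conf
[clients_heat]
url = http://standalone-heat-host:8004/v1/%(tenant_id)s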

> Further, at this point, the username and password are null, and when
> the auth_password stanza is applied in the config file, Heat will
> deny any attempts at authorization which only provide a token. As I
> understand it today, that's because it doesn't have individual
> keystone admin users for all remote keystone services in the list of
> allowed_auth_urls. Hence, if only provided with a token, I don't think
> the heat engine can validate the token against the remote keystone.
>
> One workaround that I've implemented locally is to change the logic to
> check for standalone mode and send the username and password.
>
> flavor = 'default'
> try:
>     logger.info("Configuration is %s" % str(cfg.CONF))
>     flavor = cfg.CONF.paste_deploy.flavor
> except cfg.NoSuchOptError as nsoe:
>     flavor = 'default'
> logger.info("Flavor is %s" % flavor)
>
> # We really should examine the pipeline to determine whether we're
> # using authtoken or authpassword.
> if flavor == 'standalone':
>
>     context_map = self.context.to_dict()
>
>     if 'username'

Re: [openstack-dev] Quota Management

2014-04-03 Thread Cazzolato, Sergio J

Glad to see that, for sure I'll participate of this session.

Thanks

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Thursday, April 03, 2014 7:21 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Quota Management

On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
> On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
> > Jay, thanks for taking ownership on this idea, we are really 
> > interested to contribute to this, so what do you think are the next 
> > steps to move on?
> 
> Perhaps a summit session on quota management would be in order?

Done:

http://summit.openstack.org/cfp/details/221

Best,
-jay





Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-03 Thread Bohai (ricky)
> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Friday, April 04, 2014 1:20 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder]a question about
> os-volume_upload_image
> 
> On 18:37 Thu 03 Apr , Lingxian Kong wrote:
> > Thanks Duncan for your answer.
> >
> > I am very interested in making a contribution towards this effort, but
> > what to do next? Waiting for approving for this blueprint? Or see
> > others' opinions on this before we putting more efforts in achieving
> > this? I just want to make sure that we could handle other people's use
> > cases and not just our own.
> 
> What use case is that exactly? I mentioned earlier the original purpose was 
> for
> knowing if something was bootable. I'm curious on how else this is being used.
> 

An image has properties, for example hw_scsi_model=virtio-scsi, or other
user-specified properties.
In process (1), cinder has saved the image properties in the cinder DB.
I hope that in process (2), cinder could provide an option to save the
properties back to the new glance image.
Without this ability, the user has to find all the properties in the origin
image and set them back by hand.
This is useful when the user just wants to make a small modification to an
origin image.

An image --(1)--> cinder volume --(2)--> A new image
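For example, what the user currently has to do by hand looks roughly like
this (a sketch using python-cinderclient and python-glanceclient v2; the
'cinder' and 'glance' objects are assumed to be already-authenticated
clients and 'volume_id' is a placeholder):

volume = cinder.volumes.get(volume_id)
# Properties cinder saved from the origin image in process (1).
saved_props = getattr(volume, 'volume_image_metadata', {})

# Process (2): upload the volume to glance as a new image.
resp, body = cinder.volumes.upload_to_image(
    volume, force=True, image_name='my-modified-image',
    container_format='bare', disk_format='qcow2')
new_image_id = body['os-volume_upload_image']['image_id']

# Copy the saved properties back to the new image by hand.
glance.images.update(new_image_id, **saved_props)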

Best regards to you.
Ricky

> --
> Mike Perez
> 



[openstack-dev] [heat] Problems with software config and Heat standalone configurations

2014-04-03 Thread Michael Elder
Hello,

While adopting the latest from the software configurations in Icehouse, we 
discovered an issue with the new software configuration type and its 
assumptions about using the heat client to perform its work. 

The change was introduced in:

commit 21f60b155e4b65396ebf77e05a0ef300e7c3c1cf
Author: Steve Baker 
Change: https://review.openstack.org/#/c/67621/

The net is that the software config type in software_config.py lines 
147-152 relies on the heat client to create/clone software configuration 
resources in the heat database:

def handle_create(self):
props = dict(self.properties)
props[self.NAME] = self.physical_resource_name()

sc = self.heat().software_configs.create(**props) ## HERE THE HEAT 
CLIENT IS CREATING A NEW SOFTWARE_CONFIG TO MAKE EACH ONE IMMUTABLE
self.resource_id_set(sc.id)

My concerns with this approach:

When used in standalone mode, the Heat engine receives headers which are 
used to drive authentication (X-Auth-Url, X-Auth-User, X-Auth-Key, ..):

curl -i -X POST -H 'X-Auth-Key: password' -H 'Accept: application/json' -H 
'Content-Type: application/json' -H 'X-Auth-Url: http://[host]:5000/v2.0' 
-H 'X-Auth-User: admin' -H 'User-Agent: python-heatclient' -d '{...}' 
http://10.0.2.15:8004/v1/{tenant_id}

In this mode, the heat config file indicates standalone mode and can also 
indicate multicloud support:

# /etc/heat/heat.conf
[paste_deploy]
flavor = standalone

[auth_password]
allowed_auth_uris = http://[host1]:5000/v2.0,http://[host2]:5000/v2.0
multi_cloud = true

Any keystone URL which is referenced is unaware of the orchestration 
engine which is interacting with it. Herein lies the design flaw.

When software_config calls self.heat(), it resolves clients.py's heat 
client:

def heat(self):
if self._heat:
return self._heat
 
con = self.context
if self.auth_token is None:
logger.error(_("Heat connection failed, no auth_token!"))
return None
# try the token
args = {
'auth_url': con.auth_url,
'token': self.auth_token,
'username': None,
'password': None,
'ca_file': self._get_client_option('heat', 'ca_file'),
'cert_file': self._get_client_option('heat', 'cert_file'),
'key_file': self._get_client_option('heat', 'key_file'),
'insecure': self._get_client_option('heat', 'insecure') 
 }

endpoint_type = self._get_client_option('heat', 'endpoint_type')
endpoint = self._get_heat_url()
if not endpoint:
endpoint = self.url_for(service_type='orchestration',
endpoint_type=endpoint_type)
self._heat = heatclient.Client('1', endpoint, **args)

return self._heat

Here, the attempt to look up the orchestration URL (even though the code is
already executing inside the heat engine) returns the wrong endpoint, because
Keystone doesn't know about this remote standalone Heat engine.

Further, at this point the username and password are null, and when the
auth_password stanza in the config file is applied, Heat will deny any
authorization attempt that only provides a token. As I understand it today,
that's because it doesn't have individual keystone admin users for all of the
remote keystone services in the list of allowed_auth_uris. Hence, if only
provided with a token, I don't think the heat engine can validate the token
against the remote keystone.

One workaround that I've implemented locally is to change the logic to 
check for standalone mode and send the username and password. 

flavor = 'default'
try:
    logger.info("Configuration is %s" % str(cfg.CONF))
    flavor = cfg.CONF.paste_deploy.flavor
except cfg.NoSuchOptError as nsoe:
    flavor = 'default'
logger.info("Flavor is %s" % flavor)

# We really should examine the pipeline to determine whether we're
# using authtoken or authpassword.
if flavor == 'standalone':

    context_map = self.context.to_dict()

    if 'username' in context_map.keys():
        username = context_map['username']
    else:
        username = None

    if 'password' in context_map.keys():
        password = context_map['password']
    else:
        password = None

    logger.info("Configuring username='%s' and password='%s'" %
                (username, password))
    args = {
        'auth_url': con.auth_url,
        'token': None,
        'username': username,
        'password': password,
        'ca_file': self._get_client_option('heat', 'ca_file'),
        'cert_file': self._get_client_option('heat', 'cert_file'),
        'key_file': self._get_client_option('heat', 'key_file'),
        'insecure': self._get_client_option('he
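
The message is cut off above; mirroring the clients.py excerpt quoted earlier
(this is a reconstruction for readability, not the author's actual patch), the
standalone branch presumably finishes along these lines:

# Reconstruction only -- reuses the calls shown in the excerpts above.
if flavor == 'standalone':
    args = {
        'auth_url': con.auth_url,
        'token': None,
        'username': username,
        'password': password,
        'ca_file': self._get_client_option('heat', 'ca_file'),
        'cert_file': self._get_client_option('heat', 'cert_file'),
        'key_file': self._get_client_option('heat', 'key_file'),
        'insecure': self._get_client_option('heat', 'insecure'),
    }

endpoint_type = self._get_client_option('heat', 'endpoint_type')
endpoint = self._get_heat_url()
if not endpoint:
    endpoint = self.url_for(service_type='orchestration',
                            endpoint_type=endpoint_type)
self._heat = heatclient.Client('1', endpoint, **args)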

[openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-03 Thread Michael Elder
Hello,

I'm looking for insights about the interaction between keystone and the 
software configuration work that's gone into Icehouse in the last month or 
so. 

I've found that when using software configuration, KeystoneV2 support is
broken, because server.py#_create_transport_credentials() explicitly
depends on KeystoneV3 methods.

Here's what I've come across:

In the following commit, the introduction of 
_create_transport_credentials() on server.py begins to create a user for 
each OS::Nova::Server resource in the template:

commit b776949ae94649b4a1eebd72fabeaac61b404e0f
Author: Steve Baker 
Date:   Mon Mar 3 16:39:57 2014 +1300
Change: https://review.openstack.org/#/c/77798/

server.py lines 470-471:

if self.user_data_software_config():
    self._create_transport_credentials()

With the introduction of this change, each server resource which is 
provisioned results in the creation of a new user ID. The call delegates 
through to stack_user.py lines 40-54:


def _create_user(self):
    # Check for stack user project, create if not yet set
    if not self.stack.stack_user_project_id:
        project_id = self.keystone().create_stack_domain_project(
            self.stack.id)
        self.stack.set_stack_user_project_id(project_id)

    # Create a keystone user in the stack domain project
    user_id = self.keystone().create_stack_domain_user(
        # HERE THE USERNAME IS SET TO THE RESOURCE NAME
        username=self.physical_resource_name(),
        password=self.password,
        project_id=self.stack.stack_user_project_id)

    # Store the ID in resource data, for compatibility with SignalResponder
    db_api.resource_data_set(self, 'user_id', user_id)

My concerns with this approach: 

- Each resource is going to result in the creation of a unique user in
Keystone. That design point seems hardly tenable if an organization is
provisioning a large number of templates every day.
- If you set your resource names to some human-readable string (like
"web_server"), you get one shot at provisioning the template; future
attempts to provision it will result in exceptions due to duplicate user
ids.
- The change breaks compatibility between Heat on Icehouse and
KeystoneV2.

The change comments were a bit sparse on the design reasoning behind this 
approach, and my search of the mail archives was unsuccessful. 
http://openstack.markmail.org/search/?q=heat+on+keystone+v2

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer’s problem.” -Mark Cook
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-04-03 Thread Kirill Izotov
> Then, we can make task executor interface public and allow clients to
> provide their own task executors. It will be possible then for Mistral
> to implement its own task executor, or several, and share the
> executors between all the engine instances.

I'm afraid that if we start to tear apart the TaskFlow engine, it would quickly
become a mess to support. Besides, the amount left to integrate after we throw
out the engine might be so small that it would prove the whole integration to
be merely nominal, and we would be back to square one. Anyway, task execution
is the part that bothers me least; the graph action and the engine itself are
where the pain will be.

> That is part of our public API, it is stable and good enough. Basically,
> I don't think this API needs any major change.


> But whatever should and will be done about it, I daresay all that work
> can be done without affecting the API more than I described above.



I completely agree that we should not change the public API of the sync engine,
especially the one in helpers. What we need is, on the contrary, a low-level
construct that would do the things I stated previously, but would be part of
TaskFlow's public API, so we can be sure it will keep working exactly the same
way it worked yesterday.

--  
Kirill Izotov


On Friday, April 4, 2014 at 2:04, Ivan Melnikov wrote:

>  
> I'm trying to catch up this rather long and interesting discussion,
> sorry for somewhat late reply.
>  
> I can see aspects of 'lazy model' support in TaskFlow:
> - how tasks are executed and reverted
> - how flows are run
> - how engine works internally
>  
> Let me address those aspects separately.
>  
> == Executing and reverting tasks ==
>  
> I think that should be done via a different interface than running a flow
> (or scheduling it to run), as it is a completely different thing. In
> current TaskFlow this interface is called task executor:
> https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_engine/executor.py#L57
>  
> That is actually how our WorkerBasedEngine was implemented: it's the
> same engine with special task executor that schedules tasks on worker
> instead of running task code locally.
>  
> Task executors are not aware of flows by design, all they do is
> executing and reverting tasks. That means that task executors can be
> easily shared between engines if that's wanted.
>  
> Current TaskExecutorBase interface uses futures (PEP 3148-like). When I
> proposed it, futures looked like good tool for the task at hand (see
> e.g. async task etherpad
> https://etherpad.openstack.org/p/async-taskflow-tasks)
>  
> Now it may be time to reconsider that: having one future object per
> running task may become a scalability issue. It may be worth using
> callbacks instead. It should not be too hard to refactor the current engine
> for that. Also, as TaskExecutorBase is an internal API, there should not
> be any compatibility issues.
>  
> Then, we can make task executor interface public and allow clients to
> provide their own task executors. It will be possible then for Mistral
> to implement its own task executor, or several, and share the
> executors between all the engine instances.
>  
> You can call it a plan;)
>  
> == Running flows ==
>  
> To run the flow the TaskFlow client uses the engine interface; also, there
> are a few helper functions provided for convenience:
>  
> http://docs.openstack.org/developer/taskflow/engines.html#module-taskflow.engines.base
> http://docs.openstack.org/developer/taskflow/engines.html#creating-engines
>  
> That is part of our public API, it is stable and good enough. Basically,
> I don't think this API needs any major change.
>  
> Maybe it is worth adding a function or method to schedule running a flow
> without actually waiting for flow completion (at least, it was on my top
> secret TODO list for quite a long time).
>  
> == Engine internals ==
>  
> Each engine eats resources, like thread it runs on; using these
> resources to run one flow only is somewhat wasteful. Some work is
> already planned to address this situation (see e.g.
> https://blueprints.launchpad.net/taskflow/+spec/share-engine-thread).
> Also, it might be a good idea to implement a different 'type' of engine to
> support the 'lazy' model, as Joshua suggests.
>  
> But whatever should and will be done about it, I daresay all that work
> can be done without affecting the API more than I described above.
>  
> --  
> WBR,
> Ivan A. Melnikov
>  
> ... tasks must flow ...
>  
>  
> On 02.04.2014 01:51, Dmitri Zimine wrote:
> > Even more responses inline :)
>  
> [...]
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailm

Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Chris Friesen

On 04/03/2014 05:48 PM, Jay Pipes wrote:

On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:

Hi,

Currently with the VMWare VCDriver, one nova compute can manage multiple
clusters/RPs. This means a cluster admin cannot do live migration
between clusters/RPs if those clusters/RPs are managed by one nova compute,
as the current live migration logic requires at least two nova
computes.


A bug [1] was also filed to trace VMWare live migration issue.

I'm now trying the following solution to see if it is acceptable as a
fix; the fix aims to enable live migration with one nova compute:
1) When live migration checks whether the hosts are the same, check both
host and node for the VM instance.
2) When the nova scheduler selects a destination for live migration, the
live migration task should add (host, node) to the attempted hosts.
3) The nova scheduler needs to be enhanced to support ignored_nodes.
4) Nova compute needs to be enhanced to check both host and node when
doing live migration.


What precisely is the point of "live migrating" an instance to the exact
same host as it is already on? The failure domain is the host, so moving
the instance from one "cluster" to another, but on the same host is kind
of a silly use case IMO.


Here is where precise definitions of "compute node",
"OS-EXT-SRV-ATTR:host", "OS-EXT-SRV-ATTR:hypervisor_hostname", and
"host" as understood by novaclient would be nice.


Currently the "nova live-migration" command takes a "host" argument. 
It's not clear which of the above this corresponds to.


My understanding is that one nova-compute process can manage multiple 
VMWare physical hosts.  So it could make sense to support live migration 
between separate VMWare hosts even if they're managed by a single 
nova-compute process.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-03 Thread Jay Pipes
On Mon, 2014-03-31 at 17:11 +0800, Jay Lau wrote:
> Hi,
> 
> Currently with the VMWare VCDriver, one nova compute can manage multiple
> clusters/RPs. This means a cluster admin cannot do live migration
> between clusters/RPs if those clusters/RPs are managed by one nova compute,
> as the current live migration logic requires at least two nova
> computes.
> 
> 
> A bug [1] was also filed to trace VMWare live migration issue.
> 
> I'm now trying the following solution to see if it is acceptable as a
> fix; the fix aims to enable live migration with one nova compute:
> 1) When live migration checks whether the hosts are the same, check both
> host and node for the VM instance.
> 2) When the nova scheduler selects a destination for live migration, the
> live migration task should add (host, node) to the attempted hosts.
> 3) The nova scheduler needs to be enhanced to support ignored_nodes.
> 4) Nova compute needs to be enhanced to check both host and node when
> doing live migration.

What precisely is the point of "live migrating" an instance to the exact
same host as it is already on? The failure domain is the host, so moving
the instance from one "cluster" to another, but on the same host is kind
of a silly use case IMO. 

But... if this really is something that is considered useful, then it
seems to me that it would be more useful to simply expand the definition
of the "compute node" object within Nova to be generic enough that a
compute node could be a VCenter cluster/RP. That way there would be no
need to hack the scheduler to account for more than the host (compute
node internally in Nova)?

In the same way, a cell could be a "compute node" as well, and we
wouldn't need separate hacks in the scheduler and elsewhere that treated
cells differently than "regular" compute nodes.

Best,
-jay

> I also uploaded a WIP patch [2] for you to review the idea of the fix
> and hope can get some comments from you.
> 
> 
> [1] https://bugs.launchpad.net/nova/+bug/1192192
> [2] https://review.openstack.org/#/c/84085
> 
> 
> -- 
> Thanks,
> 
> 
> Jay
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-04-03 11:05:10 -0700:
> Clint Byrum  wrote on 04/03/2014 01:10:30 PM:
> 
> > Things that affect the stack as a whole really belong in the stack
> > API. That would also put them in the OS::Heat::Stack resource, so the
> > template language already supports that.
> 
> The OS::Heat::Stack resource is one of several that create nested stacks;
> we should be able to apply holistic scheduling to all stacks, regardless
> of whether they are nested or which kind of nested stack they are.
> Yes, if holistic scheduling were a feature in the Heat API then all kinds 
> of
> resources that create nested stacks "should" expose that feature
> (shout out to Trove, autoscaling groups, ...).
> 
> > As for policies which might affect a holistic scheduler, those can just
> > be resources as well. Just like deployments relate to servers, resources
> > can relate to any policies they need.
> 
> A holistic scheduler needs input that describes all the resources to be 
> scheduled as well as all the policies that apply.  Should a template 
> contain a resource whose input includes a copy of the rest of the 
> template?
> 

Are you suggesting that a scheduler should try and parse the template
itself? Or that the scheduler would take over the scheduling work that
heat-engine currently does? The whole question raises many more
questions, and I wonder if there's just something you haven't told us
about this use case. :-P

A holistic scheduler is meant to relate things. Templates relate things
naturally, so doing it with resources seems obvious to me.

Something like OS::Scheduler::ResourceGroup which would inform the
scheduler that a grouping is needed. And then the resources all are
part of that group via their properties, something like 'resource_group:
{get_resource: group1}'. If there's a policy for that group that I want
applied, that is a policy that would also refer to the group and inform
the scheduler that this group gets this policy.

For instances where the whole stack needs to be considered in a group,
this is where I suggest that all resources should just be added to the
group. When we have proof that this approach works, we can talk about
introducing shorthand for it.

What I don't want to see is a special template section which introduces
unnecessary complexity before we have concrete evidence that the approach
is viable.

> > I would prefer that we focus on making HOT composable rather than
> > extensible. If there is actually something missing from the root of the
> > language, then it should be in the language. Use cases should almost
> > always try to find a way to work as resources first, and then if that
> > is unwieldy, look into language enhancements to factor things out.
> 
> Yeah, I would too.  Like I said, I have no satisfactory solution yet. Here 
> is more of the problem.  I would like to follow an evolutionary path 
> starting from the instance groups that are in Nova today.  I think I can 
> outline such an evolution.  I am sure there will be debate about it.  I am 
> even more sure that it will take time to accomplish that evolution.  OTOH, 
> locally we have a holistic scheduler already working.  We want to be able 
> to start using it today.  What can we do in this interim, and how can we 
> arrange things to do progressive convergence so that the interim solution 
> evolves as Nova & scheduling evolve, so that there is no big switch at the 
> end to the end solution?

Big switches are fine as long as they're simplifications. So if we make
you be explicit about the groupings today, but then you find that you
have many stacks where everything is in fact in one group, then you can
make a clear argument for a root level item to set a property on all
resources which support it, which I think is the right way to go.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-03 Thread Joshua Harlow
Cool. So would that mean that once a quota is reached (for whatever reason) and
the scenario wants to continue running (instead of failing due to quota issues),
it can expand that quota automatically (for cases where this is needed)? Or is
this also useful for benchmarking how fast quotas can be changed, or is it a
combination of both?

From: Boris Pavlovic <bo...@pavlovic.me>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, April 3, 2014 at 1:43 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Tenant quotas can now be updated during a 
benchmark

Bruno,

Well done. Finally we have this feature in Rally!


Best regards,
Boris Pavlovic


On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti
<bruno.semperlo...@gmail.com> wrote:
Hi Rally users,

I would like to inform you that the feature allowing to update tenant's quotas 
during a benchmark is available with the implementation of this blueprint: 
https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas

Currently, only Nova and Cinder quotas are supported (Neutron coming soon).

Here a small sample of how to do it:

In the json file describing the benchmark scenario, use the "context" section 
to indicate quotas for each service. Quotas will be applied for each generated 
tenants.

{
"NovaServers.boot_server": [
{
"args": {
"flavor_id": "1",
"image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
},
"runner": {
"type": "continuous",
"times":100,
"active_users": 10
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 1
},
"quotas": {
"nova": {
"instances": 150,
"cores": 150,
"ram": -1
}
}
}
}
]
}

Following, the list of supported quotas:
nova:
instances, cores, ram, floating-ips, fixed-ips, metadata-items, injected-files, 
injected-file-content-bytes, injected-file-path-bytes, key-pairs, 
security-groups, security-group-rules

cinder:
gigabytes, snapshots, volumes

neutron (coming soon):
network, subnet, port, router, floatingip, security-group, security-group-rule


Regards,

--
Bruno Semperlotti

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Jay Pipes
On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
> On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
> > Jay, thanks for taking ownership on this idea, we are really
> > interested to contribute to this, so what do you think are the next
> > steps to move on?
> 
> Perhaps a summit session on quota management would be in order?

Done:

http://summit.openstack.org/cfp/details/221

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Nova][Heat] Sample config generator issue

2014-04-03 Thread Zane Bitter

On 03/04/14 08:48, Doug Hellmann wrote:

On Wed, Apr 2, 2014 at 9:55 PM, Zane Bitter  wrote:

We have an issue in Heat where the sample config generator from Oslo is
currently broken (see bug #1288586). Unfortunately it turns out that there
is no fix to the generator script itself that can do the Right Thing for
both Heat and Nova.

A brief recap on how the sample config generator works: it goes through all
of the files specified and finds all the ConfigOpt objects at the top level.
It then searches for them in the registered options, and returns the name of
the group in which they are registered. Previously it looked for the
identical object being registered, but now it just looks for any equivalent
ones. When you register two or more equivalent options, the second and
subsequent ones are just ignored by oslo.config.

The situation in Heat is that we have a bunch of equivalent options
registered in multiple groups. This is because we have a set of options for
each client library (i.e. python-novaclient, python-cinderclient, &c.), with
each set containing equivalent options (e.g. every client has an
"endpoint_type" option for looking up the keystone catalog). This used to
work, but now that equivalent options (and not just identical options) match
when searching for them in a group, we just end up with multiple copies of
each option in the first group to be searched, and none in any of the other
groups, in the generated sample config.

Nova, on the other hand, has the opposite problem (see bug #1262148). Nova
adds the auth middleware from python-keystoneclient to its list of files to
search for options. That middleware imports a file from oslo-incubator that
registers the option in the default group - a registration that is *not*
wanted by the keystone middleware, because it registers an equivalent option
in a different group instead (or, as it turns out, as well). Just to make it
interesting, Nova uses the same oslo-incubator module and relies on the
option being registered in the default group. Of course, oslo-incubator is
not a real library, so it gets registered a second time but ignored (since
an equivalent one is already present). Crucially, the oslo-incubator file
from python-keystoneclient is not on the list of extra modules to search in
Nova, so when the generator script was looking for options identical to the
ones it found in modules, it didn't see this option at all. Hence the change
to looking for equivalent options, which broke Heat.

Neither comparing for equivalence nor for identity in the generator script
can solve both use cases. It's hard to see what Heat could or should be
doing differently. I think it follows that the fix needs to be in either
Nova or python-keystoneclient in the first instance.

One option I suggested was for the auth middleware to immediately deregister
the extra option that had accidentally been registered upon importing a
module from oslo-incubator. I put up patches to do this, but it seemed to be
generally agreed by Oslo folks that this was a Bad Idea.

Another option would be to specifically include the relevant module from
keystoneclient.openstack.common when generating the sample config. This
seems quite brittle to me.

We could fix it by splitting the oslo-incubator module into one that
provides the code needed by the auth middleware and one that does the
registration of options, but this will likely result in cascading changes to
a whole bunch of projects.

Does anybody have any thoughts on what the right fix looks like here?
Currently, verification of the sample config is disabled in the Heat gate
because of this issue, so it would be good to get it resolved.

cheers,
Zane.


We've seen some similar issues in other projects where the "guessing"
done by the generator is not matching the newer ways we use
configuration options. In those cases, I suggested that projects use
the new entry-point feature that allows them to explicitly list
options within groups, instead of scanning a set of files. This
feature was originally added so apps can include the options from
libraries that use oslo.config (such as oslo.messaging), but it can be
used for options defined by the applications as well.

To define an option discovery entry point, create a function that
returns a sequence of (group name, option list) pairs. For an example,
see list_opts() in oslo.messaging [1]. Then define the entry point in
your setup.cfg under the "oslo.config.opts" namespace [2]. If you need
more than one function, register them separately.
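
To make that concrete, here is a minimal sketch of what such a discovery
function and its registration might look like for Heat. The module path,
option names and the "heat" entry-point name are illustrative assumptions,
not Heat's actual code:

# heat/opts.py -- sketch only
from oslo.config import cfg

clients_heat_opts = [
    cfg.StrOpt('endpoint_type', default='publicURL',
               help='Endpoint type to look up in the keystone catalog.'),
]

def list_opts():
    # Return a sequence of (group name, option list) pairs; None means
    # the DEFAULT group.  The sample config generator calls this through
    # the 'oslo.config.opts' entry point instead of scanning source files.
    return [
        ('clients_heat', clients_heat_opts),
        (None, []),
    ]

# And in setup.cfg (also a sketch):
#
# [entry_points]
# oslo.config.opts =
#     heat = heat.opts:list_opts
#
# generate_sample.sh would then be invoked with "-l heat".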

Then change the way generate_sample.sh is called for your project so
it passes the -l option [3] once for each name you have given to the
entry points. So if you have just "heat" you would pass "-l heat" and
if you have "heat-core" and "heat-some-driver" you would pass "-l
heat-core -l heat-some-driver".

For application options, you shouldn't mix the -l option with the file
scanner, since you will end up with duplicate options.

Doug

[1] 
http://git.ope

Re: [openstack-dev] [Heat] Some thoughts on the mapping section

2014-04-03 Thread Thomas Spatzier
> From: Zane Bitter 
> To: openstack-dev@lists.openstack.org
> Date: 03/04/2014 22:09
> Subject: Re: [openstack-dev] [Heat] Some thoughts on the mapping section
>
> On 03/04/14 03:21, Thomas Herve wrote:
> >> Speaking of offering options for selection, there is another proposal
on
> >> >adding conditional creation of resources [3], whose use case to
enable
> >> >or disable a resource creation (among others).  My perception is that
> >> >these are all relevant enhancements to the reusability of HOT
templates,
> >> >though I don't think we really need very sophisticated combinatory
> >> >conditionals.
> > I think that's interesting that you mentioned that, because Zane
> talked about a "variables" section, which would encompass what
> "conditions" and "mappings" mean. That's why we're discussing
> extensively about those design points, to see where we can be a bit
> more generic to handle more use cases.
>
> There was some discussion in the review[1] of having an if/then function
> (equivalent of the ternary ?: operator in C) for calculating variable...
> on reflection that is nothing more than a dumbed down version of
> Fn::Select in CloudFormation (which we have no equivalent to in HOT) in
> which the only possible index values are "true" and "false".
>
> The differences between Fn::Select and Fn::FindInMap are:
>
> 1) The bizarre double-indirect lookup, of course; and
> 2) The actual mappings are defined once in a single place, rather than
> everywhere you need to access them.
>
> I think we're all agreed that (1) is undesirable in itself. It occurs to
> me that the existence of a variables section could render (2) moot also
> (since you could calculate the result in one place, and just reference
> it from there on).
>
> So if we had the variables section, we probably no longer need to
> consider a mapping section and a replacement for Fn::FindInMap, just a
> replacement for Fn::Select that could also cover the if/then use case.
>
> Thoughts?

+1 for solving this in one place and coming up with such a solution that
introduces just one new "thing" to solve problems that are addressed with
two different things in CFN.

Regards,
Thomas

>
> cheers,
> Zane.
>
> [1] https://review.openstack.org/84468
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Thomas Spatzier
> From: Keith Bray 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 03/04/2014 19:51
> Subject: Re: [openstack-dev] [heat] metadata for a HOT
>
> Steve, agreed.  Your description I believe is the conclusion that
> the community came to when this was perviously discussed, and we
> managed to get the implementation of parameter grouping and ordering
> [1] that you mentioned which has been very helpful.  I don't think
> we landed the keywords blueprint [2], which may be controversial
> because it is essentially unstructured. I wanted to make sure Mike
> had the links for historical context, but certainly understand and
> appreciate your point of view here.  I wasn't able to find the email
> threads to point Mike to, but assume they exist in the list archives
> somewhere.
>
> We proposed another specific piece of template data [3] which I
> can't remember whether it was met with resistance or we just didn't
> get to implementing it since we knew we would have to store other
> data specific to our use cases in other files anyway.   We decided
> to go with storing our extra information in a catalog (really just a
> Git repo with a README.MD [4]) for now  until we can implement
> acceptable catalog functionality somewhere like Glance, hopefully in
> the Juno cycle.  When we want to share the template, we share all
> the files in the repo (inclusive of the README.MD).  It would be
> more ideal if we could share a single file (package) inclusive of
> the template and corresponding help text and any other UI hint info
> that would helpful.  I expect service providers to have differing

I agree that it makes sense to package all the stuff that makes up a template
(which will in many cases not be a single template file, but nested templates,
environments, scripts, ...) into one archive. We have this concept in TOSCA
and I am sure we will have to implement a solution for this as part of the
TOSCA YAML to HOT converter work that we started. If several people see
this requirement, let's see if we can join forces on a common solution.

> views of the extra data they want to store with a template... So
> it'd just be nice to have a way to account for service providers to
> store their unique data along with a template that is easy to share
> and is part of the template package.  We bring up portability and
> structured data often, but I'm starting to realize that portability
> of a template breaks down unless every service provider runs exactly
> the same Heat resources, same image IDs, flavor types, etc.). I'd
> like to drive more standardization of data for image and template
> data into Glance so that in HOT we can just declare things like
> "Linux, Flavor Ubuntu, latest LTS, minimum 1Gig" and automatically
> discover and choose the right image to provision, or error if a
> suitable match can not be found.  The Murano team has been hinting

Sahdev from our team recently created a BP for exactly that scenario.
Please have a look and see if that is in line with your thinking and
provide comments as necessary:

https://blueprints.launchpad.net/heat/+spec/constraint-based-flavors-and-images

> at wanting to solve a similar problem, but with a broader vision
> from a complex-multi application declaration perspective that
> crosses multiple templates or is a layer above just matching to what
> capabilities Heat resources provide and matching against
> capabilities that a catalog of templates provide (and mix that with
> capabilities the cloud API services provide).  I'm not yet convinced
> that can't be done with a parent Heat template since we already have
> the declarative constructs and language well defined, but I
> appreciate the use case and perspective those folks are bringing to
> the conversation.
>
> [1]
https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
>  https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering
>
> [2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
> https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords
>
> [3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
> https://wiki.openstack.org/wiki/Heat/UI#Help_Text
>
> [4] Ex. Help Text accompanying a template in README.MD format:
> https://github.com/rackspace-orchestration-templates/docker
>
> -Keith
>
> From: Steven Dake 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<
> openstack-dev@lists.openstack.org>
> Date: Thursday, April 3, 2014 10:30 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [heat] metadata for a HOT
>
> On 04/02/2014 08:41 PM, Keith Bray wrote:
> https://wiki.openstack.org/wiki/Heat/StackMetadata
>
> https://wiki.openstack.org/wiki/Heat/UI
>
> -Keith
>
> Keith,
>
> Taking a look at the UI specification, I thought I'd take a look at
> adding parameter grouping and ordering to the hot_spec.rst file.
> That seems like a really nice cons

Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-04-03 Thread Joshua Harlow
Thanks Ivan,

This does seem like a possible way forward also.

It'd be interesting to see what/how callbacks would work vs. futures and
what extension point we could provide to mistral for task execution (maybe
their task executor would complete by doing a call to some service, not
amqp, for example?).

Maybe some example/POC code would help all :-)
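
As a strawman, a Mistral-provided "lazy" executor might look something like
the sketch below. It assumes a simplified executor interface with
execute_task/revert_task hooks and a completion callback -- the names and
signatures here are illustrative, not TaskFlow's actual TaskExecutorBase API:

# Strawman only: an executor that hands work to an external service
# (e.g. over HTTP or a queue) and reports completion via a callback,
# so the engine holds no thread or future while the task runs remotely.
class LazyTaskExecutor(object):

    def __init__(self, dispatch, on_done):
        self._dispatch = dispatch   # callable that sends work elsewhere
        self._on_done = on_done     # callback(task_name, result)

    def execute_task(self, task, arguments):
        # Hand the work off and return immediately.
        self._dispatch('execute', task.name, arguments)

    def revert_task(self, task, arguments, result):
        self._dispatch('revert', task.name, arguments)

    def task_complete(self, task_name, result):
        # Called by whatever transport delivers the remote result; this
        # is where the engine (or Mistral) would be resumed.
        self._on_done(task_name, result)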

-Josh

-Original Message-
From: Ivan Melnikov 
Date: Thursday, April 3, 2014 at 12:04 PM
To: "OpenStack Development Mailing List (not for usage questions)"
, Joshua Harlow 
Subject: Re: [openstack-dev] [Mistral] How Mistral handling long running
delegate tasks

>
>I'm trying to catch up this rather long and interesting discussion,
>sorry for somewhat late reply.
>
>I can see aspects of 'lazy model' support in TaskFlow:
>- how tasks are executed and reverted
>- how flows are run
>- how engine works internally
>
>Let me address those aspects separately.
>
>== Executing and reverting tasks ==
>
>I think that should be done via a different interface than running a flow
>(or scheduling it to run), as it is a completely different thing. In
>current TaskFlow this interface is called task executor:
>https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_
>engine/executor.py#L57
>
>That is actually how our WorkerBasedEngine was implemented: it's the
>same engine with special task executor that schedules tasks on worker
>instead of running task code locally.
>
>Task executors are not aware of flows by design, all they do is
>executing and reverting tasks. That means that task executors can be
>easily shared between engines if that's wanted.
>
>Current TaskExecutorBase interface uses futures (PEP 3148-like). When I
>proposed it, futures looked like good tool for the task at hand (see
>e.g. async task etherpad
>https://etherpad.openstack.org/p/async-taskflow-tasks)
>
>Now it may be time to reconsider that: having one future object per
>running task may become a scalability issue. It may be worth using
>callbacks instead. It should not be too hard to refactor the current engine
>for that. Also, as TaskExecutorBase is an internal API, there should not
>be any compatibility issues.
>
>Then, we can make task executor interface public and allow clients to
>provide their own task executors. It will be possible then for Mistral
>to implement its own task executor, or several, and share the
>executors between all the engine instances.
>
>You can call it a plan;)
>
>== Running flows ==
>
>To run the flow the TaskFlow client uses the engine interface; also, there
>are a few helper functions provided for convenience:
>
>http://docs.openstack.org/developer/taskflow/engines.html#module-taskflow.
>engines.base
>http://docs.openstack.org/developer/taskflow/engines.html#creating-engines
>
>That is part of our public API, it is stable and good enough. Basically,
>I don't think this API needs any major change.
>
>Maybe it is worth adding a function or method to schedule running a flow
>without actually waiting for flow completion (at least, it was on my top
>secret TODO list for quite a long time).
>
>== Engine internals ==
>
>Each engine eats resources, like thread it runs on; using these
>resources to run one flow only is somewhat wasteful. Some work is
>already planned to address this situation (see e.g.
>https://blueprints.launchpad.net/taskflow/+spec/share-engine-thread).
>Also, it might be a good idea to implement a different 'type' of engine to
>support the 'lazy' model, as Joshua suggests.
>
>But whatever should and will be done about it, I daresay all that work
>can be done without affecting the API more than I described above.
>
>-- 
>WBR,
>Ivan A. Melnikov
>
>... tasks must flow ...
>
>
>On 02.04.2014 01:51, Dmitri Zimine wrote:
>> Even more responses inline :)
>[...]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Cazzolato, Sergio J
+1

-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com] 
Sent: Thursday, April 03, 2014 4:42 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Quota Management

On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
> Jay, thanks for taking ownership on this idea, we are really 
> interested to contribute to this, so what do you think are the next 
> steps to move on?

Perhaps a summit session on quota management would be in order?
--
Kevin L. Mitchell  Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-03 Thread Boris Pavlovic
Bruno,

Well done. Finally we have this feature in Rally!


Best regards,
Boris Pavlovic


On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti <
bruno.semperlo...@gmail.com> wrote:

> Hi Rally users,
>
> I would like to inform you that the feature allowing to update tenant's
> quotas during a benchmark is available with the implementation of this
> blueprint:
> https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas
>
> Currently, only Nova and Cinder quotas are supported (Neutron coming soon).
>
> Here a small sample of how to do it:
>
> In the json file describing the benchmark scenario, use the "context"
> section to indicate quotas for each service. Quotas will be applied for
> each generated tenants.
>
> {
> "NovaServers.boot_server": [
> {
> "args": {
> "flavor_id": "1",
> "image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
> },
> "runner": {
> "type": "continuous",
>  "times":100,
> "active_users": 10
> },
> "context": {
> "users": {
> "tenants": 1,
> "users_per_tenant": 1
> },
> *"quotas": {*
> *"nova": {*
> *"instances": 150,*
> *"cores": 150,*
> *"ram": -1*
> *}*
> *}*
> }
> }
> ]
> }
>
>
> Following, the list of supported quotas:
> nova:
> instances, cores, ram, floating-ips, fixed-ips, metadata-items,
> injected-files, injected-file-content-bytes, injected-file-path-bytes,
> key-pairs, security-groups, security-group-rules
>
> cinder:
> gigabytes, snapshots, volumes
>
> neutron (coming soon):
> network, subnet, port, router, floatingip, security-group,
> security-group-rule
>
>
> Regards,
>
> --
> Bruno Semperlotti
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-03 Thread Derek Higgins
On 03/04/14 12:02, Robert Collins wrote:
> Getting back in the swing of things...
> 
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
> 
> In this months review:
>  - Dan Prince for -core
+1, will be good to have Dan on board

>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core

+1, all seems reasonable to me.

> 
> Existing -core members are eligible to vote - please indicate your
> opinion on each of the three changes above in reply to this email.
> 
> Ghe, please let me know if you're willing to be in tripleo-core. Jan,
> Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
> substantially more active in TripleO reviews in the short term, please
> let us know.
> 
> My approach to this caused some confusion a while back, so I'm keeping
> the boilerplate :) - I'm
> going to talk about stats here, but they are only part of the picture
> : folk that aren't really being /felt/ as effective reviewers won't be
> asked to take on -core responsibility, and folk who are less active
> than needed but still very connected to the project may still keep
> them : it's not pure numbers.
> 
> Also, it's a vote: that is direct representation by the existing -core
> reviewers as to whether they are ready to accept a new reviewer as
> core or not. This mail from me merely kicks off the proposal for any
> changes.
> 
> But, the metrics provide an easy fingerprint - they are a useful tool
> to avoid bias (e.g. remembering folk who are just short-term active) -
> human memory can be particularly treacherous - see 'Thinking, Fast and
> Slow'.
> 
> With that prelude out of the way:
> 
> Please see Russell's excellent stats:
> http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
> 
> For joining and retaining core I look at the 90 day statistics; folk
> who are particularly low in the 30 day stats get a heads up so they
> aren't caught by surprise.
> 
> 90 day active-enough stats:
> 
> +-+---++
> | Reviewer| Reviews   -2  -1  +1  +2  +A+/- % |
> Disagreements* |
> +-+---++
> |slagle **| 6550 145   7 503 15477.9% |
> 36 (  5.5%)  |
> | clint-fewbar ** | 5494 120  11 414 11577.4% |
> 32 (  5.8%)  |
> |   lifeless **   | 518   34 203   2 279 11354.2% |
> 21 (  4.1%)  |
> |  rbrady | 4530  14 439   0   096.9% |
> 60 ( 13.2%)  |
> | cmsj ** | 3220  24   1 297 13692.5% |
> 22 (  6.8%)  |
> |derekh **| 2610  50   1 210  9080.8% |
> 12 (  4.6%)  |
> |dan-prince   | 2570  67 157  33  1673.9% |
> 15 (  5.8%)  |
> |   jprovazn **   | 1900  21   2 167  4388.9% |
> 13 (  6.8%)  |
> |ifarkas **   | 1860  28  18 140  8284.9% |
> 6 (  3.2%)  |
> ===
> | jistr **| 1770  31  16 130  2882.5% |
> 4 (  2.3%)  |
> |  ghe.rivero **  | 1761  21  25 129  5587.5% |
> 7 (  4.0%)  |
> |lsmola **| 1722  12  55 103  6391.9% |
> 21 ( 12.2%)  |
> |   jdob  | 1660  31 135   0   081.3% |
> 9 (  5.4%)  |
> |  bnemec | 1380  38 100   0   072.5% |
> 17 ( 12.3%)  |
> |greghaynes   | 1260  21 105   0   083.3% |
> 22 ( 17.5%)  |
> |  dougal | 1250  26  99   0   079.2% |
> 13 ( 10.4%)  |
> |   tzumainn **   | 1190  30  69  20  1774.8% |
> 2 (  1.7%)  |
> |rpodolyaka   | 1150  15 100   0   087.0% |
> 15 ( 13.0%)  |
> | ftcjeff | 1030   3 100   0   097.1% |
> 9 (  8.7%)  |
> | thesheep|  930  26  31  36  2172.0% |
> 3 (  3.2%)  |
> |pblaho **|  881   8  37  42  2289.8% |
> 3 (  3.4%)  |
> | jonpaul-sullivan|  800  33  47   0   058.8% |
> 17 ( 21.2%)  |
> |   tomas-8c8 **  |  780  15   4  59  2780.8% |
> 4 (  5.1%)  |
> |marios **|  750   7  53  15  1090.7% |
> 14 ( 18.7%)  |
> | stevenk |  750  15  60   0   080.0% |
> 9 ( 12.0%)  |
> |   rwsu  |  740   3  71   0   095.9% |
> 11 ( 14.9%)  |
> | mkerrin |  700  14  56   0   080.0% |
> 14 ( 20.0%)  |
> 
The === line is set at the just-voted-on minimum expected of core: 3

Re: [openstack-dev] [Heat] Some thoughts on the mapping section

2014-04-03 Thread Zane Bitter

On 03/04/14 03:21, Thomas Herve wrote:

Speaking of offering options for selection, there is another proposal on
>adding conditional creation of resources [3], whose use case to enable
>or disable a resource creation (among others).  My perception is that
>these are all relevant enhancements to the reusability of HOT templates,
>though I don't think we really need very sophisticated combinatory
>conditionals.

I think that's interesting that you mentioned that, because Zane talked about a "variables" 
section, which would encompass what "conditions" and "mappings" mean. That's why we're 
discussing extensively about those design points, to see where we can be a bit more generic to handle more 
use cases.


There was some discussion in the review[1] of having an if/then function 
(equivalent of the ternary ?: operator in C) for calculating variable... 
on reflection that is nothing more than a dumbed down version of 
Fn::Select in CloudFormation (which we have no equivalent to in HOT) in 
which the only possible index values are "true" and "false".


The differences between Fn::Select and Fn::FindInMap are:

1) The bizarre double-indirect lookup, of course; and
2) The actual mappings are defined once in a single place, rather than 
everywhere you need to access them.


I think we're all agreed that (1) is undesirable in itself. It occurs to 
me that the existence of a variables section could render (2) moot also 
(since you could calculate the result in one place, and just reference 
it from there on).


So if we had the variables section, we probably no longer need to 
consider a mapping section and a replacement for Fn::FindInMap, just a 
replacement for Fn::Select that could also cover the if/then use case.


Thoughts?

cheers,
Zane.

[1] https://review.openstack.org/84468

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Marconi PTL Candidacy

2014-04-03 Thread Anita Kuno
confirmed

On 04/03/2014 01:53 PM, Kurt Griffiths wrote:
> Hi folks, I'd like to submit my name for serving during the Juno cycle as
> the Queue Service PTL.
> 
> During my career I've had the opportunity to work in a wide variety of
> roles in fields such as video game development, system utilities,
> Internet marketing, and web services. This experience has given me a
> holistic, pragmatic view on software development that I have tried to
> leverage in my contributions to OpenStack. I believe that the best
> software is smart (flexes to work at the user's level), useful (informed
> by what users really need, not what we think they need), and pleasant
> (optimized for happiness).
> 
> I've been heavily involved with Marconi from its inception, leading the
> initial unconference session at the Grizzly summit, where we came together
> as a community to fill what many saw as an obvious gap in the OpenStack
> portfolio. I'd like to give a shout-out to Mark Atwood, Monty Taylor,
> Jamie Painter, Allan Metts, Tim Simpson, and Flavio Percoco for their
> early involvement in kick-starting the project. Thanks guys!
> 
> Marconi is key to enabling the development of web and mobile apps on top
> of OpenStack, and we have also been hearing from several other programs
> who are interested in using Marconi to surface events to end users (among
> other things.)
> 
> The Marconi team has taken a pragmatic approach to the design of the API
> and its architecture, inviting and valuing feedback from users and
> operators all along the way. I think we can learn to do an even better job
> at this during the Juno cycle.
> 
> A PTL has many responsibilities, but the ones I feel are most important
> are these:
> 
> 1. As a program facilitator, a PTL is responsible for keeping launchpad
> groomed and up to date; watching out for logjams and misunderstandings,
> working to resolve them quickly as they arise; and, finally, creating and
> moderating multiple communication channels between contributors, and
> between the team and the broader community.
> 2. As a culture champion, the PTL is responsible for leading by example
> and growing a constructive team culture that values software quality and
> application security. A culture where every voice is heard and valued. A
> place where everyone feels safe expressing their ideas and concerns,
> whatever they may be. A place where every individual feels appreciated and
> supported.
> 3. As a user champion, the PTL is responsible for keeping the program
> oriented toward a clear vision that is highly informed by user and
> operator feedback.
> 4. As a senior technologist, the PTL is responsible for ensuring major
> implementation decisions are rigorously vetted and revisited over time, as
> necessary, to ensure the code is delivering on the program's vision (and
> not creating scope creep).
> 5. As a liaison, the PTL is responsible for keeping their project aligned
> with the broader OpenStack, Python and web development communities.
> 
> If elected, my priorities during Juno will include:
> 
> 1. Operational Maturity: Marconi is already production-ready, but we still
> have work to do to get to world-class reliability, monitoring, logging,
> and efficiency.
> 2. Documentation: During Icehouse, Marconi made a good start on user and
> operator manuals, and I would like to see those docs fleshed out, as well
> as reworking the program wiki to make it much more informative and
> engaging.
> 3. Security: During Juno I want to start doing per-milestone threat
> modeling, and build out a suite of security tests.
> 4. Integration: I have heard from several other OpenStack programs who
> would like to use Marconi, and so I look forward to working with them to
> understand their needs and to assist them however we can.
> 5. Notifications: Beginning the work on the missing pieces needed to build
> a notifications service on top of the Marconi messaging platform, that can
> be used to surface events to end-users via SMS, email, web hooks, etc.
> 6. Graduation: Completing all remaining graduation requirements so that
> Marconi can become integrated in the "K" cycle, which will allow other
> programs to be more confident about taking dependencies on the service for
> features they are planning.
> 7. Growth: I'd like to welcome several more contributors to the Marconi
> core team, continue on-boarding new contributors and interns, and see
> several more large deployments of Marconi in production.
> 
> ---
> Kurt Griffiths | @kgriffs
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-03 Thread Chris Jones
Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day
rule to the 3 month review stats, my suggestion would be a qualified no.

I think we've established an agile approach to the member list of -core, so
if there are one or two people who we would have added to -core before
the goalposts moved, I'd say look at their review quality. If they're
showing the right stuff, let's get them in and helping. If they don't feel
our new goalposts are achievable with their workload, they'll fall out
again naturally before long.

Cheers,

Chris


On 3 April 2014 12:02, Robert Collins  wrote:

> Getting back in the swing of things...
>
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
>
> In this months review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core
>
> Existing -core members are eligible to vote - please indicate your
> opinion on each of the three changes above in reply to this email.
>
> Ghe, please let me know if you're willing to be in tripleo-core. Jan,
> Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
> substantially more active in TripleO reviews in the short term, please
> let us know.
>
> My approach to this caused some confusion a while back, so I'm keeping
> the boilerplate :) - I'm
> going to talk about stats here, but they are only part of the picture
> : folk that aren't really being /felt/ as effective reviewers won't be
> asked to take on -core responsibility, and folk who are less active
> than needed but still very connected to the project may still keep
> them : it's not pure numbers.
>
> Also, it's a vote: that is direct representation by the existing -core
> reviewers as to whether they are ready to accept a new reviewer as
> core or not. This mail from me merely kicks off the proposal for any
> changes.
>
> But, the metrics provide an easy fingerprint - they are a useful tool
> to avoid bias (e.g. remembering folk who are just short-term active) -
> human memory can be particularly treacherous - see 'Thinking, Fast and
> Slow'.
>
> With that prelude out of the way:
>
> Please see Russell's excellent stats:
> http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
>
> For joining and retaining core I look at the 90 day statistics; folk
> who are particularly low in the 30 day stats get a heads up so they
> aren't caught by surprise.
>
> 90 day active-enough stats:
>
>
> +-+---++
> | Reviewer| Reviews   -2  -1  +1  +2  +A+/- % |
> Disagreements* |
>
> +-+---++
> |slagle **| 6550 145   7 503 15477.9% |
> 36 (  5.5%)  |
> | clint-fewbar ** | 5494 120  11 414 11577.4% |
> 32 (  5.8%)  |
> |   lifeless **   | 518   34 203   2 279 11354.2% |
> 21 (  4.1%)  |
> |  rbrady | 4530  14 439   0   096.9% |
> 60 ( 13.2%)  |
> | cmsj ** | 3220  24   1 297 13692.5% |
> 22 (  6.8%)  |
> |derekh **| 2610  50   1 210  9080.8% |
> 12 (  4.6%)  |
> |dan-prince   | 2570  67 157  33  1673.9% |
> 15 (  5.8%)  |
> |   jprovazn **   | 1900  21   2 167  4388.9% |
> 13 (  6.8%)  |
> |ifarkas **   | 1860  28  18 140  8284.9% |
> 6 (  3.2%)  |
> ===
> | jistr **| 1770  31  16 130  2882.5% |
> 4 (  2.3%)  |
> |  ghe.rivero **  | 1761  21  25 129  5587.5% |
> 7 (  4.0%)  |
> |lsmola **| 1722  12  55 103  6391.9% |
> 21 ( 12.2%)  |
> |   jdob  | 1660  31 135   0   081.3% |
> 9 (  5.4%)  |
> |  bnemec | 1380  38 100   0   072.5% |
> 17 ( 12.3%)  |
> |greghaynes   | 1260  21 105   0   083.3% |
> 22 ( 17.5%)  |
> |  dougal | 1250  26  99   0   079.2% |
> 13 ( 10.4%)  |
> |   tzumainn **   | 1190  30  69  20  1774.8% |
> 2 (  1.7%)  |
> |rpodolyaka   | 1150  15 100   0   087.0% |
> 15 ( 13.0%)  |
> | ftcjeff | 1030   3 100   0   097.1% |
> 9 (  8.7%)  |
> | thesheep|  930  26  31  36  2172.0% |
> 3 (  3.2%)  |
> |pblaho **|  881   8  37  42  2289.8% |
> 3 (  3.4%)  |
> | jonpaul-sullivan|  800  33  47   0   058.8% |

[openstack-dev] [TripleO] Documenting supported platforms

2014-04-03 Thread Ricardo Carrillo Cruz
Hi guys

I opened a bug to state in the documentation that Ubuntu 12.04 is
unsupported and sent a change for it:

https://bugs.launchpad.net/tripleo/+bug/1296576
https://review.openstack.org/#/c/84801/

However, per the initial feedback it seems some developers are interested
in widening the scope of the change a bit and listing all platforms that
are currently supported.
My plan is to add a table to the README.md and write up a supported
platforms matrix.
As I'm an Ubuntu user I'm only aware of Ubuntu 12.04 not working and Ubuntu
13.10 working just fine.
I'd like to get your feedback on platforms you use that you know work
just fine for devtest and those that you know do not, be it Fedora,
OpenSuse or whatever.


Regards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-03 Thread Simon Leinen
Martinx  writes:
> 1- Create and maintain a Ubuntu PPA Archive to host Neutron with IPv6
> patches (from Nephos6 / Shixiong?).
[...]
> Let me know if there is interest in this...

Great initiative! We're building a new Icehouse cluster soon and are
very interested in trying these packages, because we really want to
support IPv6 properly.

I see you already got some help from the developers - cool!
-- 
Simon.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Kevin L. Mitchell
On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
> Jay, thanks for taking ownership on this idea, we are really
> interested to contribute to this, so what do you think are the next
> steps to move on?

Perhaps a summit session on quota management would be in order?
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-03 Thread Bruno Semperlotti
Hi Rally users,

I would like to inform you that the feature that allows updating tenant
quotas during a benchmark is available with the implementation of this
blueprint:
https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas

Currently, only Nova and Cinder quotas are supported (Neutron coming soon).

Here is a small sample of how to do it:

In the json file describing the benchmark scenario, use the "context"
section to indicate quotas for each service. Quotas will be applied to
each generated tenant.

{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": "1",
                "image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
            },
            "runner": {
                "type": "continuous",
                "times": 100,
                "active_users": 10
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "quotas": {
                    "nova": {
                        "instances": 150,
                        "cores": 150,
                        "ram": -1
                    }
                }
            }
        }
    ]
}


Following is the list of supported quotas (a short cinder example follows the list):
nova:
instances, cores, ram, floating-ips, fixed-ips, metadata-items,
injected-files, injected-file-content-bytes, injected-file-path-bytes,
key-pairs, security-groups, security-group-rules

cinder:
gigabytes, snapshots, volumes

neutron (coming soon):
network, subnet, port, router, floatingip, security-group,
security-group-rule
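
For example, a cinder override sits alongside the nova block in the same
"context" section of the scenario file (the service keys follow the lists
above; the values are only illustrative):

    "context": {
        "users": {
            "tenants": 1,
            "users_per_tenant": 1
        },
        "quotas": {
            "nova": {
                "instances": 150
            },
            "cinder": {
                "volumes": 200,
                "snapshots": 200,
                "gigabytes": -1
            }
        }
    }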


Regards,

--
Bruno Semperlotti
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Cazzolato, Sergio J

Jay, thanks for taking ownership on this idea, we are really interested to 
contribute to this, so what do you think are the next steps to move on?

Please let me know whatever you need to accelerate on this.

Sergio Cazzolato

-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com] 
Sent: Thursday, April 03, 2014 3:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Quota Management

On Thu, 2014-04-03 at 10:54 -0700, Jay Pipes wrote:
> On Thu, 2014-04-03 at 12:13 -0500, Kevin L. Mitchell wrote:
> > On Thu, 2014-04-03 at 09:22 -0700, Jay Pipes wrote:
> > > Boson does indeed look interesting, but who is working on it, if 
> > > anyone at this point? I agree that having a centralized quota 
> > > management system makes sense, in order to make the handling of 
> > > quotas and reservations consistent across projects as well as to 
> > > deal with global quota management properly (i.e. quotas that span 
> > > multiple cells/AZs)
> > 
> > At present, no one is really working on Boson.  The CERN folks were 
> > interested in it, but I wasn't able to free up the time I would have 
> > needed to actually develop on it, partly because of lack of interest 
> > from the rest of the community at the time.  I'd say that, 
> > currently, Boson is facing the chicken-and-egg problem—no one works 
> > on it because no one is working on it.
> 
> I would personally be interested in picking up the work. Kevin, any 
> issues with me moving the code on your GitHub repo into stackforge and 
> linking up the upstream CI harness?

None at all; please do so!  :)
--
Kevin L. Mitchell  Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-04-03 Thread Ivan Melnikov

I'm trying to catch up with this rather long and interesting discussion,
sorry for the somewhat late reply.

I can see aspects of 'lazy model' support in TaskFlow:
- how tasks are executed and reverted
- how flows are run
- how engine works internally

Let me address those aspects separately.

== Executing and reverting tasks ==

I think that should be done via a different interface than running a flow
(or scheduling it to run), as it is a completely different thing. In
current TaskFlow this interface is called the task executor:
https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_engine/executor.py#L57

That is actually how our WorkerBasedEngine was implemented: it's the
same engine with special task executor that schedules tasks on worker
instead of running task code locally.

Task executors are not aware of flows by design, all they do is
executing and reverting tasks. That means that task executors can be
easily shared between engines if that's wanted.

The current TaskExecutorBase interface uses futures (PEP 3148-like). When I
proposed it, futures looked like a good tool for the task at hand (see
e.g. the async task etherpad
https://etherpad.openstack.org/p/async-taskflow-tasks)

Now it may be time to reconsider that: having one future object per
running task may become a scalability issue. It may be worth using
callbacks instead. It should not be too hard to refactor the current engine
for that. Also, as TaskExecutorBase is an internal API, there should not
be any compatibility issues.

Then, we can make task executor interface public and allow clients to
provide their own task executors. It will be possible then for Mistral
to implement its own task executor, or several, and share the
executors between all the engine instances.

You can call it a plan;)
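
To make the plan a bit more concrete, here is a very rough sketch of what a
client-provided executor could look like (purely illustrative: in practice it
would subclass the TaskExecutorBase from the executor.py linked above, and the
exact method signatures would be whatever that interface ends up being once it
is public):

    class MistralTaskExecutor(object):
        """Hands task execution off to an external service instead of
        running the task code locally (sketch only)."""

        def execute_task(self, task, task_uuid, arguments,
                         progress_callback=None):
            # Submit the task to the external executor and return a
            # future-like object -- or invoke a callback, if the
            # interface moves to callbacks as discussed above.
            raise NotImplementedError

        def revert_task(self, task, task_uuid, arguments, result,
                        progress_callback=None):
            # Same idea, but for reverting a previously executed task.
            raise NotImplementedError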

== Running flows ==

To run a flow, a TaskFlow client uses the engine interface; also, there are
a few helper functions provided for convenience:

http://docs.openstack.org/developer/taskflow/engines.html#module-taskflow.engines.base
http://docs.openstack.org/developer/taskflow/engines.html#creating-engines

That is part of our public API; it is stable and good enough. Basically,
I don't think this API needs any major change.

Maybe it is worth adding a function or method to schedule a flow to run
without actually waiting for its completion (at least, it has been on my
top secret TODO list for quite a long time).
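
For readers less familiar with that API, the helpers boil down to something
like this (just a sketch; the docs linked above are authoritative):

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class Hello(task.Task):
        def execute(self):
            print("hello")

    flow = linear_flow.Flow("demo").add(Hello())

    # run() blocks until the flow completes.
    engines.run(flow)

    # load() gives back an engine object; the "schedule it and don't wait"
    # helper mentioned above is the piece that does not exist yet.
    engine = engines.load(flow)
    engine.run()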

== Engine internals ==

Each engine eats resources, like the thread it runs on; using these
resources to run only one flow is somewhat wasteful. Some work is
already planned to address this situation (see e.g.
https://blueprints.launchpad.net/taskflow/+spec/share-engine-thread).
Also, it might be a good idea to implement a different 'type' of engine to
support the 'lazy' model, as Joshua suggests.

But whatever should and will be done about it, I daresay all that work
can be done without affecting the API more than I described above.

-- 
WBR,
Ivan A. Melnikov

... tasks must flow ...


On 02.04.2014 01:51, Dmitri Zimine wrote:
> Even more responses inline :)
[...]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-03 Thread Martinx - ジェームズ
Well, at first, I'm planning to maintain this "Neutron IPv6 PPA repository"
only for Ubuntu 14.04 anyway... But, of course, if a new dnsmasq arrives in
the Ubuntu 12.04 Cloud Archive, I see no problem in working on it too...

On 3 April 2014 13:19, Collins, Sean wrote:

> On Thu, Apr 03, 2014 at 02:28:39AM EDT, Sebastian Herzberg wrote:
> > Concerning dnsmasq: There is still no 2.66 version in the repos for
> Ubuntu 12.04. You always need to remove 2.59 and dpkg a newer version into
> it.
> >
>
> I think it was resolved with this bug:
>
> https://bugs.launchpad.net/neutron/+bug/129
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Mike Spreitzer
Keith Bray  wrote on 04/03/2014 01:50:28 PM:

> We proposed another specific piece of template data [3] which I 
> can't remember whether it was met with resistance or we just didn't 
> get to implementing it since we knew we would have to store other 
> data specific to our uses cases in other files anyway.   We decided 
> to go with storing our extra information in a catalog (really just a
> Git repo with a README.MD [4]) for now  until we can implement 
> acceptable catalog functionality somewhere like Glance, hopefully in
> the Juno cycle.  When we want to share the template, we share all 
> the files in the repo (inclusive of the README.MD).  It would be 
> more ideal if we could share a single file (package) inclusive of 
> the template and corresponding help text and any other UI hint info 
> that would be helpful.  I expect service providers to have differing 
> views of the extra data they want to store with a template... So 
> it'd just be nice to have a way to account for service providers to 
> store their unique data along with a template that is easy to share 
> and is part of the template package.  We bring up portability and 
> structured data often, but I'm starting to realize that portability 
> of a template breaks down unless every service provider runs exactly
> the same Heat resources, same image IDs, flavor types, etc. I'd 
> like to drive more standardization of data for image and template 
> data into Glance so that in HOT we can just declare things like 
> "Linux, Flavor Ubuntu, latest LTS, minimum 1Gig" and automatically 
> discover and choose the right image to provision, or error if a 
> suitable match can not be found.  The Murano team has been hinting 
> at wanting to solve a similar problem, but with a broader vision 
> from a complex-multi application declaration perspective that 
> crosses multiple templates or is a layer above just matching to what
> capabilities Heat resources provide and matching against 
> capabilities that a catalog of templates provide (and mix that with 
> capabilities the cloud API services provide).  I'm not yet convinced
> that can't be done with a parent Heat template since we already have
> the declarative constructs and language well defined, but I 
> appreciate the use case and perspective those folks are bringing to 
> the conversation.

Keith, thanks for the background and wider context.  I am responding 
directly on my original point elsewhere, but let me pick up on a couple of 
things you mentioned in your wider context.  I definitely see a reason for 
interest in packaging something bigger than a single template.  As one 
very simple example, I have been exercising OS::Heat::AutoScalingGroup 
with a pair of templates (an outer template in which the ASG is a 
resource, where the element being scaled is a nested stack, prescribed by 
the other template in my package).  Since we are so fond here of solving 
problems with nested templates, I think there will be an increasing need 
to package together not only templates but also environment snippets (and, 
yeah, we need to smooth out how the receiver combines environment 
snippets).
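
To make that concrete, here is a minimal sketch of the kind of two-file
package I mean (names are illustrative): an outer template whose scaled
element is provided by a second template, wired together by an environment
snippet.

    # outer.yaml
    heat_template_version: 2013-05-23
    resources:
      group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 3
          resource:
            type: My::Scaled::Element   # mapped to inner.yaml below

    # env.yaml (the environment snippet shipped alongside the templates)
    resource_registry:
      My::Scaled::Element: inner.yaml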

I agree that template portability is problematic due to the 
non-portability of image UUIDs and flavors.  The approach you pointed 
towards looks attractive, but it is challenging to enable a template 
author to write a specification that is not too precise and not too 
liberal --- particularly since the template author has a hard time 
anticipating the creativity with which the receiving environment is 
populated.  I assume this has been extensively discussed already.  If not 
I may make some noise later.

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow] meet on IRC to talk over TaskFlow/Mistral integration

2014-04-03 Thread Dmitri Zimine
Works for me (and likely for Kirill), let's try, hopefully will work for Renat.
DZ.

On Apr 3, 2014, at 10:20 AM, Joshua Harlow  wrote:

> How about 2am UTC (7pm my local time, 8pm my time likely won't work out)?
> 
> If that doesn't work mistral has monday meetings right (afaik 8am local my 
> time). We can just hold off till then? 
> 
> Or u guys can just drop in #openstack-state-management anytime u are free and 
> usually someone is around (usually).
> 
> Thoughts?
> 
> -Josh
> 
> From: Dmitri Zimine 
> Date: Thursday, April 3, 2014 at 7:31 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: Joshua Harlow , Kirill Izotov 
> , Renat Akhmerov 
> Subject: [openstack-dev][Mistral][Taskflow] meet on IRC to talk over 
> TaskFlow/Mistral integration
> 
>> IRC to discuss http://tinyurl.com/k3s2gmy
>> 
>> Joshua, 2000 UTC doesn't quite work for Renat and Kirill (3 am their time). 
>> 
>> The overlap is: 
>> PST (UTC-7)  UTC          NOVT (UTC+7)
>> 
>> 04pm (16:00)  11pm (23:00) 6am (06:00)
>> 10pm (22:00)  05am (05:00) 12pm (12:00)
>> 
>> Kirill's pref is 3am UTC, early is ok, if needed. 
>> @Joshua can you do 3 am UTC (8 pm local?)
>> @Renat? 
>> 
>> Can we pencil 3:00 UTC on #openstack-mistral and adjust for Renat if needed?
>> 
>> DZ> 
>> 
>> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Support for multiple sort keys and sort directions in REST GET APIs

2014-04-03 Thread Mike Perez
Duncan, I think the point you raise could happen even without this change. In
the example of listing volumes, you would first query for the list in some
multi-key sort. The API extensions for example that add additional response
keys will do another lookup on that resource for the appropriate column it's
retrieving. There are some extensions that still do this unfortunately, but
quite a few got taken care of in Havana in using cache instead of doing these
wasteful lookups.

Overall Steven, I think this change is useful, especially from one of the
Horizon sessions I heard in Hong Kong for filtering/sorting.
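
For anyone skimming the blueprints, the gist is letting a single GET accept
several key/direction pairs, along these lines (the parameter syntax shown is
illustrative only; settling the exact form across services is the point of the
blueprints):

    GET /v2/{tenant_id}/volumes/detail?sort_key=status&sort_dir=asc&sort_key=size&sort_dir=desc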

-- 
Mike Perez

On 11:18 Thu 03 Apr , Duncan Thomas wrote:
> Some of the cinder APIs do weird database joins and double lookups and
> things; making every field sortable might have some serious database
> performance impact and open up a DoS attack. Will need more
> investigation to be sure.
> 
> On 2 April 2014 19:42, Steven Kaufer  wrote:
> > I have proposed blueprints in both nova and cinder for supporting multiple
> > sort keys and sort directions for the GET APIs (servers and volumes).  I am
> > trying to get feedback from other projects in order to have a more uniform
> > API across services.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Kevin L. Mitchell
On Thu, 2014-04-03 at 10:54 -0700, Jay Pipes wrote:
> On Thu, 2014-04-03 at 12:13 -0500, Kevin L. Mitchell wrote:
> > On Thu, 2014-04-03 at 09:22 -0700, Jay Pipes wrote:
> > > Boson does indeed look interesting, but who is working on it, if anyone
> > > at this point? I agree that having a centralized quota management system
> > > makes sense, in order to make the handling of quotas and reservations
> > > consistent across projects as well as to deal with global quota
> > > management properly (i.e. quotas that span multiple cells/AZs)
> > 
> > At present, no one is really working on Boson.  The CERN folks were
> > interested in it, but I wasn't able to free up the time I would have
> > needed to actually develop on it, partly because of lack of interest
> > from the rest of the community at the time.  I'd say that, currently,
> > Boson is facing the chicken-and-egg problem—no one works on it because
> > no one is working on it.
> 
> I would personally be interested in picking up the work. Kevin, any
> issues with me moving the code on your GitHub repo into stackforge and
> linking up the upstream CI harness?

None at all; please do so!  :)
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Mike Spreitzer
Clint Byrum  wrote on 04/03/2014 01:10:30 PM:

> Things that affect the stack as a whole really belong in the stack
> API. That would also put them in the OS::Heat::Stack resource, so the
> template language already supports that.

The OS::Heat::Stack resource is one of several that create nested stacks;
we should be able to apply holistic scheduling to all stacks, regardless
of whether they are nested or which kind of nested stack they are.
Yes, if holistic scheduling were a feature in the Heat API then all kinds 
of
resources that create nested stacks "should" expose that feature
(shout out to Trove, autoscaling groups, ...).

> As for policies which might affect a holistic scheduler, those can just
> be resources as well. Just like deployments relate to servers, resources
> can relate to any policies they need.

A holistic scheduler needs input that describes all the resources to be 
scheduled as well as all the policies that apply.  Should a template 
contain a resource whose input includes a copy of the rest of the 
template?

> I would prefer that we focus on making HOT composable rather than
> extensible. If there is actually something missing from the root of the
> language, then it should be in the language. Use cases should almost
> always try to find a way to work as resources first, and then if that
> is unwieldy, look into language enhancements to factor things out.

Yeah, I would too.  Like I said, I have no satisfactory solution yet. Here 
is more of the problem.  I would like to follow an evolutionary path 
starting from the instance groups that are in Nova today.  I think I can 
outline such an evolution.  I am sure there will be debate about it.  I am 
even more sure that it will take time to accomplish that evolution.  OTOH, 
locally we have a holistic scheduler already working.  We want to be able 
to start using it today.  What can we do in this interim, and how can we 
arrange things to do progressive convergence so that the interim solution 
evolves as Nova & scheduling evolve, so that there is no big switch at the 
end to the end solution?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Jay Pipes
On Thu, 2014-04-03 at 12:13 -0500, Kevin L. Mitchell wrote:
> On Thu, 2014-04-03 at 09:22 -0700, Jay Pipes wrote:
> > Boson does indeed look interesting, but who is working on it, if anyone
> > at this point? I agree that having a centralized quota management system
> > makes sense, in order to make the handling of quotas and reservations
> > consistent across projects as well as to deal with global quota
> > management properly (i.e. quotas that span multiple cells/AZs)
> 
> At present, no one is really working on Boson.  The CERN folks were
> interested in it, but I wasn't able to free up the time I would have
> needed to actually develop on it, partly because of lack of interest
> from the rest of the community at the time.  I'd say that, currently,
> Boson is facing the chicken-and-egg problem—no one works on it because
> no one is working on it.

I would personally be interested in picking up the work. Kevin, any
issues with me moving the code on your GitHub repo into stackforge and
linking up the upstream CI harness?

All the best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Marconi PTL Candidacy

2014-04-03 Thread Kurt Griffiths
Hi folks, I'd like to submit my name for serving during the Juno cycle as
the Queue Service PTL.

During my career I've had the opportunity to work in a wide variety of
roles in fields such as video game development, system utilities,
Internet marketing, and web services. This experience has given me a
holistic, pragmatic view on software development that I have tried to
leverage in my contributions to OpenStack. I believe that the best
software is smart (flexes to work at the user's level), useful (informed
by what users really need, not what we think they need), and pleasant
(optimized for happiness).

I've been heavily involved with Marconi from its inception, leading the
initial unconference session at the Grizzly summit, where we came together
as a community to fill what many saw as an obvious gap in the OpenStack
portfolio. I'd like to give a shout-out to Mark Atwood, Monty Taylor,
Jamie Painter, Allan Metts, Tim Simpson, and Flavio Percoco for their
early involvement in kick-starting the project. Thanks guys!

Marconi is key to enabling the development of web and mobile apps on top
of OpenStack, and we have also been hearing from several other programs
who are interested in using Marconi to surface events to end users (among
other things).

The Marconi team has taken a pragmatic approach to the design of the API
and its architecture, inviting and valuing feedback from users and
operators all along the way. I think we can learn to do an even better job
at this during the Juno cycle.

A PTL has many responsibilities, but the ones I feel are most important
are these:

1. As a program facilitator, a PTL is responsible for keeping launchpad
groomed and up to date; watching out for logjams and misunderstandings,
working to resolve them quickly as they arise; and, finally, creating and
moderating multiple communication channels between contributors, and
between the team and the broader community.
2. As a culture champion, the PTL is responsible for leading by example
and growing a constructive team culture that values software quality and
application security. A culture where every voice is heard and valued. A
place where everyone feels safe expressing their ideas and concerns,
whatever they may be. A place where every individual feels appreciated and
supported.
3. As a user champion, the PTL is responsible for keeping the program
oriented toward a clear vision that is highly informed by user and
operator feedback.
4. As a senior technologist, the PTL is responsible for ensuring major
implementation decisions are rigorously vetted and revisited over time, as
necessary, to ensure the code is delivering on the program's vision (and
not creating scope creep).
5. As a liaison, the PTL is responsible for keeping their project aligned
with the broader OpenStack, Python and web development communities.

If elected, my priorities during Juno will include:

1. Operational Maturity: Marconi is already production-ready, but we still
have work to do to get to world-class reliability, monitoring, logging,
and efficiency.
2. Documentation: During Icehouse, Marconi made a good start on user and
operator manuals, and I would like to see those docs fleshed out, as well
as reworking the program wiki to make it much more informative and
engaging.
3. Security: During Juno I want to start doing per-milestone threat
modeling, and build out a suite of security tests.
4. Integration: I have heard from several other OpenStack programs who
would like to use Marconi, and so I look forward to working with them to
understand their needs and to assist them however we can.
5. Notifications: Beginning the work on the missing pieces needed to build
a notifications service on top of the Marconi messaging platform, that can
be used to surface events to end-users via SMS, email, web hooks, etc.
6. Graduation: Completing all remaining graduation requirements so that
Marconi can become integrated in the "K" cycle, which will allow other
programs to be more confident about taking dependencies on the service for
features they are planning.
7. Growth: I'd like to welcome several more contributors to the Marconi
core team, continue on-boarding new contributors and interns, and see
several more large deployments of Marconi in production.

---
Kurt Griffiths | @kgriffs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Keith Bray
Steve, agreed.  Your description I believe is the conclusion that the community 
came to when this was previously discussed, and we managed to get the 
implementation of parameter grouping and ordering [1] that you mentioned which 
has been very helpful.  I don't think we landed the keywords blueprint [2], 
which may be controversial because it is essentially unstructured. I wanted to 
make sure Mike had the links for historical context, but certainly understand 
and appreciate your point of view here.  I wasn't able to find the email 
threads to point Mike to, but assume they exist in the list archives somewhere.

We proposed another specific piece of template data [3] which I can't remember 
whether it was met with resistance or we just didn't get to implementing it 
since we knew we would have to store other data specific to our uses cases in 
other files anyway.   We decided to go with storing our extra information in a 
catalog (really just a Git repo with a README.MD [4]) for now  until we can 
implement acceptable catalog functionality somewhere like Glance, hopefully in 
the Juno cycle.  When we want to share the template, we share all the files in 
the repo (inclusive of the README.MD).  It would be more ideal if we could 
share a single file (package) inclusive of the template and corresponding help 
text and any other UI hint info that would be helpful.  I expect service providers 
to have differing views of the extra data they want to store with a template... 
So it'd just be nice to have a way to account for service providers to store 
their unique data along with a template that is easy to share and is part of 
the template package.  We bring up portability and structured data often, but 
I'm starting to realize that portability of a template breaks down unless every 
service provider runs exactly the same Heat resources, same image IDs, flavor 
types, etc.). I'd like to drive more standardization of data for image and 
template data into Glance so that in HOT we can just declare things like 
"Linux, Flavor Ubuntu, latest LTS, minimum 1Gig" and automatically discover and 
choose the right image to provision, or error if a suitable match can not be 
found.  The Murano team has been hinting at wanting to solve a similar problem, 
but with a broader vision from a complex-multi application declaration 
perspective that crosses multiple templates or is a layer above just matching 
to what capabilities Heat resources provide and matching against capabilities 
that a catalog of templates provide (and mix that with capabilities the cloud 
API services provide).  I'm not yet convinced that can't be done with a parent 
Heat template since we already have the declarative constructs and language 
well defined, but I appreciate the use case and perspective those folks are 
bringing to the conversation.

[1] https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
 https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering

[2] https://blueprints.launchpad.net/heat/+spec/stack-keywords
https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

[3] https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
https://wiki.openstack.org/wiki/Heat/UI#Help_Text

[4] Ex. Help Text accompanying a template in README.MD format:
https://github.com/rackspace-orchestration-templates/docker

-Keith

From: Steven Dake <sd...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, April 3, 2014 10:30 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat] metadata for a HOT

On 04/02/2014 08:41 PM, Keith Bray wrote:
https://wiki.openstack.org/wiki/Heat/StackMetadata

https://wiki.openstack.org/wiki/Heat/UI

-Keith

Keith,

Taking a look at the UI specification, I thought I'd take a look at adding 
parameter grouping and ordering to the hot_spec.rst file.  That seems like a 
really nice constrained use case with a clear way to validate that folks aren't 
adding magic to the template for their custom environments.  During that, I 
noticed it is already implemented.

What is nice about this specific use case is that it is something that can be 
validated by the parser.  For example, the parser could enforce that parameters 
in the parameter-groups section actually exist as parameters in the parameters 
section.  Essentially this particular use case *enforces* good heat template 
implementation without an opportunity for HOT template developers to jam 
customized data blobs into the template.

Stack keywords on the other hand doesn't necessarily follow this model.  I 
understand the use case, but it would be possible to jam unstructured metadata 
into the template.  That said, the limitations on the jamming custom metadata 
are one deep and it has a clear use case (categorization of templates for 
support/UI rendering purposes).

Re: [openstack-dev] [cinder] the ability about list the available volume back-ends and their capabilities

2014-04-03 Thread Mike Perez
On 06:11 Thu 03 Apr , Zhangleiqiang (Trump) wrote:
> Hi stackers:
> 
> I think the ability to list the available volume back-ends, along with 
> their capabilities, total capacity, and available capacity is useful for admins. 
> For example, this can help an admin select a destination for volume migration.
> But I can't find the cinder api about this ability.
> 
> I find a BP about this ability:  
> https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabilities
> But the BP is not approved. Who can tell me the reason?

Hi Zhangleiqiang,

I think it's not approved because it has not been set to a series goal by the
drafter. I don't have permission myself to change the series goal, but I would
recommend going into the #openstack-cinder IRC channel and ask for the BP to be
set for the Juno release assuming there is a good approach. We'd also need
a contributor to take on this task.

I think it would be good to use the os-hosts extension which can be found in
cinder.api.contrib.hosts and add the additional response information there. It
already lists total volume/snapshot count and capacity used [1].

[1] - http://paste.openstack.org/show/74996

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2014-04-03 Thread Anita Kuno
confirmed

On 04/03/2014 12:19 PM, Mike Perez wrote:
> Hello all,
> 
> My name is Mike Perez, and I would like to be your next PTL for the OpenStack
> block storage project Cinder.
> 
> I've been involved with the OpenStack community since October 2010. I'm
> a senior developer for Datera which contributes to Linux Bcache and the
> Linux-IO SCSI Target (LIO) in the kernel. Before that I was for seven years
> a senior developer for DreamHost, working on their core products and storage 
> in
> their OpenStack public cloud.
> 
> Since November 2012 I've been a core developer for Cinder. Besides code
> reviews, my main contributions include creating the v2 API, writing the v2 API
> reference and spec docs and rewriting the v1 api docs. These are contributions
> that I feel were well thought out and complete. This is exactly how I 
> would like to see the future of Cinder's additional contributions and would
> like to lead the team in that direction.
> 
> Instead of listing out the technical things that need to be improved in 
> Cinder,
> I would like to just talk about the things as PTL I would improve, which as
> a side effect will allow the team to focus better on those technical issues.
> 
> Cinder is a small but a very effective team. Just like other projects, we need
> more contributors to handle the requirements we get daily. First impressions
> with contributors who are very excited to make their name in OpenStack can be
> better helped by simple outreach in how they can be more effective with the
> team. Guiding those contributors on what are the goals, and spending a little
> time with them on how their interests can help those goals can go a long
> way. Currently I feel like potential long-term contributors are discouraged
> when the time they spend evaluating what they could improve ends with finding
> out that their proposed improvements don't fit the project plans.
> 
> Focus itself can help contributors be effective in what's important. With the
> support of the community, I would like to raise better guidelines on when
> certain contributions are appropriate. With these community agreed guidelines,
> it should be clearer on what is appropriate for review and what can be pushed
> to the next release. With a better focus we can allow more time for features 
> to
> be more complete as mentioned earlier. Being complete means having confidence
> something works. This can be ensured by trying changes before merge when
> possible and not relying on tests alone, having performance results, and
> actually having documentation so people know how to use new features. Release
> notes are not enough to figure out new Cinder features.
> 
> I want to help the team realize more they can do in Cinder. I don't want to be
> a single person people rely on in the project, but rather have this team help
> me carry this project forward.
> 
> Thank you,
> Mike Perez
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][Taskflow] meet on IRC to talk over TaskFlow/Mistral integration

2014-04-03 Thread Joshua Harlow
How about 2am UTC (7pm my local time, 8pm my time likely won't work out)?

If that doesn't work mistral has monday meetings right (afaik 8am local my 
time). We can just hold off till then?

Or u guys can just drop in #openstack-state-management anytime u are free and 
usually someone is around (usually).

Thoughts?

-Josh

From: Dmitri Zimine <d...@stackstorm.com>
Date: Thursday, April 3, 2014 at 7:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Cc: Joshua Harlow <harlo...@yahoo-inc.com>, Kirill Izotov <enyk...@stackstorm.com>, Renat Akhmerov <rakhme...@mirantis.com>
Subject: [openstack-dev][Mistral][Taskflow] meet on IRC to talk over 
TaskFlow/Mistral integration

IRC to discuss http://tinyurl.com/k3s2gmy

Joshua, 2000 UTC doesn't quite work for Renat and Kirill (3 am their time).

The overlap is:
PST (UTC-7)  UTC          NOVT (UTC+7)

04pm (16:00) 11pm (23:00) 6am (06:00)
10pm (22:00) 05am (05:00) 12pm (12:00)

Kirill's pref is 3am UTC, early is ok, if needed.
@Joshua can you do 3 am UTC (8 pm local?)
@Renat?

Can we pencil 3:00 UTC on #openstack-mistral and adjust for Renat if needed?

DZ>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]a question about os-volume_upload_image

2014-04-03 Thread Mike Perez
On 18:37 Thu 03 Apr , Lingxian Kong wrote:
> Thanks Duncan for your answer.
> 
> I am very interested in making a contribution towards this effort, but
> what to do next? Waiting for approval of this blueprint? Or seeing
> others' opinions on this before we put more effort into achieving
> this? I just want to make sure that we could handle other people's use
> cases and not just our own.

What use case is that exactly? I mentioned earlier the original purpose was for
knowing if something was bootable. I'm curious on how else this is being used.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Kevin L. Mitchell
On Thu, 2014-04-03 at 09:22 -0700, Jay Pipes wrote:
> Boson does indeed look interesting, but who is working on it, if anyone
> at this point? I agree that having a centralized quota management system
> makes sense, in order to make the handling of quotas and reservations
> consistent across projects as well as to deal with global quota
> management properly (i.e. quotas that span multiple cells/AZs)

At present, no one is really working on Boson.  The CERN folks were
interested in it, but I wasn't able to free up the time I would have
needed to actually develop on it, partly because of lack of interest
from the rest of the community at the time.  I'd say that, currently,
Boson is facing the chicken-and-egg problem—no one works on it because
no one is working on it.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-04-03 Thread Nader Lahouti
Thanks a lot Andre for the reply.
My comments inline:

On Wed, Apr 2, 2014 at 12:37 PM, Andre Pech wrote:

>
>
>
> On Fri, Mar 28, 2014 at 6:44 PM, Nader Lahouti wrote:
>
>> Hi Mathieu,
>>
>> Thanks a lot for your reply.
>>
>> Even in the neutron/neutron/db/db_base_plugin_v2.py: create_network()
>> passes network object:
>>
>> def create_network(self, context, network):
>>     """Handle creation of a single network."""
>>     # single request processing
>>     n = network['network']    # <== 'n' has all the network info (including extensions)
>>     # NOTE(jkoelker) Get the tenant_id outside of the session to avoid
>>     #                unneeded db action if the operation raises
>>     tenant_id = self._get_tenant_id_for_create(context, n)
>>     with context.session.begin(subtransactions=True):
>>         args = {'tenant_id': tenant_id,
>>                 'id': n.get('id') or uuidutils.generate_uuid(),
>>                 'name': n['name'],
>>                 'admin_state_up': n['admin_state_up'],
>>                 'shared': n['shared'],
>>                 'status': n.get('status', constants.NET_STATUS_ACTIVE)}
>>         network = models_v2.Network(**args)    # <== 'network' does not include extensions.
>>         context.session.add(network)
>>     return self

Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-04-02 22:10:21 -0700:
> Zane Bitter  wrote on 04/02/2014 05:36:43 PM:
> 
> > I think that if you're going to propose a new feature, you should at 
> > least give us a clue who you think is going to use it and what for ;)
> 
> I was not eager to do that yet because I have not found a fully 
> satisfactory answer yet, at this point I am exploring options.  But the 
> problem I am thinking about is how Heat might connect to a holistic 
> scheduler (a scheduler that makes a joint decision about a bunch of 
> resources of various types).  Such a scheduler needs input describing the 
> things to be scheduled and the policies to apply in scheduling; the first 
> half of that sounds a lot like a Heat template, so my thoughts go in that 
> direction.  But the HOT language today (since 
> https://review.openstack.org/#/c/83758/ was merged) does not have a place 
> to put policy that is not specific to a single resource.
> 

Things that affect the stack as a whole really belong in the stack
API. That would also put them in the OS::Heat::Stack resource, so the
template language already supports that.

As for policies which might affect a holistic scheduler, those can just
be resources as well. Just like deployments relate to servers, resources
can relate to any policies they need.
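
As a rough illustration of that resource-based approach (the resource type
name below is hypothetical; nothing like it exists in Heat today), a template
fragment might look like:

    resources:
      placement_policy:
        type: OS::Nova::ServerGroup        # hypothetical policy resource
        properties:
          policies: [anti-affinity]

      server_one:
        type: OS::Nova::Server
        properties:
          image: my_image
          flavor: m1.small
          scheduler_hints:
            group: {get_resource: placement_policy}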

> > IIRC this has been discussed in the past and the justifications for 
> > including it in the template (as opposed to allowing metadata to be 
> > attached in the ReST API, as other projects already do for many things) 
> > were not compelling.
> 
> I see that Keith Bray mentioned 
> https://wiki.openstack.org/wiki/Heat/StackMetadata and 
> https://wiki.openstack.org/wiki/Heat/UI in another reply on this thread. 
> Are there additional places to look to find that discussion?
> 
> I have also heard that there has been discussion of language extension 
> issues.  Is that a separate discussion and, if so, where can I read it?

I would prefer that we focus on making HOT composable rather than
extensible. If there is actually something missing from the root of the
language, then it should be in the language. Use cases should almost
always try to find a way to work as resources first, and then if that
is unwieldy, look into language enhancements to factor things out.

I think the way hot-software-config has taken shape is a prime example
of this. We took the most common patterns and made a set of resources
that encapsulate them. But we didn't have to extend the language any. It
is all done in resources. (Kudos to Steve Baker for getting it done btw,
this was _not_ a small amount of work).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2014-04-03 08:36:20 -0700:
> > From: Mike Spreitzer 
> > To: "OpenStack Development Mailing List \(not for usage questions\)"
> > 
> > Date: 03/04/2014 07:10
> > Subject: Re: [openstack-dev] [heat] metadata for a HOT
> >
> > Zane Bitter  wrote on 04/02/2014 05:36:43 PM:
> >
> > > I think that if you're going to propose a new feature, you should at
> > > least give us a clue who you think is going to use it and what for ;)
> >
> > I was not eager to do that yet because I have not found a fully
> > satisfactory answer yet, at this point I am exploring options.  But
> > the problem I am thinking about is how Heat might connect to a
> > holistic scheduler (a scheduler that makes a joint decision about a
> > bunch of resources of various types).  Such a scheduler needs input
> > describing the things to be scheduled and the policies to apply in
> > scheduling; the first half of that sounds a lot like a Heat
> > template, so my thoughts go in that direction.  But the HOT language
> > today (since https://review.openstack.org/#/c/83758/ was merged)
> > does not have a place to put policy that is not specific to a
> singleresource.
> 
> I think you bring up a specific use case here, i.e. applying "policies" for
> placement/scheduling when deploying a stack. This is just a thought, but I
> wonder whether it would make more sense to then define a specific extension
> to HOT instead of having a generic metadata section and stuffing everything
> that does not fit into other places into metadata.
> 

Ever read about Larry "no modes" Tesler? Read up on his arguments
against modes.

I would much prefer any policies to be actual resources which the
resources interact with, rather than template wide modes.

> I mean, the use case Keith brought up are completely different (UI and user
> related), and I understand both use cases. But is the idea to put just
> everything into metadata, or would different classes of use cases justify
> different section? The latter would enforce better documentation of
> semantics. If everyhing goes into a metadata section, the contents also
> need to be clearly specified. Otherwise, the resulting template won't be
> portable. Ok, the standard HOT stuff will be portable, but not the
> metadata, so no two users will be able to interpret it the same way.
>

We had a fairly long debate about keywords and meta-information in HOT
and I thought we came to the conclusion that it belongs in the API and
not in the template language.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Steven Dake

On 04/02/2014 08:41 PM, Keith Bray wrote:

https://wiki.openstack.org/wiki/Heat/StackMetadata

https://wiki.openstack.org/wiki/Heat/UI

-Keith


Keith,

Taking a look at the UI specification, I thought I'd take a look at 
adding parameter grouping and ordering to the hot_spec.rst file. That 
seems like a really nice constrained use case with a clear way to 
validate that folks aren't adding magic to the template for their custom 
environments.  During that, I noticed it is already implemented.


What is nice about this specific use case is that it is something that can be 
validated by the parser.  For example, the parser could enforce that 
parameters in the parameter-groups section actually exist as parameters 
in the parameters section.  Essentially this particular use case 
*enforces* good heat template implementation without an opportunity for 
HOT template developers to jam customized data blobs into the template.
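
For reference, here is a minimal sketch of that parameter grouping/ordering
feature; the parser can check that every name listed under parameter_groups
actually exists under parameters:

    heat_template_version: 2013-05-23

    parameter_groups:
    - label: Database settings
      description: Parameters that configure the database tier
      parameters:
      - db_flavor
      - db_root_password

    parameters:
      db_flavor:
        type: string
        default: m1.small
      db_root_password:
        type: string
        hidden: true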


Stack keywords on the other hand doesn't necessarily follow this model.  
I understand the use case, but it would be possible to jam unstructured 
metadata into the template.  That said, the limitations on the jamming 
custom metadata are one deep and it has a clear use case (categorization 
of templates for support/UI rendering purposes).


I could be wrong, but I think the aversion to a general metadata section 
is centered around the problem of different people doing different 
things in a non-standardized way.


I think if we were to revisit the metadata proposal, one thing that 
might lead to a more successful outcome is actually defining what goes 
in the metadata, rather than allowing the metadata to be completely 
free-form as the HOT developer sees fit to implement it.


For example just taking the keywords proposal:
metadata:
  composed_of:
  - wordpress
  - mysql
  architecture:
  - lamp

Even though this metadata can't necessarily be validated, it can be 
documented.  I definitely have a -2 aversion to free-form metadata 
structuring, and am +0 on allowing the information to be declared in a 
non-validated way.


I don't believe the idea of structured metadata based upon real use 
cases has really been explored or -2'ed.


Regards,
-steve


From: Lingxian Kong <anlin.k...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, April 2, 2014 9:31 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>

Subject: Re: [openstack-dev] [heat] metadata for a HOT

Is there any relevant wiki or specification doc?


2014-04-03 4:45 GMT+08:00 Mike Spreitzer <mspre...@us.ibm.com>:

I would like to suggest that a metadata section be allowed at
the top level of a HOT.  Note that while resources in a stack
can have metadata, there is no way to put metadata on a stack
itself.  What do you think?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
---

Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Meghal Gosalia
Hello folks,

Here is the bug [1] which is currently not allowing a host to be part of two 
availability zones.
This bug was targeted for havana.

The fix in the bug was made because it was assumed
that openstack does not support adding hosts to two zones by design.

The assumption was based on the fact that ---
if hostX is added to zoneA as well as zoneB,
and if you boot a vm vmY passing zoneB in boot params,
nova show vmY still returns zoneA.

In my opinion, we should fix the case of nova show
rather than changing the aggregate API to disallow adding hosts to multiple 
zones.

I have added my comments in comments #7 and #9 on that bug.

Thanks,
Meghal

[1] Bug - https://bugs.launchpad.net/nova/+bug/1196893


On Apr 3, 2014, at 9:05 AM, Steve Gordon <sgor...@redhat.com> wrote:

- Original Message -

Currently host aggregates are quite general, but the only ways for an
end-user to make use of them are:

1) By making the host aggregate an availability zones (where each host
is only supposed to be in one availability zone) and selecting it at
instance creation time.

2) By booting the instance using a flavor with appropriate metadata
(which can only be set up by admin).


I would like to see more flexibility available to the end-user, so I
think we should either:

A) Allow hosts to be part of more than one availability zone (and allow
selection of multiple availability zones when booting an instance), or

While changing to allow hosts to be in multiple AZs changes the concept from an 
operator/user point of view, I do think the idea of being able to specify 
multiple AZs when booting an instance makes sense and would be a nice 
enhancement for users working with multi-AZ environments - "I'm OK with this 
instance running in AZ1 and AZ2, but not AZ*".

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-03 Thread Collins, Sean
Sorry - not resolved: being tracked.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Dolph Mathews
On Thu, Apr 3, 2014 at 11:22 AM, Jay Pipes  wrote:

> On Thu, 2014-04-03 at 15:02 +, Cazzolato, Sergio J wrote:
> > Hi All,
> >
> > I’d like to know your thoughts regarding Quota Management… I’ve been
> > contributing to this topic for icehouse and noticed some issues and
> > discussions around its implementation like code is duplicated, synch
> > problems with database, not having an homogeneous logic, etc… so I was
> > thinking that maybe a centralized implementation could be a solution
> > for this… As far as I know there was a discussion during the last
> > summit and the decision was to use Keystone for a Centralized Quota
> > Management solution but I don’t have the details on that discussion…
> > Also I was looking at Boson (https://wiki.openstack.org/wiki/Boson)
> > that seems to be a nice solution for this and also addresses the
> > scenario where Nova is deployed in a multi-cell manner and some other
> > interesting things.
>
> Boson does indeed look interesting, but who is working on it, if anyone
> at this point? I agree that having a centralized quota management system
> makes sense, in order to make the handling of quotas and reservations
> consistent across projects as well as to deal with global quota
> management properly (i.e. quotas that span multiple cells/AZs)
>

++ but it looks to be an abandoned effort. There's a couple forks, but not
with any recent activity:

  https://github.com/klmitch/boson

(correct me if I'm just looking in the wrong place!)


>
> Best,
> -jay
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Jay Pipes
On Thu, 2014-04-03 at 15:02 +, Cazzolato, Sergio J wrote:
> Hi All, 
> 
> I’d like to know your thoughts regarding Quota Management… I’ve been
> contributing to this topic for icehouse and noticed some issues and
> discussions around its implementation like code is duplicated, synch
> problems with database, not having an homogeneous logic, etc… so I was
> thinking that maybe a centralized implementation could be a solution
> for this… As far as I know there was a discussion during the last
> summit and the decision was to use Keystone for a Centralized Quota
> Management solution but I don’t have the details on that discussion…
> Also I was looking at Boson (https://wiki.openstack.org/wiki/Boson)
> that seems to be a nice solution for this and also addresses the
> scenario where Nova is deployed in a multi-cell manner and some other
> interesting things.

Boson does indeed look interesting, but who is working on it, if anyone
at this point? I agree that having a centralized quota management system
makes sense, in order to make the handling of quotas and reservations
consistent across projects as well as to deal with global quota
management properly (i.e. quotas that span multiple cells/AZs)

Best,
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-03 Thread Collins, Sean
On Thu, Apr 03, 2014 at 02:28:39AM EDT, Sebastian Herzberg wrote:
> Concerning dnsmasq: There is still no 2.66 version in the repos for Ubuntu 
> 12.04. You always need to remove 2.59 and dpkg a newer version into it.
> 

I think it was resolved with this bug:

https://bugs.launchpad.net/neutron/+bug/129

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] PTL Candidacy

2014-04-03 Thread Mike Perez
Hello all,

My name is Mike Perez, and I would like to be your next PTL for the OpenStack
block storage project Cinder.

I've been involved with the OpenStack community since October 2010. I'm
a senior developer for Datera which contributes to Linux Bcache and the
Linux-IO SCSI Target (LIO) in the kernel. Before that I was for seven years
a senior developer for DreamHost, working on their core products and storage in
their OpenStack public cloud.

Since November 2012 I've been a core developer for Cinder. Besides code
reviews, my main contributions include creating the v2 API, writing the v2 API
reference and spec docs and rewriting the v1 api docs. These are contributions
that I feel were well thought out and complete. This is exactly how I 
would like to see the future of Cinder's additional contributions and would
like to lead the team in that direction.

Instead of listing out the technical things that need to be improved in Cinder,
I would like to talk about the things I would improve as PTL, which as
a side effect will allow the team to focus better on those technical issues.

Cinder is a small but very effective team. Just like other projects, we need
more contributors to handle the requirements we get daily. First impressions
matter: contributors who are excited to make their name in OpenStack can be
helped by simple outreach on how they can be more effective with the
team. Guiding those contributors on what the goals are, and spending a little
time with them on how their interests can serve those goals, can go a long
way. Currently I feel that potential long-term contributors are discouraged
when they spend time evaluating what they could improve, only to later
find out that their proposed improvements don't fit the project plans.

Focus itself can help contributors be effective in what's important. With the
support of the community, I would like to establish clearer guidelines on when
certain contributions are appropriate. With these community-agreed guidelines,
it should be clearer on what is appropriate for review and what can be pushed
to the next release. With a better focus we can allow more time for features to
be more complete as mentioned earlier. Being complete means having confidence
something works. This can be ensured by trying changes before merge when
possible and not relying on tests alone, having performance results, and
actually having documentation so people know how to use new features. Release
notes are not enough to figure out new Cinder features.

I want to help the team realize how much more they can do in Cinder. I don't
want to be the single person people rely on in the project, but rather have
this team help me carry this project forward.

Thank you,
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-03 Thread Scott Devoid
Adding the Operators list to this since I think they will have some useful
comments.

My experience is that the current Nova quotas are not entirely useful. In
our environment we have a limited number of machines with 32 cores and 1TB
of ram (tens), and a large number with 8 cores and 32GB of ram (hundreds).
Aside from limits on the # of instances, the quota system would see the use
of 32 small machines as equivalent to the use of one big machine.
Economically and operationally these two cases are very different.

As a suggestion, how hard would it be to allow operators to create quotas
on the number of instances of a given flavor that a tenant/domain may use?
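
For illustration, here is a minimal sketch (not existing Nova code) of what
counting quota usage per flavor could look like; the class and attribute names
below are purely hypothetical:

    # Hypothetical sketch only: track instance quota per flavor instead of just
    # per-tenant vCPU/RAM/instance totals. None of these names exist in Nova.
    class FlavorQuotaTracker(object):
        def __init__(self, limits):
            # e.g. {'m1.xxlarge.1tb': 2, 'm1.small': 200}
            self.limits = limits
            self.usage = {}

        def can_launch(self, flavor_name, count=1):
            used = self.usage.get(flavor_name, 0)
            limit = self.limits.get(flavor_name)
            # A flavor with no explicit limit is unrestricted in this sketch.
            return limit is None or used + count <= limit

        def commit(self, flavor_name, count=1):
            if not self.can_launch(flavor_name, count):
                raise ValueError("Quota exceeded for flavor %s" % flavor_name)
            self.usage[flavor_name] = self.usage.get(flavor_name, 0) + count


    tracker = FlavorQuotaTracker({'m1.xxlarge.1tb': 2})
    tracker.commit('m1.xxlarge.1tb')                # first big instance is fine
    print(tracker.can_launch('m1.xxlarge.1tb', 2))  # False: would exceed the limit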


On Thu, Apr 3, 2014 at 10:02 AM, Cazzolato, Sergio J <
sergio.j.cazzol...@intel.com> wrote:

>  Hi All,
>
>
>
> I’d like to know your thoughts regarding Quota Management… I’ve been
> contributing to this topic for icehouse and noticed some issues and
> discussions around its implementation like code is duplicated, synch
> problems with database, not having an homogeneous logic, etc… so I was
> thinking that maybe a centralized implementation could be a solution for
> this… As far as I know there was a discussion during the last summit and
> the decision was to use Keystone for a Centralized Quota Management
> solution but I don’t have the details on that discussion… Also I was
> looking at Boson (https://wiki.openstack.org/wiki/Boson) that seems to be
> a nice solution for this and also addresses the scenario where Nova is
> deployed in a multi-cell manner and some other interesting things.
>
>
>
> Sergio
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Steve Gordon
- Original Message -
 
> Currently host aggregates are quite general, but the only ways for an
> end-user to make use of them are:
> 
> 1) By making the host aggregate an availability zones (where each host
> is only supposed to be in one availability zone) and selecting it at
> instance creation time.
> 
> 2) By booting the instance using a flavor with appropriate metadata
> (which can only be set up by admin).
> 
> 
> I would like to see more flexibility available to the end-user, so I
> think we should either:
> 
> A) Allow hosts to be part of more than one availability zone (and allow
> selection of multiple availability zones when booting an instance), or

While allowing hosts to be in multiple AZs changes the concept from an 
operator/user point of view, I do think the idea of being able to specify 
multiple AZs when booting an instance makes sense and would be a nice 
enhancement for users working with multi-AZ environments - "I'm OK with this 
instance running in AZ1 and AZ2, but not AZ*".
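
As a rough sketch of the difference (assuming a python-novaclient of that era):
the single-AZ call below is what exists today, while the list-valued form is
purely hypothetical and does not exist in Nova.

    from novaclient.v1_1 import client

    nova = client.Client('demo', 'secret', 'demo-tenant',
                         'http://keystone.example.com:5000/v2.0')

    # What works today: pin the instance to exactly one AZ at boot time.
    nova.servers.create('web-1', image='<image-id>', flavor='<flavor-id>',
                        availability_zone='AZ1')

    # Hypothetical form of the enhancement discussed above ("AZ1 or AZ2, but
    # not AZ3"): passing a list here is NOT supported by Nova today.
    nova.servers.create('web-2', image='<image-id>', flavor='<flavor-id>',
                        availability_zone=['AZ1', 'AZ2'])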

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Chris Friesen

On 04/03/2014 09:34 AM, Murray, Paul (HP Cloud Services) wrote:

Hi Sylvain,

I would go with keeping AZs exclusive. It is a well-established concept
even if it is up to providers to implement what it actually means in
terms of isolation. Some good use cases have been presented on this
topic recently, but for me they suggest we should develop a better
concept rather than bend the meaning of the old one. We certainly don’t
have hosts in more than one AZ in HP Cloud and I think some of our users
would be very surprised if we changed that.


I'm okay with only allowing hosts to be within a single availability 
zone, but in that case maybe we could have a per-instance way of 
matching host aggregate metadata when booting the instance--maybe a 
special form of scheduler hints?  This would be analogous to the 
existing matching of flavor metadata.
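
To make that concrete, here is a rough sketch of a scheduler filter that would
match per-instance scheduler hints against host aggregate metadata, loosely
modeled on the existing flavor-metadata matching; the 'aggregate:' hint prefix
and the metadata helper are assumptions, not existing Nova interfaces.

    # Sketch only: match scheduler hints of the form 'aggregate:<key>=<value>'
    # against host aggregate metadata. The hint prefix and the helper below
    # are illustrative assumptions, not existing Nova code.

    def aggregate_metadata_for_host(host_state):
        # A real filter would read this from Nova's aggregate DB API;
        # here it is a stand-in so the sketch is self-contained.
        return getattr(host_state, 'aggregate_metadata', {})


    class AggregateHintsFilter(object):
        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            metadata = aggregate_metadata_for_host(host_state)
            for key, wanted in hints.items():
                if not key.startswith('aggregate:'):
                    continue
                if metadata.get(key[len('aggregate:'):]) != wanted:
                    return False
            return True


    class FakeHost(object):
        aggregate_metadata = {'ssd': 'true'}

    print(AggregateHintsFilter().host_passes(
        FakeHost(), {'scheduler_hints': {'aggregate:ssd': 'true'}}))  # True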


Also, while we're looking at availability zones I'd like to point out a 
bug I reported last year (https://bugs.launchpad.net/nova/+bug/1213224) 
where you can specify multiple aggregates with the same zone name. 
Since the user can only specify the zone name, this makes it 
indeterminate which aggregate is actually used.
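
For reference, the ambiguity is easy to reproduce with python-novaclient
(credentials and names below are illustrative):

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin-tenant',
                         'http://keystone.example.com:5000/v2.0')

    # Two aggregates are allowed to share the same availability zone name...
    nova.aggregates.create('rack-a', availability_zone='zone-1')
    nova.aggregates.create('rack-b', availability_zone='zone-1')

    # ...but a user booting with availability_zone='zone-1' cannot say which
    # aggregate the instance should land in, hence the indeterminacy above.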


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Steve Gordon
- Original Message -
> Hi,
> 
> I'm currently trying to reproduce [1]. This bug requires to have the same
> host on two different aggregates, each one having an AZ.
> 
> IIRC, Nova API prevents hosts of being part of two distinct AZs [2], so
> IMHO this request should not be possible.
> That said, there are two flaws where I can identify that no validation is
> done :
>  - when specifying an AZ in nova.conf, the host is overriding the existing
> AZ by its own
>  - when adding an host to an aggregate without AZ defined, and afterwards
> update the aggregate to add an AZ
> 
> 
> So, I need direction. Either we consider it is not possible to share 2 AZs
> for the same host and then we need to fix the two above scenarios, or we
> say it's nice to have 2 AZs for the same host and then we both remove the
> validation check in the API and we fix the output issue reported in the
> original bug [1].

Current operator and, ultimately, user expectations as a result of the validation 
on aggregate creation are that a host can only be in one AZ. Based on that, I'd 
expect to see the gaps identified in those scenarios filled rather than 
allowing a host to be in multiple AZs. Nice work identifying them, though!

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Some thoughts on the mapping section

2014-04-03 Thread Thomas Spatzier
Excerpts from Thomas Herve's message on 03/04/2014 09:21:05:
> From: Thomas Herve 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 03/04/2014 09:21
> Subject: Re: [openstack-dev] [Heat] Some thoughts on the mapping section
>



>
> > Speaking of offering options for selection, there is another proposal
on
> > adding conditional creation of resources [3], whose use case to enable
> > or disable a resource creation (among others).  My perception is that
> > these are all relevant enhancements to the reusability of HOT
templates,
> > though I don't think we really need very sophisticated combinatory
> > conditionals.
>
> I think that's interesting that you mentioned that, because Zane
> talked about a "variables" section, which would encompass what
> "conditions" and "mappings" mean. That's why we're discussing
> extensively about those design points, to see where we can be a bit
> more generic to handle more use cases.

+1 on bringing those suggestions together. It seems to me like there is
quite some overlap of what "mappings" and "variables" shall solve, so it
would be nice to have one solution for it. As you mentioned earlier, the
objection against "mappings" was not that CFN had it and we didn't want to
have it, but because the use case did not sell well. If there are things
that make sense, no objection, but maybe we can do it smarter in HOT ;-)

Regards,
Thomas

>
> Cheers,
>
> --
> Thomas
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] metadata for a HOT

2014-04-03 Thread Thomas Spatzier
> From: Mike Spreitzer 
> To: "OpenStack Development Mailing List \(not for usage questions\)"
> 
> Date: 03/04/2014 07:10
> Subject: Re: [openstack-dev] [heat] metadata for a HOT
>
> Zane Bitter  wrote on 04/02/2014 05:36:43 PM:
>
> > I think that if you're going to propose a new feature, you should at
> > least give us a clue who you think is going to use it and what for ;)
>
> I was not eager to do that yet because I have not found a fully
> satisfactory answer yet, at this point I am exploring options.  But
> the problem I am thinking about is how Heat might connect to a
> holistic scheduler (a scheduler that makes a joint decision about a
> bunch of resources of various types).  Such a scheduler needs input
> describing the things to be scheduled and the policies to apply in
> scheduling; the first half of that sounds a lot like a Heat
> template, so my thoughts go in that direction.  But the HOT language
> today (since https://review.openstack.org/#/c/83758/ was merged)
> does not have a place to put policy that is not specific to a
singleresource.

I think you bring up a specific use case here, i.e. applying "policies" for
placement/scheduling when deploying a stack. This is just a thought, but I
wonder whether it would make more sense to then define a specific extension
to HOT instead of having a generic metadata section and stuffing everything
that does not fit into other places into metadata.

I mean, the use cases Keith brought up are completely different (UI and user
related), and I understand both use cases. But is the idea to put just
everything into metadata, or would different classes of use cases justify
different sections? The latter would enforce better documentation of
semantics. If everything goes into a metadata section, the contents also
need to be clearly specified. Otherwise, the resulting template won't be
portable. Ok, the standard HOT stuff will be portable, but not the
metadata, so no two users will be able to interpret it the same way.

>
> > IIRC this has been discussed in the past and the justifications for
> > including it in the template (as opposed to allowing metadata to be
> > attached in the ReST API, as other projects already do for many things)

> > were not compelling.
>
> I see that Keith Bray mentioned https://wiki.openstack.org/wiki/
> Heat/StackMetadata and https://wiki.openstack.org/wiki/Heat/UI in
> another reply on this thread.  Are there additional places to look
> to find that discussion?
>
> I have also heard that there has been discussion of language
> extension issues.  Is that a separate discussion and, if so, where
> can I read it?
>
> Thanks,
> Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Murray, Paul (HP Cloud Services)
Hi Sylvain,

I would go with keeping AZs exclusive. It is a well-established concept even if 
it is up to providers to implement what it actually means in terms of 
isolation. Some good use cases have been presented on this topic recently, but 
for me they suggest we should develop a better concept rather than bend the 
meaning of the old one. We certainly don't have hosts in more than one AZ in HP 
Cloud and I think some of our users would be very surprised if we changed that.

Paul.

From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
Sent: 03 April 2014 15:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Hosts within two Availability Zones : 
possible or not ?

+1 for AZs not sharing hosts.

Because it's the only mechanism that allows us to segment the datacenter. 
Otherwise we cannot provide redundancy to client except using Region which is 
dedicated infrastructure and networked separated and anti-affinity filter which 
IMO is not pragmatic as it has tendency of abusive usage.  Why sacrificing this 
power so that users can select the types of his desired physical hosts ? The 
latter can be exposed using flavor metadata, which is a lot safer and more 
controllable than using AZs. If someone insists that we really need to let 
users choose the types of physical hosts, then I suggest creating a new hint, 
and use aggregates with it. Don't sacrifice AZ exclusivity!

Btw, there is a datacenter design called "dual-room" [1] which I think best fit 
for AZs to make your cloud redundant even with one datacenter.

Best regards,

Toan

[1] IBM and Cisco: Together for a World Class Data Center, Page 141. 
http://books.google.fr/books?id=DHjJAgAAQBAJ&pg=PA141#v=onepage&q&f=false



De : Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Envoyé : jeudi 3 avril 2014 15:52
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [Nova] Hosts within two Availability Zones : possible 
or not ?

Hi,

I'm currently trying to reproduce [1]. This bug requires to have the same host 
on two different aggregates, each one having an AZ.

IIRC, Nova API prevents hosts of being part of two distinct AZs [2], so IMHO 
this request should not be possible.
That said, there are two flaws where I can identify that no validation is done :
 - when specifying an AZ in nova.conf, the host is overriding the existing AZ 
by its own
 - when adding an host to an aggregate without AZ defined, and afterwards 
update the aggregate to add an AZ


So, I need direction. Either we consider it is not possible to share 2 AZs for 
the same host and then we need to fix the two above scenarios, or we say it's 
nice to have 2 AZs for the same host and then we both remove the validation 
check in the API and we fix the output issue reported in the original bug [1].


Your comments are welcome.
Thanks,
-Sylvain


[1] : https://bugs.launchpad.net/nova/+bug/1277230

[2] : 
https://github.com/openstack/nova/blob/9d45e9cef624a4a972c24c47c7abd57a72d74432/nova/compute/api.py#L3378
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Khanh-Toan Tran
Dual-room link:

[1] IBM and Cisco: Together for a World Class Data Center, Page 141.
http://books.google.fr/books?id=DHjJAgAAQBAJ&pg=PA141#v=onepage&q&f=false


> -Message d'origine-
> De : Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
> Envoyé : jeudi 3 avril 2014 17:22
> À : OpenStack Development Mailing List (not for usage questions)
> Objet : RE: [openstack-dev] [Nova] Hosts within two Availability Zones :
possible
> or not ?
>
> +1 for AZs not sharing hosts.
>
> Because it’s the only mechanism that allows us to segment the
datacenter.
> Otherwise we cannot provide redundancy to client except using Region
which is
> dedicated infrastructure and networked separated and anti-affinity
filter which
> IMO is not pragmatic as it has tendency of abusive usage.  Why
sacrificing this
> power so that users can select the types of his desired physical hosts ?
The latter
> can be exposed using flavor metadata, which is a lot safer and more
controllable
> than using AZs. If someone insists that we really need to let users
choose the
> types of physical hosts, then I suggest creating a new hint, and use
aggregates
> with it. Don’t sacrifice AZ exclusivity!
>
> Btw, there is a datacenter design called “dual-room” [1] which I think
best fit for
> AZs to make your cloud redundant even with one datacenter.
>
> Best regards,
>
> Toan
>
> > -Message d'origine-
> > De : Chris Friesen [mailto:chris.frie...@windriver.com]
> > Envoyé : jeudi 3 avril 2014 16:51
> > À : openstack-dev@lists.openstack.org
> > Objet : Re: [openstack-dev] [Nova] Hosts within two Availability Zones
> > : possible or not ?
> >
> > On 04/03/2014 07:51 AM, Sylvain Bauza wrote:
> > > Hi,
> > >
> > > I'm currently trying to reproduce [1]. This bug requires to have the
> > > same host on two different aggregates, each one having an AZ.
> > >
> > > IIRC, Nova API prevents hosts of being part of two distinct AZs [2],
> > > so IMHO this request should not be possible.
> > > That said, there are two flaws where I can identify that no
> > > validation is done :
> > >   - when specifying an AZ in nova.conf, the host is overriding the
> > > existing AZ by its own
> > >   - when adding an host to an aggregate without AZ defined, and
> > > afterwards update the aggregate to add an AZ
> > >
> > >
> > > So, I need direction. Either we consider it is not possible to share
> > > 2 AZs for the same host and then we need to fix the two above
> > > scenarios, or we say it's nice to have 2 AZs for the same host and
> > > then we both remove the validation check in the API and we fix the
> > > output issue reported in the original bug [1].
> >
> > Currently host aggregates are quite general, but the only ways for an
> > end-user to make use of them are:
> >
> > 1) By making the host aggregate an availability zones (where each host
> > is only supposed to be in one availability zone) and selecting it at
> > instance creation time.
> >
> > 2) By booting the instance using a flavor with appropriate metadata
> > (which can only be set up by admin).
> >
> >
> > I would like to see more flexibility available to the end-user, so I
> > think we should either:
> >
> > A) Allow hosts to be part of more than one availability zone (and
> > allow selection of multiple availability zones when booting an
> > instance), or
> >
> > B) Allow the instance boot scheduler hints to interact with the host
> > aggregate metadata.
> >
> > Chris
> >
> >

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Khanh-Toan Tran
+1 for AZs not sharing hosts.

Because it’s the only mechanism that allows us to segment the datacenter.
Otherwise we cannot provide redundancy to clients except by using Regions,
which are dedicated, network-separated infrastructure, or the anti-affinity
filter, which IMO is not pragmatic as it tends to be abused.
Why sacrifice this power just so that users can select the types of
physical hosts they desire? The latter can be exposed using flavor metadata,
which is a lot safer and more controllable than using AZs. If someone
insists that we really need to let users choose the types of physical
hosts, then I suggest creating a new hint and using aggregates with it.
Don’t sacrifice AZ exclusivity!

Btw, there is a datacenter design called “dual-room” [1] which I think
best fits AZs, making your cloud redundant even with one datacenter.
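
A minimal sketch of the flavor-metadata approach mentioned above, using
python-novaclient and assuming the AggregateInstanceExtraSpecsFilter is enabled
in the scheduler; the 'server_class' key, host name, and flavor sizing are
purely illustrative:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin-tenant',
                         'http://keystone.example.com:5000/v2.0')

    # Group the special hosts in an aggregate and tag it (no AZ involved).
    agg = nova.aggregates.create('bigmem-hosts', availability_zone=None)
    nova.aggregates.add_host(agg, 'compute-01')
    nova.aggregates.set_metadata(agg, {'server_class': 'bigmem'})

    # Expose the hardware class through a flavor; with the
    # AggregateInstanceExtraSpecsFilter enabled, instances of this flavor
    # should only be scheduled onto hosts of the matching aggregate.
    flavor = nova.flavors.create('m1.bigmem', ram=262144, vcpus=32, disk=200)
    flavor.set_keys({'server_class': 'bigmem'})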

Best regards,

Toan

> -Message d'origine-
> De : Chris Friesen [mailto:chris.frie...@windriver.com]
> Envoyé : jeudi 3 avril 2014 16:51
> À : openstack-dev@lists.openstack.org
> Objet : Re: [openstack-dev] [Nova] Hosts within two Availability Zones :
possible
> or not ?
>
> On 04/03/2014 07:51 AM, Sylvain Bauza wrote:
> > Hi,
> >
> > I'm currently trying to reproduce [1]. This bug requires to have the
> > same host on two different aggregates, each one having an AZ.
> >
> > IIRC, Nova API prevents hosts of being part of two distinct AZs [2],
> > so IMHO this request should not be possible.
> > That said, there are two flaws where I can identify that no validation
> > is done :
> >   - when specifying an AZ in nova.conf, the host is overriding the
> > existing AZ by its own
> >   - when adding an host to an aggregate without AZ defined, and
> > afterwards update the aggregate to add an AZ
> >
> >
> > So, I need direction. Either we consider it is not possible to share 2
> > AZs for the same host and then we need to fix the two above scenarios,
> > or we say it's nice to have 2 AZs for the same host and then we both
> > remove the validation check in the API and we fix the output issue
> > reported in the original bug [1].
>
> Currently host aggregates are quite general, but the only ways for an
end-user
> to make use of them are:
>
> 1) By making the host aggregate an availability zones (where each host
is only
> supposed to be in one availability zone) and selecting it at instance
creation
> time.
>
> 2) By booting the instance using a flavor with appropriate metadata
(which can
> only be set up by admin).
>
>
> I would like to see more flexibility available to the end-user, so I
> think we should either:
>
> A) Allow hosts to be part of more than one availability zone (and allow
> selection of multiple availability zones when booting an instance), or
>
> B) Allow the instance boot scheduler hints to interact with the host
> aggregate metadata.
>
> Chris
>
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] Energy efficiency BP

2014-04-03 Thread Dina Belova
OK, thanks for publishing it!


On Thu, Apr 3, 2014 at 6:51 PM, François Rossigneux <
francois.rossign...@inria.fr> wrote:

> Hello,
>
> I am writing a blueprint about energy efficiency:
> - Reservation aggregation to minimize the number of active physical hosts
> - Standby modes on inactive physical hosts
>
> https://blueprints.launchpad.net/climate/+spec/energy-efficiency
>
> Please feel free to comment it in the Etherpad...
> Francois
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Quota Management

2014-04-03 Thread Cazzolato, Sergio J
Hi All,

I'd like to know your thoughts regarding Quota Management... I've been 
contributing to this topic for Icehouse and noticed some issues and discussions 
around its implementation, such as duplicated code, synchronization problems 
with the database, and a lack of homogeneous logic... so I was thinking that 
maybe a centralized implementation could be a solution for this... As far as I 
know, there was a discussion during the last summit and the decision was to use 
Keystone for a centralized Quota Management solution, but I don't have the 
details of that discussion... Also I was looking at Boson 
(https://wiki.openstack.org/wiki/Boson), which seems to be a nice solution for 
this and also addresses the scenario where Nova is deployed in a multi-cell 
manner, among other interesting things.

Sergio

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Khanh-Toan Tran
+1 for AZs not sharing hosts.

Because it’s the only mechanism that allows us to segment the datacenter.
Otherwise we cannot provide redundancy to clients except by using Regions,
which are dedicated, network-separated infrastructure, or the anti-affinity
filter, which IMO is not pragmatic as it tends to be abused.
Why sacrifice this power just so that users can select the types of
physical hosts they desire? The latter can be exposed using flavor metadata,
which is a lot safer and more controllable than using AZs. If someone
insists that we really need to let users choose the types of physical
hosts, then I suggest creating a new hint and using aggregates with it.
Don’t sacrifice AZ exclusivity!

Btw, there is a datacenter design called “dual-room” [1] which I think
best fits AZs, making your cloud redundant even with one datacenter.



Best regards,



Toan



[1] IBM and Cisco: Together for a World Class Data Center, Page 141.
http://books.google.fr/books?id=DHjJAgAAQBAJ&pg=PA141#v=onepage&q&f=false







De : Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Envoyé : jeudi 3 avril 2014 15:52
À : OpenStack Development Mailing List (not for usage questions)
Objet : [openstack-dev] [Nova] Hosts within two Availability Zones :
possible or not ?



Hi,



I'm currently trying to reproduce [1]. This bug requires to have the same
host on two different aggregates, each one having an AZ.



IIRC, Nova API prevents hosts of being part of two distinct AZs [2], so
IMHO this request should not be possible.

That said, there are two flaws where I can identify that no validation is
done :

 - when specifying an AZ in nova.conf, the host is overriding the existing
AZ by its own

 - when adding an host to an aggregate without AZ defined, and afterwards
update the aggregate to add an AZ





So, I need direction. Either we consider it is not possible to share 2 AZs
for the same host and then we need to fix the two above scenarios, or we
say it's nice to have 2 AZs for the same host and then we both remove the
validation check in the API and we fix the output issue reported in the
original bug [1].





Your comments are welcome.

Thanks,

-Sylvain





[1] : https://bugs.launchpad.net/nova/+bug/1277230



[2] :
https://github.com/openstack/nova/blob/9d45e9cef624a4a972c24c47c7abd57a72d
74432/nova/compute/api.py#L3378

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Chris Friesen

On 04/03/2014 07:51 AM, Sylvain Bauza wrote:

Hi,

I'm currently trying to reproduce [1]. This bug requires to have the
same host on two different aggregates, each one having an AZ.

IIRC, Nova API prevents hosts of being part of two distinct AZs [2], so
IMHO this request should not be possible.
That said, there are two flaws where I can identify that no validation
is done :
  - when specifying an AZ in nova.conf, the host is overriding the
existing AZ by its own
  - when adding an host to an aggregate without AZ defined, and
afterwards update the aggregate to add an AZ


So, I need direction. Either we consider it is not possible to share 2
AZs for the same host and then we need to fix the two above scenarios,
or we say it's nice to have 2 AZs for the same host and then we both
remove the validation check in the API and we fix the output issue
reported in the original bug [1].


Currently host aggregates are quite general, but the only ways for an 
end-user to make use of them are:


1) By making the host aggregate an availability zone (where each host 
is only supposed to be in one availability zone) and selecting it at 
instance creation time.


2) By booting the instance using a flavor with appropriate metadata 
(which can only be set up by admin).



I would like to see more flexibility available to the end-user, so I 
think we should either:


A) Allow hosts to be part of more than one availability zone (and allow 
selection of multiple availability zones when booting an instance), or


B) Allow the instance boot scheduler hints to interact with the host 
aggregate metadata.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-03 Thread Tomas Sedovic
On 03/04/14 13:02, Robert Collins wrote:
> Getting back in the swing of things...
> 
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
> 
> In this months review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core
> 
> Existing -core members are eligible to vote - please indicate your
> opinion on each of the three changes above in reply to this email.

+1

> 

> 
> 
> -core that are not keeping up recently... :
> 
> |   tomas-8c8 **  |  310   4   2  25   887.1% | 1 (  3.2%)  |

Duly noted. I've picked up the daily pace again in the last couple of
weeks and will continue doing so.

> |marios **|  270   1  17   9   796.3% | 3 ( 11.1%)  |
> |   tzumainn **   |  270   3  23   1   488.9% | 0 (  0.0%)  |
> |pblaho **|  170   0   4  13   4   100.0% | 1 (  5.9%)  |
> |jomara **|   00   0   0   0   1 0.0% | 0 (  0.0%)  |
> 
> 
> Please remember - the stats are just an entry point to a more detailed
> discussion about each individual, and I know we all have a bunch of
> work stuff, on an ongoing basis :)
> 
> I'm using the fairly simple metric we agreed on - 'average at least
> three reviews a
> day' as a proxy for 'sees enough of the code and enough discussion of
> the code to be an effective reviewer'. The three review a day thing we
> derived based
> on the need for consistent volume of reviews to handle current
> contributors - we may
> lower that once we're ahead (which may happen quickly if we get more cores... 
> :)
> But even so:
>  - reading three patches a day is a pretty low commitment to ask for
>  - if you don't have time to do that, you will get stale quickly -
> you'll only see under
>33% of the code changes going on (we're doing about 10 commits
>a day - twice as many since december - and hopefully not slowing down!)
> 
> Cheers,
> Rob
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] Energy efficiency BP

2014-04-03 Thread François Rossigneux

Hello,

I am writing a blueprint about energy efficiency:
- Reservation aggregation to minimize the number of active physical hosts
- Standby modes on inactive physical hosts

https://blueprints.launchpad.net/climate/+spec/energy-efficiency

Please feel free to comment it in the Etherpad...
Francois

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting April 3 1800 UTC [savanna]

2014-04-03 Thread Ruslan Kamaldinov
On Wed, Apr 2, 2014 at 11:13 PM, Dmitry  wrote:
> Hi,
> I'm wondering if you have plans to use Murano for a cluster management?
> Thanks,
> Dmitry

Dmitry,

Sahara is not going to use Murano for cluster management. Sahara uses Heat
for underlying infrastructure management and various Hadoop management tools
(e.g. Ambari from Hortonworks Data Platform) to manage Hadoop clusters. Also
Sahara builds a nice layer of abstraction above different distributions to
define and use cluster templates to provision Hadoop and related software.

On the other hand, Murano might expose Sahara/Hadoop clusters in the app
catalog. Sahara developers are already working on an implementation of a Sahara
resource in Heat, so it'll be available through Heat templates. Here [0] you can
find an example of a Heat template for Sahara.


[0] http://paste.openstack.org/show/64658/

Thanks,
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-03 Thread Solly Ross
I just wanted to add that if you modify code, you can commit it into a 
temporary commit,
and that will be preserved.

Best Regards,
Solly Ross

- Original Message -
From: "Dolph Mathews" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Thursday, April 3, 2014 9:51:31 AM
Subject: Re: [openstack-dev] Help in re-running openstack


On Thu, Apr 3, 2014 at 8:45 AM, Anita Kuno < ante...@anteaya.info > wrote: 



On 04/03/2014 07:02 AM, Erno Kuvaja wrote: 
> Hi Shiva, 
> 
> You can get into the screen after you have made the changes stop the 
> process you have changed (ctrl-c on the correct tab) and restart it 
> (arrow up will give you the last command ran which will be the one that 
> has started the process by devstack). 
> 
> - Erno 
> 
> On 03/04/14 11:47, shiva m wrote: 
>> Hi, 
>> 
>> I am trying to modify code in /op/stack/* and did ./unstack.sh and 
>> ./stack.sh. But after ./stack.sh it reloading to previous values. Any 
>> one please help where to modify code and re-run. Say if I modify 
>> some python file or some configurtaion file like /etc/nova/nova.conf, 
>> how do I make these changes get effected. I have ubuntu-havana 
>> devstack setup. 
>> 
>> I am new to openstack code, correct if I am wrong. 
>> 
>> Thanks, 
>> Shiva 
>> 
>> 
>> 
>> 
> 
> 
> 
> 
> 
Requests of this nature are considered support requests. 

Although not named, this question is specifically with regard to devstack 
(making this the appropriate list!). 



Support requests belong on the general mailing list: 
openst...@lists.openstack.org . 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 

Thank you, 
Anita. 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral][Taskflow] meet on IRC to talk over TaskFlow/Mistral integration

2014-04-03 Thread Dmitri Zimine
IRC to discuss http://tinyurl.com/k3s2gmy

Joshua, 2000 UTC doesn't quite work for Renat and Kirill (3 am their time). 

The overlap is: 
PST (UTC-7)       UTC              NOVT (UTC+7)
04pm (16:00)      11pm (23:00)     6am (06:00)
10pm (22:00)      05am (05:00)     12pm (12:00)

Kirill's pref is 3am UTC, early is ok, if needed. 
@Joshua can you do 3 am UTC (8 pm local?)
@Renat? 

Can we pencil 3:00 UTC on #openstack-mistral and adjust for Renat if needed?

DZ> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder: Whats the way to do cleanup during service shutdown / restart ?

2014-04-03 Thread Deepak Shetty
Hi,
I am looking to unmount the glusterfs shares that are mounted as part of the
gluster driver when c-vol is being restarted or Ctrl-C'ed (as in a devstack
env) or when the c-vol service is being shut down.

I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it didn't
work:

    def __del__(self):
        LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                 self._mounted_shares)
        for share in self._mounted_shares:
            mount_path = self._get_mount_point_for_share(share)
            command = ['umount', mount_path]
            self._do_umount(command, True, share)

self._mounted_shares is defined in the base class (RemoteFsDriver)

^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-] Caught SIGINT, stopping children
2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-] Caught SIGTERM, exiting
2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-] Waiting on 2 children to exit
2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-] Child 30185 exited with status 1
2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-] DPKS: Inside __del__ Hurray!, shares=[]
2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-] Child 30186 exited with status 1
Exception TypeError: "'NoneType' object is not callable" in > ignored
[stack@devstack-vm tempest]$

So _mounted_shares is empty ([]), which isn't true since I have 2
glusterfs shares mounted, and when I print _mounted_shares in other parts of
the code, it does show me the right thing, as below.

From volume/drivers/glusterfs.py @ line 1062:
LOG.debug(_('Available shares: %s') % self._mounted_shares)

which dumps the debug print below:

2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
[req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
[u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1']
from (pid=30185) _ensure_shares_mounted
/opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
This brings up a few questions (I am using a devstack env):

1) Is __del__ the right way to do cleanup for a Cinder driver? I have 2
gluster backends set up, hence 2 cinder-volume instances, but I see __del__
being called only once (as per the debug prints above).
2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
c-vol (from screen) gives the same issue (shares is empty ([])), but this
time I see that my atexit handler is called twice (once for each backend).
3) In general, what's the right way to do cleanup inside a Cinder volume
driver when a service is going down or being restarted?
4) The solution should work in both devstack (Ctrl-C to shut down the c-vol
service) and production (where we do service restart c-vol).

Would appreciate a response

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PTL candidacy

2014-04-03 Thread Tristan Cacqueray
confirmed

On 04/03/2014 03:44 PM, James Slagle wrote:
> I'd like to announce my candidacy for TripleO (Deployment) PTL.
> 
> First, a little about myself. I've been involved with OpenStack and
> contributing to TripleO for nearly a year now. I'm currently a developer at 
> Red
> Hat and I've spent much of my career before OpenStack working on various
> systems management tools. Working on Red Hat's Satellite and RHUI offerings
> and rPath's rBuilder are where I've done most of my Python, API, and database
> development in the past.
> 
> To me, the PTL role is about facilitating developers to work on what they want
> to work on. I see that as the best way to bring in new developers and increase
> our number of contributors. That being said, not everything can fit under the
> TripleO umbrella. So, the role of the PTL also should be in assisting in
> decisions about the direction of the project in a way that is best for TripleO
> and OpenStack as a whole. The PTL role is also an organizational role. Not 
> just
> in the day to day tasks, but also in helping to make sure folks are aware of
> what others are working on.  Also, encouraging collaboration towards common
> goals, maintaining a common focus, and building consensus for the project are
> also important.
> 
> Many of the contributions that I've made to TripleO to date have been about
> broadening support for different use cases and adding to stability. When I
> first got involved, I focused primarily on getting TripleO working really well
> on Fedora and doing a lot of bug fixing and enablement in that area. In doing
> so, I've aimed to do it in a way so that it's easier for the next person 
> coming
> along who might like to try something new. I've also championed things like
> package install support and stable branches for some of our projects.
> Additionally, I have aimed to make TripleO easier for newcomers and 
> developers.
> 
> During the Juno cycle, if elected as PTL, I think that TripleO should continue
> to focus on many of the same areas that are focal points today. These items 
> are
> critical to the success and real world usage of TripleO. The 3 biggest items 
> to
> me are:
> 
> - Improving our CI infrastructure to where TripleO is voting and in the gate
> - HA deployments
> - Upgrades
> - Tuskar
> 
> I'd like to see Tuskar continue to develop to the point where it is ready to 
> be
> integrated into TripleO more directly. Specifically, a devtest path that uses
> Tuskar, CI jobs that use Tuskar, and generally driving folks towards trying 
> and
> providing feedback on the Tuskar work flow.
> 
> In addition though, I'd like to focus on some other overarching themes that I
> think are important to the project. If elected, my additional goals for the
> TripleO project would be to work to broaden it's adoption and increase
> contributions.
> 
> The first of these is further enablement of alternative implementations. I
> would like to see TripleO as broadly adopted as possible, and I think that a
> "one size fits all" approach may not be the best way.  To date, I think 
> TripleO
> has done a good job enabling folks to do alternative implementations as long 
> as
> there are people willing to step up and do the work. I would continue that
> sentiment, but further it some as well by really trying to open the doors for
> new developers.
> 
> To that end, I'd like to focus on easier developer setups, even if that means
> leaving out important pieces of the TripleO process for the sake of giving 
> some
> people an easier way to get bootstrapped. Folks that want to work on support
> for their favorite configuration management tool, or additional package 
> support,
> or additional distro support, don't necessarily have to have a complete
> functional TripleO environment with all the bells and whistles.
> 
> Secondly, I think we as a community could have some better examples of how
> different tools might fit into existing processes, and perhaps even bootstrap
> these implementations to a degree. The puppet-openstack is one such effort I'd
> like to see have some integration with TripleO.
> 
> Thirdly, similar to making things easier for developers, I'd like to make 
> things
> easier for operators to try and use TripleO. I think getting real world
> operator feedback for TripleO is critical, especially as we are in the process
> of defining it's future direction. Some specifics in this area would include
> ability to adopt deployments that might be deployed via pre-existing tooling,
> integration with existing deployed configuration management solutions, or
> ways to integrate with existing upgrade mechanisms (possibly via HOT).
> 
> Finally, I'd like to see TripleO become a true default installer for 
> OpenStack.
> I'd like to see an implementation of elements that are not image specific, and
> instead are the reference implementations of how an OpenStack project gets
> installed and configured. I think there is a lot of opportunity to reduce

Re: [openstack-dev] Help in re-running openstack

2014-04-03 Thread Dolph Mathews
On Thu, Apr 3, 2014 at 8:45 AM, Anita Kuno  wrote:

> On 04/03/2014 07:02 AM, Erno Kuvaja wrote:
> > Hi Shiva,
> >
> > You can get into the screen after you have made the changes stop the
> > process you have changed (ctrl-c on the correct tab) and restart it
> > (arrow up will give you the last command ran which will be the one that
> > has started the process by devstack).
> >
> > - Erno
> >
> > On 03/04/14 11:47, shiva m wrote:
> >> Hi,
> >>
> >> I  am trying to modify code in /op/stack/* and did ./unstack.sh and
> >> ./stack.sh. But after ./stack.sh it reloading to previous values. Any
> >> one  please help where to modify code and  re-run. Say if I modify
> >> some python file or some configurtaion file like /etc/nova/nova.conf,
> >> how  do I make these changes get effected. I have ubuntu-havana
> >> devstack setup.
> >>
> >> I am new to openstack code, correct if I am wrong.
> >>
> >> Thanks,
> >> Shiva
> >>
> >>
> >>
> >>
> >
> >
> >
> >
> >
> Requests of this nature are considered support requests.
>

Although not named, this question is specifically with regard to devstack
(making this the appropriate list!).


>
> Support requests belong on the general mailing list:
> openst...@lists.openstack.org.
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> Thank you,
> Anita.


>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-03 Thread Simon Pasquier
Thanks Salvatore and Kyle for your feedback.

Kyle, you're right, my question has been kicked off by the ML2 ODL bug.
I didn't want to point fingers but rather understand the mid/long-term
plan for 3rd party testing. I'm happy to see that this is taken into
account and hopefully the Juno cycle will provide time to implement the
correct level of testing.

Regards,

Simon

On 03/04/2014 15:26, Kyle Mestery wrote:
> I agree 100% on this in fact. One of the other concerns I have with
> the existing 3rd party
> CI systems is that, other than the "audit" review Salvatore mentions,
> who is ensuring
> they continue to run ok? Once they've been given voting rights, is
> anyone auditing these
> to ensure they continue to function ok?
> 
> I suspect also that Simon is referring to the ODL ML2 MechanismDriver,
> which was broken
> with this commit [1] pushed in at the very end of Icehouse, and in
> fact is still broken unless
> you use the wonky workaround of telling Nova that VIF plugging isn't
> fatal and give it a timeout
> to wait. Better CI for ODL would have caught this, but I'm still
> somewhat saddened this was
> merged so late because now ODL is broken by default and the work to
> fix this is turning out
> to be more challenging than initially thought. :(
> 
> Thanks,
> Kyle
> 
> [1] https://review.openstack.org/#/c/75253/
> 
> On Thu, Apr 3, 2014 at 7:56 AM, Salvatore Orlando  wrote:
>> Hi Simon,
>>
>> I agree with your concern.
>> Let me point out however that VMware mine sweeper runs almost all the smoke
>> suite.
>> It's been down a few days for an internal software upgrade, so perhaps you
>> have not seen any recent report from it.
>>
>> I've seen some CI systems testing as little as tempest.api.network.
>> Since a criterion on the minimum set of tests to run was not defined prior
>> to the release cycle, it was also not ok to enforce it once the system went
>> live.
>> The only thing active at the moment is a sort of purpose built lie detector
>> [1].
>>
>> I hope stricter criteria will be enforced for Juno; I personally think every
>> CI should run at least the smoketest suite for L2/L3 services (eg: load
>> balancer scenario will stay optional).
>>
>> Salvatore
>>
>> [1] https://review.openstack.org/#/c/75304/
>>
>>
>>
>> On 3 April 2014 12:28, Simon Pasquier  wrote:
>>>
>>> Hi,
>>>
>>> I'm looking at [1] but I see no requirement of which Tempest tests
>>> should be executed.
>>>
>>> In particular, I'm a bit puzzled that it is not mandatory to boot an
>>> instance and check that it gets connected to the network. To me, this is
>>> the very minimum for asserting that your plugin or driver is working
>>> with Neutron *and* Nova (I'm not even talking about security groups). I
>>> had a quick look at the existing 3rd party CI systems and I found none
>>> running this kind of check (correct me if I'm wrong).
>>>
>>> Thoughts?
>>>
>>> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>>> --
>>> Simon Pasquier
>>> Software Engineer (OpenStack Expertise Center)
>>> Bull, Architect of an Open World
>>> Phone: + 33 4 76 29 71 49
>>> http://www.bull.com
>>>
>>
>>
>>
>>
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-03 Thread Sylvain Bauza
Hi,

I'm currently trying to reproduce [1]. This bug requires having the same
host in two different aggregates, each one having an AZ.

IIRC, the Nova API prevents hosts from being part of two distinct AZs [2], so
IMHO this request should not be possible.
That said, there are two cases where I can identify that no validation is
done:
 - when specifying an AZ in nova.conf, the host overrides the existing
AZ with its own
 - when adding a host to an aggregate without an AZ defined, and afterwards
updating the aggregate to add an AZ


So, I need direction. Either we consider it is not possible for the same host
to be in 2 AZs and then we need to fix the two scenarios above, or we
say it's fine to have 2 AZs for the same host and then we both remove the
validation check in the API and fix the output issue reported in the
original bug [1].
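
To make the second scenario concrete, the following python-novaclient sequence
(names illustrative) currently goes through without any AZ validation:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin-tenant',
                         'http://keystone.example.com:5000/v2.0')

    # Create an aggregate with no AZ and add a host that may already belong
    # to another availability zone.
    agg = nova.aggregates.create('agg-x', availability_zone=None)
    nova.aggregates.add_host(agg, 'compute-1')

    # Later, update the aggregate to attach an AZ; nothing re-checks whether
    # 'compute-1' is now effectively in two availability zones.
    nova.aggregates.update(agg, {'availability_zone': 'zone-2'})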


Your comments are welcome.
Thanks,
-Sylvain


[1] : https://bugs.launchpad.net/nova/+bug/1277230

[2] :
https://github.com/openstack/nova/blob/9d45e9cef624a4a972c24c47c7abd57a72d74432/nova/compute/api.py#L3378
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about performance of ceilometer API

2014-04-03 Thread Doug Hellmann
On Thu, Apr 3, 2014 at 6:24 AM, Yuanjing (D)  wrote:
> Hi:
>
>
>
> I have deployed a RDO havana version's openstack environment for a period.
>
>
>
> By chance I tested APIs of ceilometer, in the first I defined a new meter
> like meter_test, then used 'ceilometer sample-list -m yjmeter' command to
> check the result.
>
> I found that the query cost about 13 seconds, why it was so slow with one
> record?
>
>
>
> Now that RDO used mongodb as default DB for ceilometer, I logging into it
> and found that the meter collection had about 2,700,000 records. Maybe the
> mass records slowed the query.
>
>
>
> Take production environemnt into account, If there are 100 hosts , each host
> contains 30 VM.
>
> CeilometerI use default configuration of 11 compute pollsters and the
> pipeline interval is 600s.
>
> Then one day system will generate 4,752,000 record, the formula is:
> 11(meter) * 30(VM) * 6(system run pollsters 6 times an hour) * 24(a day) *
> 100(host) = 4,752,000 records.
>
>
>
> For above case I think it is necessary to restrict number of records by some
> mechanism, below are my immature idea:
>
> 1. Make restriction on max supported records by time or number of samples,
> discard old records.

There is an existing mechanism to delete old data based on its age.

>
> 2. Providing API to delete samples by resource_id or some other conditions,
> so the third integration system may call this API to delete related samples
> when delete a resource.

That's not a bad idea, although you wouldn't always want to call it.
Metering data should live longer than the resource being billed.

>
> 3. Running period task of accounting on raw samples, using 1min samples to
> generate 5min statistics samples, using 5min statistics to generate 30min
> statistics samples, and so on. Every period of sample has individual data
> table and has resriction on max supported records .

This sort of roll-up would be useful for monitoring, but would break
the audit trail for metering. So it might be useful, but may not solve
the whole problem.

> I am not a ceilometer programmer and I apologize if I am missing something
> very obvious.

The rate of collection, meters kept, and retention time are all
configurable using the main configuration file or the pipeline YAML
file. That's not to say this isn't a real problem -- I see one or two
summit sessions that may cover performance issues like this, and I
know it is a high priority for both of our PTL candidates.

Doug

>
> Can you give me some help to make me clear about them and how to implement
> my requirement?
>
>
>
> Thanks
>
>
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-03 Thread Anita Kuno
On 04/03/2014 07:02 AM, Erno Kuvaja wrote:
> Hi Shiva,
> 
> You can get into the screen after you have made the changes stop the
> process you have changed (ctrl-c on the correct tab) and restart it
> (arrow up will give you the last command ran which will be the one that
> has started the process by devstack).
> 
> - Erno
> 
> On 03/04/14 11:47, shiva m wrote:
>> Hi,
>>
>> I  am trying to modify code in /op/stack/* and did ./unstack.sh and
>> ./stack.sh. But after ./stack.sh it reloading to previous values. Any
>> one  please help where to modify code and  re-run. Say if I modify
>> some python file or some configurtaion file like /etc/nova/nova.conf,
>> how  do I make these changes get effected. I have ubuntu-havana
>> devstack setup.
>>
>> I am new to openstack code, correct if I am wrong.
>>
>> Thanks,
>> Shiva
>>
>>
>>
>>
> 
> 
> 
> 
> 
Requests of this nature are considered support requests.

Support requests belong on the general mailing list:
openst...@lists.openstack.org.

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PTL candidacy

2014-04-03 Thread James Slagle
I'd like to announce my candidacy for TripleO (Deployment) PTL.

First, a little about myself. I've been involved with OpenStack and
contributing to TripleO for nearly a year now. I'm currently a developer at Red
Hat and I've spent much of my career before OpenStack working on various
systems management tools. Working on Red Hat's Satellite and RHUI offerings
and rPath's rBuilder are where I've done most of my Python, API, and database
development in the past.

To me, the PTL role is about facilitating developers to work on what they want
to work on. I see that as the best way to bring in new developers and increase
our number of contributors. That being said, not everything can fit under the
TripleO umbrella, so the PTL should also assist in decisions about the
direction of the project in a way that is best for TripleO and OpenStack as a
whole. The PTL role is an organizational one as well: not just handling the
day-to-day tasks, but also helping to make sure folks are aware of what others
are working on. Encouraging collaboration towards common goals, maintaining a
common focus, and building consensus for the project are equally important.

Many of the contributions that I've made to TripleO to date have been about
broadening support for different use cases and adding to stability. When I
first got involved, I focused primarily on getting TripleO working really well
on Fedora and doing a lot of bug fixing and enablement in that area. In doing
so, I've aimed to make it easier for the next person coming
along who might like to try something new. I've also championed things like
package install support and stable branches for some of our projects.
Additionally, I have aimed to make TripleO easier for newcomers and developers.

During the Juno cycle, if elected as PTL, I think that TripleO should continue
to focus on many of the same areas that are focal points today. These items are
critical to the success and real-world usage of TripleO. The biggest items to
me are:

- Improving our CI infrastructure to where TripleO is voting and in the gate
- HA deployments
- Upgrades
- Tuskar

I'd like to see Tuskar continue to develop to the point where it is ready to be
integrated into TripleO more directly. Specifically, a devtest path that uses
Tuskar, CI jobs that use Tuskar, and generally driving folks towards trying and
providing feedback on the Tuskar work flow.

In addition though, I'd like to focus on some other overarching themes that I
think are important to the project. If elected, my additional goals for the
TripleO project would be to work to broaden its adoption and increase
contributions.

The first of these is further enablement of alternative implementations. I
would like to see TripleO as broadly adopted as possible, and I think that a
"one size fits all" approach may not be the best way.  To date, I think TripleO
has done a good job enabling folks to do alternative implementations as long as
there are people willing to step up and do the work. I would continue that
sentiment, and further it by really trying to open the doors for
new developers.

To that end, I'd like to focus on easier developer setups, even if that means
leaving out important pieces of the TripleO process for the sake of giving some
people an easier way to get bootstrapped. Folks that want to work on support
for their favorite configuration management tool, or additional package support,
or additional distro support, don't necessarily need a completely
functional TripleO environment with all the bells and whistles.

Secondly, I think we as a community could have some better examples of how
different tools might fit into existing processes, and perhaps even bootstrap
these implementations to a degree. The puppet-openstack modules are one such
effort I'd like to see gain some integration with TripleO.

Thirdly, similar to making things easier for developers, I'd like to make things
easier for operators to try and use TripleO. I think getting real world
operator feedback for TripleO is critical, especially as we are in the process
of defining its future direction. Some specifics in this area would include
the ability to adopt deployments that were made via pre-existing tooling,
integration with existing deployed configuration management solutions, or
ways to integrate with existing upgrade mechanisms (possibly via HOT).

Finally, I'd like to see TripleO become a true default installer for OpenStack.
I'd like to see an implementation of elements that are not image specific, and
instead are the reference implementations of how an OpenStack project gets
installed and configured. I think there is a lot of opportunity to reduce and
reuse code here across projects in this space. Many projects document how they
should be installed, then there are implementations in devstack, and also
implementations now in TripleO.  I'd like to see the elements become more
universal to where they cou

Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-03 Thread Vijay Venkatachalam

The document has a Vendor column; should it be Cloud Operator instead?

Thanks,
Vijay V.


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Thursday, April 3, 2014 11:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data 
from Operators needed.

Stephen,

Agreed. Basically the page is starting to look like a requirements page.
I think we need to move to a Google spreadsheet, where the table can be
organized more easily.
Here's the doc that may do a better job for us:
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing

Thanks,
Eugene.

On Thu, Apr 3, 2014 at 5:34 AM, Prashanth Hari <hvpr...@gmail.com> wrote:
More additions to the use cases 
(https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases).
I have updated some of the features we are interested in.



Thanks,
Prashanth


On Wed, Apr 2, 2014 at 8:12 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:
Hi y'all--

Looking at the data in the page already, it looks more like a feature wishlist 
than actual usage data. I thought we agreed to provide data based on percentage 
usage of a given feature, the end result of the data collection being that it 
would become more obvious which features are the most relevant to the most 
users, and therefore are more worthwhile targets for software development.

Specifically, I was expecting to see something like the following (using 
hypothetical numbers of course, and where technical people from "Company A" & 
etc. fill out the data for their organization):

== L7 features ==

"Company A" (Cloud operator serving external customers): 56% of load-balancer 
instances use
"Company B" (Cloud operator serving external customers): 92% of load-balancer 
instances use
"Company C" (Fortune 100 company serving internal customers): 0% of 
load-balancer instances use

== SSL termination ==

"Company A" (Cloud operator serving external customers): 95% of load-balancer 
instances use
"Company B" (Cloud operator serving external customers): 20% of load-balancer 
instances use
"Company C" (Fortune 100 company serving internal customers): 50% of 
load-balancer instances use.

== Racing stripes ==

"Company A" (Cloud operator serving external customers): 100% of load-balancer 
instances use
"Company B" (Cloud operator serving external customers): 100% of load-balancer 
instances use
"Company C" (Fortune 100 company serving internal customers): 100% of 
load-balancer instances use


In my mind, a wish-list of features is only going to be relevant to this 
discussion if (after we agree on what the items under consideration ought to 
be) each technical representative presents a prioritized list for their 
organization. :/ A wish-list is great for brain-storming what ought to be 
added, but is less relevant for prioritization.

In light of last week's meeting, it seems useful to list the features most 
recently discussed in that meeting and on the mailing list as being points on 
which we want to gather actual usage data (ie. from what people are actually 
using on the load balancers in their organization right now). Should we start a 
new page that lists actual usage percentages, or just re-vamp the one above?  
(After all, wish-list can be useful for discovering things we're missing, 
especially if we get people new to the discussion to add their $0.02.)

Thanks,
Stephen



On Wed, Apr 2, 2014 at 3:46 PM, Jorge Miramontes
<jorge.miramon...@rackspace.com> wrote:
Thanks Eugene,

I added our data onto the requirements page since I was hoping to prioritize 
requirements based on the operator data that gets provided. We can move it over 
to the other page if you think that makes sense. See everyone on the weekly 
meeting tomorrow!

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, April 1, 2014 4:09 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data 
from Operators needed.

I added two more. I am still working on our HA use cases. Susanne

On Tue, Apr 1, 2014 at 4:16 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
I added our priorities. I hope it's formatted well enough. I just took a stab in 
the dark.

Thanks,
Kevin

From: Eugene Nikanorov [enikano...@mirantis.com]
Sent: Tuesday, April 01, 2014 3:02 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from 
Operators needed.
Hi folks,

At the last meeting we decided to collect usage data so we could prioritize 
features and see what is demanded most.

Here's the blank page to do that (in a free form).

Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-03 Thread Kyle Mestery
I agree 100% on this, in fact. One of the other concerns I have with the
existing 3rd party CI systems is this: other than the "audit" review Salvatore
mentions, who is ensuring they continue to run OK? Once they've been given
voting rights, is anyone auditing them to ensure they continue to function?

I suspect also that Simon is referring to the ODL ML2 MechanismDriver, which
was broken by this commit [1] pushed in at the very end of Icehouse, and in
fact is still broken unless you use the wonky workaround of telling Nova that
VIF plugging isn't fatal and giving it a timeout to wait. Better CI for ODL
would have caught this, but I'm still somewhat saddened this was merged so
late, because now ODL is broken by default and the work to fix it is turning
out to be more challenging than initially thought. :(

Thanks,
Kyle

[1] https://review.openstack.org/#/c/75253/

On Thu, Apr 3, 2014 at 7:56 AM, Salvatore Orlando  wrote:
> Hi Simon,
>
> I agree with your concern.
> Let me point out however that VMware mine sweeper runs almost all the smoke
> suite.
> It's been down a few days for an internal software upgrade, so perhaps you
> have not seen any recent report from it.
>
> I've seen some CI systems testing as little as tempest.api.network.
> Since a criterion on the minimum set of tests to run was not defined prior
> to the release cycle, it was also not ok to enforce it once the system went
> live.
> The only thing active at the moment is a sort of purpose built lie detector
> [1].
>
> I hope stricter criteria will be enforced for Juno; I personally think every
> CI should run at least the smoketest suite for L2/L3 services (eg: load
> balancer scenario will stay optional).
>
> Salvatore
>
> [1] https://review.openstack.org/#/c/75304/
>
>
>
> On 3 April 2014 12:28, Simon Pasquier  wrote:
>>
>> Hi,
>>
>> I'm looking at [1] but I see no requirement of which Tempest tests
>> should be executed.
>>
>> In particular, I'm a bit puzzled that it is not mandatory to boot an
>> instance and check that it gets connected to the network. To me, this is
>> the very minimum for asserting that your plugin or driver is working
>> with Neutron *and* Nova (I'm not even talking about security groups). I
>> had a quick look at the existing 3rd party CI systems and I found none
>> running this kind of check (correct me if I'm wrong).
>>
>> Thoughts?
>>
>> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>> --
>> Simon Pasquier
>> Software Engineer (OpenStack Expertise Center)
>> Bull, Architect of an Open World
>> Phone: + 33 4 76 29 71 49
>> http://www.bull.com
>>


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-03 Thread Jordan OMara

On 04/04/14 00:02 +1300, Robert Collins wrote:

Getting back in the swing of things...

Hi,
   like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:



- Jordan O'Mara for removal from -core


+1 : focused on horizon/tuskar-ui features now

--
Jordan O'Mara 
Red Hat Engineering, Raleigh 




Re: [openstack-dev] Support for multiple sort keys and sort directions in REST GET APIs

2014-04-03 Thread Steven Kaufer

Duncan,

Thanks for the reply.  The sorting is done in the common
sqlalchemy.utils.paginate_query function, which takes an ORM model class as
an argument
(https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/utils.py#L82).
  The only valid sort columns are attributes on the given model class.  For
example,

In cinder that class is "models.Volume":
https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/api.py#L1331
In nova that class is "models.Instance":
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1946
In glance that class is "models.Image":
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L540

This limits the scope of what can be sorted on (ie, it cannot be any
attribute that exists on an item returned from a detailed query).
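
To make that concrete, here is a rough sketch (not actual nova code; the
import paths, session handling, and sort columns are assumptions for
illustration only) of how a DB API call could pass multiple sort keys and
directions through to that helper:

    # Sketch only; module paths and column names are illustrative.
    from nova.db.sqlalchemy import models
    from nova.openstack.common.db.sqlalchemy import utils as sqlalchemyutils

    def instance_get_all_sorted(session, limit, marker=None):
        """Return instances ordered by several model attributes at once."""
        query = session.query(models.Instance)
        # Every sort key must be a column on models.Instance; unknown keys
        # are rejected by paginate_query (InvalidSortKey).
        return sqlalchemyutils.paginate_query(
            query, models.Instance, limit,
            sort_keys=['display_name', 'created_at', 'id'],
            marker=marker,
            sort_dirs=['asc', 'desc', 'asc']).all()

At the REST layer the blueprint would then map repeated sort_key/sort_dir
query parameters onto those two lists; the exact parameter names are whatever
the review settles on.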

The blueprint is lacking this level of detail and I will update it
accordingly.

Does this address your concern?

Thanks,
Steven Kaufer

Duncan Thomas  wrote on 04/03/2014 05:18:47 AM:

> From: Duncan Thomas 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 04/03/2014 05:25 AM
> Subject: Re: [openstack-dev] Support for multiple sort keys and sort
> directions in REST GET APIs
>
> Some of the cinder APIs do weird database joins and double lookups and
> things, so making every field sortable might have some serious database
> performance impact and open up a DoS attack. It will need more
> investigation to be sure.
>
> On 2 April 2014 19:42, Steven Kaufer  wrote:
> > I have proposed blueprints in both nova and cinder for supporting multiple
> > sort keys and sort directions for the GET APIs (servers and volumes). I am
> > trying to get feedback from other projects in order to have a more uniform
> > API across services.
> >
> > Problem description from nova proposal:
> >
> > "There is no support for retrieving server data in a specific order, it is
> > defaulted to descending sort order by the "created date" and "id" keys. In
> > order to retrieve data in any sort order and direction, the REST APIs need
> > to accept multiple sort keys and directions.
> >
> > Use Case: A UI that displays a table with only the page of data that it
> > has retrieved from the server. The items in this table need to be sorted
> > by status first and by display name second. In order to retrieve data in
> > this order, the APIs must accept multiple sort keys/directions."
> >
> > See nova proposal .rst file (cinder is basically the same) for more
> > information:  https://review.openstack.org/#/c/84451/
> >
> > Most projects have similar GET requests and I am trying to get some
> > consensus on this approach across the various projects; the goal is to
> > have this type of functionality common across projects (not just nova and
> > cinder).  Note that some projects (ie, cinder) already support a single
> > sort key and sort direction, see
> > https://github.com/openstack/cinder/blob/master/cinder/api/v2/volumes.py#L212-L213
> >
> > Note that the DB layer already accepts multiple sort keys and sort
> > directions (see
> > https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/utils.py#L62),
> > the work I am describing here only exposes the sorting options at the REST
> > API layer.
> >
> > Please provide feedback on this direction.  Specifically, do you see any
> > issues (and, if so, why) with allowing the caller to specify sort orders
> > and directions on the GET APIs?
> >
> > Feel free to leave your feedback in the Gerrit review for the nova
> > blueprint or reply to this thread.
> >
> > Thanks,
> >
> > Steven Kaufer
> >
> >
> --
> Duncan Thomas
>


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-03 Thread Salvatore Orlando
Hi Simon,

I agree with your concern.
Let me point out however that VMware mine sweeper runs almost all the smoke
suite.
It's been down a few days for an internal software upgrade, so perhaps you
have not seen any recent report from it.

I've seen some CI systems testing as little as tempest.api.network.
Since a criterion on the minimum set of tests to run was not defined prior
to the release cycle, it was also not ok to enforce it once the system went
live.
The only thing active at the moment is a sort of purpose built lie detector
[1].

I hope stricter criteria will be enforced for Juno; I personally think
every CI should run at least the smoketest suite for L2/L3 services (eg:
load balancer scenario will stay optional).

Salvatore

[1] https://review.openstack.org/#/c/75304/



On 3 April 2014 12:28, Simon Pasquier  wrote:

> Hi,
>
> I'm looking at [1] but I see no requirement of which Tempest tests
> should be executed.
>
> In particular, I'm a bit puzzled that it is not mandatory to boot an
> instance and check that it gets connected to the network. To me, this is
> the very minimum for asserting that your plugin or driver is working
> with Neutron *and* Nova (I'm not even talking about security groups). I
> had a quick look at the existing 3rd party CI systems and I found none
> running this kind of check (correct me if I'm wrong).
>
> Thoughts?
>
> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
> --
> Simon Pasquier
> Software Engineer (OpenStack Expertise Center)
> Bull, Architect of an Open World
> Phone: + 33 4 76 29 71 49
> http://www.bull.com
>


Re: [openstack-dev] [Oslo][Nova][Heat] Sample config generator issue

2014-04-03 Thread Doug Hellmann
On Wed, Apr 2, 2014 at 9:55 PM, Zane Bitter  wrote:
> We have an issue in Heat where the sample config generator from Oslo is
> currently broken (see bug #1288586). Unfortunately it turns out that there
> is no fix to the generator script itself that can do the Right Thing for
> both Heat and Nova.
>
> A brief recap on how the sample config generator works: it goes through all
> of the files specified and finds all the ConfigOpt objects at the top level.
> It then searches for them in the registered options, and returns the name of
> the group in which they are registered. Previously it looked for the
> identical object being registered, but now it just looks for any equivalent
> ones. When you register two or more equivalent options, the second and
> subsequent ones are just ignored by oslo.config.
>
> The situation in Heat is that we have a bunch of equivalent options
> registered in multiple groups. This is because we have a set of options for
> each client library (i.e. python-novaclient, python-cinderclient, &c.), with
> each set containing equivalent options (e.g. every client has an
> "endpoint_type" option for looking up the keystone catalog). This used to
> work, but now that equivalent options (and not just identical options) match
> when searching for them in a group, we just end up with multiple copies of
> each option in the first group to be searched, and none in any of the other
> groups, in the generated sample config.
>
> Nova, on the other hand, has the opposite problem (see bug #1262148). Nova
> adds the auth middleware from python-keystoneclient to its list of files to
> search for options. That middleware imports a file from oslo-incubator that
> registers the option in the default group - a registration that is *not*
> wanted by the keystone middleware, because it registers an equivalent option
> in a different group instead (or, as it turns out, as well). Just to make it
> interesting, Nova uses the same oslo-incubator module and relies on the
> option being registered in the default group. Of course, oslo-incubator is
> not a real library, so it gets registered a second time but ignored (since
> an equivalent one is already present). Crucially, the oslo-incubator file
> from python-keystoneclient is not on the list of extra modules to search in
> Nova, so when the generator script was looking for options identical to the
> ones it found in modules, it didn't see this option at all. Hence the change
> to looking for equivalent options, which broke Heat.
>
> Neither comparing for equivalence nor for identity in the generator script
> can solve both use cases. It's hard to see what Heat could or should be
> doing differently. I think it follows that the fix needs to be in either
> Nova or python-keystoneclient in the first instance.
>
> One option I suggested was for the auth middleware to immediately deregister
> the extra option that had accidentally been registered upon importing a
> module from oslo-incubator. I put up patches to do this, but it seemed to be
> generally agreed by Oslo folks that this was a Bad Idea.
>
> Another option would be to specifically include the relevant module from
> keystoneclient.openstack.common when generating the sample config. This
> seems quite brittle to me.
>
> We could fix it by splitting the oslo-incubator module into one that
> provides the code needed by the auth middleware and one that does the
> registration of options, but this will likely result in cascading changes to
> a whole bunch of projects.
>
> Does anybody have any thoughts on what the right fix looks like here?
> Currently, verification of the sample config is disabled in the Heat gate
> because of this issue, so it would be good to get it resolved.
>
> cheers,
> Zane.

We've seen some similar issues in other projects where the "guessing"
done by the generator is not matching the newer ways we use
configuration options. In those cases, I suggested that projects use
the new entry-point feature that allows them to explicitly list
options within groups, instead of scanning a set of files. This
feature was originally added so apps can include the options from
libraries that use oslo.config (such as oslo.messaging), but it can be
used for options define by the applications as well.

To define an option discovery entry point, create a function that
returns a sequence of (group name, option list) pairs. For an example,
see list_opts() in oslo.messaging [1]. Then define the entry point in
your setup.cfg under the "oslo.config.opts" namespace [2]. If you need
more than one function, register them separately.
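
For example, a rough sketch of such a function (the group and option names
here are illustrative, not Heat's real ones):

    # Illustrative only: not Heat's actual modules, groups, or options.
    from oslo.config import cfg

    client_opts = [
        cfg.StrOpt('endpoint_type',
                   default='publicURL',
                   help='Endpoint type to look up in the keystone catalog.'),
    ]

    def list_opts():
        # Each pair is (group name, option list); None means the DEFAULT
        # group. Listing equivalent options under several groups explicitly,
        # as in the Heat client-options pattern described above, avoids
        # making the generator guess which group an option belongs to.
        return [
            ('clients_nova', client_opts),
            ('clients_cinder', client_opts),
        ]

The matching setup.cfg entry then points at that function under the
oslo.config.opts namespace, e.g. a hypothetical "heat =
heat.common.sample_opts:list_opts", and the name "heat" is what gets passed
to the generator with -l as described below.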

Then change the way generate_sample.sh is called for your project so
it passes the -l option [3] once for each name you have given to the
entry points. So if you have just "heat" you would pass "-l heat" and
if you have "heat-core" and "heat-some-driver" you would pass "-l
heat-core -l heat-some-driver".

For application options, you shouldn't mix the -l option with the file
scanner,
